| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (lengths 5 to 139) | string (lengths 2 to 42) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-05-31 18:27:08) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (461 classes) | sequence (lengths 1 to 4.05k) | string (54 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-05-31 18:26:36) | string (lengths 11 to 1.01M) |
jinghuanHuggingface/q-FrozenLake-v1-4x4-noSlippery | jinghuanHuggingface | 2024-02-19T01:51:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-11T09:59:11Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jinghuanHuggingface/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
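A minimal sketch of greedy evaluation with the loaded Q-table follows; it assumes the pickled dict exposes a `qtable` array (as in the Deep RL course notebooks) and uses Gymnasium-style `reset`/`step` signatures, so adjust if you are on classic `gym`.
```python
import numpy as np

# Assumption: the pickled dict contains a "qtable" array indexed as qtable[state, action]
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```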
|
Alaa33/Elsafah | Alaa33 | 2024-02-19T01:26:38Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-02-19T01:26:38Z | ---
license: bigscience-bloom-rail-1.0
license_name: banha-university
license_link: LICENSE
---
|
serpdotai/sparsetral-16x7B-v2-SPIN_iter1 | serpdotai | 2024-02-19T01:24:30Z | 10 | 13 | transformers | [
"transformers",
"safetensors",
"sparsetral",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:argilla/dpo-mix-7k",
"arxiv:2401.01335",
"arxiv:2402.09353",
"arxiv:2106.09685",
"arxiv:2401.02731",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-19T01:10:37Z | ---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- jondurbin/truthy-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- argilla/dpo-mix-7k
language:
- en
---
This model is [sparsetral-16x7B-v2](https://huggingface.co/serpdotai/sparsetral-16x7B-v2) further tuned with [SPIN](https://arxiv.org/abs/2401.01335) on [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) mixed with traditional DPO samples. This is iteration_1; we are temporarily pausing further training runs in favor of utilizing [DoRA](https://arxiv.org/pdf/2402.09353.pdf) over [LoRA](https://arxiv.org/abs/2106.09685). We may also start from the beginning with a v3 for proper chat-token support, and we are debating adding function tokens + function calling. If you have any tasks that Sparsetral has been weak at, feel free to send us some prompts/chats + desired completions and we will see about making sure your task is supported!

Kuru~ Kuru~

## Training
- 8x A6000s
- Base model is [sparsetral-16x7B-v2-SPIN_iter0](https://huggingface.co/serpdotai/sparsetral-16x7B-v2-SPIN_iter0)
- [Forked version of unsloth](https://github.com/serp-ai/unsloth) for efficient training
- Sequence Length: 4096
- Effective batch size: 64
- Learning Rate: 5e-7 with linear decay (0.1 warmup ratio)
- Epochs: 2
- 100k samples (50k new SPIN + 50k from iter_0)
- QLoRA:
- 256 r and 256 alpha
- ```python
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
"adapter_down",
"adapter_up",
]
```
## Prompt Format
```
<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{message}<|im_end|>\n<|im_start|>assistant\n
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("serpdotai/sparsetral-16x7B-v2-SPIN_iter1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("serpdotai/sparsetral-16x7B-v2-SPIN_iter1", device_map="auto", trust_remote_code=True).eval()
system_str = "<|im_start|>system\n{message}<|im_end|>\n"
user_str = "<|im_start|>user\n{message}<|im_end|>\n"
assistant_str = "<|im_start|>assistant\n{message}<|im_end|>\n"
def construct_prompt(messages):
prompt = ""
for message in messages:
if message["from"] in ["human", "user"]:
prompt += user_str.format(
message=message["value"]
)
elif message["from"] in ["gpt", "assistant"]:
prompt += assistant_str.format(
message=message["value"]
)
elif message["from"] in ["system", "instruction"]:
prompt += system_str.format(
message=message["value"]
)
else:
raise ValueError(
f"Unknown message type: {message['from']}"
)
return prompt + "<|im_start|>assistant\n"
system = "You are a helpful assistant who will help the user to the best of their ability. If you don't know something, say \"I don't know\""
user = "Are you sentient?"
messages = [
{"from": "system", "value": system},
{"from": "user", "value": user},
]
prompt = construct_prompt(messages)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Other Information
Paper reference: [Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731)
[Original Paper repo](https://github.com/wuhy68/Parameter-Efficient-MoE)
[Forked repo with mistral support (sparsetral)](https://github.com/serp-ai/Parameter-Efficient-MoE)
If you are interested in faster inference, check out our [fork of vLLM](https://github.com/serp-ai/vllm) that adds sparsetral support. |
Hatsu2004/q-FrozenLake-v1-4x4-noSlippery | Hatsu2004 | 2024-02-19T01:15:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-19T01:00:28Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Hatsu2004/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vda1708/gsm8k-llama2-13b-anti-cl | vda1708 | 2024-02-19T01:03:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2024-02-16T16:32:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the `BitsAndBytesConfig` sketch after this list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
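For reference, the listed settings can be written as a `transformers.BitsAndBytesConfig`; the sketch below is only a restatement of the values above, not the exact code used for training.
```python
import torch
from transformers import BitsAndBytesConfig

# Reference sketch of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```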
### Framework versions
- PEFT 0.4.0
|
CultriX/MonaTrix-v2 | CultriX | 2024-02-19T01:02:32Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/AlphaMonarch-7B",
"eren23/ogno-monarch-jaskier-merge-7b",
"liminerity/Omningotex-7b-slerp",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:merge:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:liminerity/Omningotex-7b-slerp",
"base_model:merge:liminerity/Omningotex-7b-slerp",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:merge:mlabonne/AlphaMonarch-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-19T00:58:38Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/AlphaMonarch-7B
- eren23/ogno-monarch-jaskier-merge-7b
- liminerity/Omningotex-7b-slerp
base_model:
- mlabonne/AlphaMonarch-7B
- eren23/ogno-monarch-jaskier-merge-7b
- liminerity/Omningotex-7b-slerp
---
# MonaTrix-v2
MonaTrix-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
* [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp)
## 🧩 Configuration
```yaml
models:
- model: CultriX/NeuralTrix-7B-v1
# no parameters necessary for base model
- model: mlabonne/AlphaMonarch-7B
parameters:
density: 0.65
weight: 0.4
- model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
density: 0.6
weight: 0.35
- model: liminerity/Omningotex-7b-slerp
parameters:
density: 0.6
weight: 0.35
merge_method: dare_ties
base_model: CultriX/NeuralTrix-7B-v1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MonaTrix-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
shivanandmn/customer_care_dialog_summary_phi_2 | shivanandmn | 2024-02-19T00:57:56Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-18T22:19:29Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: customer_care_dialog_summary_phi_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# customer_care_dialog_summary_phi_2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
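For reference, a hedged sketch of the same hyperparameters expressed as `transformers.TrainingArguments`; the `output_dir` is a placeholder and this is not the exact training script.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="customer_care_dialog_summary_phi_2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```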
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0697 | 1.0 | 110 | 1.7888 |
| 1.7135 | 2.0 | 220 | 1.7248 |
| 1.6432 | 3.0 | 330 | 1.7010 |
| 1.6017 | 4.0 | 440 | 1.6930 |
| 1.5717 | 5.0 | 550 | 1.6772 |
| 1.5476 | 6.0 | 660 | 1.6707 |
| 1.525 | 7.0 | 770 | 1.6555 |
| 1.5091 | 8.0 | 880 | 1.6692 |
| 1.4935 | 9.0 | 990 | 1.6602 |
| 1.4816 | 10.0 | 1100 | 1.6666 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
DrNicefellow/Qwen1.5-72B-Chat-4bpw-exl2 | DrNicefellow | 2024-02-19T00:55:22Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-17T20:50:59Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
---
# Qwen1.5-72B-Chat-4.0bpw-exl2
This is a 4.0bpw quantized version of [Qwen/Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink!
|
jdorairaj/Bert-uncased-adapter-wnli | jdorairaj | 2024-02-19T00:54:56Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"dataset:wnli",
"region:us"
] | null | 2024-02-19T00:47:36Z | ---
tags:
- adapter-transformers
- bert
datasets:
- wnli
---
# Adapter `jdorairaj/Bert-uncased-adapter-wnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnli](https://huggingface.co/datasets/wnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-wnli", source="hf", set_active=True)
```
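Once loaded, the active adapter and its classification head can be used like a regular sequence-classification model. A hedged sketch follows, assuming the loaded head returns standard `logits`; WNLI is a sentence-pair task, and the example sentences are placeholders.
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encode a premise/hypothesis pair (example sentences are placeholders)
inputs = tokenizer(
    "The trophy didn't fit in the suitcase because it was too big.",
    "The trophy was too big.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)  # BERT + active adapter + classification head
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```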
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
DrNicefellow/Qwen1.5-14B-Chat-4bpw-exl2 | DrNicefellow | 2024-02-19T00:54:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T20:26:11Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
---
# Qwen1.5-14B-Chat-4.0bpw-exl2
This is a 4.0bpw quantized version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink!
|
DrNicefellow/Qwen1.5-14B-Chat-5bpw-exl2 | DrNicefellow | 2024-02-19T00:53:50Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T20:28:29Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
---
# Qwen1.5-14B-Chat-5.0bpw-exl2
This is a 5.0bpw quantized version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink!
|
dzagardo/quickstart_newdp_eps5 | dzagardo | 2024-02-19T00:52:03Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-19T00:49:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
misaza/vit_model_miguel_esteban_isaza | misaza | 2024-02-19T00:48:27Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-19T00:33:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit_model_miguel_esteban_isaza
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model_miguel_esteban_isaza
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0601
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1467 | 3.85 | 500 | 0.0601 | 0.9850 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
nishantyadav/emb_crossenc_msmarco_miniLM | nishantyadav | 2024-02-19T00:46:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-19T00:19:29Z | This is a cross-encoder model with dot-product based scoring mechanism trained on MS-MARCO dataset.
The parameters of the cross-encoder are initialized using a 6-layer [minilm model](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
and is trained via distillation using scores from three different teacher models --
[model 1](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_1_albert),
[model 2](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_2_bert_base), and
[model 3](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_3_bert_large_wwm).
This model is used in experiments of our [EMNLP 2023](https://aclanthology.org/2023.findings-emnlp.544/) and [ICLR 2024](https://openreview.net/forum?id=1CPta0bfN2) papers.
See our EMNLP 2022 paper titled "Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization" for more details on the dot-product based scoring mechanism.
---
license: apache-2.0
---
|
rparasa/segformer_400samples_13of20epochs_4batch | rparasa | 2024-02-19T00:35:07Z | 1 | 0 | transformers | [
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T00:35:05Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_keras_callback
model-index:
- name: segformer_400samples_13of20epochs_4batch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# segformer_400samples_13of20epochs_4batch
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1547
- Validation Loss: 0.2291
- Epoch: 12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4764 | 0.2749 | 0 |
| 0.3092 | 0.3437 | 1 |
| 0.2745 | 0.3007 | 2 |
| 0.2509 | 0.2527 | 3 |
| 0.2402 | 0.2469 | 4 |
| 0.2357 | 0.1991 | 5 |
| 0.2091 | 0.1949 | 6 |
| 0.2095 | 0.1833 | 7 |
| 0.1968 | 0.1662 | 8 |
| 0.1612 | 0.1446 | 9 |
| 0.1680 | 0.1658 | 10 |
| 0.1486 | 0.2235 | 11 |
| 0.1547 | 0.2291 | 12 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
|
jdorairaj/Bert-uncased-adapter-mrpc | jdorairaj | 2024-02-19T00:34:20Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"dataset:mrpc",
"region:us"
] | null | 2024-02-19T00:34:19Z | ---
tags:
- adapter-transformers
- bert
datasets:
- mrpc
---
# Adapter `jdorairaj/Bert-uncased-adapter-mrpc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [mrpc](https://huggingface.co/datasets/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-mrpc", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
nishantyadav/emb_crossenc_msmarco_teacher_1_albert | nishantyadav | 2024-02-19T00:33:02Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-19T00:04:37Z | This is a cross-encoder model with dot-product based scoring mechanism trained on MS-MARCO dataset.
The parameters of the cross-encoder are initialized using [albert-large-v2](https://huggingface.co/albert/albert-base-v2).
This model is used as a teacher model for training a [MiniLM-based cross-encoder model](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_miniLM)
which is used in experiments of our [EMNLP 2023](https://aclanthology.org/2023.findings-emnlp.544/) and [ICLR 2024](https://openreview.net/forum?id=1CPta0bfN2) papers.
See our EMNLP 2022 paper titled "Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization" for more details on the dot-product based scoring mechanism.
---
license: apache-2.0
---
|
cnbeining/sentence-segmentation-dpo | cnbeining | 2024-02-19T00:24:00Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-02-19T00:23:59Z | ---
license: cc-by-nc-nd-4.0
---
|
Kukedlc/Mistral-FT-Code-Adapter | Kukedlc | 2024-02-19T00:20:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-02-19T00:19:49Z | ---
license: apache-2.0
---
Peft & LoRA fine tuning
Adapter for Kukedlc/NeuralMaxime-7B-slerp |
yanex0/xxMix-9realistic | yanex0 | 2024-02-19T00:17:34Z | 0 | 1 | null | [
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-24T20:30:25Z | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
### Model XXMix 9Realistic
The model was developed by <a href="https://civitai.com/user/Zyx_xx/models">Zyx_xx</a>. It is important to comply with the applicable license and copyright policies when using this model.
<p>...</p>
preview v4
<img src="https://yanex0.mywebdev66.repl.co/img-v40.png" width="256" height="256">
preview v3
<img src="https://yanex0.mywebdev66.repl.co/img-v30.png" width="256" height="256">
preview v2.6
<img src="https://yanex0.mywebdev66.repl.co/img-v26.png" width="256" height="256">
### License and Copyright Policy
- The AI model uploaded in this project is subject to the license and copyright terms set by its original owner. Prior to using this model, it is important to understand and comply with the applicable terms and conditions.
- Please note that we only provide this model within the scope of this project and are not responsible for the usage of the model beyond the limitations set by the applicable license and copyright.
<p>Please check for the newest version on <a href="https://civitai.com/models/47274?modelVersionId=102222">CivitAI</a>.</p> |
davidataka/summary_resume_keywords | davidataka | 2024-02-19T00:16:58Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:d0rj/rut5-base-summ",
"base_model:finetune:d0rj/rut5-base-summ",
"region:us"
] | null | 2024-02-19T00:16:53Z | ---
base_model: d0rj/rut5-base-summ
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summary_resume_keywords
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary_resume_keywords
This model is a fine-tuned version of [d0rj/rut5-base-summ](https://huggingface.co/d0rj/rut5-base-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9737
- Rouge1: 0.2285
- Rouge2: 0.1524
- Rougel: 0.2285
- Rougelsum: 0.2285
- Gen Len: 51.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 90 | 2.7766 | 0.2485 | 0.1111 | 0.2485 | 0.2485 | 52.0 |
| No log | 2.0 | 180 | 2.7734 | 0.2556 | 0.1404 | 0.2389 | 0.2389 | 53.6667 |
| No log | 3.0 | 270 | 2.7763 | 0.2882 | 0.1368 | 0.2557 | 0.2557 | 51.6667 |
| No log | 4.0 | 360 | 2.7921 | 0.2722 | 0.1404 | 0.2389 | 0.2389 | 58.3333 |
| No log | 5.0 | 450 | 2.8146 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 6.0 | 540 | 2.8387 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 7.0 | 630 | 2.8569 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 8.0 | 720 | 2.8736 | 0.2538 | 0.1524 | 0.2538 | 0.2538 | 55.3333 |
| 2.1351 | 9.0 | 810 | 2.8883 | 0.2538 | 0.1524 | 0.2538 | 0.2538 | 55.3333 |
| 2.1351 | 10.0 | 900 | 2.9025 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 2.1351 | 11.0 | 990 | 2.9161 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 1.7131 | 12.0 | 1080 | 2.9269 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 1.7131 | 13.0 | 1170 | 2.9354 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.7131 | 14.0 | 1260 | 2.9427 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.7131 | 15.0 | 1350 | 2.9471 | 0.2272 | 0.1524 | 0.2272 | 0.2272 | 53.6667 |
| 1.7131 | 16.0 | 1440 | 2.9509 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.5914 | 17.0 | 1530 | 2.9558 | 0.2272 | 0.1524 | 0.2272 | 0.2272 | 53.6667 |
| 1.5914 | 18.0 | 1620 | 2.9589 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.5914 | 19.0 | 1710 | 2.9636 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.0 |
| 1.5914 | 20.0 | 1800 | 2.9660 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.0 |
| 1.5914 | 21.0 | 1890 | 2.9687 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5914 | 22.0 | 1980 | 2.9709 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 23.0 | 2070 | 2.9736 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 24.0 | 2160 | 2.9742 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 25.0 | 2250 | 2.9737 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.3333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
jdorairaj/Bert-uncased-adapter-sst2 | jdorairaj | 2024-02-19T00:13:09Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"dataset:sst2",
"region:us"
] | null | 2024-02-19T00:13:08Z | ---
tags:
- adapter-transformers
- bert
datasets:
- sst2
---
# Adapter `jdorairaj/Bert-uncased-adapter-sst2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sst2](https://huggingface.co/datasets/sst2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-sst2", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
maywell/kiqu-70b | maywell | 2024-02-19T00:07:07Z | 114 | 28 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-17T13:03:20Z | ---
license: cc-by-sa-4.0
language:
- ko
- en
---
# **kiqu-70b** [(Arena Leaderboard)](https://huggingface.co/spaces/instructkr/ko-chatbot-arena-leaderboard)
<img src="./kiqu.webp" alt="kiqu-70B" width="390"/>
**kiqu-70b** is an SFT+DPO-trained model based on Miqu-70B-Alpaca-DPO, trained with **Korean** datasets.
Since this model is a finetune of miqu-1-70b (a leaked early version of Mistral-Medium), using it for commercial purposes is at your own risk.
본 모델 **kiqu-70b**는 Miqu-70B-Alpaca-DPO 모델을 기반으로 **한국어** 데이터셋을 사용하여 SFT+DPO 훈련을 진행하여 제작되었습니다.
베이스 모델인 miqu-1-70b 모델이 미스트랄-미디움의 초기 유출 버전이기에 상업적 사용에 대한 risk는 본인에게 있습니다.
Apart from that, this model itself follows **cc-by-sa-4.0**.
본 모델 자체로서는 **cc-by-sa-4.0**을 따릅니다.
# **Model Details**
**Base Model**
miqu-1-70b (Early Mistral-Medium)
**Instruction format**
It follows the **Mistral** format.
Giving few-shot examples to the model is highly recommended.
본 모델은 미스트랄 포맷을 따릅니다.
few-shot 사용을 적극 권장합니다.
```
[INST] {instruction}
[/INST] {output}
```
Multi-shot
```
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
.
.
.
```
**Recommended Template** - 1-shot with system prompt
```
너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!
[INST] 안녕?
[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.
[INST] {instruction}
[/INST]
```
A trailing space after [/INST] can affect the model's performance by a significant margin. So, when doing inference, it is recommended not to include a trailing space in the chat template.
[/INST] 뒤에 띄어쓰기는 모델 성능에 유의미한 영향을 미칩니다. 따라서, 인퍼런스(추론)과정에서는 챗 템플릿에 띄어쓰기를 제외하는 것을 적극 권장합니다.
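A minimal Python sketch of building the recommended 1-shot template without a trailing space after the final `[/INST]`; the user question in the example call is a placeholder.
```python
SYSTEM = "너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!"

def build_prompt(instruction: str) -> str:
    # 1-shot template from the card; note: no space or newline after the final [/INST]
    return (
        f"{SYSTEM}\n"
        "[INST] 안녕?\n"
        "[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.\n"
        f"[INST] {instruction}\n"
        "[/INST]"
    )

prompt = build_prompt("한국의 수도는 어디야?")  # placeholder question
```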
# **Model Benchmark**
TBD
# **Author's Message**
This model's training was sponsored by no one but the support of people around the Earth.
[Support Me](https://www.buymeacoffee.com/mwell)
[Discord Server](https://discord.gg/MrBt3PXdXc)
Contact Me on Discord - is.maywell
Follow me on twitter - https://twitter.com/stablefluffy |
deepaknh/falcon7b-FineTuningQLORA_FullTrainDataset | deepaknh | 2024-02-18T23:56:39Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-02-17T03:23:29Z | ---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.1
|
dzagardo/quickstart_newdp_eps2.5 | dzagardo | 2024-02-18T23:40:43Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T23:38:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sddavicillo/t5-french_simplification | sddavicillo | 2024-02-18T23:31:39Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T22:57:17Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-french_simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-french_simplification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0147
- Bleu: 31.1397
- Gen Len: 17.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 34 | 1.0234 | 30.6474 | 17.806 |
| No log | 2.0 | 68 | 1.0147 | 31.1397 | 17.8806 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
arbitropy/bert-finetuned-ner-bangla | arbitropy | 2024-02-18T23:06:47Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-18T00:41:07Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner-bangla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-bangla
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1194 | 0.84 | 500 | 0.1120 |
| 0.1027 | 1.68 | 1000 | 0.1048 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bbunijieun/ft_results | bbunijieun | 2024-02-18T23:00:28Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-16T02:32:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
graceneutrality/a2c-PandaReachDense-v3 | graceneutrality | 2024-02-18T22:38:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-18T22:33:51Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's "Files and versions" tab):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub, then load it with SB3
checkpoint = load_from_hub("graceneutrality/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Iceman08/model | Iceman08 | 2024-02-18T22:31:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-18T22:20:01Z | pip install 'langchain[llms]' huggingface-hub langchain transformers
|
rama-comcast/Reinforce-CartPole | rama-comcast | 2024-02-18T22:28:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-18T22:28:17Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 477.40 +/- 47.55
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
iestynmullinor/entailment_classification_llama2_13b_fever | iestynmullinor | 2024-02-18T22:27:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-02-18T22:25:14Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
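The section above is left as a placeholder; purely as an unofficial sketch (not from the card's author), the adapter could be loaded onto the base model declared in the metadata like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base checkpoint taken from the card's metadata; loading settings are assumptions
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
model = PeftModel.from_pretrained(base, "iestynmullinor/entailment_classification_llama2_13b_fever")
```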
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
jdorairaj/Bert-Adapters | jdorairaj | 2024-02-18T22:25:05Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"dataset:cola",
"region:us"
] | null | 2024-02-18T22:17:59Z | ---
tags:
- adapter-transformers
- bert
datasets:
- cola
---
# Adapter `jdorairaj/Bert-Adapters` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [cola](https://huggingface.co/datasets/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-Adapters", source="hf", set_active=True)
```
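With the adapter active, the classification head can be used for inference. The short example below is only a sketch (the CoLA label order is an assumption):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("The book was written by John.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` comes from the snippet above
print(logits.argmax(dim=-1).item())  # assumed CoLA-style labels: 1 = acceptable, 0 = unacceptable
```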
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
sddavicillo/mbart-neutralization | sddavicillo | 2024-02-18T22:24:07Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T22:10:39Z | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0181
- Bleu: 98.7341
- Gen Len: 18.4896
## Model description
More information needed
## Intended uses & limitations
More information needed
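The card does not yet describe how to call the model; as a rough sketch (the task framing and the example input below are assumptions based on the model name), it can be run through the `text2text-generation` pipeline:
```python
from transformers import pipeline

# Sketch only: the example sentence is invented, not taken from the card
neutralizer = pipeline("text2text-generation", model="sddavicillo/mbart-neutralization")
print(neutralizer("Los profesores y las profesoras asistieron a la reunión.")[0]["generated_text"])
```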
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 440 | 0.0307 | 91.2911 | 18.25 |
| 0.2343 | 2.0 | 880 | 0.0181 | 98.7341 | 18.4896 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kouki13/mistral | kouki13 | 2024-02-18T22:21:27Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T17:35:17Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
mi-rei/clinical_trial_prediction_LLaMA | mi-rei | 2024-02-18T22:20:00Z | 0 | 0 | peft | [
"peft",
"pytorch",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"base_model:adapter:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | null | 2024-02-12T17:34:49Z | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
betterMateusz/long_llama_2_SAT_WRITING | betterMateusz | 2024-02-18T22:18:03Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T21:28:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: long_llama_3b_v1_1_SAT_WRITING
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long_llama_3b_v1_1_SAT_WRITING
This model is a fine-tuned version of [syzymon/long_llama_3b_v1_1](https://huggingface.co/syzymon/long_llama_3b_v1_1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3 |
dsx09/corgy_dog_LoRA | dsx09 | 2024-02-18T22:18:01Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-18T22:13:11Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - dsx09/corgy_dog_LoRA
<Gallery />
## Model description
These are dsx09/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](dsx09/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
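Until the snippet above is filled in by the author, a minimal sketch for applying these LoRA weights with 🧨 diffusers (scheduler, steps, and prompt wording beyond the trigger phrase are assumptions) would be:
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, attach the LoRA weights, and generate with the trigger phrase
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dsx09/corgy_dog_LoRA")
image = pipe("a photo of TOK dog in a park", num_inference_steps=25).images[0]
image.save("tok_dog.png")
```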
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
joelwigton/t5-xsum | joelwigton | 2024-02-18T22:11:55Z | 1 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:joelwigton/t5-xsum",
"base_model:finetune:joelwigton/t5-xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-16T22:18:23Z | ---
license: apache-2.0
base_model: joelwigton/t5-xsum
tags:
- generated_from_keras_callback
model-index:
- name: joelwigton/t5-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# joelwigton/t5-xsum
This model is a fine-tuned version of [joelwigton/t5-xsum](https://huggingface.co/joelwigton/t5-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4201
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 51012, 'end_learning_rate': 1e-05, 'power': 1, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.5308 | 0 |
| 2.4621 | 1 |
| 2.4201 | 2 |
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.1
|
vishwa27/BERT_NewsNLI | vishwa27 | 2024-02-18T22:08:39Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:vishruthnath/Calc_BERT_ep20",
"base_model:finetune:vishruthnath/Calc_BERT_ep20",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-11-18T02:58:52Z | ---
base_model: vishruthnath/Calc_BERT_ep20
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: BERT_NewsNLI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NewsNLI
This model is a fine-tuned version of [vishruthnath/Calc_BERT_ep20](https://huggingface.co/vishruthnath/Calc_BERT_ep20) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7262
- F1: {'f1': 0.20879156215833816}
- Accuracy: {'accuracy': 0.20833333333333334}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------:|:---------------------------------:|
| No log | 1.0 | 28 | 0.7071 | {'f1': 0.44863239132902055} | {'accuracy': 0.4861111111111111} |
| No log | 2.0 | 56 | 0.7148 | {'f1': 0.3611111111111111} | {'accuracy': 0.3611111111111111} |
| No log | 3.0 | 84 | 0.7216 | {'f1': 0.29746179746179746} | {'accuracy': 0.3055555555555556} |
| No log | 4.0 | 112 | 0.7247 | {'f1': 0.21315721315721317} | {'accuracy': 0.2222222222222222} |
| No log | 5.0 | 140 | 0.7262 | {'f1': 0.20879156215833816} | {'accuracy': 0.20833333333333334} |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Skhaled99/po-mistral7b-ghc | Skhaled99 | 2024-02-18T21:52:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T21:48:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
irp1999/t5-reformulation | irp1999 | 2024-02-18T21:49:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T21:17:26Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-reformulation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-reformulation
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3981
- Bleu: 0.8291
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 121 | 1.4544 | 0.9156 | 19.0 |
| No log | 2.0 | 242 | 1.3981 | 0.8291 | 19.0 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Buseak/spellcorrector_18_02_050_qwerty_v6 | Buseak | 2024-02-18T21:45:19Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"canine",
"token-classification",
"generated_from_trainer",
"base_model:Buseak/spellcorrector_17_02_050_qwerty",
"base_model:finetune:Buseak/spellcorrector_17_02_050_qwerty",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-18T19:26:18Z | ---
license: apache-2.0
base_model: Buseak/spellcorrector_17_02_050_qwerty
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_18_02_050_qwerty_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spellcorrector_18_02_050_qwerty_v6
This model is a fine-tuned version of [Buseak/spellcorrector_17_02_050_qwerty](https://huggingface.co/Buseak/spellcorrector_17_02_050_qwerty) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0028
- Precision: 0.9968
- Recall: 0.9941
- F1: 0.9954
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0887 | 1.0 | 967 | 0.0551 | 0.9876 | 0.9801 | 0.9838 | 0.9842 |
| 0.0684 | 2.0 | 1934 | 0.0415 | 0.9930 | 0.9844 | 0.9887 | 0.9881 |
| 0.0581 | 3.0 | 2901 | 0.0343 | 0.9924 | 0.9855 | 0.9890 | 0.9899 |
| 0.0487 | 4.0 | 3868 | 0.0280 | 0.9925 | 0.9882 | 0.9903 | 0.9917 |
| 0.0425 | 5.0 | 4835 | 0.0241 | 0.9930 | 0.9882 | 0.9906 | 0.9930 |
| 0.0382 | 6.0 | 5802 | 0.0209 | 0.9946 | 0.9882 | 0.9914 | 0.9940 |
| 0.0333 | 7.0 | 6769 | 0.0168 | 0.9951 | 0.9909 | 0.9930 | 0.9950 |
| 0.0294 | 8.0 | 7736 | 0.0148 | 0.9941 | 0.9909 | 0.9925 | 0.9957 |
| 0.0265 | 9.0 | 8703 | 0.0121 | 0.9946 | 0.9909 | 0.9927 | 0.9964 |
| 0.0238 | 10.0 | 9670 | 0.0103 | 0.9952 | 0.9919 | 0.9935 | 0.9970 |
| 0.0216 | 11.0 | 10637 | 0.0090 | 0.9978 | 0.9930 | 0.9954 | 0.9974 |
| 0.0193 | 12.0 | 11604 | 0.0076 | 0.9952 | 0.9930 | 0.9941 | 0.9979 |
| 0.0175 | 13.0 | 12571 | 0.0065 | 0.9973 | 0.9936 | 0.9954 | 0.9982 |
| 0.016 | 14.0 | 13538 | 0.0055 | 0.9973 | 0.9936 | 0.9954 | 0.9985 |
| 0.0137 | 15.0 | 14505 | 0.0045 | 0.9968 | 0.9936 | 0.9952 | 0.9988 |
| 0.0127 | 16.0 | 15472 | 0.0039 | 0.9973 | 0.9941 | 0.9957 | 0.9990 |
| 0.0118 | 17.0 | 16439 | 0.0034 | 0.9978 | 0.9941 | 0.9960 | 0.9991 |
| 0.0111 | 18.0 | 17406 | 0.0030 | 0.9968 | 0.9941 | 0.9954 | 0.9992 |
| 0.0104 | 19.0 | 18373 | 0.0029 | 0.9968 | 0.9941 | 0.9954 | 0.9993 |
| 0.0099 | 20.0 | 19340 | 0.0028 | 0.9968 | 0.9941 | 0.9954 | 0.9993 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
feilongfl/Mistral-7B-cn_news-v0.1-q4 | feilongfl | 2024-02-18T21:38:28Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"mistral",
"summarization",
"zh",
"dataset:feilongfl/ChineseNewsSummary",
"arxiv:1910.09700",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | summarization | 2024-02-18T21:38:47Z | ---
license: apache-2.0
datasets:
- feilongfl/ChineseNewsSummary
language:
- zh
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: summarization
widget:
- text: "概括新闻\n来源:经济参考报\n 国家邮政局监测数据显示,今年春节长假期间(2月10日至17日),全国邮政快递业总体运行安全平稳,寄递渠道畅通有序。具体来看,揽投快递包裹超17亿件,其中,揽收快递包裹10.79亿件,日均揽收量与2023年春节假期相比增长145.2%;投递快递包裹6.41亿件,日均投递量与2023年春节假期相比增长82.1%。\n 国家邮政局相关负责人表示,快递包裹量的良好增长态势,进一步凸显邮政快递业务保通保畅、递送温暖的积极作用,也折射出中国消费新活力。数据显示,自春运开始以来(1月26日至2月17日),全国邮政快递业揽收快递包裹56.55亿件,较2023年春运同期增长30.8%;投递快递包裹59.94亿件,较2023年春运同期增长21%。\n 上述负责人表示,今年春节期间,特色产品和节庆寄递需求旺盛,农产品继续保持增长态势。邮政管理部门坚决守牢安全生产底线,统筹做好春节期间安全生产和寄递服务保障工作。各寄递企业克服低温雨雪天气带来的不利影响,健全完善工作预案,加强人员、运力与物资储备,切实保障在岗快递员合法权益,全力提升年货寄递服务能力。\n 随着春节假期的结束,国内主要快递企业陆续结束春节模式,网络运营转为正常模式。分拨中心、网点相继开工,快递小哥、货车司机和客服人员等回到工作岗位,迎战即将到来的返工返学所带来的业务高峰。\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n .appendQr_wrap{border:1px solid #E6E6E6;padding:8px;}\n .appendQr_normal{float:left;}\n .appendQr_normal img{width:74px;}\n .appendQr_normal_txt{float:left;font-size:20px;line-height:74px;padding-left:20px;color:#333;}\n \n \n \n\n \n\n\n\n责任编辑:王许宁"
output:
text: "{\"title\": \"全国邮政快递业总体运行安全平稳,寄递渠道畅通有序\", \"class\": \"财经\", \"summary\": \"国家邮政局监测数据显示,今年春节长假期间,全国邮政快递业总体运行安全平稳,寄递渠道畅通有序。揽投快递包裹超17亿件,其中,揽收快递包裹10.79亿件,日均揽收量与2023年春节假期相比增长145.2%;投递快递包裹6.41亿件,日均投递量与2023年春节假期相比增长82.1%。数据显示,自春运开始以来,全国邮政快递业揽收快递包裹56.55亿件,较2023年春运同期增长30.8%;投递快递包裹59.94亿件,较2023年春运同期增长21%。国家邮政局相关负责人表示,快递包裹量的良好增长态势,进一步凸显邮政快递业务保通保畅、递送温暖的积极作用,也折射出中国消费新活力。\"}"
---
# Model Card for feilongfl/Mistral-7B-cn_news-v0.1
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Chinese news summary

# Table of Contents
- [Model Card for feilongfl/Mistral-7B-cn_news-v0.1](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
Chinese news summary
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** zh
- **License:** apache-2.0
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
feilongfl
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
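As a purely unofficial sketch (it assumes the 4-bit GPTQ checkpoint loads as a standard causal LM via `optimum` + `auto-gptq`, and follows the prompt format shown in the widget above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "feilongfl/Mistral-7B-cn_news-v0.1-q4"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
article = "国家邮政局监测数据显示,今年春节长假期间全国快递业务量大幅增长。"  # placeholder article text
inputs = tok("概括新闻\n" + article, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```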
</details>
|
shubham80patil/ujc-elon-musk | shubham80patil | 2024-02-18T21:30:31Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-18T21:25:59Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### ujc-Elon-Musk Dreambooth model trained by shubham80patil following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 1132210757
Sample pictures of this concept:

|
mfenner/distilbert-base-uncased-finetuned-squad | mfenner | 2024-02-18T21:23:13Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-14T00:33:46Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2167 | 1.0 | 5533 | 1.1503 |
| 0.9542 | 2.0 | 11066 | 1.1196 |
| 0.7408 | 3.0 | 16599 | 1.1560 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Gordon119/TAT-openai-whisper-large-v2-special-tag-v1-epoch5-total5epoch | Gordon119 | 2024-02-18T21:20:04Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-18T21:19:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeiku/Luna_Test_10.7B | jeiku | 2024-02-18T21:11:26Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Luna_LoRA_SOLAR",
"base_model:merge:jeiku/Luna_LoRA_SOLAR",
"base_model:jeiku/Re-Host_Limarp_Mistral",
"base_model:merge:jeiku/Re-Host_Limarp_Mistral",
"base_model:jeiku/Theory_of_Mind_Mistral",
"base_model:merge:jeiku/Theory_of_Mind_Mistral",
"base_model:jeiku/Theory_of_Mind_Roleplay_Mistral",
"base_model:merge:jeiku/Theory_of_Mind_Roleplay_Mistral",
"base_model:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"base_model:merge:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T20:26:24Z | ---
base_model:
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- jeiku/Theory_of_Mind_Mistral
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- jeiku/Re-Host_Limarp_Mistral
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- jeiku/Luna_LoRA_SOLAR
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- jeiku/Theory_of_Mind_Roleplay_Mistral
library_name: transformers
tags:
- mergekit
- merge
---
# SolarTest
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored) as a base.
### Models Merged
The following models were included in the merge:
* [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored) + [jeiku/Re-Host_Limarp_Mistral](https://huggingface.co/jeiku/Re-Host_Limarp_Mistral)
* [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored) + [jeiku/Luna_LoRA_SOLAR](https://huggingface.co/jeiku/Luna_LoRA_SOLAR)
* [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored) + [jeiku/Theory_of_Mind_Roleplay_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Roleplay_Mistral)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
parameters:
normalize: true
models:
- model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Luna_LoRA_SOLAR
parameters:
weight: 0.65
- model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Theory_of_Mind_Mistral
parameters:
weight: 1
- model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Theory_of_Mind_Roleplay_Mistral
parameters:
weight: 0.8
- model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Re-Host_Limarp_Mistral
parameters:
weight: 0.55
dtype: float16
```
|
sordonia/new-test-library | sordonia | 2024-02-18T21:11:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-18T17:48:00Z | Number of experts present in the library: 2
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| adversarial_qa_dbert_answer_the_following_q | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_answer_the_following_q | lora |
| abstract_algebra | phi-2 | sordonia/qa-flat-mmlu/abstract_algebra | lora |
Last updated on: 2024-02-18 17:48:00+00:00
|
szymonrucinski/Curie-7B-v1 | szymonrucinski | 2024-02-18T21:06:35Z | 1,776 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"polish",
"nlp",
"pl",
"arxiv:2402.09759",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-11T09:04:29Z | ---
license: apache-2.0
language:
- pl
library_name: transformers
tags:
- polish
- nlp
---
<style>
@import url('https://fonts.googleapis.com/css2?family=Pacifico&display=swap');
.markdown-custom-font {
font-family: "Pacifico", cursive;
font-weight: 400;
font-style: normal;
}
</style>
<div class="markdown-custom-font" align="center">
<img src="logo.png" alt="Logo" width="300">
Curie-7B-v1
</div>
## Introduction
This research demonstrates the potential of fine-tuning English Large Language Models (LLMs) for Polish text generation. By employing Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB (276 million Polish tokens) and subsequent fine-tuning on the [KLEJ challenges](https://klejbenchmark.com), the `Curie-7B-v1` model achieves remarkable performance. It not only generates Polish text with the lowest perplexity of 3.02 among decoder-based models but also rivals the best Polish encoder-decoder models closely, with a minimal performance gap on 8 out of 9 tasks. This was accomplished using about 2-3% of the dataset size typically required, showcasing the method's efficiency. The model is now open-source, contributing to the community's collaborative progress.
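Being open-source, the checkpoint can be loaded directly with 🤗 Transformers; the short sketch below is illustrative only (the Polish prompt and the generation settings are assumptions, not taken from the card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("szymonrucinski/Curie-7B-v1")
model = AutoModelForCausalLM.from_pretrained("szymonrucinski/Curie-7B-v1", device_map="auto")
prompt = "Najważniejszym zabytkiem Krakowa jest"  # example prompt (assumption)
inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```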
### Language Adaptive Pre-training Dataset
The LAPT phase utilized the [SpeakLeash dataset](http://speakleash.org/en/), a comprehensive collection of Polish texts, focusing on the highest quality extract of approximately 2 GB from the original 1TB.
## Hardware and Software Stack
Experiments were conducted on a server featuring an [NVIDIA RTX A6000 ADA GPU](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx-6000/proviz-print-rtx6000-datasheet-web-2504660.pdf) with 48GB of VRAM, AMD Epyc 7742 processor, and running Ubuntu with Pytorch 2.0 and CUDA 12.2.
## The Adaptive Pre-training
The model was trained with the AdamW optimizer, using hyperparameters chosen to optimize performance. Training was stopped after a single epoch (106 hours in total), as overfitting set in beyond that point.
### Hyperparameters
- **lora_rank:** 32
- **lora_dropout:** 0.05
- **lora_alpha:** 16
- **warmup_steps:** 0.1
- **learning_rate:** 2.5 x 10^-5
- **neftune_noise_alpha:** 2
- **batch_size:** 128
- **max_seq_len:** 128
## Fine-tuning for KLEJ Downstream Tasks
`Curie-7B-v1` came exceptionally close to the best baseline models on 8 of 9 KLEJ tasks while using significantly less data, showcasing its efficiency and capability in handling a variety of NLP tasks in Polish.
### Performance Highlights
- **NKJP-NER:** 93.4
- **CDSC-E:** 92.2
- **CDSC-R:** 94.9
- **CBD:** 49.0 (Demonstrating room for improvement)
- **PolEmo2.0-IN:** 92.7
- **PolEmo2.0-OUT:** 80.0
- **DYK:** 76.2
- **PSC:** 98.6
- **AR:** 86.8
## Conclusions
The `Curie-7B-v1` model, through LAPT, matches foundational models on eight downstream tasks with significantly less data. Its versatility in generating Polish text and the ability to be transformed into classifiers, regressors, and AI assistants highlights the method's effectiveness. This open-source Polish LLM provides a foundation for developing efficient business solutions.
## Research Paper
Work and details regarding this model are described in the research paper [Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish](https://arxiv.org/abs/2402.09759) by Szymon Ruciński.
|
madmarc/autotrain-3pi8q-cm2ra | madmarc | 2024-02-18T20:50:13Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-18T20:50:10Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <Shae Vizla>
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
ramirces/blindnessdataset | ramirces | 2024-02-18T20:31:53Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-02-14T19:18:05Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
liamvbetts/bart-news-summary-v1 | liamvbetts | 2024-02-18T20:25:58Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T20:24:32Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-news-summary-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-news-summary-v1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5666
- Rouge1: 43.4876
- Rouge2: 20.5281
- Rougel: 30.427
- Rougelsum: 40.5702
- Gen Len: 76.261
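As a quick illustration (not part of the auto-generated card), the checkpoint should be usable through the standard summarization pipeline, like its base model `facebook/bart-large-cnn`; the article text below is a placeholder.
```python
# Minimal usage sketch (assumption: the checkpoint works with the standard
# summarization pipeline, as its base model facebook/bart-large-cnn does).
from transformers import pipeline

summarizer = pipeline("summarization", model="liamvbetts/bart-news-summary-v1")
article = "Your news article text goes here ..."
summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```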
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4645 | 0.4 | 500 | 1.6301 | 41.9531 | 19.5988 | 29.3991 | 39.1894 | 84.099 |
| 1.4492 | 0.8 | 1000 | 1.5666 | 43.4876 | 20.5281 | 30.427 | 40.5702 | 76.261 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ShekDass/donut-base-cord-test3-CMS30SYN85AUG | ShekDass | 2024-02-18T20:19:32Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base-finetuned-cord-v2",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-cord-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-02-18T18:22:04Z | ---
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-cord-test3-CMS30SYN85AUG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-cord-test3-CMS30SYN85AUG
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
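As a minimal starting point (not part of the auto-generated card), the checkpoint can presumably be loaded with the standard Donut classes used by its base model; inference details such as the task prompt are omitted here because they are not documented in this card.
```python
# Minimal loading sketch (assumption: the checkpoint keeps the standard Donut setup
# of its base model, so DonutProcessor + VisionEncoderDecoderModel apply).
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "ShekDass/donut-base-cord-test3-CMS30SYN85AUG"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
```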
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 26
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
McGill-NLP/codellm_1b_nope | McGill-NLP | 2024-02-18T20:07:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"custom_decoder_only_t5",
"text-generation",
"custom_code",
"en",
"dataset:bigcode/starcoderdata",
"arxiv:2305.19466",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-02-18T17:25:15Z | ---
license: apache-2.0
datasets:
- bigcode/starcoderdata
language:
- en
---
# McGill-NLP/codellm_1b_nope
This model is a 1B-scale decoder-only transformer designed to explore the impact of positional encoding on length generalization, specifically trained without positional encoding (**NoPE**) to assess its effectiveness in length generalization tasks.
## Usage Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "McGill-NLP/codellm_1b_nope"
# Important: `trust_remote_code=True` is required due to
# the custom architecture supporting different positional encodings,
# necessitating the download of the model implementation from Huggingface
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(model.config.position_encoding_type)
# Outputs: `none`
prompt = "def print_hello_world():"
input_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
input_ids = torch.cat([
    torch.tensor([[tokenizer.bos_token_id]], device="cuda"), input_ids
], dim=1)  # Prepend <bos> token (both tensors are now on the same CUDA device)
output = model.generate(input_ids, do_sample=True, temperature=0.2, max_length=16)
print(tokenizer.decode(output[0]))
```
## Model Details
### Model Description
- **Developed by:** McGill NLP Group
- **Model type:** Decoder-only transformer
- **Language(s) (NLP):** Primarily English, with potential application across various programming languages as demonstrated by its training on a code dataset.
- **License:** Apache 2.0
- **Finetuned from model:** This model is pretrained from scratch.
### Model Sources
- **Repository:** [McGill-NLP/Length-Generalization GitHub Repository](https://github.com/McGill-NLP/length-generalization)
- **Paper:** [The Impact of Positional Encoding on Length Generalization in Transformers](https://arxiv.org/abs/2305.19466)
## Uses
### Direct Use
The model is designed for direct application in NLP tasks that require understanding and generating text. It's especially suited for working with source code, making it a valuable tool for tasks such as code completion, bug fixing, or even code generation.
## Bias, Risks, and Limitations
Given the model's training on source code, it might inherit biases present in the underlying dataset, including but not limited to, biases towards more commonly used programming languages or coding styles. Users should be cautious when applying this model to diverse or underrepresented coding languages and contexts.
This model has not undergone safety training and is provided for research purposes only. The user is solely responsible for the outputs of this model.
### Recommendations
Users should consider the context and diversity of the application domain when employing this model, especially in critical systems. Further evaluation and fine-tuning might be necessary to mitigate any potential biases or limitations for specific use cases.
## How to Get Started with the Model
Use the example provided in the README to get started with generating text or code. Ensure you have the necessary dependencies installed, including `torch` and `transformers`, and follow the guidelines for setting up your environment.
## Training Details
### Training Data
The model was pretrained on a dataset comprising 30M source code files from the StarCoder corpus, amounting to 30B tokens. The training data mix was:
- 40% Python
- 25% Java
- 25% JavaScript
- 5% GitHub issues
- 5% GitHub commits
### Training Procedure
The model follows a decoder-only architecture with 1.3 billion parameters and was trained to predict the next token in the sequence. For more detailed information on the training procedure, refer to the paper linked above.
## Technical Specifications
### Model Architecture and Objective
The model leverages a decoder-only transformer architecture without explicit positional encoding.
## Citation
Please cite the following paper if you use this model in your work:
```bibtex
@inproceedings{kazemnejad2023:ImpactOfPeOnLengthGen,
title={The Impact of Positional Encoding on Length Generalization in Transformers},
author={Amirhossein Kazemnejad and Inkit Padhi and Karthikeyan Natesan and Payel Das and Siva Reddy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Drrl2gcjzl}
}
```
## More Information
For further details about the model's architecture, training, and applications, please refer to the paper and the GitHub repository linked above. |
McGill-NLP/codellm_1b_rotary | McGill-NLP | 2024-02-18T20:07:09Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"custom_decoder_only_t5",
"text-generation",
"custom_code",
"en",
"dataset:bigcode/starcoderdata",
"arxiv:2305.19466",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-02-18T18:19:27Z | ---
license: apache-2.0
datasets:
- bigcode/starcoderdata
language:
- en
---
# McGill-NLP/codellm_1b_rotary
This model is a 1B-scale decoder-only transformer designed to explore the impact of positional encoding on length generalization, specifically trained with **Rotary** positional encoding to assess its effectiveness in length generalization tasks.
## Usage Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "McGill-NLP/codellm_1b_rotary"
# Important: `trust_remote_code=True` is required due to
# the custom architecture supporting different positional encodings,
# necessitating the download of the model implementation from Huggingface
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(model.config.position_encoding_type)
# Outputs: `rotary`
prompt = "def print_hello_world():"
input_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
input_ids = torch.cat([
    torch.tensor([[tokenizer.bos_token_id]], device="cuda"), input_ids
], dim=1)  # Prepend <bos> token (both tensors are now on the same CUDA device)
output = model.generate(input_ids, do_sample=True, temperature=0.2, max_length=16)
print(tokenizer.decode(output[0]))
```
## Model Details
### Model Description
- **Developed by:** McGill NLP Group
- **Model type:** Decoder-only transformer
- **Language(s) (NLP):** Primarily English, with potential application across various programming languages as demonstrated by its training on a code dataset.
- **License:** Apache 2.0
- **Finetuned from model:** This model is pretrained from scratch.
### Model Sources
- **Repository:** [McGill-NLP/Length-Generalization GitHub Repository](https://github.com/McGill-NLP/length-generalization)
- **Paper:** [The Impact of Positional Encoding on Length Generalization in Transformers](https://arxiv.org/abs/2305.19466)
## Uses
### Direct Use
The model is designed for direct application in NLP tasks that require understanding and generating text. It's especially suited for working with source code, making it a valuable tool for tasks such as code completion, bug fixing, or even code generation.
## Bias, Risks, and Limitations
Given the model's training on source code, it might inherit biases present in the underlying dataset, including but not limited to, biases towards more commonly used programming languages or coding styles. Users should be cautious when applying this model to diverse or underrepresented coding languages and contexts.
This model has not undergone safety training and is provided for research purposes only. The user is solely responsible for the outputs of this model.
### Recommendations
Users should consider the context and diversity of the application domain when employing this model, especially in critical systems. Further evaluation and fine-tuning might be necessary to mitigate any potential biases or limitations for specific use cases.
## How to Get Started with the Model
Use the example provided in the README to get started with generating text or code. Ensure you have the necessary dependencies installed, including `torch` and `transformers`, and follow the guidelines for setting up your environment.
## Training Details
### Training Data
The model was pretrained on a dataset comprising 30M source code files from the StarCoder corpus, amounting to 30B tokens. The training data mix was:
- 40% Python
- 25% Java
- 25% JavaScript
- 5% GitHub issues
- 5% GitHub commits
### Training Procedure
The model follows a decoder-only architecture with 1.3 billion parameters and was trained to predict the next token in the sequence. For more detailed information on the training procedure, refer to the paper linked above.
## Technical Specifications
### Model Architecture and Objective
The model leverages a decoder-only transformer architecture with Rotary positional encoding.
## Citation
Please cite the following paper if you use this model in your work:
```bibtex
@inproceedings{kazemnejad2023:ImpactOfPeOnLengthGen,
title={The Impact of Positional Encoding on Length Generalization in Transformers},
author={Amirhossein Kazemnejad and Inkit Padhi and Karthikeyan Natesan and Payel Das and Siva Reddy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Drrl2gcjzl}
}
```
## More Information
For further details about the model's architecture, training, and applications, please refer to the paper and the GitHub repository linked above. |
amazonaws-la/sdxl | amazonaws-la | 2024-02-18T19:59:30Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:diffusers/dog-example",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-18T19:26:37Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: TOK
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- diffusers/dog-example
---
# LoRA DreamBooth - amazonaws-la/sdxl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained with @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'amazonaws-la/sdxl',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic TOK jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
Ketan3101/chatbot_model | Ketan3101 | 2024-02-18T19:54:43Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T19:54:22Z | ---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: chatbot_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chatbot_model
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4551
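As a quick illustration (not part of the auto-generated card), the checkpoint is a GPT-2 fine-tune, so the standard text-generation pipeline is assumed to apply; the prompt is a placeholder.
```python
# Minimal usage sketch (assumption: the GPT-2 fine-tune works with the
# standard text-generation pipeline).
from transformers import pipeline

generator = pipeline("text-generation", model="Ketan3101/chatbot_model")
print(generator("Hello, how are you today?", max_new_tokens=40)[0]["generated_text"])
```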
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.3724 | 0.06 | 50 | 6.1788 |
| 6.286 | 0.12 | 100 | 6.0336 |
| 6.1512 | 0.18 | 150 | 5.9263 |
| 6.0683 | 0.24 | 200 | 5.8277 |
| 5.986 | 0.3 | 250 | 5.7590 |
| 5.9205 | 0.36 | 300 | 5.7009 |
| 5.8265 | 0.42 | 350 | 5.6524 |
| 5.7699 | 0.47 | 400 | 5.6017 |
| 5.8097 | 0.53 | 450 | 5.5629 |
| 5.7624 | 0.59 | 500 | 5.5347 |
| 5.689 | 0.65 | 550 | 5.5032 |
| 5.7271 | 0.71 | 600 | 5.4836 |
| 5.6464 | 0.77 | 650 | 5.4660 |
| 5.6965 | 0.83 | 700 | 5.4632 |
| 5.5684 | 0.89 | 750 | 5.4594 |
| 5.6917 | 0.95 | 800 | 5.4551 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MrOvkill/deepseek-ai-deepseek-math-7b-rl-GGUF-inference-endpoint-handler-llama-cpp | MrOvkill | 2024-02-18T19:52:42Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-02-18T19:50:45Z | ---
{}
---
# Model Card for deepseek-ai-deepseek-math-7b-rl-GGUF-inference-endpoint-handler-llama-cpp
<!-- Provide a quick summary of what the model is/does. -->
This is just an inference endpoint handler for using LLama-Cpp-Python to run the GGUF model.
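For context, a sketch of the kind of llama-cpp-python call such a handler would wrap is shown below; the GGUF filename is hypothetical and the snippet is not taken from the repository's handler code.
```python
# Sketch only: the GGUF filename below is hypothetical; llama-cpp-python's
# Llama API is used as-is.
from llama_cpp import Llama

llm = Llama(model_path="deepseek-math-7b-rl.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is the derivative of x^2?", max_tokens=128)
print(out["choices"][0]["text"])
```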
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jennny/sft_llama7b | Jennny | 2024-02-18T19:38:15Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-18T19:34:49Z | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
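A minimal loading sketch is shown below (not part of the original card); it assumes the repository holds a PEFT/LoRA adapter for `NousResearch/Llama-2-7b-hf` and mirrors the 4-bit configuration listed above.
```python
# Minimal loading sketch (assumption: the repo contains a PEFT/LoRA adapter
# trained on top of NousResearch/Llama-2-7b-hf with the 4-bit config above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Jennny/sft_llama7b")
```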
### Framework versions
- PEFT 0.6.2
|
Kingkoltrom/Pau | Kingkoltrom | 2024-02-18T19:36:06Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:latent-consistency/lcm-lora-sdv1-5",
"base_model:adapter:latent-consistency/lcm-lora-sdv1-5",
"license:openrail",
"region:us"
] | text-to-image | 2024-02-18T19:34:08Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Blue eyed girl, acne on face
parameters:
negative_prompt: >-
(worst quality, greyscale), ac_neg2, zip2d_neg, ziprealism_neg,
watermark, username, signature, text, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, bad
feet, extra fingers, mutated hands, poorly drawn hands, bad proportions,
extra limbs, disfigured, bad anatomy, gross proportions, malformed limbs,
missing arms, missing legs, extra arms, extra legs, mutated hands, fused
fingers, too many fingers, long neck
output:
url: images/Pau_20231221095635_e000007_02.png
base_model: latent-consistency/lcm-lora-sdv1-5
instance_prompt: Acne, blue eyes
license: openrail
---
# Pau
<Gallery />
## Model description
Lora model for my use
## Trigger words
You should use `Acne` to trigger the image generation.
You should use `blue eyes` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Kingkoltrom/Pau/tree/main) them in the Files & versions tab.
|
amitishah07/AmitsHotModel | amitishah07 | 2024-02-18T18:51:47Z | 0 | 0 | null | [
"dataset:HuggingFaceM4/WebSight",
"region:us"
] | null | 2024-02-18T18:51:03Z | ---
datasets:
- HuggingFaceM4/WebSight
--- |
bartowski/Quyen-Plus-v0.1-exl2 | bartowski | 2024-02-18T18:39:10Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T18:23:04Z | ---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
quantized_by: bartowski
---
## Exllama v2 Quantizations of Quyen-Plus-v0.1
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/vilm/Quyen-Plus-v0.1
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Quyen-Plus-v0.1-exl2 Quyen-Plus-v0.1-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Quyen-Plus-v0.1-exl2`:
```shell
mkdir Quyen-Plus-v0.1-exl2
huggingface-cli download bartowski/Quyen-Plus-v0.1-exl2 --local-dir Quyen-Plus-v0.1-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Quyen-Plus-v0.1-exl2-6_5
huggingface-cli download bartowski/Quyen-Plus-v0.1-exl2 --revision 6_5 --local-dir Quyen-Plus-v0.1-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Quyen-Plus-v0.1-exl2-6.5
huggingface-cli download bartowski/Quyen-Plus-v0.1-exl2 --revision 6_5 --local-dir Quyen-Plus-v0.1-exl2-6.5 --local-dir-use-symlinks False
```
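If you prefer Python over the CLI, the same branch can be fetched with `huggingface_hub` (a sketch, not part of the original instructions; the target folder name is just an example):
```python
# Python alternative to the CLI commands above (sketch, not from the original card).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Quyen-Plus-v0.1-exl2",
    revision="6_5",                       # pick the bits-per-weight branch you want
    local_dir="Quyen-Plus-v0.1-exl2-6_5",
    local_dir_use_symlinks=False,
)
```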
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
bill1886/lab1_finetuning | bill1886 | 2024-02-18T18:36:35Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T00:04:20Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: lab1_finetuning
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.390127099565355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_finetuning
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9637
- Bleu: 50.3901
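As a quick illustration (not part of the auto-generated card), the fine-tuned Marian checkpoint should work through the standard translation pipeline, like its base model `Helsinki-NLP/opus-mt-en-fr`; the sentence below is a placeholder.
```python
# Minimal usage sketch (assumption: the fine-tuned Marian checkpoint can be used
# through the standard translation pipeline, like Helsinki-NLP/opus-mt-en-fr).
from transformers import pipeline

translator = pipeline("translation", model="bill1886/lab1_finetuning")
print(translator("Open the file manager and select the folder.")[0]["translation_text"])
```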
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
terryyz/starcoderbase-7b-codecot | terryyz | 2024-02-18T18:32:54Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-7b",
"base_model:adapter:bigcode/starcoderbase-7b",
"region:us"
] | null | 2024-02-18T18:32:47Z | ---
library_name: peft
base_model: bigcode/starcoderbase-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Gordon119/TAT-openai-whisper-large-v2-special-tag-v1-epoch4-total5epoch | Gordon119 | 2024-02-18T18:30:18Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-18T18:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luna-code/codegen-2B-mono-evo-prefix | luna-code | 2024-02-18T18:29:53Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/codegen-2B-mono",
"base_model:adapter:Salesforce/codegen-2B-mono",
"region:us"
] | null | 2024-02-18T18:29:50Z | ---
library_name: peft
base_model: Salesforce/codegen-2B-mono
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Noel-lawrence/qrdqn-SpaceInvadersNoFrameskip-v4 | Noel-lawrence | 2024-02-18T18:29:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-18T18:25:56Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 6961.50 +/- 6523.95
name: mean_reward
verified: false
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Noel-lawrence -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Noel-lawrence -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Noel-lawrence
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
luna-code/codeparrot-small-evo-prefix | luna-code | 2024-02-18T18:28:55Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codeparrot/codeparrot-small",
"base_model:adapter:codeparrot/codeparrot-small",
"region:us"
] | null | 2024-02-18T18:28:52Z | ---
library_name: peft
base_model: codeparrot/codeparrot-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
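In the absence of an official snippet, a minimal sketch for loading this PEFT adapter on top of its base model (assuming a standard causal-LM adapter and that the tokenizer comes from the base `codeparrot/codeparrot-small` checkpoint):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the base model together with the adapter weights from this repository.
model = AutoPeftModelForCausalLM.from_pretrained("luna-code/codeparrot-small-evo-prefix")
# Tokenizer is assumed to come from the base checkpoint.
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```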
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
thenameisfazil/attractive-elephant-nxt | thenameisfazil | 2024-02-18T18:21:42Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-18T18:17:29Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Attractive-Elephant-nxt Dreambooth model trained by thenameisfazil following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 730223104017
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
|
tptodorov/ppo-Huggy | tptodorov | 2024-02-18T18:20:30Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-18T18:20:24Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tptodorov/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
liminding/bert-finetuned-squad | liminding | 2024-02-18T18:16:45Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-18T03:56:22Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
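As a concrete starting point, a minimal extractive question-answering sketch (assuming the model follows the standard SQuAD-style interface; this is not an official example):

```python
from transformers import pipeline

# Hypothetical usage: extractive QA over a short context passage.
qa = pipeline("question-answering", model="liminding/bert-finetuned-squad")
result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of bert-base-cased for question answering.",
)
print(result["answer"], result["score"])
```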
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
scoris/scoris-mt-lt-en | scoris | 2024-02-18T18:12:50Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"lt",
"en",
"dataset:scoris/en-lt-merged-data",
"license:cc-by-2.5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-14T18:11:39Z | ---
license: cc-by-2.5
language:
- lt
- en
datasets:
- scoris/en-lt-merged-data
---
# Overview

This is a Lithuanian-English translation model (Seq2Seq). For English-Lithuanian translation, check the companion model [scoris/scoris-mt-en-lt](https://huggingface.co/scoris/scoris-mt-en-lt)
Original model: [Helsinki-NLP/opus-mt-tc-big-lt-en](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-lt-en)
Fine-tuned on large merged data set: [scoris/en-lt-merged-data](https://huggingface.co/datasets/scoris/en-lt-merged-data) (5.4 million sentence pairs)
Trained for 3 epochs.
Made by the [Scoris](https://scoris.lt) team
# Evaluation:
| LT-EN| BLEU |
|-|------|
| scoris/scoris-mt-lt-en| 43.8 |
| Helsinki-NLP/opus-mt-tc-big-lt-en| 36.8 |
| Google Translate| 31.9 |
| Deepl| 36.1 |
_Evaluated on scoris/en-lt-merged-data validation set. Google and Deepl evaluated using a random sample of 1000 sentence pairs._
According to [Google](https://cloud.google.com/translate/automl/docs/evaluate), BLEU scores can be interpreted as follows:
| BLEU Score | Interpretation
|----------|---------|
| < 10 | Almost useless
| 10 - 19 | Hard to get the gist
| 20 - 29 | The gist is clear, but has significant grammatical errors
| 30 - 40 | Understandable to good translations
| **40 - 50** | **High quality translations**
| 50 - 60 | Very high quality, adequate, and fluent translations
| > 60 | Quality often better than human
# Usage
You can use the model in the following way:
```python
from transformers import MarianMTModel, MarianTokenizer
# Specify the model identifier on Hugging Face Model Hub
model_name = "scoris/scoris/scoris-mt-lt-en"
# Load the model and tokenizer from Hugging Face
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_text = [
"Kartą, senų senovėje, buvo viena mergaitė ir gyveno ji su savo mama mažoje jaukioje trobelėje prie miško. ",
"Mergaitę žmonės vadino Raudonkepuraite, nes ji dažnai dėvėdavo raudoną apsiaustėlį su kapišonu. ",
"Mergaitė mielai gobdavosi šiuo apsiaustėliu, nes jį buvo gavusi iš savo močiutės, kuri gyveno namelyje už miško ir labai mylėjo Raudonkepuraitę. ",
"Vieną dieną mama priruošė Raudonkepuraitei pilną krepšelį įvairiausių gėrybių.",
"Pridėjo obuoliukų, kriaušaičių, braškių, taip pat skanių pyragėlių, kuriuos pati buvo iškepusi, sūrio ir gabalėlį mėsos bei didelį išdabintą tortą."
]
# Tokenize the text and generate translations
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
# Print out the translations
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
#Once upon a time there was a girl, and she lived with her mother in a small cozy hut by the forest.
#The girl was called the Red cape because she often wore a red cape.
#The girl would gladly wear this coat, because she had it from her grandmother, who lived in a house outside the forest and loved Redcape very much.
#One day my mother prepared a basket full of all kinds of good things for the Red cape.
#He added apples, pears, strawberries, as well as delicious cakes that he had baked, cheese and a piece of meat, and a large cake.
``` |
Davlan/afro-xlmr-large-76L_script | Davlan | 2024-02-18T18:10:39Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"en",
"am",
"ar",
"so",
"sw",
"pt",
"af",
"fr",
"zu",
"mg",
"ha",
"sn",
"arz",
"ny",
"ig",
"xh",
"yo",
"st",
"rw",
"tn",
"ti",
"ts",
"om",
"run",
"nso",
"ee",
"ln",
"tw",
"pcm",
"gaa",
"loz",
"lg",
"guw",
"bem",
"efi",
"lue",
"lua",
"toi",
"ve",
"tum",
"tll",
"iso",
"kqn",
"zne",
"umb",
"mos",
"tiv",
"lu",
"ff",
"kwy",
"bci",
"rnd",
"luo",
"wal",
"ss",
"lun",
"wo",
"nyk",
"kj",
"ki",
"fon",
"bm",
"cjk",
"din",
"dyu",
"kab",
"kam",
"kbp",
"kr",
"kmb",
"kg",
"nus",
"sg",
"taq",
"tzm",
"nqo",
"arxiv:2309.07445",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-18T15:53:37Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-76L_script
results: []
language:
- en
- am
- ar
- so
- sw
- pt
- af
- fr
- zu
- mg
- ha
- sn
- arz
- ny
- ig
- xh
- yo
- st
- rw
- tn
- ti
- ts
- om
- run
- nso
- ee
- ln
- tw
- pcm
- gaa
- loz
- lg
- guw
- bem
- efi
- lue
- lua
- toi
- ve
- tum
- tll
- iso
- kqn
- zne
- umb
- mos
- tiv
- lu
- ff
- kwy
- bci
- rnd
- luo
- wal
- ss
- lun
- wo
- nyk
- kj
- ki
- fon
- bm
- cjk
- din
- dyu
- kab
- kam
- kbp
- kr
- kmb
- kg
- nus
- sg
- taq
- tzm
- nqo
---
# afro-xlmr-large-76L_script
AfroXLMR-large was created by first augmenting the XLM-R-large model with missing scripts (N'Ko and Tifinagh), followed by an MLM adaptation of the expanded XLM-R-large model on 76 languages widely spoken in Africa
including 4 high-resource languages.
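A minimal masked-language-model usage sketch (using the standard 🤗 Transformers fill-mask pipeline; the Swahili example sentence is only illustrative):

```python
from transformers import pipeline

# XLM-R-style models use <mask> as the mask token.
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large-76L_script")
for prediction in unmasker("Nairobi ni mji mkuu wa <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```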
### Pre-training corpus
A mix of mC4, Wikipedia and OPUS data
### Languages
There are 76 languages available:
- English (eng)
- Amharic (amh)
- Arabic (ara)
- Somali (som)
- Kiswahili (swa)
- Portuguese (por)
- Afrikaans (afr)
- French (fra)
- isiZulu (zul)
- Malagasy (mlg)
- Hausa (hau)
- chiShona (sna)
- Egyptian Arabic (arz)
- Chichewa (nya)
- Igbo (ibo)
- isiXhosa (xho)
- Yorùbá (yor)
- Sesotho (sot)
- Kinyarwanda (kin)
- Setswana (tsn)
- Tigrinya (tir)
- Tsonga (tso)
- Oromo (orm)
- Rundi (run)
- Northern Sotho (nso)
- Ewe (ewe)
- Lingala (lin)
- Twi (twi)
- Nigerian Pidgin (pcm)
- Ga (gaa)
- Lozi (loz)
- Luganda (lug)
- Gun (guw)
- Bemba (bem)
- Efik (efi)
- Luvale (lue)
- Luba-Lulua (lua)
- Tonga (toi)
- Tshivenḓa (ven)
- Tumbuka (tum)
- Tetela (tll)
- Isoko (iso)
- Kaonde (kqn)
- Zande (zne)
- Umbundu (umb)
- Mossi (mos)
- Tiv (tiv)
- Luba-Katanga (lub)
- Fula (fuv)
- San Salvador Kongo (kwy)
- Baoulé (bci)
- Ruund (rnd)
- Luo (luo)
- Wolaitta (wal)
- Swazi (ssw)
- Lunda (lun)
- Wolof (wol)
- Nyaneka (nyk)
- Kwanyama (kua)
- Kikuyu (kik)
- Fon (fon)
- Bambara (bam)
- Chokwe (cjk)
- Dinka (dik)
- Dyula (dyu)
- Kabyle (kab)
- Kamba (kam)
- Kabiyè (kbp)
- Kanuri (knc)
- Kimbundu (kmb)
- Kikongo (kon)
- Nuer (nus)
- Sango (sag)
- Tamasheq (taq)
- Tamazight (tzm)
- N'ko (nqo)
### Acknowledgment
### BibTeX entry and citation info.
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Davlan/afro-xlmr-base-76L_script | Davlan | 2024-02-18T18:10:21Z | 7 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"en",
"am",
"ar",
"so",
"sw",
"pt",
"af",
"fr",
"zu",
"mg",
"ha",
"sn",
"arz",
"ny",
"ig",
"xh",
"yo",
"st",
"rw",
"tn",
"ti",
"ts",
"om",
"run",
"nso",
"ee",
"ln",
"tw",
"pcm",
"gaa",
"loz",
"lg",
"guw",
"bem",
"efi",
"lue",
"lua",
"toi",
"ve",
"tum",
"tll",
"iso",
"kqn",
"zne",
"umb",
"mos",
"tiv",
"lu",
"ff",
"kwy",
"bci",
"rnd",
"luo",
"wal",
"ss",
"lun",
"wo",
"nyk",
"kj",
"ki",
"fon",
"bm",
"cjk",
"din",
"dyu",
"kab",
"kam",
"kbp",
"kr",
"kmb",
"kg",
"nus",
"sg",
"taq",
"tzm",
"nqo",
"arxiv:2309.07445",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-18T14:43:10Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-base-76L_script
results: []
language:
- en
- am
- ar
- so
- sw
- pt
- af
- fr
- zu
- mg
- ha
- sn
- arz
- ny
- ig
- xh
- yo
- st
- rw
- tn
- ti
- ts
- om
- run
- nso
- ee
- ln
- tw
- pcm
- gaa
- loz
- lg
- guw
- bem
- efi
- lue
- lua
- toi
- ve
- tum
- tll
- iso
- kqn
- zne
- umb
- mos
- tiv
- lu
- ff
- kwy
- bci
- rnd
- luo
- wal
- ss
- lun
- wo
- nyk
- kj
- ki
- fon
- bm
- cjk
- din
- dyu
- kab
- kam
- kbp
- kr
- kmb
- kg
- nus
- sg
- taq
- tzm
- nqo
---
# afro-xlmr-base-76L_script
AfroXLMR-base was created by first augmenting the XLM-R-base model with missing scripts (N'Ko and Tifinagh), followed by an MLM adaptation of the expanded XLM-R-base model on 76 languages widely spoken in Africa
including 4 high-resource languages.
### Pre-training corpus
A mix of mC4, Wikipedia and OPUS data
### Languages
There are 76 languages available:
- English (eng)
- Amharic (amh)
- Arabic (ara)
- Somali (som)
- Kiswahili (swa)
- Portuguese (por)
- Afrikaans (afr)
- French (fra)
- isiZulu (zul)
- Malagasy (mlg)
- Hausa (hau)
- chiShona (sna)
- Egyptian Arabic (arz)
- Chichewa (nya)
- Igbo (ibo)
- isiXhosa (xho)
- Yorùbá (yor)
- Sesotho (sot)
- Kinyarwanda (kin)
- Setswana (tsn)
- Tigrinya (tir)
- Tsonga (tso)
- Oromo (orm)
- Rundi (run)
- Northern Sotho (nso)
- Ewe (ewe)
- Lingala (lin)
- Twi (twi)
- Nigerian Pidgin (pcm)
- Ga (gaa)
- Lozi (loz)
- Luganda (lug)
- Gun (guw)
- Bemba (bem)
- Efik (efi)
- Luvale (lue)
- Luba-Lulua (lua)
- Tonga (toi)
- Tshivenḓa (ven)
- Tumbuka (tum)
- Tetela (tll)
- Isoko (iso)
- Kaonde (kqn)
- Zande (zne)
- Umbundu (umb)
- Mossi (mos)
- Tiv (tiv)
- Luba-Katanga (lub)
- Fula (fuv)
- San Salvador Kongo (kwy)
- Baoulé (bci)
- Ruund (rnd)
- Luo (luo)
- Wolaitta (wal)
- Swazi (ssw)
- Lunda (lun)
- Wolof (wol)
- Nyaneka (nyk)
- Kwanyama (kua)
- Kikuyu (kik)
- Fon (fon)
- Bambara (bam)
- Chokwe (cjk)
- Dinka (dik)
- Dyula (dyu)
- Kabyle (kab)
- Kamba (kam)
- Kabiyè (kbp)
- Kanuri (knc)
- Kimbundu (kmb)
- Kikongo (kon)
- Nuer (nus)
- Sango (sag)
- Tamasheq (taq)
- Tamazight (tzm)
- N'ko (nqo)
### Acknowledgment
### BibTeX entry and citation info.
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
qyopy/21savage | qyopy | 2024-02-18T18:08:39Z | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T18:06:09Z | ---
license: apache-2.0
language:
- en
--- |
Mitrofazotron/mistral-7b-500-tpt06_20e | Mitrofazotron | 2024-02-18T18:01:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T17:28:39Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-7b-500-tpt06_20e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-500-tpt06_20e
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.1 |
LoneStriker/34b-beta-8.0bpw-h8-exl2 | LoneStriker | 2024-02-18T18:00:23Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T17:46:20Z | ---
license: gpl-3.0
---
# CausalLM 34B β
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
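For reference, a ChatML prompt is laid out as follows (a generic illustration of the format described in the link above, not an example taken from this repository):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```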
There are some issues with the model weights in terms of precision. In the next version update, we will roll back some progress and retrain to fix these issues as soon as possible.
**Please note:** For now, do not use "accelerated inference frameworks" like **VLLM**. Instead, use Transformers for inference. Otherwise, due to precision issues, the output quality will be significantly degraded. If you need faster inference, consider using the q8_0 quantization (faster and better than bf16 VLLM for this model only) with llama.cpp in the meantime, or wait for the official version.
This will be fixed in the upcoming version update.
**Do not use repetition_penalty!**
Please do not use wikitext for quantization calibration, because all wikitext data has been re-aligned on a synthetic dataset and its distribution differs significantly from the original wikitext.
## MT-Bench: 8.5

## Some contamination detection if you want to check:
| Models | MMLU (ref: llama7b) | TBA |
| ------------------------- | ------------------- | ---- |
| microsoft/Orca-2-7b | 0.77 | |
| mistralai/Mistral-7B-v0.1 | 0.46 | |
| **CausalLM/34b-beta** | **0.38** | |
| 01-ai/Yi-6B-200K | 0.3 | |
data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector
It should be *safe*. It was not trained on the benchmark, but the contamination of the training dataset is unavoidable due to cost constraints. |
raoulmago/doc_classification | raoulmago | 2024-02-18T17:56:54Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-16T14:20:53Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: doc_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# doc_classification
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0056
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 8.33 | 100 | 0.3533 | 0.4147 | 0.3516 | 0.3805 | 0.8964 |
| No log | 16.67 | 200 | 0.0993 | 0.884 | 0.8633 | 0.8735 | 0.9782 |
| No log | 25.0 | 300 | 0.0338 | 0.9882 | 0.9805 | 0.9843 | 0.9977 |
| No log | 33.33 | 400 | 0.0173 | 0.9961 | 0.9922 | 0.9941 | 0.9992 |
| 0.238 | 41.67 | 500 | 0.0109 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.238 | 50.0 | 600 | 0.0081 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.238 | 58.33 | 700 | 0.0068 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.238 | 66.67 | 800 | 0.0061 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.238 | 75.0 | 900 | 0.0057 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0136 | 83.33 | 1000 | 0.0056 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
FINNUMBER/Yi-Ko-6B-Finch-NQA-EXT-FULL-NEW-epoch3 | FINNUMBER | 2024-02-18T17:52:21Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T16:19:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
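In the absence of an official snippet, a minimal loading sketch (assuming a standard causal-LM checkpoint; the prompt format and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FINNUMBER/Yi-Ko-6B-Finch-NQA-EXT-FULL-NEW-epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Your prompt here"  # the expected prompt format is not documented in this card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```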
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/34b-beta-6.0bpw-h6-exl2 | LoneStriker | 2024-02-18T17:46:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T17:35:33Z | ---
license: gpl-3.0
---
# CausalLM 34B β
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
There are some issues with the model weights in terms of precision. In the next version update, we will roll back some progress and retrain to fix these issues as soon as possible.
**Please note:** For now, do not use "accelerated inference frameworks" like **VLLM**. Instead, use Transformers for inference. Otherwise, due to precision issues, the output quality will be significantly degraded. If you need faster inference, consider using the q8_0 quantization (faster and better than bf16 VLLM for this model only) with llama.cpp in the meantime, or wait for the official version.
This will be fixed in the upcoming version update.
**Do not use repetition_penalty!**
Please do not use wikitext for quantization calibration, because all wikitext data has been re-aligned on a synthetic dataset and its distribution differs significantly from the original wikitext.
## MT-Bench: 8.5

## Some contamination detection if you want to check:
| Models | MMLU (ref: llama7b) | TBA |
| ------------------------- | ------------------- | ---- |
| microsoft/Orca-2-7b | 0.77 | |
| mistralai/Mistral-7B-v0.1 | 0.46 | |
| **CausalLM/34b-beta** | **0.38** | |
| 01-ai/Yi-6B-200K | 0.3 | |
data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector
It should be *safe*. It was not trained on the benchmark, but the contamination of the training dataset is unavoidable due to cost constraints. |
juliajoanna/lora-trained-xl | juliajoanna | 2024-02-18T17:43:42Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-10-22T23:34:49Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of sks goddess sculpture
widget:
- text: A photo of sks goddess sculpture under a tree
output:
url: image_0.png
- text: A photo of sks goddess sculpture under a tree
output:
url: image_1.png
- text: A photo of sks goddess sculpture under a tree
output:
url: image_2.png
- text: A photo of sks goddess sculpture under a tree
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - juliajoanna/lora-trained-xl
<Gallery />
## Model description
These are juliajoanna/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use A photo of sks goddess sculpture to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](juliajoanna/lora-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
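Until the official snippet is added, a minimal sketch following the usual diffusers SDXL + LoRA workflow (the dtype, variant, and inference settings are assumptions):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline and apply the LoRA weights from this repository.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
)
pipe.load_lora_weights("juliajoanna/lora-trained-xl")
pipe.to("cuda")

# Use the trigger words from this card in the prompt.
image = pipe("A photo of sks goddess sculpture under a tree", num_inference_steps=30).images[0]
image.save("goddess.png")
```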
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
HongminXiao/Reinforce-111 | HongminXiao | 2024-02-18T17:42:31Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-18T17:42:22Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-111
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
urbija/cer_model-i | urbija | 2024-02-18T17:32:27Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.1",
"base_model:finetune:dmis-lab/biobert-base-cased-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-18T14:48:14Z | ---
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cer_model-i
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cer_model-i
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5760
- Precision: 0.6022
- Recall: 0.6740
- F1: 0.6361
- Accuracy: 0.7627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5369 | 1.0 | 4841 | 0.5760 | 0.6022 | 0.6740 | 0.6361 | 0.7627 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
jimboHsueh/llama2-finetune-13b-relation | jimboHsueh | 2024-02-18T17:27:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-02-18T16:55:21Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
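In the absence of an official snippet, a minimal sketch for attaching this adapter to its base model (note that `meta-llama/Llama-2-13b-hf` is gated, so an accepted license and access token are assumed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-hf"  # gated: requires an accepted license and access token
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the fine-tuned adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, "jimboHsueh/llama2-finetune-13b-relation")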
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Mitrofazotron/mistral-7b-500-tpt06 | Mitrofazotron | 2024-02-18T17:24:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T17:11:24Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-7b-500-tpt06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-500-tpt06
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.1 |
terryyz/starcoderbase-3b-codecot | terryyz | 2024-02-18T17:16:10Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-3b",
"base_model:adapter:bigcode/starcoderbase-3b",
"region:us"
] | null | 2024-02-18T17:16:02Z | ---
library_name: peft
base_model: bigcode/starcoderbase-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
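In the absence of an official snippet, a minimal sketch for loading the adapter on top of `bigcode/starcoderbase-3b` (the base checkpoint may require accepting the BigCode license; the prompt is illustrative):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the base model with the adapter weights from this repository applied.
model = AutoPeftModelForCausalLM.from_pretrained("terryyz/starcoderbase-3b-codecot")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-3b")

prompt = "# Write a function that reverses a string\ndef reverse_string(s):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```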
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
nearby/sponge2 | nearby | 2024-02-18T17:13:30Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-02-18T17:13:30Z | ---
license: openrail
license_name: public
license_link: LICENSE
---
|
nearby/sponge | nearby | 2024-02-18T17:12:03Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-02-18T17:12:03Z | ---
license: other
license_name: public
license_link: LICENSE
---
|
KipperDev/bart_summarizer_model | KipperDev | 2024-02-18T17:11:49Z | 30 | 3 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"summarizer",
"text summarization",
"abstractive summarization",
"en",
"dataset:big_patent",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-01-25T21:19:16Z | ---
license: mit
datasets:
- big_patent
language:
- en
metrics:
- rouge
tags:
- summarization
- summarizer
- text summarization
- abstractive summarization
pipeline_tag: summarization
---
[](https://shields.io/)
[](https://colab.research.google.com/drive/1TWasAT17zU90CqgbK98ouDuBXXHtwbVL?usp=sharing)
# Table of Contents
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Training Results](#training-results)
5. [Citation](#citation)
6. [Author](#model-card-authors)
# Model Details
This variant of the [facebook/bart-base](https://huggingface.co/facebook/bart-base) model is fine-tuned specifically for the task of text summarization. It aims to generate concise, coherent, and informative summaries from extensive text documents, leveraging BART's bidirectional (BERT-like) encoder and autoregressive (GPT-like) decoder.
# Usage
This model is intended for use in summarizing long-form texts into concise, informative abstracts. It's particularly useful for professionals and researchers who need to quickly grasp the essence of detailed reports, research papers, or articles without reading the entire text.
## Get Started
Install with `pip`:
```bash
pip install transformers
```
Use in python:
```python
from transformers import pipeline
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM
model_name = "KipperDev/bart_summarizer_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
# Example usage
prefix = "summarize: "
input_text = "Your input text here."
input_ids = tokenizer.encode(prefix + input_text, return_tensors="pt")
summary_ids = model.generate(input_ids)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```
**NOTE THAT FOR THE MODEL TO WORK AS INTENDED, YOU NEED TO PREPEND THE 'summarize:' PREFIX TO THE INPUT TEXT**
# Training Details
## Training Data
The model was trained using the [Big Patent Dataset](https://huggingface.co/datasets/big_patent), comprising 1.3 million US patent documents and their corresponding human-written summaries. This dataset was chosen for its rich language and complex structure, representative of the challenging nature of document summarization tasks.
Training involved multiple subsets of the dataset to ensure broad coverage and robust model performance across varied document types.
## Training Procedure
Training was conducted over three rounds, with initial settings of a learning rate of 0.00002, a batch size of 8, and 4 epochs. Subsequent rounds adjusted these parameters to 0.0003, 8, and 12, respectively, to further refine model performance. In addition, a linear decay learning rate schedule was applied to improve learning efficiency over time.
# Training results
Model performance was evaluated using the ROUGE metric, highlighting its capability to generate summaries closely aligned with human-written abstracts.
| **Metric** | **Value** |
|-----------------------------------------|------------|
| Evaluation Loss (Eval Loss) | 1.9244 |
| Rouge-1 | 0.5007 |
| Rouge-2 | 0.2704 |
| Rouge-L | 0.3627 |
| Rouge-Lsum | 0.3636 |
| Average Generation Length (Gen Len) | 122.1489 |
| Runtime (seconds) | 1459.3826 |
| Samples per Second | 1.312 |
| Steps per Second | 0.164 |
# Citation
**BibTeX:**
```bibtex
@article{kipper_t5_summarizer,
// SOON
}
```
# Authors
This model card was written by [Fernanda Kipper](https://www.fernandakipper.com/) |
LoneStriker/34b-beta-4.0bpw-h6-exl2 | LoneStriker | 2024-02-18T17:11:04Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T17:03:41Z | ---
license: gpl-3.0
---
# CausalLM 34B β
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
There are some issues with the model weights in terms of precision. In the next version update, we will roll back some progress and retrain to fix these issues as soon as possible.
**Please note:** For now, do not use "accelerated inference frameworks" like **VLLM**. Instead, use Transformers for inference. Otherwise, due to precision issues, the output quality will be significantly degraded. If you need faster inference, consider using the q8_0 quantization (faster and better than bf16 VLLM for this model only) with llama.cpp in the meantime, or wait for the official version.
This will be fixed in the upcoming version update.
**Do not use repetition_penalty!**
Please do not use wikitext for quantization calibration, because all wikitext data has been re-aligned on a synthetic dataset and its distribution differs significantly from the original wikitext.
## MT-Bench: 8.5

## Some contamination detection if you want to check:
| Models | MMLU (ref: llama7b) | TBA |
| ------------------------- | ------------------- | ---- |
| microsoft/Orca-2-7b | 0.77 | |
| mistralai/Mistral-7B-v0.1 | 0.46 | |
| **CausalLM/34b-beta** | **0.38** | |
| 01-ai/Yi-6B-200K | 0.3 | |
data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector
It should be *safe*. It was not trained on the benchmark, but the contamination of the training dataset is unavoidable due to cost constraints. |
lkntrp/ppo-LunarLander-v2 | lkntrp | 2024-02-18T17:03:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-18T17:02:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.67 +/- 14.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository files for the exact archive name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption; check this repo's files for the exact archive name.
checkpoint = load_from_hub(repo_id="lkntrp/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
terryyz/starcoder-codecot | terryyz | 2024-02-18T16:44:07Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"region:us"
] | null | 2024-02-18T16:43:57Z | ---
library_name: peft
base_model: bigcode/starcoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
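Pending official instructions, and assuming (as the repository metadata suggests) that this is a PEFT adapter on top of `bigcode/starcoder`, a minimal loading sketch might look like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repository holds a PEFT adapter for bigcode/starcoder.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
base = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto")
model = PeftModel.from_pretrained(base, "terryyz/starcoder-codecot")

prompt = "# Return the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```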
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
jan-hq/stealth-finance-v4 | jan-hq | 2024-02-18T16:38:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T16:31:50Z | ---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"
>
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Training detail
You can read [here](https://huggingface.co/jan-hq/stealth-finance-v1-adapter).
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (see the sketch after this list)
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)
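Because the local server is OpenAI-compatible, a minimal sketch with the `openai` Python client might look like the following; the `/v1` base path and the model name are assumptions, so check Jan's local server settings and model list for the exact values.

```python
from openai import OpenAI

# Assumptions: "/v1" base path and the model name as it appears in Jan's model list.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="stealth-finance-v4",
    messages=[{"role": "user", "content": "Summarize the key risks in this quarterly report."}],
)
print(response.choices[0].message.content)
```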

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. |
duraad/nep-spell-mt5-small-00 | duraad | 2024-02-18T16:34:20Z | 20 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-18T15:35:15Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: nep-spell-mt5-small-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nep-spell-mt5-small-00
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
sudoLife/tst-summarization | sudoLife | 2024-02-18T16:28:21Z | 71 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-07T10:20:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: tst-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: cnn_dailymail 3.0.0
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 41.607
language:
- en
library_name: transformers
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6418
- Rouge1: 41.607
- Rouge2: 19.2272
- Rougel: 29.4514
- Rougelsum: 38.8228
- Gen Len: 73.8731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3 |
danielhanchen/lora_19022024 | danielhanchen | 2024-02-18T16:27:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-18T16:27:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenhaodev/smaug-34b-v0.1-onc-v1 | chenhaodev | 2024-02-18T16:25:29Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:abacusai/Smaug-34B-v0.1",
"base_model:adapter:abacusai/Smaug-34B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-18T16:19:54Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: abacusai/Smaug-34B-v0.1
model-index:
- name: model-update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-update
This model is a fine-tuned version of [abacusai/Smaug-34B-v0.1](https://huggingface.co/abacusai/Smaug-34B-v0.1) on the oncc_medqa_instruct dataset.
## Training procedure
```
git clone https://github.com/chenhaodev/LLaMA-Factory; cd LLaMA-Factory; pip install -r requirements.txt;
python create_pods.py 'Qwen/Qwen-72B' 'NVIDIA A100 80GB PCIe' 1 xxx xxx xxx 6rhltjf914
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.0.1+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2 |
yurujaja/DGInStyle | yurujaja | 2024-02-18T16:24:59Z | 3 | 7 | diffusers | [
"diffusers",
"license:cc-by-sa-4.0",
"diffusers:StableDiffusionControlNetRefinePipeline",
"region:us"
] | null | 2023-12-13T11:25:33Z | ---
license: cc-by-sa-4.0
---
# DGInStyle Model Weights
- Stable Diffusion model weights
  - Source-domain (GTA) fine-tuned weights
- ControlNet model weights
  - Initialized and fine-tuned with the source-domain fine-tuned Stable Diffusion
- SegFormer (MiT-B5 backbone) model weights
  - DAFormer (+DGInStyle) model weights
  - HRDA (+DGInStyle) model weights
DHEIVER/finetuned-BreastCancer-Classification | DHEIVER | 2024-02-18T16:13:10Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-large-patch16-224",
"base_model:finetune:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-18T16:05:17Z | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0146
- Accuracy: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5847 | 1.0 | 199 | 0.8030 | 0.4640 |
| 0.2856 | 2.0 | 398 | 0.9354 | 0.1753 |
| 0.156 | 3.0 | 597 | 0.9552 | 0.1179 |
| 0.1049 | 4.0 | 796 | 0.9585 | 0.1043 |
| 0.1399 | 5.0 | 995 | 0.9760 | 0.0673 |
| 0.0423 | 6.0 | 1194 | 0.9802 | 0.0455 |
| 0.078 | 7.0 | 1393 | 0.9802 | 0.0554 |
| 0.1769 | 8.0 | 1592 | 0.9764 | 0.0556 |
| 0.0568 | 9.0 | 1791 | 0.9807 | 0.0569 |
| 0.0728 | 10.0 | 1990 | 0.9915 | 0.0234 |
| 0.0229 | 11.0 | 2189 | 0.9910 | 0.0240 |
| 0.0561 | 12.0 | 2388 | 0.9901 | 0.0352 |
| 0.014 | 13.0 | 2587 | 0.9797 | 0.0749 |
| 0.096 | 14.0 | 2786 | 0.9934 | 0.0268 |
| 0.0005 | 15.0 | 2985 | 0.9958 | 0.0146 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|