The dataset has the following columns:

| Column | Type | Range / Notes |
|---|---|---|
| `modelId` | string | length 5–137 |
| `author` | string | length 2–42 |
| `last_modified` | date | 2020-02-15 11:33:14 – 2025-04-01 00:42:32 |
| `downloads` | int64 | 0 – 223M |
| `likes` | int64 | 0 – 11.7k |
| `library_name` | string | 405 classes |
| `tags` | sequence | length 1 – 4.05k |
| `pipeline_tag` | string | 54 classes |
| `createdAt` | date | 2022-03-02 23:29:04 – 2025-04-01 00:42:15 |
| `card` | string | length 11 – 1.01M |
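A dataset with this schema can be loaded with the `datasets` library. A minimal sketch — the dataset id below is a placeholder, not the actual repo path:

```python
from datasets import load_dataset

# Placeholder dataset id -- substitute this dataset's actual repo path.
ds = load_dataset("some-namespace/model-cards-with-metadata", split="train")
print(ds[0]["modelId"], ds[0]["downloads"], ds[0]["likes"])
```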
guocheng66/q-learning-taxiv3 | guocheng66 | "2023-10-16T01:51:01Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-16T01:50:28Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # use `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="guocheng66/q-learning-taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
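Once loaded, the Q-table can be rolled out greedily. A minimal sketch, assuming the course-style model dict exposes a `qtable` array of shape `(n_states, n_actions)` and the classic four-value `gym` step API (newer `gymnasium` returns five values and `reset()` returns a tuple):

```python
import numpy as np

state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```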
|
Alissonerdx/YuE-s1-7B-anneal-en-icl-nf4 | Alissonerdx | "2025-02-03T10:21:10Z" | 80 | 0 | null | [
"safetensors",
"llama",
"yue",
"music",
"suno",
"base_model:m-a-p/YuE-s1-7B-anneal-en-icl",
"base_model:quantized:m-a-p/YuE-s1-7B-anneal-en-icl",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-30T22:03:41Z" | ---
license: apache-2.0
base_model:
- m-a-p/YuE-s1-7B-anneal-en-icl
tags:
- yue
- music
- suno
---
# YuE Quantized Models
Welcome to the repository for **YuE Quantized Models**! These models are quantized versions of the original YuE models, optimized for efficient inference while maintaining high-quality music generation capabilities. You can use these models directly or through the **YuE Interface**, a user-friendly Docker-based solution for generating music.
## 🚀 YuE Interface
To easily interact with these models, check out the **[YuE Interface](https://github.com/alisson-anjos/YuE-Interface)**, a robust and intuitive Docker-based interface that leverages Gradio for a seamless music generation experience. The interface supports both local deployment and cloud-based solutions like RunPod.
### Key Features of the YuE Interface:
- **Docker Image**: Pre-configured for easy setup.
- **Web UI (Gradio)**: Intuitive interface for configuring and executing music generation tasks.
- **NVIDIA GPU Support**: Accelerated processing for faster results.
- **Model Management**: Download and manage specific YuE models.
- **Real-time Logging**: Monitor generation logs directly from the interface.
- **Audio Playback and Download**: Listen to and download generated audio files.
For detailed instructions on how to use these models with the YuE Interface, please refer to the **[YuE Interface README](https://github.com/alisson-anjos/YuE-Interface)**.
## Available Quantized Models
Below is the list of quantized models available in this repository:
| Model Name | Quantization | Hugging Face Link |
|-------------------------------------|--------------|-----------------------------------------------------------------------------------|
| `YuE-s1-7B-anneal-en-cot-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-en-cot-int8) |
| `YuE-s1-7B-anneal-en-icl-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-en-icl-int8) |
| `YuE-s1-7B-anneal-jp-kr-cot-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-jp-kr-cot-int8) |
| `YuE-s1-7B-anneal-jp-kr-icl-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-jp-kr-icl-int8) |
| `YuE-s1-7B-anneal-zh-cot-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-zh-cot-int8) |
| `YuE-s1-7B-anneal-zh-icl-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-zh-icl-int8) |
| `YuE-s2-1B-general-int8` | INT8 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s2-1B-general-int8) |
| `YuE-s1-7B-anneal-en-cot-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-en-cot-nf4) |
| `YuE-s1-7B-anneal-en-icl-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-en-icl-nf4) |
| `YuE-s1-7B-anneal-jp-kr-cot-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-jp-kr-cot-nf4) |
| `YuE-s1-7B-anneal-jp-kr-icl-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-jp-kr-icl-nf4) |
| `YuE-s1-7B-anneal-zh-cot-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-zh-cot-nf4) |
| `YuE-s1-7B-anneal-zh-icl-nf4` | NF4 | [Model Link](https://huggingface.co/Alissonerdx/YuE-s1-7B-anneal-zh-icl-nf4) |
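These checkpoints are saved pre-quantized with bitsandbytes, so they can typically be loaded directly with `transformers`. A minimal sketch, assuming the repository ships its tokenizer and quantization config and that `bitsandbytes` plus a CUDA GPU are available (swap in whichever repo id you need):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Alissonerdx/YuE-s1-7B-anneal-en-icl-nf4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The stored 4-bit (NF4) weights load with their saved quantization config.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```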
## 💬 Support
If you encounter any issues or have questions, feel free to open an issue on the **[YuE Interface GitHub repository](https://github.com/alisson-anjos/YuE-Interface)** or contact me via my [CivitAI profile](https://civitai.com/user/alissonerdx).
## 🙏 Acknowledgements
A special thanks to the developers of the official **[YuE repository](https://github.com/multimodal-art-projection/YuE)** for their incredible work and for making this project possible.
---
**Happy Music Generating! 🎶**
--- |
MayaPH/GodziLLa2-70B | MayaPH | "2024-01-12T03:52:58Z" | 1,591 | 38 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"merge",
"mix",
"cot",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2109.07958",
"arxiv:1907.10641",
"arxiv:2110.14168",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-10T17:05:37Z" | ---
pipeline_tag: text-generation
license: llama2
inference: false
tags:
- merge
- mix
- cot
datasets:
- mlabonne/guanaco-llama2-1k
---

Released August 11, 2023
## Model Description
GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and the [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) with LLaMA 2 70B. This model's primary purpose is to stress-test the limitations of composite, instruction-following LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted on the leaderboard at rank #4 (August 17, 2023), rose to rank #2 in the Fall 2023 update (November 10, 2023), and operates under the Llama 2 license.

## Open LLM Leaderboard Metrics (Fall 2023 update)
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 69.88 |
| ARC (25-shot) | 71.42 |
| HellaSwag (10-shot) | 87.53 |
| TruthfulQA (0-shot) | 61.54 |
| Winogrande (5-shot) | 83.19 |
| GSM8K (5-shot) | 43.21 |
| DROP (3-shot) | 52.31 |
| Average (w/ DROP) | 67.01 |
| Average (w/o DROP) | 69.46 |
Note: As of December 1, 2023, [DROP](https://arxiv.org/abs/1903.00161) is removed from the leaderboard benchmarks.
According to the leaderboard description, here are the benchmarks used for the evaluation:
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) (ARC) (25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
- [Winogrande](https://arxiv.org/abs/1907.10641) (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- [GSM8k](https://arxiv.org/abs/2110.14168) (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
- [DROP](https://arxiv.org/abs/1903.00161) (3-shot) - English reading comprehension benchmark requiring Discrete Reasoning Over the content of Paragraphs.
A detailed breakdown of the evaluation can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B). Huge thanks to [@thomwolf](https://huggingface.co/thomwolf).
## Open LLM Leaderboard Metrics (before Fall 2023 update)
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 69.88 |
| ARC (25-shot) | 71.42 |
| HellaSwag (10-shot) | 87.53 |
| TruthfulQA (0-shot) | 61.54 |
| Average | 72.59 |
## Leaderboard Highlights (Fall 2023 update, November 10, 2023)
- Godzilla 2 70B debuts at 2nd place worldwide in the newly updated Open LLM Leaderboard.
- Godzilla 2 70B beats GPT-3.5 (ChatGPT) in terms of average performance and the HellaSwag benchmark (87.53 > 85.5).
- Godzilla 2 70B outperforms GPT-3.5 (ChatGPT) and GPT-4 on the TruthfulQA benchmark (61.54 for G2-70B, 47 for GPT-3.5, 59 for GPT-4).
- Godzilla 2 70B is on par with GPT-3.5 (ChatGPT) on the MMLU benchmark (within 0.12%).
*Based on a [leaderboard clone](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) with GPT-3.5 and GPT-4 included.
### Reproducing Evaluation Results
*Instruction template taken from [Platypus 2 70B instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/G270B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/G270B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/G270B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/G270B/truthfulqa_0shot.json --device cuda
```
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
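Programmatically, wrapping a user prompt in this template is a one-liner. A minimal sketch (the helper name is illustrative, not part of the original card):

```python
def format_prompt(prompt: str) -> str:
    # Wrap a raw user prompt in the instruction template above.
    return f"### Instruction:\n{prompt}\n\n### Response:\n"

print(format_prompt("Summarize the premise of Godzilla in one sentence."))
```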
## Technical Considerations
When using GodziLLa 2 70B, kindly take note of the following:
- The default precision is `fp32`, and the total file size loaded into RAM/VRAM is around 275 GB. Consider using a lower precision (fp16, int8, int4) to save memory.
- To further save on memory, set the `low_cpu_mem_usage` argument to True (see the sketch after this list).
- If you wish to use a quantized version of GodziLLa2-70B, you can use either TheBloke's [GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ) or [GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML) version:
- [GodziLLa2-70B-GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ#description) is available in 4-bit and 3-bit
- [GodziLLa2-70B-GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML#provided-files) is available in 8-bit, 6-bit, 5-bit, 4-bit, 3-bit, and 2-bit
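Following the notes above, a minimal loading sketch in half precision with reduced CPU memory usage — an assumption-laden example, not the card's official snippet; it presumes enough GPU memory for a 70B model in fp16:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "MayaPH/GodziLLa2-70B",
    torch_dtype=torch.float16,   # lower precision instead of the fp32 default
    low_cpu_mem_usage=True,      # stream weights to cut peak RAM while loading
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MayaPH/GodziLLa2-70B")
```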
## Ethical Considerations
When using GodziLLa 2 70B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of various datasets and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa 2 70B, please contact the Maya Philippines iOps Team via [email protected].
## Disclaimer
GodziLLa 2 70B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa 2 70B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k). Last but not least, huge thanks to [TheBloke](https://huggingface.co/TheBloke) for the quantized models, making our model easily accessible to a wider community. |
marvelo2506/dqn-SpaceInvadersNoFrameskip-v4 | marvelo2506 | "2023-12-26T20:30:40Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-26T20:30:10Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 475.50 +/- 83.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marvelo2506 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same two commands from anywhere.
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga marvelo2506
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
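These hyperparameters map onto SB3's `DQN` constructor. A minimal sketch of instantiating an equivalent agent directly, outside the RL Zoo (wrapper and frame-stacking setup simplified):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper is applied by make_atari_env; frames are stacked as in the config.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4"), n_stack=4)
model = DQN(
    "CnnPolicy", env,
    buffer_size=100_000, batch_size=32, learning_rate=1e-4,
    learning_starts=100_000, target_update_interval=1000,
    train_freq=4, gradient_steps=1,
    exploration_fraction=0.1, exploration_final_eps=0.01,
)
model.learn(total_timesteps=1_000_000)
```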
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
osllmai-community/Qwen2.5-0.5B | osllmai-community | "2025-01-24T14:47:40Z" | 131 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-16T05:36:50Z" | ---
base_model: Qwen/Qwen2.5-0.5B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
**osllm.ai Models Highlights Program**
**We believe there's no need to pay per token if you have a GPU on your computer.**
Highlighting new and noteworthy models from the community. Join the conversation on Discord.
<p align="center">
<a href="https://osllm.ai">Official Website</a> • <a href="https://docs.osllm.ai/index.html">Documentation</a> • <a href="https://discord.gg/2fftQauwDD">Discord</a>
</p>
<p align="center">
<b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
</p>
Email: [email protected]
**Disclaimers**
[Osllm.ai](https://osllm.ai/) is not the creator, originator, or owner of any model featured in the Community Model Program. Each Community Model is created and provided by third parties. [Osllm.ai](https://osllm.ai/) does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. [Osllm.ai](https://osllm.ai/) may not monitor or control the Community Models and cannot take responsibility for them. [Osllm.ai](https://osllm.ai/) disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. Furthermore, [Osllm.ai](https://osllm.ai/) disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted, error-free, virus-free, or that any issues will be corrected. You are solely responsible for any damage resulting from your use of or access to the Community Models, downloading of any Community Model, or use of any other Community Model provided by or through [Osllm.ai](https://osllm.ai/).
|
togethercomputer/LLaMA-2-7B-32K | togethercomputer | "2024-03-28T01:14:07Z" | 10,119 | 538 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:togethercomputer/RedPajama-Data-Instruct",
"dataset:EleutherAI/pile",
"dataset:togethercomputer/Long-Data-Collections",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-26T02:19:41Z" | ---
license: llama2
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
- EleutherAI/pile
- togethercomputer/Long-Data-Collections
language:
- en
library_name: transformers
---
# LLaMA-2-7B-32K
## Model Description
LLaMA-2-7B-32K is an open-source, long-context language model developed by Together, fine-tuned from Meta's original Llama-2 7B model.
This model represents our efforts to contribute to the rapid progress of the open-source ecosystem for large language models.
The model has been extended to a context length of 32K with position interpolation,
enabling applications such as multi-document QA, long-text summarization, and more.
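Position interpolation linearly rescales RoPE position indices so the extended window maps back onto the position range seen during pre-training. A minimal sketch of the idea (not the model's actual implementation), using Llama-2's 4K training context and the 32K / 4K = 8 extension factor:

```python
import torch

def interpolated_rope_angles(seq_len: int, dim: int, base: float = 10000.0, scale: float = 8.0):
    # Standard RoPE inverse frequencies for each pair of head dimensions ...
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # ... applied to positions shrunk by the extension factor (the interpolation step).
    positions = torch.arange(seq_len).float() / scale
    return torch.outer(positions, inv_freq)  # (seq_len, dim // 2) rotation angles
```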
## What's new?
This model introduces several improvements and new features:
1. **Extended Context:** The model has been trained to handle context lengths up to 32K, which is a significant improvement over the previous versions.
2. **Pre-training and Instruction Tuning:** We have shared our data recipe, which consists of a mixture of pre-training and instruction tuning data.
3. **Fine-tuning Examples:** We provide examples of how to fine-tune the model for specific applications, including book summarization and long context question and answering.
4. **Software Support:** We have updated both the inference and training stack to allow efficient inference and fine-tuning for 32K context.
## Model Architecture
The model follows the architecture of Llama-2-7B and extends it to handle a longer context. It leverages the recently released FlashAttention-2 and a range of other optimizations to improve the speed and efficiency of inference and training.
## Training and Fine-tuning
The model has been trained using a mixture of pre-training and instruction tuning data.
- In the first training phase of continued pre-training, our data mixture contains 25% RedPajama Book, 25% RedPajama ArXiv (including abstracts), 25% other data from RedPajama, and 25% from the UL2 Oscar Data, which is a part of OIG (Open-Instruction-Generalist), asking the model to fill in missing chunks or complete the text.
To enhance the long-context ability, we exclude data shorter than 2K words. The inclusion of UL2 Oscar Data is effective in compelling the model to read and utilize long-range context.
- We then fine-tune the model to focus on its few-shot capacity under long context, including 20% Natural Instructions (NI), 20% Public Pool of Prompts (P3), and 20% of the Pile. We decontaminated all data against HELM core scenarios. We teach the model to leverage the in-context examples by packing examples into one 32K-token sequence. To maintain the knowledge learned from the first piece of data, we incorporate 20% RedPajama-Data Book and 20% RedPajama-Data ArXiv.
Next, we provide examples of how to fine-tune the model for specific applications.
The example datasets are placed in [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)
You can use the [OpenChatKit](https://github.com/togethercomputer/OpenChatKit) to fine-tune your own 32K model on top of LLaMA-2-7B-32K.
Please refer to [OpenChatKit](https://github.com/togethercomputer/OpenChatKit) for step-by-step instructions.
1. Long Context QA.
We take as an example the multi-document question answering task from the paper “Lost in the Middle: How Language Models Use Long Contexts”. The input for the model consists of (i) a question that requires an answer and (ii) k documents, which are passages extracted from Wikipedia. Notably, only one of these documents contains the answer to the question, while the remaining k − 1 documents, termed “distractor” documents, do not. To successfully perform this task, the model must identify and utilize the document containing the answer from its input context.
With OCK, simply run the following command to fine-tune:
```
bash training/finetune_llama-2-7b-32k-mqa.sh
```
2. Summarization.
Another example is BookSum, a unique dataset designed to address the challenges of long-form narrative summarization. This dataset features source documents from the literature domain, including novels, plays, and stories, and offers human-written, highly abstractive summaries. Here we focus on chapter-level data. BookSum poses a unique set of challenges, necessitating that the model comprehensively read through each chapter.
With OCK, simply run the following command to fine-tune:
```
bash training/finetune_llama-2-7b-32k-booksum.sh
```
## Inference
You can use the [Together API](https://together.ai/blog/api-announcement) to try out LLaMA-2-7B-32K for inference.
The updated inference stack allows for efficient inference.
To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
You can use this model directly from the Hugging Face Model Hub or fine-tune it on your own data using the OpenChatKit.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", trust_remote_code=True, torch_dtype=torch.float16)
input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
output = model.generate(input_ids, max_length=128, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
Alternatively, you can set `trust_remote_code=False` if you prefer not to use flash attention.
## Limitations and Bias
As with all language models, LLaMA-2-7B-32K may generate incorrect or biased content. It's important to keep this in mind when using the model.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
vermoney/f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c | vermoney | "2025-01-23T07:55:17Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T07:31:10Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7e8c233e95996edb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e8c233e95996edb_train_data.json
type:
field_input: label
field_instruction: text
field_output: text-english
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vermoney/f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/7e8c233e95996edb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
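The card ships no usage snippet; a minimal sketch of loading the LoRA adapter on top of its base model with the standard PEFT API might look like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/codellama-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "vermoney/f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b")
```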
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0017 | 10 | nan |
| 0.0 | 0.0025 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jaysnotbad/Llama-3.1-8B-query_tuned | Jaysnotbad | "2024-12-30T23:42:02Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-30T23:38:02Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jaysnotbad
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Helsinki-NLP/opus-mt-tc-big-ar-en | Helsinki-NLP | "2023-08-16T12:10:50Z" | 21,191 | 16 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ar",
"en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-04-13T15:18:06Z" | ---
language:
- ar
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-ar-en
results:
- task:
name: Translation ara-eng
type: translation
args: ara-eng
dataset:
name: flores101-devtest
type: flores_101
args: ara eng devtest
metrics:
- name: BLEU
type: bleu
value: 42.6
- task:
name: Translation ara-eng
type: translation
args: ara-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ara-eng
metrics:
- name: BLEU
type: bleu
value: 47.3
- task:
name: Translation ara-eng
type: translation
args: ara-eng
dataset:
name: tico19-test
type: tico19-test
args: ara-eng
metrics:
- name: BLEU
type: bleu
value: 44.4
---
# opus-mt-tc-big-ar-en
Neural machine translation model for translating from Arabic (ar) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): afb ara arz
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT ara-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-eng/README.md)
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"اتبع قلبك فحسب.",
"وين راهي دّوش؟"
]
model_name = "pytorch-models/opus-mt-tc-big-ar-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Just follow your heart.
# Wayne Rahi Dosh?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-ar-en")
print(pipe("اتبع قلبك فحسب."))
# expected output: Just follow your heart.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ara-eng | tatoeba-test-v2021-08-07 | 0.63477 | 47.3 | 10305 | 76975 |
| ara-eng | flores101-devtest | 0.66987 | 42.6 | 1012 | 24721 |
| ara-eng | tico19-test | 0.68521 | 44.4 | 2100 | 56323 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:17:57 EEST 2022
* port machine: LM0-400-22516.local
|
NBA55/llama2-qlora-finetunined-4-bit-4.14k-dataset-1e4-learning-rate | NBA55 | "2024-01-04T19:50:24Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-01-04T19:50:05Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
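For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig` when reloading the base model the same way (a sketch, not part of the original card):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```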
### Framework versions
- PEFT 0.4.0
|
Nonin/Taxi3_V1_v2 | Nonin | "2023-02-15T07:26:23Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-15T07:26:13Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi3_V1_v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # use `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Nonin/Taxi3_V1_v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
camidenecken/RoBERTa-RM1-v2-rm-v10 | camidenecken | "2024-10-23T18:46:19Z" | 179 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-23T18:46:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FunAudioLLM/CosyVoice-300M-SFT | FunAudioLLM | "2024-12-27T11:09:04Z" | 715 | 8 | cosyvoice | [
"cosyvoice",
"onnx",
"text-to-speech",
"region:us"
] | text-to-speech | "2024-07-18T10:16:33Z" | ---
pipeline_tag: text-to-speech
library_name: cosyvoice
---
# CosyVoice
## 👉🏻 [CosyVoice Demos](https://fun-audio-llm.github.io/) 👈🏻
[[CosyVoice Paper](https://fun-audio-llm.github.io/pdf/CosyVoice_v1.pdf)][[CosyVoice Studio](https://www.modelscope.cn/studios/iic/CosyVoice-300M)][[CosyVoice Code](https://github.com/FunAudioLLM/CosyVoice)]
For `SenseVoice`, visit [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice) and [SenseVoice space](https://www.modelscope.cn/studios/iic/SenseVoice).
## Install
**Clone and install**
- Clone the repo
``` sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you failed to clone the submodule due to network failures, please run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create Conda env:
``` sh
conda create -n cosyvoice python=3.8
conda activate cosyvoice
# pynini is required by WeTextProcessing; use conda to install it, as it can be executed on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```
**Model download**
We strongly recommend that you download our pretrained `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.
``` python
# Download the models via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```
``` sh
# Download the models via git; make sure git lfs is installed
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
```
Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
Note that this step is not necessary; if you do not install the `ttsfrd` package, WeTextProcessing will be used by default.
``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
```
**Basic Usage**
For zero_shot/cross_lingual inference, please use `CosyVoice-300M` model.
For sft inference, please use `CosyVoice-300M-SFT` model.
For instruct inference, please use `CosyVoice-300M-Instruct` model.
First, add `third_party/Matcha-TTS` to your `PYTHONPATH`.
``` sh
export PYTHONPATH=third_party/Matcha-TTS
```
``` python
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT')
# sft usage
print(cosyvoice.list_avaliable_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], 22050)
# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], 22050)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], 22050)
```
**Start web demo**
You can use our web demo page to get familiar with CosyVoice quickly.
We support sft/zero_shot/cross_lingual/instruct inference in the web demo.
Please see the demo website for details.
``` python
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```
**Advanced Usage**
For advanced users, we have provided train and inference scripts in `examples/libritts/cosyvoice/run.sh`.
You can get familiar with CosyVoice by following this recipe.
**Build for deployment**
Optionally, if you want to use gRPC for service deployment,
you can run the following steps. Otherwise, you can simply skip this step.
``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && MODEL_DIR=iic/CosyVoice-300M fastapi dev --port 50000 server.py && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```
## Discussion & Communication
You can discuss directly on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
You can also scan the QR code to join our official Dingding chat group.
<img src="./asset/dingding.png" width="250px">
## Acknowledge
1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
## Disclaimer
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
|
Moiz2517/qwen2.5-pythoncoder-lora-adapter | Moiz2517 | "2025-02-21T22:38:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-21T22:31:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baby-dev/d3bb91ce-11b3-4e99-8004-cb2e626ff7e4 | baby-dev | "2025-02-11T21:36:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | "2025-02-11T21:13:11Z" | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d3bb91ce-11b3-4e99-8004-cb2e626ff7e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d3bb91ce-11b3-4e99-8004-cb2e626ff7e4
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
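Since the usage sections above are empty, here is a minimal, untested sketch of how the LoRA adapter produced by this run could be attached to its base model with PEFT. The repo and base-model ids come from this card's metadata; the prompt and generation settings are purely illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "berkeley-nest/Starling-LM-7B-alpha"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter trained in this run.
model = PeftModel.from_pretrained(base, "baby-dev/d3bb91ce-11b3-4e99-8004-cb2e626ff7e4")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```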
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
manred1997/xlnet-large_lemon-spell_5k | manred1997 | "2024-10-26T02:52:58Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"gec",
"grammar",
"token-classification",
"en",
"base_model:xlnet/xlnet-large-cased",
"base_model:finetune:xlnet/xlnet-large-cased",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-10-24T08:04:10Z" | ---
library_name: transformers
tags:
- gec
- grammar
language:
- en
metrics:
- accuracy
base_model:
- xlnet/xlnet-large-cased
pipeline_tag: token-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a grammar error correction (GEC) system fine-tuned from the `xlnet/xlnet-large-cased` model, designed to detect and correct grammatical errors in English text. The model focuses on common grammatical mistakes such as verb tense, noun inflection, adjective usage, and more. It is particularly useful for language learners or applications requiring enhanced grammatical precision.
- **Model type:** Token classification with sequence-to-sequence correction
- **Language(s) (NLP):** English
- **Finetuned from model:** `xlnet/xlnet-large-cased`
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly for grammar error detection and correction in English texts. It's ideal for integration into writing assistants, educational software, or proofreading tools.
### Downstream Use
The model can be fine-tuned for specific domains like academic writing, business communication, or informal text correction to improve precision on context-specific grammar errors.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model is not suitable for non-English text, non-grammatical corrections (e.g., style, tone, or logic), or detecting complex errors beyond basic grammar structures.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on general English corpora and may underperform on non-standard dialects (e.g., spoken language) or domain-specific jargon. Users should be cautious when applying it in such contexts, as it might introduce or overlook errors due to limitations in its training data.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
While the model provides strong general performance, users should manually review corrections, especially in specialized or creative contexts where grammar rules can be more fluid.
## How to Get Started with the Model
Use the following code to get started with the model:
```python
from dataclasses import dataclass
from typing import Optional, Tuple
import torch
from torch import nn
from torch.nn import CrossEntropyLoss
from transformers import AutoConfig, AutoTokenizer
from transformers.file_utils import ModelOutput
from transformers.models.xlnet.modeling_xlnet import XLNetModel, XLNetPreTrainedModel
@dataclass
class XGECToROutput(ModelOutput):
"""
Output type of `XGECToRForTokenClassification.forward()`.
loss (`torch.FloatTensor`, optional)
logits_correction (`torch.FloatTensor`) : The correction logits for each token.
logits_detection (`torch.FloatTensor`) : The detection logits for each token.
hidden_states (`Tuple[torch.FloatTensor]`, optional)
attentions (`Tuple[torch.FloatTensor]`, optional)
"""
loss: Optional[torch.FloatTensor] = None
logits_correction: torch.FloatTensor = None
logits_detection: torch.FloatTensor = None
hidden_states: Optional[Tuple[torch.FloatTensor]] = None
attentions: Optional[Tuple[torch.FloatTensor]] = None
class XGECToRXLNet(XLNetPreTrainedModel):
"""
This class overrides the GECToR model to include an error detection head in addition to the token classification head.
"""
_keys_to_ignore_on_load_unexpected = [r"pooler"]
_keys_to_ignore_on_load_missing = [r"position_ids"]
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.unk_tag_idx = config.label2id.get("@@UNKNOWN@@", None)
self.transformer = XLNetModel(config)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
if self.unk_tag_idx is not None:
self.error_detector = nn.Linear(config.hidden_size, 3)
else:
self.error_detector = nn.Linear(config.hidden_size, 2)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
"""
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
outputs = self.transformer(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
sequence_output = outputs[0]
logits_correction = self.classifier(sequence_output)
logits_detection = self.error_detector(sequence_output)
loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(
logits_correction.view(-1, self.num_labels), labels.view(-1)
)
labels_detection = torch.ones_like(labels)
labels_detection[labels == 0] = 0
labels_detection[labels == -100] = -100 # ignore padding
if self.unk_tag_idx is not None:
labels_detection[labels == self.unk_tag_idx] = 2
loss_detection = loss_fct(
logits_detection.view(-1, 3), labels_detection.view(-1)
)
else:
loss_detection = loss_fct(
logits_detection.view(-1, 2), labels_detection.view(-1)
)
loss += loss_detection
if not return_dict:
output = (
logits_correction,
logits_detection,
) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return XGECToROutput(
loss=loss,
logits_correction=logits_correction,
logits_detection=logits_detection,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
def get_input_embeddings(self):
return self.transformer.get_input_embeddings()
def set_input_embeddings(self, value):
self.transformer.set_input_embeddings(value)
config = AutoConfig.from_pretrained("manred1997/xlnet-large_lemon-spell_5k")
tokenizer = AutoTokenizer.from_pretrained("manred1997/xlnet-large_lemon-spell_5k")
model = XGECToRXLNet.from_pretrained(
"manred1997/xlnet-large_lemon-spell_5k", config=config
)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We trained the model in three stages, each requiring specific datasets. Below is a description of the data used in each stage:
| Stage | Dataset(s) Used | Description |
|--------|--------|--------|
| Stage 1| Shuffled 9 million sentences from the PIE corpus (A1 part only) | 9 million shuffled sentences from the PIE corpus, focusing on A1-level sentences. |
| Stage 2| Shuffled combination of NUCLE, FCE, Lang8, W&I + Locness datasets | The Lang8 dataset contained 947,344 sentences, 52.5% of which had different source and target sentences. If using a newer Lang8 dump, consider sampling. |
| Stage 3| Shuffled version of W&I + Locness datasets | Final shuffled version of the W&I + Locness datasets. |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
The model was tested on the W&I + Locness test set, a standard benchmark for grammar error correction.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The primary evaluation metric used was F0.5, measuring the model's ability to identify and fix grammatical errors correctly.
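F0.5 combines precision and recall but weights precision twice as heavily, which suits GEC: a wrong "correction" is usually worse than a missed error. A quick illustration of the formula (the precision/recall numbers below are made up, not the model's actual values):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score; beta < 1 favours precision over recall."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative numbers only:
print(round(f_beta(precision=0.78, recall=0.55), 4))
```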
### Results
F0.5 = 72.64 |
LoneStriker/Llama-3-Refueled-8.0bpw-h8-exl2 | LoneStriker | "2024-05-09T05:03:17Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"data labeling",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-05-09T04:59:45Z" | ---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
tags:
- data labeling
---
<div style="width: auto; margin-left: auto; margin-right: auto; background-color:black">
<img src="https://assets-global.website-files.com/6423879a8f63c1bb18d74bfa/648818d56d04c3bdf36d71ab_Refuel_rev8-01_ts-p-1600.png" alt="Refuel.ai" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
## Model Details
RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.
* More details about [RefuelLLM-2 family of models](https://www.refuel.ai/blog-posts/announcing-refuel-llm-2)
* You can also try out the models in our [LLM playground](https://labs.refuel.ai/playground)
**Model developers** - Refuel AI
**Input** - Text only.
**Output** - Text only.
**Architecture** - Llama-3-Refueled is built on top of Llama-3-8B-instruct, which is an auto-regressive language model that uses an optimized transformer architecture.
**Release Date** - May 8, 2024.
## How to use
This repository contains weights for Llama-3-Refueled that are compatible for use with HuggingFace. See the snippet below for usage with Transformers:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model_id = "refuelai/Llama-3-Refueled"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
>>> messages = [{"role": "user", "content": "Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!"}]
>>> inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
>>> outputs = model.generate(inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0]))
```
## Training Data
The model was trained on over 4 billion tokens spanning 2750+ NLP tasks. Our training collection consists mainly of:
1. Human annotated datasets like Flan, Task Source, and the Aya collection
2. Synthetic datasets like OpenOrca, OpenHermes and WizardLM
3. Proprietary datasets developed or licensed by Refuel AI
## Benchmarks
In this section, we report the results for Refuel models on our benchmark of labeling tasks. For details on the methodology see [here](https://refuel.ai/blog-posts/announcing-refuel-llm-2).
<table>
<tr></tr>
<tr><th>Provider</th><th>Model</th><th colspan="5" style="text-align: center">LLM Output Quality (by task type)</th></tr>
<tr><td></td><td></td><td>Overall</td><td>Classification</td><td>Reading Comprehension</td><td>Structure Extraction</td><td>Entity Matching</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2</td><td>83.82%</td><td>84.94%</td><td>76.03%</td><td>88.16%</td><td>92.00%</td><td></td></tr>
<tr><td>OpenAI</td><td>GPT-4-Turbo</td><td>80.88%</td><td>81.77%</td><td>72.08%</td><td>84.79%</td><td>97.20%</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2-small (Llama-3-Refueled)</td><td>79.67%</td><td>81.72%</td><td>70.04%</td><td>84.28%</td><td>92.00%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Opus</td><td>79.19%</td><td>82.49%</td><td>67.30%</td><td>88.25%</td><td>94.96%</td><td></td></tr>
<tr><td>Meta</td><td>Llama3-70B-Instruct</td><td>78.20%</td><td>79.38%</td><td>66.03%</td><td>85.96%</td><td>94.13%</td><td></td></tr>
<tr><td>Google</td><td>Gemini-1.5-Pro</td><td>74.59%</td><td>73.52%</td><td>60.67%</td><td>84.27%</td><td>98.48%</td><td></td></tr>
<tr><td>Mistral</td><td>Mixtral-8x7B-Instruct</td><td>62.87%</td><td>79.11%</td><td>45.56%</td><td>47.08%</td><td>86.52%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Sonnet</td><td>70.99%</td><td>79.91%</td><td>45.44%</td><td>78.10%</td><td>96.34%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Haiku</td><td>69.23%</td><td>77.27%</td><td>50.19%</td><td>84.97%</td><td>54.08%</td><td></td></tr>
<tr><td>OpenAI</td><td>GPT-3.5-Turbo</td><td>68.13%</td><td>74.39%</td><td>53.21%</td><td>69.40%</td><td>80.41%</td><td></td></tr>
<tr><td>Meta</td><td>Llama3-8B-Instruct</td><td>62.30%</td><td>68.52%</td><td>49.16%</td><td>65.09%</td><td>63.61%</td><td></td></tr>
</table>
## Limitations
Llama-3-Refueled does not have any moderation mechanisms. We look forward to engaging with the community
on ways to make the model respect guardrails more reliably, allowing for deployment in environments that require moderated outputs. |
hmyrcmn/cvMentorMatch | hmyrcmn | "2024-06-10T10:30:39Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-06-10T09:06:26Z" | ---
license: mit
language: en
---
# My BERT Model
This is a BERT model fine-tuned for extracting embeddings from CVs and startup descriptions for matching purposes.
## Model Details
- **Architecture:** BERT-base-uncased
- **Use case:** CV and Startup matching
- **Training data:** Not applicable (pre-trained model used)
## How to use
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("hmyrcmn/cvMentorMatch")
model = BertModel.from_pretrained("hmyrcmn/cvMentorMatch")

text = "Sample text"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
outputs = model(**inputs)
# Mean-pool the token embeddings into one fixed-size vector per input.
embedding = outputs.last_hidden_state.mean(dim=1).detach().numpy()
```
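To actually match a CV against startup descriptions, the two embeddings can be compared with cosine similarity. Below is a minimal sketch building on the snippet above; the `embed` helper and the sample texts are illustrative and not part of the original card:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Encode one text into a mean-pooled BERT embedding."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    return model(**inputs).last_hidden_state.mean(dim=1).detach().numpy()[0]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cv = embed("Senior ML engineer with five years of NLP experience")
startup = embed("Early-stage startup building NLP tooling for recruiters")
print(cosine(cv, startup))  # higher score = better match
```
|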
Manduzamzam/segformer-finetuned-sidewalk-10k-steps | Manduzamzam | "2023-09-26T07:46:17Z" | 201 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-09-25T23:22:12Z" | ---
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-sidewalk-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Manduzamzam/practice2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6829
- Mean Iou: 0.0140
- Mean Accuracy: 0.0279
- Overall Accuracy: 0.0279
- Accuracy Background: nan
- Accuracy Object: 0.0279
- Iou Background: 0.0
- Iou Object: 0.0279
## Model description
More information needed
## Intended uses & limitations
More information needed
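Since the card includes no usage example, here is a minimal, untested inference sketch (not from the original card). It assumes the image-processor config was pushed alongside the weights; if not, the processor from the base model `nvidia/mit-b0` can be used instead. The image path is illustrative.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "Manduzamzam/segformer-finetuned-sidewalk-10k-steps"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")  # illustrative local file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, h/4, w/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids
```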
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Object | Iou Background | Iou Object |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:---------------:|:--------------:|:----------:|
| No log | 0.71 | 10 | 0.6829 | 0.0140 | 0.0279 | 0.0279 | nan | 0.0279 | 0.0 | 0.0279 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.14.5
- Tokenizers 0.14.0
|
bthomas/article2KW_test2.0c_barthez-orangesum-title_finetuned_for_mlm_77153 | bthomas | "2022-10-11T07:53:06Z" | 184 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-10-11T07:39:51Z" | ---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2KW_test2.0c_barthez-orangesum-title_finetuned_for_mlm_77153
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2KW_test2.0c_barthez-orangesum-title_finetuned_for_mlm_77153
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0474
## Model description
More information needed
## Intended uses & limitations
More information needed
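The repository name suggests the model generates keywords from articles (`article2KW`); assuming that is the task, a minimal, untested sketch of loading the seq2seq checkpoint follows (the French input text and generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "bthomas/article2KW_test2.0c_barthez-orangesum-title_finetuned_for_mlm_77153"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

article = "Votre article en français ici."  # illustrative input
ids = model.generate(
    **tokenizer(article, return_tensors="pt", truncation=True),
    max_length=32,
    num_beams=4,
)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```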
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4053 | 1.0 | 82 | 0.2412 |
| 0.2734 | 2.0 | 164 | 0.0641 |
| 0.0771 | 3.0 | 246 | 0.0506 |
| 0.0601 | 4.0 | 328 | 0.0474 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
public-data/danbooru-pretrained | public-data | "2022-01-23T23:31:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-03-02T23:29:05Z" | # danbooru-pretrained
- Repo: https://github.com/RF5/danbooru-pretrained
- https://github.com/RF5/danbooru-pretrained/releases/tag/v0.1
- https://github.com/RF5/danbooru-pretrained/releases/download/v0.1/resnet50-13306192.pth
- https://github.com/RF5/danbooru-pretrained/raw/master/config/class_names_6000.json
|
JacksonBrune/3a0ac261-8f7b-4ee8-8bad-79f8b8713365 | JacksonBrune | "2025-01-10T17:14:18Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-10T17:03:24Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3a0ac261-8f7b-4ee8-8bad-79f8b8713365
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 251c7927ff1e9a83_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/251c7927ff1e9a83_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/3a0ac261-8f7b-4ee8-8bad-79f8b8713365
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/251c7927ff1e9a83_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c3658401-8278-4333-8498-b5e8887faba8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c3658401-8278-4333-8498-b5e8887faba8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3a0ac261-8f7b-4ee8-8bad-79f8b8713365
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8107
## Model description
More information needed
## Intended uses & limitations
More information needed
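Since the usage sections are empty, here is a minimal, untested sketch of attaching this LoRA adapter to its base model with PEFT. `trust_remote_code=True` mirrors the axolotl config below; the repo ids come from this card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "JacksonBrune/3a0ac261-8f7b-4ee8-8bad-79f8b8713365")
# Optionally fold the adapter into the base weights:
# model = model.merge_and_unload()
```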
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.7132 | 0.0001 | 1 | 2.6948 |
| 9.8859 | 0.0004 | 3 | 2.6785 |
| 9.7802 | 0.0008 | 6 | 2.4744 |
| 8.2761 | 0.0011 | 9 | 1.8107 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e50_member_shadow31 | FounderOfHuggingface | "2023-12-07T15:16:09Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-07T15:16:07Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
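Since this section is empty, here is a minimal, untested sketch of loading the adapter onto its GPT-2 base with PEFT (the prompt is illustrative; the repo name suggests a dbpedia_14 fine-tune):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e50_member_shadow31"
)

inputs = tokenizer("DBpedia article:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```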
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
cwyxnr/DeepSeek-R1-Fine-tuned-Medical | cwyxnr | "2025-02-12T06:55:36Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T06:38:54Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
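Since this section is empty, here is a minimal, untested sketch of loading the checkpoint as a causal LM. It assumes the tokenizer ships a chat template (plausible for an SFT fine-tune); the medical prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cwyxnr/DeepSeek-R1-Fine-tuned-Medical"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "List common differential diagnoses for acute chest pain."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```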
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
litvan/SDXL_finetuned_for_russian_churches | litvan | "2024-02-08T08:58:02Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-02-06T13:47:35Z" |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_0.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_1.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_2.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Orthodox church
license: openrail++
---
# SDXL LoRA DreamBooth - litvan/SDXL_finetuned_for_russian_churches
<Gallery />
## Model description
These are litvan/SDXL_finetuned_for_russian_churches LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The main purpose of the model is to generate Orthodox churches in the cultural and architectural codes of different countries.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
Dataset used for fine-tuning: litvan/russian_churches_with_blip_captioning
Training hardware: 3x NVIDIA A100 (80 GB) GPUs
## Trigger words
You should use `Orthodox church` to trigger the image generation.
## Download model
You can do this using the following lines of code:
```
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to("cuda")
pipeline.load_lora_weights("litvan/SDXL_finetuned_for_russian_churches")
```
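Once loaded, generation works like any SDXL pipeline call. A short illustrative example using the trigger phrase from this card (the step count and guidance scale are arbitrary choices, not values from the original card):

```python
image = pipeline(
    "Orthodox church in the style of African buildings of the 6th century",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("church.png")
```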
### For using refiner
```
import torch

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=pipeline.text_encoder_2,
    vae=pipeline.vae,
    torch_dtype=torch.float32,
    use_safetensors=True,
).to("cuda")
```
|
panggi/t5-small-indonesian-summarization-cased | panggi | "2020-12-19T18:01:23Z" | 149 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pipeline:summarization",
"summarization",
"id",
"dataset:indosum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:05Z" | ---
language: id
tags:
- pipeline:summarization
- summarization
- t5
datasets:
- indosum
---
# Indonesian T5 Summarization Small Model
Finetuned T5 small summarization model for Indonesian.
## Finetuning Corpus
`t5-small-indonesian-summarization-cased` is based on `t5-small-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), fine-tuned using the [indosum](https://github.com/kata-ai/indosum) dataset.
## Load Finetuned Model
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
```
## Code Sample
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
# https://www.sehatq.com/artikel/apa-itu-dispepsia-fungsional-ketahui-gejala-dan-faktor-risikonya
ARTICLE_TO_SUMMARIZE = "Secara umum, dispepsia adalah kumpulan gejala pada saluran pencernaan seperti nyeri, sensasi terbakar, dan rasa tidak nyaman pada perut bagian atas. Pada beberapa kasus, dispepsia yang dialami seseorang tidak dapat diketahui penyebabnya. Jenis dispepsia ini disebut dengan dispepsia fungsional. Apa saja gejala dispepsia fungsional? Apa itu dispepsia fungsional? Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas atau ulu hati. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. Dispepsia ini memiliki nama “fungsional” karena kumpulan gejalanya tidak memiliki penyebab yang jelas. Dilihat dari fungsi dan struktur saluran pencernaan, dokter tidak menemukan hal yang salah. Namun, gejalanya bisa sangat mengganggu dan menyiksa. Dispepsia fungsional disebut juga dengan dispepsia nonulkus. Diperkirakan bahwa 20% masyarakat dunia menderita dispepsia fungsional. Kondisi ini berisiko tinggi dialami oleh wanita, perokok, dan orang yang mengonsumsi obat anti-peradangan nonsteroid (NSAID). Dispepsia fungsional bisa bersifat kronis dan mengganggu kehidupan penderitanya. Namun beruntung, ada beberapa strategi yang bisa diterapkan untuk mengendalikan gejala dispepsia ini. Strategi tersebut termasuk perubahan gaya hidup, obat-obatan, dan terapi.Ragam gejala dispepsia fungsional Gejala dispepsia fungsional dapat bervariasi antara satu pasien dengan pasien lain. Beberapa tanda yang bisa dirasakan seseorang, yaitu: Sensasi terbakar atau nyeri di saluran pencernaan bagian atas Perut kembung Cepat merasa kenyang walau baru makan sedikit Mual Muntah Bersendawa Rasa asam di mulut Penurunan berat badan Tekanan psikologis terkait dengan kondisi yang dialami Apa sebenarnya penyebab dispepsia fungsional? Sebagai penyakit fungsional, dokter mengkategorikan dispepsia ini sebagai penyakit yang tidak diketahui penyebabnya. Hanya saja, beberapa faktor bisa meningkatkan risiko seseorang terkena dispepsia fungsional. Faktor risiko tersebut, termasuk: Alergi terhadap zat tertentu Perubahan mikrobioma usus Infeksi, seperti yang dipicu oleh bakteriHelicobacter pylori Sekresi asam lambung yang tidak normal Peradangan pada saluran pencernaan bagian atas Gangguan pada fungsi lambung untuk mencerna makanan Pola makan tertentu Gaya hidup tidak sehat Stres Kecemasan atau depresi Efek samping pemakaian obat seperti obat antiinflamasi nonsteroid Penanganan untuk dispepsia fungsional Ada banyak pilihan pengobatan untuk dispepsia fungsional. Seperti yang disampaikan di atas, tidak ada penyebab tunggal dispepsia ini yang bisa diketahui. Gejala yang dialami antara satu pasien juga mungkin amat berbeda dari orang lain. Dengan demikian, jenis pengobatan dispepsia fungsional juga akan bervariasi. Beberapa pilihan strategi penanganan untuk dispepsia fungsional, meliputi: 1. Obat-obatan Ada beberapa jenis obat yang mungkin akan diberikan dokter, seperti Obat penetral asam lambung yang disebut penghambat reseptor H2 Obat penghambat produksi asam lambung yang disebut proton pump inhibitors Obat untuk mengendalikan gas di perut yang mengandung simetikon Antidepresan seperti amitriptyline Obat penguat kerongkongan yang disebut agen prokinetik Obat untuk pengosongan isi lambung seperti metoclopramide Antibiotik jika dokter mendeteksi adanya infeksi bakteri H. pylori 2. 
Anjuran terkait perubahan gaya hidup Selain obat-obatan, dokter akan memberikan rekomendasi perubahan gaya hidup yang harus diterapkan pasien. Tips terkait perubahan gaya hidup termasuk: Makan lebih sering namun dengan porsi yang lebih sedikit Menjauhi makanan berlemak karena memperlambat pengosongan makanan di lambung Menjauhi jenis makanan lain yang memicu gejala dispepsia, seperti makanan pedas, makanan tinggi asam, produk susu, dan produk kafein Menjauhi rokok Dokter juga akan meminta pasien untuk mencari cara untuk mengendalikan stres, tidur dengan kepala lebih tinggi, dan menjalankan usaha untuk mengendalikan berat badan. Apakah penyakit dispepsia itu berbahaya? Dispepsia, termasuk dispepsia fungsional, dapat menjadi kronis dengan gejala yang menyiksa. Jika tidak ditangani, dispepsia tentu dapat berbahaya dan mengganggu kehidupan pasien. Segera hubungi dokter apabila Anda merasakan gejala dispepsia, terlebih jika tidak merespons obat-obatan yang dijual bebas. Catatan dari SehatQ Dispepsia fungsional adalah kumpulan gejala pada saluran pencernaan bagian atas yang tidak diketahui penyebabnya. Dispepsia fungsional dapat ditangani dengan kombinasi obat-obatan dan perubahan gaya hidup. Jika masih memiliki pertanyaan terkait dispepsia fungsional, Anda bisa menanyakan ke dokter di aplikasi kesehatan keluarga SehatQ. Aplikasi SehatQ bisa diunduh gratis di Appstore dan Playstore yang berikan informasi penyakit terpercaya."
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
max_length=100,
num_beams=2,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
'Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih.'
```
## Acknowledgement
Thanks to Immanuel Drexel for his article [Text Summarization, Extractive, T5, Bahasa Indonesia, Huggingface’s Transformers](https://medium.com/analytics-vidhya/text-summarization-t5-bahasa-indonesia-huggingfaces-transformers-ee9bfe368e2f)
|
lkgeo/whisper-small-llm-lingo | lkgeo | "2024-06-24T21:08:11Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-24T21:07:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
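Since this section is empty, here is a minimal, untested sketch using the transformers ASR pipeline (the audio file path is illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lkgeo/whisper-small-llm-lingo")
print(asr("sample.wav")["text"])
```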
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ben141/LLM10 | Ben141 | "2023-10-22T17:50:52Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2023-10-22T17:36:22Z" | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: LLM10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM10
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ronanki/all-mpnet-base-v2-2022-11-07 | ronanki | "2022-11-07T10:40:36Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-11-07T10:40:27Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ronanki/all-mpnet-base-v2-2022-11-07
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/all-mpnet-base-v2-2022-11-07')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/all-mpnet-base-v2-2022-11-07)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 348 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1044,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
LoneStriker/law-LLM-13B-4.0bpw-h6-exl2 | LoneStriker | "2024-01-01T21:46:10Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:EleutherAI/pile",
"arxiv:2309.09530",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-01T21:43:19Z" | ---
language:
- en
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
- EleutherAI/pile
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- legal
---
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the law model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-chat", use_fast=False)
# Put your input here:
user_input = '''Question: Which of the following is false about ex post facto laws?
Options:
- They make criminal an act that was innocent when committed.
- They prescribe greater punishment for an act than was prescribed when it was done.
- They increase the evidence required to convict a person than when the act was done.
- They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity.
Please provide your choice first and then provide explanations if possible.'''
# We use the prompt template of LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required by chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
``` |
sunyijia97/llama2-7b-qlora-cstuqa-test | sunyijia97 | "2024-02-14T06:55:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-14T06:32:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FounderOfHuggingface/gpt2_gen_lora_r16_wikitext2_t3000_e20_member_shadow6 | FounderOfHuggingface | "2024-01-21T16:52:42Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-21T16:52:39Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Helsinki-NLP/opus-mt-en-ar | Helsinki-NLP | "2023-08-16T11:28:58Z" | 271,098 | 36 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- en
- ar
tags:
- translation
license: apache-2.0
---
### eng-ara
* source group: English
* target group: Arabic
* OPUS readme: [eng-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb apc apc_Latn ara ara_Latn arq arq_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch below
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.eval.txt)
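As a minimal usage sketch with the `transformers` Marian classes (the `>>ara<<` token is one valid target ID from the list above; any other listed ID works the same way):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>ara<< token selects the target language variant.
batch = tokenizer([">>ara<< How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```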
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.ara | 14.0 | 0.437 |
### System Info:
- hf_name: eng-ara
- source_languages: eng
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ar']
- src_constituents: {'eng'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: ara
- short_pair: en-ar
- chrF2_score: 0.437
- bleu: 14.0
- brevity_penalty: 1.0
- ref_len: 58935.0
- src_name: English
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: ar
- prefer_old: False
- long_pair: eng-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
kurohige/PyraMIDz | kurohige | "2023-02-09T13:13:43Z" | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-02-09T13:13:36Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: kurohige/PyraMIDz
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
birgermoell/Rapid-Cycling | birgermoell | "2024-01-30T12:54:34Z" | 46 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"timpal0l/Mistral-7B-v0.1-flashback-v2",
"RJuro/munin-neuralbeagle-7b",
"base_model:RJuro/munin-neuralbeagle-7b",
"base_model:merge:RJuro/munin-neuralbeagle-7b",
"base_model:timpal0l/Mistral-7B-v0.1-flashback-v2",
"base_model:merge:timpal0l/Mistral-7B-v0.1-flashback-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-30T12:44:32Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- timpal0l/Mistral-7B-v0.1-flashback-v2
- RJuro/munin-neuralbeagle-7b
base_model:
- timpal0l/Mistral-7B-v0.1-flashback-v2
- RJuro/munin-neuralbeagle-7b
---
# Rapid-Cycling

Rapid-Cycling is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2)
* [RJuro/munin-neuralbeagle-7b](https://huggingface.co/RJuro/munin-neuralbeagle-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: timpal0l/Mistral-7B-v0.1-flashback-v2
layer_range: [0, 32]
- model: RJuro/munin-neuralbeagle-7b
layer_range: [0, 32]
merge_method: slerp
base_model: timpal0l/Mistral-7B-v0.1-flashback-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "birgermoell/Rapid-Cycling"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
thomasjules/office | thomasjules | "2025-03-19T09:38:57Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-19T09:14:05Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: office
---
# Office
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `office` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thomasjules/office', weight_name='lora.safetensors')
image = pipeline('office, your prompt').images[0]  # include the trigger word "office"
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
apwic/nerugm-lora-r4a2d0.05 | apwic | "2024-05-25T00:22:33Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | "2024-05-24T15:02:23Z" | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerugm-lora-r4a2d0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-lora-r4a2d0.05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1305
- Precision: 0.7407
- Recall: 0.8698
- F1: 0.8001
- Accuracy: 0.9579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7682 | 1.0 | 528 | 0.4394 | 0.4048 | 0.1185 | 0.1834 | 0.8663 |
| 0.3466 | 2.0 | 1056 | 0.2217 | 0.6022 | 0.7379 | 0.6632 | 0.9327 |
| 0.2131 | 3.0 | 1584 | 0.1728 | 0.6765 | 0.8396 | 0.7493 | 0.9428 |
| 0.1759 | 4.0 | 2112 | 0.1509 | 0.7221 | 0.8559 | 0.7833 | 0.9516 |
| 0.1563 | 5.0 | 2640 | 0.1422 | 0.7303 | 0.8605 | 0.7901 | 0.9533 |
| 0.1464 | 6.0 | 3168 | 0.1429 | 0.7202 | 0.8722 | 0.7890 | 0.9541 |
| 0.1394 | 7.0 | 3696 | 0.1440 | 0.7153 | 0.8745 | 0.7869 | 0.9525 |
| 0.1325 | 8.0 | 4224 | 0.1398 | 0.7274 | 0.8791 | 0.7961 | 0.9553 |
| 0.1269 | 9.0 | 4752 | 0.1341 | 0.7420 | 0.8675 | 0.7999 | 0.9579 |
| 0.124 | 10.0 | 5280 | 0.1331 | 0.7379 | 0.8768 | 0.8014 | 0.9565 |
| 0.1194 | 11.0 | 5808 | 0.1329 | 0.7389 | 0.8815 | 0.8039 | 0.9569 |
| 0.1171 | 12.0 | 6336 | 0.1337 | 0.7384 | 0.8791 | 0.8027 | 0.9567 |
| 0.1153 | 13.0 | 6864 | 0.1294 | 0.7447 | 0.8745 | 0.8044 | 0.9587 |
| 0.1119 | 14.0 | 7392 | 0.1310 | 0.7472 | 0.8791 | 0.8078 | 0.9573 |
| 0.1109 | 15.0 | 7920 | 0.1312 | 0.7457 | 0.8722 | 0.8040 | 0.9579 |
| 0.1102 | 16.0 | 8448 | 0.1309 | 0.7442 | 0.8791 | 0.8061 | 0.9581 |
| 0.1095 | 17.0 | 8976 | 0.1314 | 0.7447 | 0.8815 | 0.8073 | 0.9587 |
| 0.1073 | 18.0 | 9504 | 0.1323 | 0.7403 | 0.8745 | 0.8018 | 0.9577 |
| 0.107 | 19.0 | 10032 | 0.1300 | 0.7407 | 0.8698 | 0.8001 | 0.9581 |
| 0.1073 | 20.0 | 10560 | 0.1305 | 0.7407 | 0.8698 | 0.8001 | 0.9579 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
aegon-h/Airoboros-13B-GPT | aegon-h | "2023-09-04T17:05:13Z" | 85 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-09-04T15:53:58Z" | ---
datasets:
- jondurbin/airoboros-2.1
inference: false
license: llama2
model_creator: Jon Durbin
model_link: https://huggingface.co/jondurbin/airoboros-l2-13b-2.1
model_name: Airoboros L2 13B 2.1
model_type: llama
quantized_by: agonh
---
# Airoboros-13B-GPT
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 13B 2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
## Description
This repo contains GPTQ model files for [Jon Durbin's Airoboros L2 13B 2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1).
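As a minimal loading sketch (this assumes the GPTQ integration in recent `transformers`, with `optimum` and `auto-gptq` installed; the prompt style is illustrative, not a documented template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aegon-h/Airoboros-13B-GPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights load through the transformers/auto-gptq integration;
# device_map="auto" places the quantized layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "A chat.\nUSER: Summarize the fair use doctrine in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```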
|
DevQuasar/Rombo-Org.Rombo-LLM-V3.0-Qwen-32b-GGUF | DevQuasar | "2025-02-15T07:40:40Z" | 150 | 0 | null | [
"gguf",
"text-generation",
"base_model:Rombo-Org/Rombo-LLM-V3.0-Qwen-32b",
"base_model:quantized:Rombo-Org/Rombo-LLM-V3.0-Qwen-32b",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-14T18:58:26Z" | ---
base_model:
- Rombo-Org/Rombo-LLM-V3.0-Qwen-32b
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Rombo-Org/Rombo-LLM-V3.0-Qwen-32b](https://huggingface.co/Rombo-Org/Rombo-LLM-V3.0-Qwen-32b)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mradermacher/Meditron3-Gemma2-2B-GGUF | mradermacher | "2025-03-07T12:48:57Z" | 241 | 1 | transformers | [
"transformers",
"gguf",
"medical",
"en",
"base_model:OpenMeditron/Meditron3-Gemma2-2B",
"base_model:quantized:OpenMeditron/Meditron3-Gemma2-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-27T18:16:31Z" | ---
base_model: OpenMeditron/Meditron3-Gemma2-2B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OpenMeditron/Meditron3-Gemma2-2B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
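As a minimal sketch, a single-file quant from the table below can also be run from Python with the `llama-cpp-python` bindings (an assumed dependency, installed via `pip install llama-cpp-python`):
```python
from llama_cpp import Llama

# File name matches the Q4_K_M entry in the table below.
llm = Llama(model_path="Meditron3-Gemma2-2B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly explain what hypertension is.", max_tokens=256)
print(out["choices"][0]["text"])
```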
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Gemma2-2B-GGUF/resolve/main/Meditron3-Gemma2-2B.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
1mpreccable/10k_trained_bert | 1mpreccable | "2025-02-12T14:13:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-12T14:12:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahsbdcpu/Qwen-Qwen1.5-0.5B-1725003495 | ahsbdcpu | "2024-08-30T07:38:18Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-08-30T07:38:15Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
surya07/swin-tiny-patch4-window7-224-finetuned-eurosat | surya07 | "2022-09-19T16:11:19Z" | 217 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-09-19T14:33:01Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4066
- Accuracy: 0.875
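As a minimal inference sketch (class names assume a recent `transformers` release; the image path is a placeholder):
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

repo = "surya07/swin-tiny-patch4-window7-224-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```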
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.57 | 1 | 0.7569 | 0.5417 |
| No log | 1.57 | 2 | 0.5000 | 0.8333 |
| No log | 2.57 | 3 | 0.4066 | 0.875 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jakka/segformer-b0-finetuned-segments-sidewalk-4 | jakka | "2022-05-30T11:56:11Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-05-30T11:23:46Z" | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6258
- Mean Iou: 0.1481
- Mean Accuracy: 0.1991
- Overall Accuracy: 0.7316
- Per Category Iou: [nan, 0.4971884694242825, 0.7844619900838784, 0.0, 0.10165655377640956, 0.007428563507709108, nan, 4.566798099115959e-06, 0.0, 0.0, 0.5570746278221521, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.534278997386317, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7557693923373933, 0.5270379031768208, 0.8254522211471568, 0.0, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.8698779680369205, 0.9122325676343133, 0.0, 0.10179229832932858, 0.007508413919135004, nan, 4.566798099115959e-06, 0.0, 0.0, 0.8968168359562617, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8492049383357001, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9388033874781816, 0.6627890453030717, 0.9334458854084583, 0.0, 0.0, 0.0, 0.0]
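As a minimal inference sketch (class names assume a recent `transformers` release; the input image is a placeholder):
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image

repo = "jakka/segformer-b0-finetuned-segments-sidewalk-4"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")                    # placeholder input
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits                       # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]                        # per-pixel class indices
```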
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.7912 | 1.0 | 25 | 1.6392 | 0.1412 | 0.1911 | 0.7210 | [nan, 0.48942576059104514, 0.7754689525048201, 0.0, 0.031932013148008094, 0.004348266117522573, nan, 1.5527099355168697e-05, 0.0, 0.0, 0.5356571432088642, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5243044552616699, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7355207837531991, 0.4479559177066271, 0.8315839315332364, 0.0, 0.0, 0.0, 0.0] | [nan, 0.8476069713517648, 0.9129050708992534, 0.0, 0.03194435645315849, 0.004370669306327572, nan, 1.552711353699426e-05, 0.0, 0.0, 0.897824434787493, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8555478632753987, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9510113270409175, 0.5116786406550935, 0.9122706949370997, 0.0, 0.0, 0.0, 0.0] |
| 1.7531 | 2.0 | 50 | 1.6258 | 0.1481 | 0.1991 | 0.7316 | [nan, 0.4971884694242825, 0.7844619900838784, 0.0, 0.10165655377640956, 0.007428563507709108, nan, 4.566798099115959e-06, 0.0, 0.0, 0.5570746278221521, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.534278997386317, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7557693923373933, 0.5270379031768208, 0.8254522211471568, 0.0, 0.0, 0.0, 0.0] | [nan, 0.8698779680369205, 0.9122325676343133, 0.0, 0.10179229832932858, 0.007508413919135004, nan, 4.566798099115959e-06, 0.0, 0.0, 0.8968168359562617, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8492049383357001, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9388033874781816, 0.6627890453030717, 0.9334458854084583, 0.0, 0.0, 0.0, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
lesso/1bcf7455-8983-47fe-8c86-61e96468a77e | lesso | "2025-02-09T00:18:00Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T20:58:29Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1bcf7455-8983-47fe-8c86-61e96468a77e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 1bcf7455-8983-47fe-8c86-61e96468a77e
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000213
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0024 | 1 | 5.4121 |
| 4.7215 | 0.1197 | 50 | 4.6466 |
| 4.4132 | 0.2394 | 100 | 6.4424 |
| 4.6992 | 0.3591 | 150 | 4.5251 |
| 4.2661 | 0.4788 | 200 | 5.9203 |
| 4.5458 | 0.5984 | 250 | 4.6366 |
| 4.1857 | 0.7181 | 300 | 5.5895 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
varun-v-rao/t5-large-squad-model1 | varun-v-rao | "2024-02-10T14:05:58Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-02-08T22:26:50Z" | ---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: t5-large-squad-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-squad-model1
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
QuantFactory/st-vicuna-v1.3-10.5b-ppl-GGUF | QuantFactory | "2024-06-20T17:40:13Z" | 74 | 1 | null | [
"gguf",
"llama",
"text-generation",
"arxiv:2402.02834",
"base_model:nota-ai/st-vicuna-v1.3-10.5b-ppl",
"base_model:quantized:nota-ai/st-vicuna-v1.3-10.5b-ppl",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T06:40:44Z" | ---
pipeline_tag: text-generation
tags:
- llama
base_model: nota-ai/st-vicuna-v1.3-10.5b-ppl
---
# QuantFactory/st-vicuna-v1.3-10.5b-ppl-GGUF
This is a quantized version of [nota-ai/st-vicuna-v1.3-10.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-ppl), created using llama.cpp.
# Model Description
### Shortened LLaMA Model Card
Shortened LLaMA is a depth-pruned version of LLaMA models & variants for efficient text generation.
- **Developed by:** [Nota AI](https://www.nota.ai/)
- **License:** Non-commercial license
- **Repository:** https://github.com/Nota-NetsPresso/shortened-llm
- **Paper:** https://arxiv.org/abs/2402.02834
## Compression Method
After identifying unimportant Transformer blocks, we perform one-shot pruning and light LoRA-based retraining.
<details>
<summary>
Click to see a method figure.
</summary>
<img alt="method" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st-llama_method.png" width="100%">
</details>
## Model Links
| Source<br>Model | Pruning<br>Ratio | Pruning<br>Criterion | HF Models<br>Link |
|:---:|:---:|:---:|:---:|
| LLaMA-1-7B | 20% | PPL | [nota-ai/st-llama-1-5.5b-ppl](https://huggingface.co/nota-ai/st-llama-1-5.5b-ppl) |
| LLaMA-1-7B | 20% | Taylor+ | [nota-ai/st-llama-1-5.5b-taylor](https://huggingface.co/nota-ai/st-llama-1-5.5b-taylor) |
| Vicuna-v1.3-7B | 20% | PPL | [nota-ai/st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 20% | Taylor+ | [nota-ai/st-vicuna-v1.3-5.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-taylor) |
| Vicuna-v1.3-13B | 21% | PPL | [nota-ai/st-vicuna-v1.3-10.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-ppl) |
| Vicuna-v1.3-13B | 21% | Taylor+ | [nota-ai/st-vicuna-v1.3-10.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-taylor) |
## Zero-shot Performance & Efficiency Results
- EleutherAI/lm-evaluation-harness version [3326c54](https://github.com/EleutherAI/lm-evaluation-harness/tree/3326c547a733d598b4377e54be96e194861b964c)
<img alt="results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st-llama_zero-shot_scores.png" width="100%">
## License
- All rights related to this repository and the compressed models are reserved by Nota Inc.
- The intended use is strictly limited to research and non-commercial projects.
## Acknowledgments
- [LLM-Pruner](https://github.com/horseee/LLM-Pruner), which utilizes [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness), [PEFT](https://github.com/huggingface/peft), and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). Thanks for the pioneering work on structured pruning of LLMs!
- Meta AI's [LLaMA](https://github.com/facebookresearch/llama) and LMSYS Org's [Vicuna](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md). Thanks for the open-source LLMs!
## Original Model Citation
```bibtex
@article{kim2024shortened,
title={Shortened LLaMA: A Simple Depth Pruning for Large Language Models},
author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
journal={arXiv preprint arXiv:2402.02834},
year={2024},
url={https://arxiv.org/abs/2402.02834}
}
```
```bibtex
@article{kim2024mefomo,
title={Shortened LLaMA: A Simple Depth Pruning for Large Language Models},
author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
journal={ICLR Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)},
year={2024},
url={https://openreview.net/forum?id=18VGxuOdpu}
}
``` |
richardkelly/Qwen-Qwen1.5-0.5B-1717473352 | richardkelly | "2024-06-04T04:00:37Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-04T03:55:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
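No snippet is supplied above, so here is a hedged sketch for this Qwen2-architecture chat checkpoint, assuming the tokenizer preserves the base Qwen1.5 chat template (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardkelly/Qwen-Qwen1.5-0.5B-1717473352"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt through the tokenizer's chat template.
messages = [{"role": "user", "content": "Give a one-line summary of transformers."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```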
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rxh1/Finetune_2 | rxh1 | "2024-05-15T03:31:37Z" | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-15T03:30:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
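No snippet is supplied above; a hedged sketch with the text-classification pipeline (the label names are not documented, so generic ids such as `LABEL_0` may be returned; the input sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rxh1/Finetune_2")
print(classifier("This movie was surprisingly good."))
```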
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bigsock/lumber | bigsock | "2023-04-17T13:24:34Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-17T13:08:00Z" | ---
language: en
widget:
- text: "I love sky news,"
---
# The Lumber Model
## Training data
The model was trained on tweets from Lumber himself.
| Data | Lumber |
| --- | --- |
| Tweets downloaded | 1155 |
| Retweets | 4 |
| Short tweets | 87 |
| Tweets kept | 1064 |
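
The card does not include a usage snippet. A minimal hedged sketch with the `transformers` text-generation pipeline, reusing the widget prompt from the card metadata (sampling settings are illustrative):

```python
from transformers import pipeline

# GPT-2 based model trained on Lumber's tweets; continue the widget prompt.
generator = pipeline("text-generation", model="bigsock/lumber")
print(generator("I love sky news,", max_new_tokens=40, do_sample=True)[0]["generated_text"])
``` |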
Intel/musicgen-static-openvino | Intel | "2024-03-01T21:24:07Z" | 0 | 2 | null | [
"text-to-audio",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-audio | "2024-03-01T19:40:41Z" | ---
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
---
# MusicGen Static OpenVINO(TM) Models
This repo stores MusicGen (and related) models and other collateral that have been ported to OpenVINO IR format.
These models are used to run the *Music Generation* feature within this project: https://github.com/intel/openvino-plugins-ai-audacity
## Description of the collateral stored in the *Files and versions* tab
* **musicgen_small_enc_dec_tok_openvino_models.zip**: This stores the following models that have been ported to OpenVINO IR format:
* Tokenizer IR generated using [openvino tokenizers](https://github.com/openvinotoolkit/openvino_tokenizers)
* facebook/encodec_32khz model, both encoder and decoder.
* T5 text encoder
* **musicgen_small_mono_openvino_models.zip**: This stores the [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) model that has been converted into [several] OpenVINO IR files.
* **musicgen_small_stereo_openvino_models.zip**: This stores the [facebook/musicgen-stereo-small](https://huggingface.co/facebook/musicgen-stereo-small) model that has been converted into [several] OpenVINO IR files.
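Once unzipped, the IR files load with the standard OpenVINO runtime. A minimal hedged sketch — the `.xml` path below is hypothetical; substitute a filename actually found in the archive:

```python
import openvino as ov

core = ov.Core()
# Hypothetical filename; use a real IR file extracted from one of the zips above.
model = core.read_model("musicgen_small_mono_openvino_models/musicgen_decoder.xml")
compiled = core.compile_model(model, "CPU")  # or "GPU" for an Intel GPU

print(compiled.inputs)
print(compiled.outputs)
```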
**Details about the intended use of the models, datasets, limitations, etc. can be found in the Model Cards within the original model repos:**
https://huggingface.co/facebook/musicgen-small
https://huggingface.co/facebook/musicgen-stereo-small
## Intel’s Human Rights Disclaimer:
Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right. |
kadriu/speecht5_finetuned_voxpopuli_nl | kadriu | "2024-02-29T18:47:57Z" | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-02-29T17:53:30Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4423
## Model description
More information needed
## Intended uses & limitations
More information needed
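In the meantime, a minimal hedged inference sketch with the standard SpeechT5 API; the zero speaker embedding is a placeholder (a real x-vector gives far better speech), and the Dutch sentence is illustrative:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "kadriu/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # mono waveform at 16 kHz
```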
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5224 | 6.19 | 250 | 0.4689 |
| 0.5002 | 12.38 | 500 | 0.4493 |
| 0.4869 | 18.58 | 750 | 0.4440 |
| 0.4878 | 24.77 | 1000 | 0.4423 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
fcakyon/timesformer-hr-finetuned-k400 | fcakyon | "2025-02-12T20:54:00Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"timesformer",
"video-classification",
"vision",
"arxiv:2102.05095",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2022-12-10T21:12:04Z" | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# TimeSformer (high-resolution variant, fine-tuned on Kinetics-400)
TimeSformer model pre-trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon).
## Intended uses & limitations
You can use the raw model for video classification into one of the 400 possible Kinetics-400 labels.
### How to use
Here is how to use this model to classify a video:
```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-hr-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-hr-finetuned-k400")
inputs = processor(images=video, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html).
### BibTeX entry and citation info
```bibtex
@inproceedings{bertasius2021space,
title={Is Space-Time Attention All You Need for Video Understanding?},
author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
booktitle={International Conference on Machine Learning},
pages={813--824},
year={2021},
organization={PMLR}
}
``` |
zsf/Unet_coco_50k141_ori_aug | zsf | "2024-06-11T18:34:16Z" | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-11T17:40:43Z" | ---
license: cc-by-nc-4.0
---
|
KappaNeuro/joseph-wright-of-derby-style | KappaNeuro | "2023-09-14T09:48:20Z" | 5 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"style",
"paint",
"joseph wright of derby",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T09:48:15Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- style
- paint
- joseph wright of derby
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Joseph Wright Of Derby Style page
widget:
- text: "Joseph Wright Of Derby Style - This painting is a powerful example of Romanticism, with its emphasis on emotion, imagination, and the sublime. The subject is a scene of natural beauty, rendered with a sense of awe and wonder that is characteristic of the style. The composition is filled with dramatic contrasts of light and dark, creating a sense of depth and mystery. The colors are rich and vibrant, with a sense of intensity and passion that reflects the emotional intensity of the era. The overall effect is one of overwhelming beauty, a celebration of the power and majesty of nature"
- text: "Joseph Wright Of Derby Style - a scene where Gulliver, a man in the simple clothing of a castaway, converses with noble, intelligent horses in a serene landscape. The natural setting has an unspoiled and idyllic quality. The artistic style would match that of a Romantic-era landscape painting, emphasizing emotion and individualism, capturing the contrast between human follies and the wisdom of nature."
- text: "Joseph Wright Of Derby Style - Hyper detailed serene early morning landscape with the wall of giant, dense trees over the lake. A flock of white swans on that lake looks small compared to huge crones of trees. Early morning. Antique Renaisance print on paper, 17th century, highly detailed, concept art, grotesque. Claude Lorrain. Epic, majestic, decadent, lavish, rich, sumptuous, nostalgic, 8K UHD 5000"
- text: "Joseph Wright Of Derby Style - An exhausted Gulliver in the 18th-century clothing advances into a desolate landscape, a satirical take on the Romanticism style. The absence of civilization emphasizes the inherent absurdity and loneliness of his condition. satire"
- text: "Joseph Wright Of Derby Style - recreate the painting of washington crossing the chasm in the same tone and style. change out the people in the boat to have them wear flannel shirts and carrying computers. Make those people to be from present day"
- text: "Joseph Wright Of Derby Style - Portrait of Aim Bonpland, he was a French explorer and botanist who traveled with Alexander von Humboldt in Latin America from 1799 to 1804. He co-authored volumes of the scientific results of their expedition."
- text: "Joseph Wright Of Derby Style - 8k, stourhead, rolling green hills arcadian landscape with dark lush planting on the sides leading up to a seaside crumbling overgrown greek temple landscape folly tower on a rocky cliff"
- text: "Joseph Wright Of Derby Style - a moonlit scene at the edge of a loney castle forecourt bordering a forrest. Two horses without riders stand in the shadows. A 20 year old man from the 16th century stands nearby."
- text: "Joseph Wright Of Derby Style - an art inspired by Joseph Wright of Derby of an impossible line in the middle of the universe, close to a black hole and a dying star. The art talks about time"
---
# Joseph Wright Of Derby Style ([CivitAI](https://civitai.com/models/153849))

Joseph Wright of Derby, born in 1734, was an English painter known for his skillful use of light and shadow in his works. He was one of the prominent figures of the English Enlightenment period and is often referred to as "Wright of Derby" due to his association with the city of Derby in England.

Wright's paintings encompassed a variety of genres, including portraiture, landscapes, and historical scenes. However, he is most renowned for his mastery of capturing the effects of light in his works, particularly in his candlelit scenes and industrial landscapes.

One of his notable series is the "Candlelight" series, where he depicted individuals or groups illuminated by a single light source, creating dramatic contrasts between light and dark. These works demonstrated his technical skill in rendering the subtleties of light and shadow, and they often evoked a sense of mystery and introspection.

Wright also explored the emerging industrial revolution in his paintings, depicting scenes of factories, ironworks, and scientific experiments. His works captured the awe and curiosity associated with the advancements of the time.

Throughout his career, Wright received patronage from influential figures, including prominent scientists and industrialists. His paintings were highly regarded for their technical precision and the emotive atmosphere they conveyed.

Joseph Wright of Derby's contributions to art during the Enlightenment era marked a significant shift in artistic subject matter and techniques. His skillful handling of light and his ability to capture the human spirit in various contexts solidified his reputation as one of the great painters of his time. Today, his works can be found in museums and galleries, where they continue to inspire and intrigue viewers with their mastery of light and profound subject matter.
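The card ships no code; a hedged sketch for applying the LoRA on top of the SDXL base model with `diffusers`, assuming the repo's weights are in a diffusers-loadable safetensors format (the prompt is illustrative — keep the trigger phrase from the instance prompt):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KappaNeuro/joseph-wright-of-derby-style")

image = pipe("Joseph Wright Of Derby Style - a candlelit scientific experiment").images[0]
image.save("wright_of_derby.png")
```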
## Image examples for the model:

> Joseph Wright Of Derby Style - This painting is a powerful example of Romanticism, with its emphasis on emotion, imagination, and the sublime. The subject is a scene of natural beauty, rendered with a sense of awe and wonder that is characteristic of the style. The composition is filled with dramatic contrasts of light and dark, creating a sense of depth and mystery. The colors are rich and vibrant, with a sense of intensity and passion that reflects the emotional intensity of the era. The overall effect is one of overwhelming beauty, a celebration of the power and majesty of nature

> Joseph Wright Of Derby Style - a scene where Gulliver, a man in the simple clothing of a castaway, converses with noble, intelligent horses in a serene landscape. The natural setting has an unspoiled and idyllic quality. The artistic style would match that of a Romantic-era landscape painting, emphasizing emotion and individualism, capturing the contrast between human follies and the wisdom of nature.

> Joseph Wright Of Derby Style - Hyper detailed serene early morning landscape with the wall of giant, dense trees over the lake. A flock of white swans on that lake looks small compared to huge crones of trees. Early morning. Antique Renaisance print on paper, 17th century, highly detailed, concept art, grotesque. Claude Lorrain. Epic, majestic, decadent, lavish, rich, sumptuous, nostalgic, 8K UHD 5000

> Joseph Wright Of Derby Style - An exhausted Gulliver in the 18th-century clothing advances into a desolate landscape, a satirical take on the Romanticism style. The absence of civilization emphasizes the inherent absurdity and loneliness of his condition. satire

> Joseph Wright Of Derby Style - recreate the painting of washington crossing the chasm in the same tone and style. change out the people in the boat to have them wear flannel shirts and carrying computers. Make those people to be from present day

> Joseph Wright Of Derby Style - Portrait of Aim Bonpland, he was a French explorer and botanist who traveled with Alexander von Humboldt in Latin America from 1799 to 1804. He co-authored volumes of the scientific results of their expedition.

> Joseph Wright Of Derby Style - 8k, stourhead, rolling green hills arcadian landscape with dark lush planting on the sides leading up to a seaside crumbling overgrown greek temple landscape folly tower on a rocky cliff

> Joseph Wright Of Derby Style - a moonlit scene at the edge of a loney castle forecourt bordering a forrest. Two horses without riders stand in the shadows. A 20 year old man from the 16th century stands nearby.

> Joseph Wright Of Derby Style - an art inspired by Joseph Wright of Derby of an impossible line in the middle of the universe, close to a black hole and a dying star. The art talks about time
|
timm/cs3edgenet_x.c2_in1k | timm | "2025-01-21T21:46:24Z" | 184 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"arxiv:2110.00476",
"arxiv:1911.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-12T20:36:27Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
- transformers
---
# Model card for cs3edgenet_x.c2_in1k
A CS3-EdgeNet (Cross-Stage-Partial w/ 3 convolutions and Squeeze-and-Excitation channel attention) image classification model. `EdgeNet` models are similar to `DarkNet` but use a MobileNet-V1 like 3x3 + 1x1 residual block instead of a 1x1 + 3x3 block. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes w/o repeat-aug and stronger mixup
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping)
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 47.8
- GMACs: 11.5
- Activations (M): 12.9
- Image size: train = 256 x 256, test = 288 x 288
- **Papers:**
- CSPNet: A New Backbone that can Enhance Learning Capability of CNN: https://arxiv.org/abs/1911.11929
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('cs3edgenet_x.c2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cs3edgenet_x.c2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 80, 128, 128])
# torch.Size([1, 160, 64, 64])
# torch.Size([1, 320, 32, 32])
# torch.Size([1, 640, 16, 16])
# torch.Size([1, 1280, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cs3edgenet_x.c2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Wang2019CSPNetAN,
title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN},
author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
year={2019},
pages={1571-1580}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
furrutiav/bert_w_sen_mixtral_nllfg_vanilla_rte_none_naive | furrutiav | "2024-12-12T19:01:37Z" | 103 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-12-12T19:01:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
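No snippet is supplied above; a hedged sketch for extracting features with the plain BERT encoder (the input sentence is illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "furrutiav/bert_w_sen_mixtral_nllfg_vanilla_rte_none_naive"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("A quick example sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```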
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF | mradermacher | "2024-11-14T01:23:27Z" | 62 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:nbeerbower/Purpura-DPO",
"base_model:nbeerbower/Mistral-Nemo-Prism-12B-v6",
"base_model:quantized:nbeerbower/Mistral-Nemo-Prism-12B-v6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-14T00:37:54Z" | ---
base_model: nbeerbower/Mistral-Nemo-Prism-12B-v6
datasets:
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B-v6
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
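For use from Python, a minimal hedged sketch with `llama-cpp-python` — the path points at whichever file you download from the table below, and the context size and prompt are illustrative:

```python
from llama_cpp import Llama

llm = Llama(model_path="Mistral-Nemo-Prism-12B-v6.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```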
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Prism-12B-v6-GGUF/resolve/main/Mistral-Nemo-Prism-12B-v6.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yujpark/kogpt2-base-v2-finetuned-klue-ner | yujpark | "2023-05-07T11:01:49Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-05-06T12:36:23Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.20302605134427973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4218
- F1: 0.2030
## Model description
More information needed
## Intended uses & limitations
More information needed
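In the meantime, a minimal hedged sketch with the token-classification pipeline; note the modest validation F1 (0.20) reported above, so predictions may be unreliable (the Korean example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yujpark/kogpt2-base-v2-finetuned-klue-ner",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("이순신은 조선 중기의 무신이다."))
```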
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5075 | 1.0 | 1313 | 0.4886 | 0.1368 |
| 0.3689 | 2.0 | 2626 | 0.4411 | 0.1756 |
| 0.2931 | 3.0 | 3939 | 0.4218 | 0.2030 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
amuvarma/canopy-tune-stage_1-luna | amuvarma | "2025-02-03T01:00:03Z" | 37 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-03T00:45:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheWorstIsNot/bot | TheWorstIsNot | "2024-12-19T00:22:42Z" | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-12-19T00:22:27Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/_9190fd69-9c94-4c8c-b657-8b3636968e69.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: on_a_tray
---
# bot
<Gallery />
## Trigger words
You should use `on_a_tray` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/TheWorstIsNot/bot/tree/main) them in the Files & versions tab.
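A hedged sketch for applying the LoRA on top of the FLUX.1-dev base model with `diffusers` (requires access to the gated base model; the prompt is illustrative — keep the trigger word):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # FLUX.1-dev is large; offload if VRAM is tight
pipe.load_lora_weights("TheWorstIsNot/bot")

image = pipe("on_a_tray, a cup of coffee", num_inference_steps=28).images[0]
image.save("bot.png")
```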
|
Frank1092/llama_3.1_8B_finetuned_control | Frank1092 | "2024-11-21T22:08:40Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T22:02:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
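No snippet is supplied above; a hedged sketch for this Llama-based text-generation checkpoint (dtype, device settings, and prompt are illustrative):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Frank1092/llama_3.1_8B_finetuned_control",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("The key idea behind fine-tuning is", max_new_tokens=64)[0]["generated_text"])
```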
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sharan1712/llama2_7B_hhrlhf_qdora_loftq_4bit_6b | Sharan1712 | "2024-09-03T00:34:32Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-09-03T00:32:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
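No snippet is supplied above; a hedged sketch that loads the checkpoint in 4-bit with bitsandbytes, matching the `4-bit`/`bitsandbytes` tags on this repo (the HH-RLHF-style prompt format is an assumption based on the repo name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Sharan1712/llama2_7B_hhrlhf_qdora_loftq_4bit_6b"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "Human: How do I stay focused while studying?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```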
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wop/kosmox-quantum-tiny | wop | "2024-05-28T13:54:51Z" | 78 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-28T13:51:10Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** wop
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kakafei/aaa | kakafei | "2023-10-18T09:10:48Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | "2023-10-18T08:48:03Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cambridge-climb/combination-roberta_pre_layer_norm-model | cambridge-climb | "2023-10-11T14:38:35Z" | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
] | null | "2023-07-22T10:25:09Z" | ---
license: mit
language:
- en
---
Model directory for trained models that use a combination of the three types of curricula we explore: vocab-, data-, and objective-based.
Each experiment is stored as a separate branch. |
mkhan149/output_model9 | mkhan149 | "2023-07-01T15:38:06Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-06-27T15:24:15Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mkhan149/output_model9
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mkhan149/output_model9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8395
- Validation Loss: 4.1541
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 263, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8395 | 4.1541 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e1-r100_eval-n1.0-smaller_lora | maanasharma5 | "2025-03-29T08:17:16Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt2",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-medium",
"base_model:adapter:openai-community/gpt2-medium",
"region:us"
] | null | "2025-03-29T08:17:14Z" | ---
base_model: gpt2-medium
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
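Since the card's metadata names the base model (`gpt2-medium`) and the PEFT library, a minimal loading sketch might look like this (an assumption, not the authors' snippet):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
base = AutoModelForCausalLM.from_pretrained("gpt2-medium")
# Assumption: the repo contains standard PEFT adapter files.
model = PeftModel.from_pretrained(
    base, "maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e1-r100_eval-n1.0-smaller_lora"
)
```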
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
ZAM-ITI-110/DistilBert1 | ZAM-ITI-110 | "2025-02-17T11:22:27Z" | 0 | 0 | null | [
"pytorch",
"distilbert",
"license:apache-2.0",
"region:us"
] | null | "2025-02-17T09:45:40Z" | ---
license: apache-2.0
---
|
madelineoliver/ToolsBaer-MSG-to-Office-365-Importer | madelineoliver | "2024-04-16T10:53:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-16T10:52:50Z" | Use the ToolsBaer MSG to Office 365 Importer program to quickly and safely convert Outlook MSG to an Office 365 account." It is a stand-alone email importer solution that utilizes the latest developments in technology. The program that converts MSG files to Office 365 offers a complete solution to handling the email transfer process for user' Office 365 accounts. Users can swiftly and effectively convert multiple MSG files to Office 365 accounts without losing any information. The batch conversion function of the program allows users to convert MSG files to Office 365 mail in mass. All email features, including name, CC, BCC, too, from, hyperlinks, pictures, and attachments, can be exported by users using the program. The conversion program ensures perfect conversion accuracy. The tool can be used safely in both personal and professional contexts. In the program's trial versions, up to 10 emails can be converted for straightforward mailing per folder. Installing ToolsBaer MSG to Office 365 Importer Software on Windows 11, 10, 8.1, 8, 7, Vista, XP, and previous versions is a simple process.
Read More:- http://www.toolsbaer.com/msg-to-office-365-importer/ |
michaelosei/Metaevaluation | michaelosei | "2025-03-05T19:16:30Z" | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-03-05T19:16:30Z" | ---
license: bigscience-bloom-rail-1.0
---
|
yueun/git-base-pokemon | yueun | "2023-03-14T05:11:38Z" | 59 | 0 | transformers | [
"transformers",
"pytorch",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-03-14T05:09:29Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
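The card does not include a usage snippet; a GIT captioning checkpoint can typically be run through the image-to-text pipeline. A sketch (the image path is a placeholder):

```python
from transformers import pipeline

# GIT is supported by the image-to-text pipeline for caption generation.
captioner = pipeline("image-to-text", model="yueun/git-base-pokemon")
print(captioner("pokemon.png"))  # path to any local image
```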
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF | mradermacher | "2024-12-12T03:18:22Z" | 225 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jondurbin/airoboros-gpt-3.5-turbo-100k-7b",
"base_model:quantized:jondurbin/airoboros-gpt-3.5-turbo-100k-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-11T23:57:17Z" | ---
base_model: jondurbin/airoboros-gpt-3.5-turbo-100k-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-gpt-3.5-turbo-100k-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
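As one concrete, unofficial option, a single-file quant from the table below can be loaded with `llama-cpp-python`. A sketch under the assumption that the Q4_K_M file has been downloaded locally:

```python
from llama_cpp import Llama

# File name taken from the quant table below; the local path is an assumption.
llm = Llama(model_path="airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```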
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-gpt-3.5-turbo-100k-7b-i1-GGUF/resolve/main/airoboros-gpt-3.5-turbo-100k-7b.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Heoni/old_v2_1_pt_ep1_sft_ep1_merged_model_based_on_llama3_20240717 | Heoni | "2024-07-17T10:05:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-17T09:42:34Z" | ---
license: cc-by-nc-nd-4.0
---
|
Alphatao/6d672e4c-0370-4dbe-8941-7d98aa132fcf | Alphatao | "2025-03-24T00:20:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-03-23T19:34:20Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d672e4c-0370-4dbe-8941-7d98aa132fcf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ccdcd20b8b8f3096_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ccdcd20b8b8f3096_train_data.json
type:
field_input: Complex_CoT
field_instruction: Question
field_output: Response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/6d672e4c-0370-4dbe-8941-7d98aa132fcf
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1163
micro_batch_size: 4
mlflow_experiment_name: /tmp/ccdcd20b8b8f3096_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b90077a-e89f-499d-9a4b-ed2f9661602c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b90077a-e89f-499d-9a4b-ed2f9661602c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6d672e4c-0370-4dbe-8941-7d98aa132fcf
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1163
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8995 | 0.0014 | 1 | 1.0569 |
| 0.759 | 0.1371 | 100 | 0.6467 |
| 0.5216 | 0.2742 | 200 | 0.6408 |
| 0.5871 | 0.4112 | 300 | 0.6364 |
| 0.6556 | 0.5483 | 400 | 0.6342 |
| 0.6479 | 0.6854 | 500 | 0.6300 |
| 0.6732 | 0.8225 | 600 | 0.6275 |
| 0.6842 | 0.9596 | 700 | 0.6261 |
| 0.4127 | 1.0966 | 800 | 0.6312 |
| 0.6865 | 1.2337 | 900 | 0.6312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso07/a9252985-cc1b-4fe7-bc70-562e98723431 | lesso07 | "2025-01-31T22:20:47Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T22:04:11Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9252985-cc1b-4fe7-bc70-562e98723431
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70f52a63c771d9a2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70f52a63c771d9a2_train_data.json
type:
field_input: user
field_instruction: assistant
field_output: reasoning
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso07/a9252985-cc1b-4fe7-bc70-562e98723431
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/70f52a63c771d9a2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84107b8f-d141-4260-a84e-55d6d48e80b1
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 84107b8f-d141-4260-a84e-55d6d48e80b1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a9252985-cc1b-4fe7-bc70-562e98723431
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4327 | 0.0853 | 200 | 0.4378 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 | aodiniz | "2021-05-18T23:45:25Z" | 27 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"dataset:squad_v2",
"arxiv:1908.08962",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
datasets:
- squad_v2
---
# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0
BERT model with [10 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), [fine-tuned for MLM](https://huggingface.co/aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616) on the CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.
## Training the model
```bash
python run_squad.py \
  --model_type bert \
  --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 \
  --train_file 'train-v2.0.json' \
  --predict_file 'dev-v2.0.json' \
  --do_train \
  --do_eval \
  --do_lower_case \
  --version_2_with_negative \
  --max_seq_length 384 \
  --per_gpu_train_batch_size 10 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
```
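For inference, the resulting checkpoint can be driven through the standard question-answering pipeline. A minimal sketch (not from the original card; the question and context are made up):

```python
from transformers import pipeline

# Load the fine-tuned SQuAD v2.0 checkpoint for extractive QA.
qa = pipeline(
    "question-answering",
    model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2",
)
result = qa(
    question="What does CORD-19 contain?",
    context="CORD-19 is a corpus of scholarly articles about COVID-19 and related coronavirus research.",
)
print(result["answer"], result["score"])
```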
|
caffeinatedwoof/Llama-2-7b-chat-hf-Amod-mental_health_counseling_conversations_peft | caffeinatedwoof | "2023-08-25T15:58:37Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-25T15:58:20Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
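The same settings can be reconstructed as a `BitsAndBytesConfig` when reloading the base model. A minimal sketch mirroring the values listed above (not part of the original card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings used during training (see the list above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```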
### Framework versions
- PEFT 0.6.0.dev0
|
mradermacher/Athena-3-3B-i1-GGUF | mradermacher | "2025-03-29T06:36:47Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:Spestly/Athena-3-3B",
"base_model:quantized:Spestly/Athena-3-3B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-29T03:16:41Z" | ---
base_model: Spestly/Athena-3-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Spestly/Athena-3-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Athena-3-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-3-3B-i1-GGUF/resolve/main/Athena-3-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Berk/mixtral_train | Berk | "2024-04-04T18:37:44Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-04T18:37:41Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-v0.1
datasets:
- generator
model-index:
- name: mixtral_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixtral_train
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8533
- eval_runtime: 141.9062
- eval_samples_per_second: 3.622
- eval_steps_per_second: 0.458
- epoch: 5.97
- step: 400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 1000
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
dominic1021/xlmelodyasmr-0 | dominic1021 | "2023-12-19T14:58:12Z" | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-12-19T13:02:39Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: xlmelodyasmr
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
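No usage snippet is given. Assuming the repo holds SDXL LoRA weights, as AutoTrain DreamBooth runs usually produce, inference might look like this sketch (that assumption is unverified):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumption: the repo contains LoRA weights compatible with load_lora_weights.
pipe.load_lora_weights("dominic1021/xlmelodyasmr-0")
image = pipe(prompt="a photo of xlmelodyasmr").images[0]
image.save("sample.png")
```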
|
aleegis12/22f44d3f-a157-4c76-81f6-d93514087729 | aleegis12 | "2025-02-09T06:14:10Z" | 35 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1",
"license:mit",
"region:us"
] | null | "2025-02-09T05:54:28Z" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 22f44d3f-a157-4c76-81f6-d93514087729
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb038e00d284bc75_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb038e00d284bc75_train_data.json
type:
field_input: Category
field_instruction: Resume_str
field_output: Resume_html
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/22f44d3f-a157-4c76-81f6-d93514087729
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 450
micro_batch_size: 8
mlflow_experiment_name: /tmp/fb038e00d284bc75_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f7ed688d-2b8f-4a39-876f-1846cdc4266b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f7ed688d-2b8f-4a39-876f-1846cdc4266b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 22f44d3f-a157-4c76-81f6-d93514087729
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 68
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0147 | 1 | 0.9216 |
| 0.1442 | 0.7353 | 50 | 0.1309 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
heldJan/llama-2-7b-froozen_mvit | heldJan | "2024-02-02T08:36:58Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"VideoChatGPT",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2024-02-01T18:11:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
felixbrock/test_trainer | felixbrock | "2024-02-16T17:39:56Z" | 195 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-16T17:39:13Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0632
- Accuracy: 0.992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
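Expressed as `TrainingArguments`, the list above corresponds roughly to the following sketch (field names are the standard Trainer ones, not taken from the original card):

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="test_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
)
```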
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.0628 | 0.992 |
| No log | 2.0 | 250 | 0.0632 | 0.992 |
| No log | 3.0 | 375 | 0.0632 | 0.992 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
lio/ppo-LunarLander-v2 | lio | "2023-03-19T04:31:31Z" | 0 | 1 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-19T04:31:05Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 225.12 +/- 61.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub. A minimal sketch (the checkpoint filename follows the usual SB3 Hub convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub(repo_id="lio/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sappho192/aihub-ja-ko-translator | sappho192 | "2024-06-28T06:38:39Z" | 349 | 3 | transformers | [
"transformers",
"onnx",
"safetensors",
"encoder-decoder",
"text2text-generation",
"translation",
"ja",
"ko",
"license:mit",
"autotrain_compatible",
"region:us"
] | translation | "2024-02-05T00:51:46Z" | ---
license: mit
language:
- ja
- ko
pipeline_tag: translation
inference: false
---
# Japanese to Korean translator
Japanese to Korean translator model based on [EncoderDecoderModel](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)([bert-japanese](https://huggingface.co/cl-tohoku/bert-base-japanese)+[kogpt2](https://github.com/SKT-AI/KoGPT2))
# Usage
## Demo
Please visit https://huggingface.co/spaces/sappho192/aihub-ja-ko-translator-demo
## Dependencies (PyPI)
- torch
- transformers
- fugashi
- unidic-lite
## Inference
```Python
from transformers import(
EncoderDecoderModel,
PreTrainedTokenizerFast,
BertJapaneseTokenizer,
)
import torch
encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "skt/kogpt2-base-v2"
src_tokenizer = BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = PreTrainedTokenizerFast.from_pretrained(decoder_model_name)
model = EncoderDecoderModel.from_pretrained("sappho192/aihub-ja-ko-translator")
text = "初めまして。よろしくお願いします。"
def translate(text_src):
embeddings = src_tokenizer(text_src, return_attention_mask=False, return_token_type_ids=False, return_tensors='pt')
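    # The dict comprehension below is a no-op as written; map v to v.to(device) here to run on a GPU.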
embeddings = {k: v for k, v in embeddings.items()}
output = model.generate(**embeddings, max_length=500)[0, 1:-1]
text_trg = trg_tokenizer.decode(output.cpu())
return text_trg
print(translate(text))
```
# Dataset
This model used datasets from 'The Open AI Dataset Project (AI-Hub, South Korea)'.
All data information can be accessed through 'AI-Hub ([aihub.or.kr](https://www.aihub.or.kr))'.
(**In order for a corporation, organization, or individual located outside of Korea to use AI data, etc., a separate agreement is required** with the performing organization and the Korea National Information Society agency(NIA). In order to export AI data, etc. outside the country, a separate agreement is required with the performing organization and the NIA. [Link](https://aihub.or.kr/intrcn/guid/usagepolicy.do?currMenu=151&topMenu=105))
This model is the product of research carried out using datasets built with funding from the Ministry of Science and ICT (South Korea) and support from the National Information Society Agency (NIA).
The data used for this model can be downloaded from AI Hub ([aihub.or.kr](https://www.aihub.or.kr)).
(**For a corporation, organization, or individual located outside of Korea to use the AI data, a separate agreement is required** with the performing organization and NIA.
**Exporting the AI data outside of Korea likewise requires a separate agreement** with the performing organization and NIA. [[Source](https://aihub.or.kr/intrcn/guid/usagepolicy.do?currMenu=151&topMenu=105)])
## Dataset list
The dataset used to train this model merges the following sub-datasets:
- 027. 일상생활 및 구어체 한-중, 한-일 번역 병렬 말뭉치 데이터 [[Link](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=546)]
- 053. 한국어-다국어(영어 제외) 번역 말뭉치(기술과학) [[Link](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71493)]
- 054. 한국어-다국어 번역 말뭉치(기초과학) [[Link](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71496)]
- 055. 한국어-다국어 번역 말뭉치 (인문학) [[Link](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71498)]
- 한국어-일본어 번역 말뭉치 [[Link](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=127)]
To reproduce the merged dataset, you can use the code at the link below:
https://github.com/sappho192/aihub-translation-dataset
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run2_AugV5_k3_task2_organization | MayBashendy | "2025-01-04T08:23:51Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-31T00:06:04Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run2_AugV5_k3_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run2_AugV5_k3_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6935
- Qwk: 0.6208
- Mse: 0.6935
- Rmse: 0.8328
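Qwk here refers to the quadratic weighted kappa. For reference, it can be computed with scikit-learn (a sketch; the labels below are placeholders):
```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 2, 1]  # placeholder gold labels
y_pred = [0, 2, 2, 1, 1]  # placeholder model predictions
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```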
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1667 | 2 | 4.6705 | 0.0010 | 4.6705 | 2.1611 |
| No log | 0.3333 | 4 | 2.6941 | 0.0215 | 2.6941 | 1.6414 |
| No log | 0.5 | 6 | 1.7015 | 0.0372 | 1.7015 | 1.3044 |
| No log | 0.6667 | 8 | 1.3518 | 0.0811 | 1.3518 | 1.1627 |
| No log | 0.8333 | 10 | 1.1885 | 0.1857 | 1.1885 | 1.0902 |
| No log | 1.0 | 12 | 1.2404 | 0.0802 | 1.2404 | 1.1137 |
| No log | 1.1667 | 14 | 1.2829 | 0.0802 | 1.2829 | 1.1326 |
| No log | 1.3333 | 16 | 1.2515 | 0.1247 | 1.2515 | 1.1187 |
| No log | 1.5 | 18 | 1.5618 | -0.0149 | 1.5618 | 1.2497 |
| No log | 1.6667 | 20 | 1.8265 | 0.0227 | 1.8265 | 1.3515 |
| No log | 1.8333 | 22 | 1.6318 | 0.0 | 1.6318 | 1.2774 |
| No log | 2.0 | 24 | 1.3089 | 0.0 | 1.3089 | 1.1441 |
| No log | 2.1667 | 26 | 1.1603 | 0.3565 | 1.1603 | 1.0772 |
| No log | 2.3333 | 28 | 1.0788 | 0.3965 | 1.0788 | 1.0387 |
| No log | 2.5 | 30 | 1.0864 | 0.3441 | 1.0864 | 1.0423 |
| No log | 2.6667 | 32 | 1.2526 | 0.1346 | 1.2526 | 1.1192 |
| No log | 2.8333 | 34 | 1.2017 | 0.1472 | 1.2017 | 1.0962 |
| No log | 3.0 | 36 | 1.3830 | 0.1552 | 1.3830 | 1.1760 |
| No log | 3.1667 | 38 | 1.3496 | 0.1552 | 1.3496 | 1.1617 |
| No log | 3.3333 | 40 | 1.1004 | 0.2386 | 1.1004 | 1.0490 |
| No log | 3.5 | 42 | 1.0507 | 0.2938 | 1.0507 | 1.0250 |
| No log | 3.6667 | 44 | 1.0208 | 0.3095 | 1.0208 | 1.0103 |
| No log | 3.8333 | 46 | 1.0269 | 0.2520 | 1.0269 | 1.0133 |
| No log | 4.0 | 48 | 1.0172 | 0.2556 | 1.0172 | 1.0086 |
| No log | 4.1667 | 50 | 1.1643 | 0.2220 | 1.1643 | 1.0790 |
| No log | 4.3333 | 52 | 1.2629 | 0.2919 | 1.2629 | 1.1238 |
| No log | 4.5 | 54 | 1.0573 | 0.3012 | 1.0573 | 1.0283 |
| No log | 4.6667 | 56 | 0.9597 | 0.3854 | 0.9597 | 0.9796 |
| No log | 4.8333 | 58 | 1.0472 | 0.3978 | 1.0472 | 1.0233 |
| No log | 5.0 | 60 | 0.9315 | 0.4078 | 0.9315 | 0.9651 |
| No log | 5.1667 | 62 | 1.0528 | 0.4640 | 1.0528 | 1.0261 |
| No log | 5.3333 | 64 | 1.3060 | 0.3949 | 1.3060 | 1.1428 |
| No log | 5.5 | 66 | 1.0021 | 0.5185 | 1.0021 | 1.0010 |
| No log | 5.6667 | 68 | 0.8255 | 0.5198 | 0.8255 | 0.9086 |
| No log | 5.8333 | 70 | 0.7848 | 0.5922 | 0.7848 | 0.8859 |
| No log | 6.0 | 72 | 0.8002 | 0.5869 | 0.8002 | 0.8945 |
| No log | 6.1667 | 74 | 0.7996 | 0.5439 | 0.7996 | 0.8942 |
| No log | 6.3333 | 76 | 0.7988 | 0.5621 | 0.7988 | 0.8938 |
| No log | 6.5 | 78 | 0.7880 | 0.5738 | 0.7880 | 0.8877 |
| No log | 6.6667 | 80 | 0.7812 | 0.5951 | 0.7812 | 0.8839 |
| No log | 6.8333 | 82 | 0.7683 | 0.5938 | 0.7683 | 0.8765 |
| No log | 7.0 | 84 | 0.8295 | 0.5933 | 0.8295 | 0.9108 |
| No log | 7.1667 | 86 | 0.9499 | 0.4941 | 0.9499 | 0.9747 |
| No log | 7.3333 | 88 | 0.8851 | 0.5554 | 0.8851 | 0.9408 |
| No log | 7.5 | 90 | 0.7631 | 0.6205 | 0.7631 | 0.8735 |
| No log | 7.6667 | 92 | 0.7434 | 0.5754 | 0.7434 | 0.8622 |
| No log | 7.8333 | 94 | 0.7397 | 0.6053 | 0.7397 | 0.8601 |
| No log | 8.0 | 96 | 0.8409 | 0.5352 | 0.8409 | 0.9170 |
| No log | 8.1667 | 98 | 0.9545 | 0.4739 | 0.9545 | 0.9770 |
| No log | 8.3333 | 100 | 0.9355 | 0.4794 | 0.9355 | 0.9672 |
| No log | 8.5 | 102 | 0.8160 | 0.5476 | 0.8160 | 0.9033 |
| No log | 8.6667 | 104 | 0.8500 | 0.5875 | 0.8500 | 0.9220 |
| No log | 8.8333 | 106 | 0.9036 | 0.5380 | 0.9036 | 0.9506 |
| No log | 9.0 | 108 | 0.9523 | 0.5352 | 0.9523 | 0.9759 |
| No log | 9.1667 | 110 | 1.0154 | 0.4573 | 1.0154 | 1.0076 |
| No log | 9.3333 | 112 | 0.9520 | 0.5441 | 0.9520 | 0.9757 |
| No log | 9.5 | 114 | 1.0822 | 0.4840 | 1.0822 | 1.0403 |
| No log | 9.6667 | 116 | 1.2880 | 0.4246 | 1.2880 | 1.1349 |
| No log | 9.8333 | 118 | 1.2118 | 0.4407 | 1.2118 | 1.1008 |
| No log | 10.0 | 120 | 0.9838 | 0.4372 | 0.9838 | 0.9919 |
| No log | 10.1667 | 122 | 0.9111 | 0.3874 | 0.9111 | 0.9545 |
| No log | 10.3333 | 124 | 0.9725 | 0.3567 | 0.9725 | 0.9862 |
| No log | 10.5 | 126 | 0.9601 | 0.3390 | 0.9601 | 0.9799 |
| No log | 10.6667 | 128 | 0.9199 | 0.3929 | 0.9199 | 0.9591 |
| No log | 10.8333 | 130 | 0.9145 | 0.4866 | 0.9145 | 0.9563 |
| No log | 11.0 | 132 | 0.8731 | 0.5186 | 0.8731 | 0.9344 |
| No log | 11.1667 | 134 | 0.8296 | 0.4313 | 0.8296 | 0.9108 |
| No log | 11.3333 | 136 | 0.8172 | 0.5339 | 0.8172 | 0.9040 |
| No log | 11.5 | 138 | 0.7993 | 0.6404 | 0.7993 | 0.8940 |
| No log | 11.6667 | 140 | 0.7909 | 0.5811 | 0.7909 | 0.8893 |
| No log | 11.8333 | 142 | 0.9133 | 0.4910 | 0.9133 | 0.9557 |
| No log | 12.0 | 144 | 0.9369 | 0.4730 | 0.9369 | 0.9679 |
| No log | 12.1667 | 146 | 0.7750 | 0.5793 | 0.7750 | 0.8803 |
| No log | 12.3333 | 148 | 0.7103 | 0.6257 | 0.7103 | 0.8428 |
| No log | 12.5 | 150 | 0.7033 | 0.6257 | 0.7033 | 0.8386 |
| No log | 12.6667 | 152 | 0.7010 | 0.6431 | 0.7010 | 0.8373 |
| No log | 12.8333 | 154 | 0.7129 | 0.5985 | 0.7129 | 0.8443 |
| No log | 13.0 | 156 | 0.7340 | 0.6057 | 0.7340 | 0.8568 |
| No log | 13.1667 | 158 | 0.7447 | 0.6346 | 0.7447 | 0.8629 |
| No log | 13.3333 | 160 | 0.7520 | 0.6308 | 0.7520 | 0.8672 |
| No log | 13.5 | 162 | 0.7510 | 0.5756 | 0.7510 | 0.8666 |
| No log | 13.6667 | 164 | 0.8431 | 0.5455 | 0.8431 | 0.9182 |
| No log | 13.8333 | 166 | 0.8688 | 0.5133 | 0.8688 | 0.9321 |
| No log | 14.0 | 168 | 0.7929 | 0.4527 | 0.7929 | 0.8905 |
| No log | 14.1667 | 170 | 0.8034 | 0.4482 | 0.8034 | 0.8964 |
| No log | 14.3333 | 172 | 0.8949 | 0.4186 | 0.8949 | 0.9460 |
| No log | 14.5 | 174 | 0.8812 | 0.4440 | 0.8812 | 0.9387 |
| No log | 14.6667 | 176 | 0.8151 | 0.3998 | 0.8151 | 0.9028 |
| No log | 14.8333 | 178 | 0.8773 | 0.4439 | 0.8773 | 0.9367 |
| No log | 15.0 | 180 | 0.8439 | 0.4657 | 0.8439 | 0.9186 |
| No log | 15.1667 | 182 | 0.7823 | 0.4828 | 0.7823 | 0.8845 |
| No log | 15.3333 | 184 | 0.8243 | 0.5287 | 0.8243 | 0.9079 |
| No log | 15.5 | 186 | 0.8045 | 0.4507 | 0.8045 | 0.8969 |
| No log | 15.6667 | 188 | 0.7540 | 0.5528 | 0.7540 | 0.8683 |
| No log | 15.8333 | 190 | 0.7472 | 0.5342 | 0.7472 | 0.8644 |
| No log | 16.0 | 192 | 0.7510 | 0.5120 | 0.7510 | 0.8666 |
| No log | 16.1667 | 194 | 0.7463 | 0.5455 | 0.7463 | 0.8639 |
| No log | 16.3333 | 196 | 0.7549 | 0.5451 | 0.7549 | 0.8689 |
| No log | 16.5 | 198 | 0.7866 | 0.6365 | 0.7866 | 0.8869 |
| No log | 16.6667 | 200 | 0.8376 | 0.6307 | 0.8376 | 0.9152 |
| No log | 16.8333 | 202 | 0.7908 | 0.6019 | 0.7908 | 0.8893 |
| No log | 17.0 | 204 | 0.7629 | 0.6127 | 0.7629 | 0.8734 |
| No log | 17.1667 | 206 | 0.7483 | 0.6251 | 0.7483 | 0.8650 |
| No log | 17.3333 | 208 | 0.7471 | 0.6258 | 0.7471 | 0.8644 |
| No log | 17.5 | 210 | 0.7011 | 0.6328 | 0.7011 | 0.8373 |
| No log | 17.6667 | 212 | 0.7054 | 0.6333 | 0.7054 | 0.8399 |
| No log | 17.8333 | 214 | 0.7037 | 0.6424 | 0.7037 | 0.8388 |
| No log | 18.0 | 216 | 0.7048 | 0.6054 | 0.7048 | 0.8395 |
| No log | 18.1667 | 218 | 0.7926 | 0.5511 | 0.7926 | 0.8903 |
| No log | 18.3333 | 220 | 0.8095 | 0.5511 | 0.8095 | 0.8997 |
| No log | 18.5 | 222 | 0.6910 | 0.6340 | 0.6910 | 0.8313 |
| No log | 18.6667 | 224 | 0.7524 | 0.4929 | 0.7524 | 0.8674 |
| No log | 18.8333 | 226 | 0.9355 | 0.5357 | 0.9355 | 0.9672 |
| No log | 19.0 | 228 | 0.9030 | 0.5222 | 0.9030 | 0.9502 |
| No log | 19.1667 | 230 | 0.7386 | 0.4841 | 0.7386 | 0.8594 |
| No log | 19.3333 | 232 | 0.7347 | 0.4690 | 0.7347 | 0.8571 |
| No log | 19.5 | 234 | 0.8451 | 0.5113 | 0.8451 | 0.9193 |
| No log | 19.6667 | 236 | 0.8783 | 0.5098 | 0.8783 | 0.9372 |
| No log | 19.8333 | 238 | 0.7865 | 0.5071 | 0.7865 | 0.8868 |
| No log | 20.0 | 240 | 0.7216 | 0.5672 | 0.7216 | 0.8495 |
| No log | 20.1667 | 242 | 0.7684 | 0.6022 | 0.7684 | 0.8766 |
| No log | 20.3333 | 244 | 0.7589 | 0.6022 | 0.7589 | 0.8711 |
| No log | 20.5 | 246 | 0.7206 | 0.5672 | 0.7206 | 0.8489 |
| No log | 20.6667 | 248 | 0.7611 | 0.5996 | 0.7611 | 0.8724 |
| No log | 20.8333 | 250 | 0.9018 | 0.5255 | 0.9018 | 0.9496 |
| No log | 21.0 | 252 | 0.9368 | 0.5421 | 0.9368 | 0.9679 |
| No log | 21.1667 | 254 | 0.8421 | 0.5781 | 0.8421 | 0.9177 |
| No log | 21.3333 | 256 | 0.7100 | 0.6089 | 0.7100 | 0.8426 |
| No log | 21.5 | 258 | 0.6912 | 0.5659 | 0.6912 | 0.8314 |
| No log | 21.6667 | 260 | 0.6992 | 0.5811 | 0.6992 | 0.8362 |
| No log | 21.8333 | 262 | 0.6885 | 0.5755 | 0.6885 | 0.8298 |
| No log | 22.0 | 264 | 0.6983 | 0.6217 | 0.6983 | 0.8357 |
| No log | 22.1667 | 266 | 0.7266 | 0.6249 | 0.7266 | 0.8524 |
| No log | 22.3333 | 268 | 0.7422 | 0.6215 | 0.7422 | 0.8615 |
| No log | 22.5 | 270 | 0.7131 | 0.6304 | 0.7131 | 0.8445 |
| No log | 22.6667 | 272 | 0.7016 | 0.5633 | 0.7016 | 0.8376 |
| No log | 22.8333 | 274 | 0.6986 | 0.5633 | 0.6986 | 0.8358 |
| No log | 23.0 | 276 | 0.6998 | 0.6287 | 0.6998 | 0.8365 |
| No log | 23.1667 | 278 | 0.7334 | 0.6453 | 0.7334 | 0.8564 |
| No log | 23.3333 | 280 | 0.7706 | 0.5553 | 0.7706 | 0.8778 |
| No log | 23.5 | 282 | 0.7215 | 0.5921 | 0.7215 | 0.8494 |
| No log | 23.6667 | 284 | 0.6885 | 0.5878 | 0.6885 | 0.8298 |
| No log | 23.8333 | 286 | 0.7252 | 0.5500 | 0.7252 | 0.8516 |
| No log | 24.0 | 288 | 0.7914 | 0.5642 | 0.7914 | 0.8896 |
| No log | 24.1667 | 290 | 0.7686 | 0.5642 | 0.7686 | 0.8767 |
| No log | 24.3333 | 292 | 0.6855 | 0.5993 | 0.6855 | 0.8280 |
| No log | 24.5 | 294 | 0.7817 | 0.6151 | 0.7817 | 0.8842 |
| No log | 24.6667 | 296 | 0.9145 | 0.5433 | 0.9145 | 0.9563 |
| No log | 24.8333 | 298 | 0.9259 | 0.5781 | 0.9259 | 0.9622 |
| No log | 25.0 | 300 | 0.8333 | 0.5724 | 0.8333 | 0.9128 |
| No log | 25.1667 | 302 | 0.7283 | 0.6195 | 0.7283 | 0.8534 |
| No log | 25.3333 | 304 | 0.7126 | 0.6404 | 0.7126 | 0.8442 |
| No log | 25.5 | 306 | 0.7032 | 0.6196 | 0.7032 | 0.8386 |
| No log | 25.6667 | 308 | 0.7390 | 0.6300 | 0.7390 | 0.8596 |
| No log | 25.8333 | 310 | 0.7991 | 0.5412 | 0.7991 | 0.8939 |
| No log | 26.0 | 312 | 0.7602 | 0.6029 | 0.7602 | 0.8719 |
| No log | 26.1667 | 314 | 0.7510 | 0.6029 | 0.7510 | 0.8666 |
| No log | 26.3333 | 316 | 0.7593 | 0.6059 | 0.7593 | 0.8714 |
| No log | 26.5 | 318 | 0.7284 | 0.6089 | 0.7284 | 0.8535 |
| No log | 26.6667 | 320 | 0.7105 | 0.5611 | 0.7105 | 0.8429 |
| No log | 26.8333 | 322 | 0.7046 | 0.5408 | 0.7046 | 0.8394 |
| No log | 27.0 | 324 | 0.7000 | 0.5089 | 0.7000 | 0.8367 |
| No log | 27.1667 | 326 | 0.7075 | 0.5220 | 0.7075 | 0.8412 |
| No log | 27.3333 | 328 | 0.7716 | 0.5637 | 0.7716 | 0.8784 |
| No log | 27.5 | 330 | 0.7961 | 0.5614 | 0.7961 | 0.8922 |
| No log | 27.6667 | 332 | 0.7734 | 0.5637 | 0.7734 | 0.8794 |
| No log | 27.8333 | 334 | 0.6983 | 0.6059 | 0.6983 | 0.8357 |
| No log | 28.0 | 336 | 0.6600 | 0.6230 | 0.6600 | 0.8124 |
| No log | 28.1667 | 338 | 0.6497 | 0.6288 | 0.6497 | 0.8060 |
| No log | 28.3333 | 340 | 0.6472 | 0.6423 | 0.6472 | 0.8045 |
| No log | 28.5 | 342 | 0.6455 | 0.6368 | 0.6455 | 0.8035 |
| No log | 28.6667 | 344 | 0.6586 | 0.6287 | 0.6586 | 0.8116 |
| No log | 28.8333 | 346 | 0.6878 | 0.6453 | 0.6878 | 0.8294 |
| No log | 29.0 | 348 | 0.6957 | 0.6453 | 0.6957 | 0.8341 |
| No log | 29.1667 | 350 | 0.6659 | 0.6565 | 0.6659 | 0.8160 |
| No log | 29.3333 | 352 | 0.6675 | 0.5382 | 0.6675 | 0.8170 |
| No log | 29.5 | 354 | 0.6825 | 0.5526 | 0.6825 | 0.8261 |
| No log | 29.6667 | 356 | 0.6808 | 0.5200 | 0.6808 | 0.8251 |
| No log | 29.8333 | 358 | 0.6605 | 0.6199 | 0.6605 | 0.8127 |
| No log | 30.0 | 360 | 0.6561 | 0.6106 | 0.6561 | 0.8100 |
| No log | 30.1667 | 362 | 0.6606 | 0.6106 | 0.6606 | 0.8128 |
| No log | 30.3333 | 364 | 0.6712 | 0.5729 | 0.6712 | 0.8193 |
| No log | 30.5 | 366 | 0.6861 | 0.5790 | 0.6861 | 0.8283 |
| No log | 30.6667 | 368 | 0.7004 | 0.5479 | 0.7004 | 0.8369 |
| No log | 30.8333 | 370 | 0.7132 | 0.5223 | 0.7132 | 0.8445 |
| No log | 31.0 | 372 | 0.7264 | 0.5223 | 0.7264 | 0.8523 |
| No log | 31.1667 | 374 | 0.7458 | 0.5184 | 0.7458 | 0.8636 |
| No log | 31.3333 | 376 | 0.7568 | 0.6385 | 0.7568 | 0.8700 |
| No log | 31.5 | 378 | 0.7343 | 0.6385 | 0.7343 | 0.8569 |
| No log | 31.6667 | 380 | 0.7041 | 0.5671 | 0.7041 | 0.8391 |
| No log | 31.8333 | 382 | 0.6861 | 0.6313 | 0.6861 | 0.8283 |
| No log | 32.0 | 384 | 0.6847 | 0.6139 | 0.6847 | 0.8274 |
| No log | 32.1667 | 386 | 0.6839 | 0.6525 | 0.6839 | 0.8270 |
| No log | 32.3333 | 388 | 0.6951 | 0.6388 | 0.6951 | 0.8337 |
| No log | 32.5 | 390 | 0.7279 | 0.6280 | 0.7279 | 0.8532 |
| No log | 32.6667 | 392 | 0.7729 | 0.5661 | 0.7729 | 0.8792 |
| No log | 32.8333 | 394 | 0.7995 | 0.5173 | 0.7995 | 0.8942 |
| No log        | 33.0    | 396  | 0.7668          | 0.5400  | 0.7668 | 0.8757 |
| No log | 33.1667 | 398 | 0.6988 | 0.6044 | 0.6988 | 0.8359 |
| No log | 33.3333 | 400 | 0.6673 | 0.6251 | 0.6673 | 0.8169 |
| No log | 33.5 | 402 | 0.6704 | 0.6304 | 0.6704 | 0.8188 |
| No log | 33.6667 | 404 | 0.6688 | 0.6304 | 0.6688 | 0.8178 |
| No log | 33.8333 | 406 | 0.6695 | 0.6304 | 0.6695 | 0.8183 |
| No log | 34.0 | 408 | 0.6812 | 0.6404 | 0.6812 | 0.8254 |
| No log | 34.1667 | 410 | 0.7436 | 0.5661 | 0.7436 | 0.8623 |
| No log | 34.3333 | 412 | 0.8091 | 0.5553 | 0.8091 | 0.8995 |
| No log | 34.5 | 414 | 0.8659 | 0.5342 | 0.8659 | 0.9305 |
| No log | 34.6667 | 416 | 0.9102 | 0.5055 | 0.9102 | 0.9540 |
| No log | 34.8333 | 418 | 0.9021 | 0.5055 | 0.9021 | 0.9498 |
| No log | 35.0 | 420 | 0.8536 | 0.5113 | 0.8536 | 0.9239 |
| No log        | 35.1667 | 422  | 0.7655          | 0.5400  | 0.7655 | 0.8749 |
| No log | 35.3333 | 424 | 0.7265 | 0.5650 | 0.7265 | 0.8523 |
| No log | 35.5 | 426 | 0.7177 | 0.5823 | 0.7177 | 0.8471 |
| No log | 35.6667 | 428 | 0.7444 | 0.6174 | 0.7444 | 0.8628 |
| No log | 35.8333 | 430 | 0.7524 | 0.6142 | 0.7524 | 0.8674 |
| No log | 36.0 | 432 | 0.7354 | 0.6417 | 0.7354 | 0.8575 |
| No log | 36.1667 | 434 | 0.6966 | 0.6817 | 0.6966 | 0.8346 |
| No log | 36.3333 | 436 | 0.6807 | 0.7143 | 0.6807 | 0.8250 |
| No log | 36.5 | 438 | 0.6829 | 0.6117 | 0.6829 | 0.8264 |
| No log | 36.6667 | 440 | 0.6907 | 0.5656 | 0.6907 | 0.8311 |
| No log | 36.8333 | 442 | 0.6719 | 0.6214 | 0.6719 | 0.8197 |
| No log | 37.0 | 444 | 0.6665 | 0.5891 | 0.6665 | 0.8164 |
| No log | 37.1667 | 446 | 0.6744 | 0.5905 | 0.6744 | 0.8212 |
| No log | 37.3333 | 448 | 0.6834 | 0.5810 | 0.6834 | 0.8267 |
| No log | 37.5 | 450 | 0.6788 | 0.5841 | 0.6788 | 0.8239 |
| No log | 37.6667 | 452 | 0.6849 | 0.5810 | 0.6849 | 0.8276 |
| No log | 37.8333 | 454 | 0.6899 | 0.6041 | 0.6899 | 0.8306 |
| No log | 38.0 | 456 | 0.6885 | 0.6218 | 0.6885 | 0.8298 |
| No log | 38.1667 | 458 | 0.7003 | 0.5870 | 0.7003 | 0.8368 |
| No log | 38.3333 | 460 | 0.7056 | 0.5870 | 0.7056 | 0.8400 |
| No log | 38.5 | 462 | 0.7045 | 0.5507 | 0.7045 | 0.8394 |
| No log | 38.6667 | 464 | 0.7228 | 0.6142 | 0.7228 | 0.8502 |
| No log | 38.8333 | 466 | 0.7300 | 0.6100 | 0.7300 | 0.8544 |
| No log | 39.0 | 468 | 0.7251 | 0.6280 | 0.7251 | 0.8515 |
| No log | 39.1667 | 470 | 0.7223 | 0.6350 | 0.7223 | 0.8499 |
| No log | 39.3333 | 472 | 0.7501 | 0.6350 | 0.7501 | 0.8661 |
| No log | 39.5 | 474 | 0.7347 | 0.6350 | 0.7347 | 0.8572 |
| No log | 39.6667 | 476 | 0.7043 | 0.6319 | 0.7043 | 0.8392 |
| No log | 39.8333 | 478 | 0.6982 | 0.6247 | 0.6982 | 0.8356 |
| No log | 40.0 | 480 | 0.7052 | 0.5922 | 0.7052 | 0.8397 |
| No log | 40.1667 | 482 | 0.7281 | 0.6228 | 0.7281 | 0.8533 |
| No log | 40.3333 | 484 | 0.7450 | 0.6066 | 0.7450 | 0.8631 |
| No log | 40.5 | 486 | 0.7672 | 0.6208 | 0.7672 | 0.8759 |
| No log | 40.6667 | 488 | 0.7652 | 0.6385 | 0.7652 | 0.8747 |
| No log | 40.8333 | 490 | 0.7740 | 0.6315 | 0.7740 | 0.8798 |
| No log | 41.0 | 492 | 0.7715 | 0.6131 | 0.7715 | 0.8784 |
| No log | 41.1667 | 494 | 0.7712 | 0.6131 | 0.7712 | 0.8782 |
| No log | 41.3333 | 496 | 0.7435 | 0.6266 | 0.7435 | 0.8623 |
| No log | 41.5 | 498 | 0.7197 | 0.6300 | 0.7197 | 0.8484 |
| 0.2693 | 41.6667 | 500 | 0.7229 | 0.6266 | 0.7229 | 0.8502 |
| 0.2693 | 41.8333 | 502 | 0.7136 | 0.6300 | 0.7136 | 0.8448 |
| 0.2693 | 42.0 | 504 | 0.7031 | 0.6385 | 0.7031 | 0.8385 |
| 0.2693 | 42.1667 | 506 | 0.7149 | 0.6350 | 0.7149 | 0.8455 |
| 0.2693 | 42.3333 | 508 | 0.7430 | 0.6350 | 0.7430 | 0.8619 |
| 0.2693 | 42.5 | 510 | 0.7325 | 0.6350 | 0.7325 | 0.8559 |
| 0.2693 | 42.6667 | 512 | 0.7181 | 0.6350 | 0.7181 | 0.8474 |
| 0.2693 | 42.8333 | 514 | 0.7293 | 0.6350 | 0.7293 | 0.8540 |
| 0.2693 | 43.0 | 516 | 0.7817 | 0.5954 | 0.7817 | 0.8841 |
| 0.2693 | 43.1667 | 518 | 0.8091 | 0.5660 | 0.8091 | 0.8995 |
| 0.2693 | 43.3333 | 520 | 0.7826 | 0.5954 | 0.7826 | 0.8847 |
| 0.2693 | 43.5 | 522 | 0.7225 | 0.6350 | 0.7225 | 0.8500 |
| 0.2693 | 43.6667 | 524 | 0.6860 | 0.6385 | 0.6860 | 0.8283 |
| 0.2693 | 43.8333 | 526 | 0.6599 | 0.6887 | 0.6599 | 0.8124 |
| 0.2693 | 44.0 | 528 | 0.6519 | 0.6487 | 0.6519 | 0.8074 |
| 0.2693 | 44.1667 | 530 | 0.6512 | 0.6735 | 0.6512 | 0.8070 |
| 0.2693 | 44.3333 | 532 | 0.6512 | 0.5978 | 0.6512 | 0.8070 |
| 0.2693 | 44.5 | 534 | 0.6637 | 0.6429 | 0.6637 | 0.8147 |
| 0.2693 | 44.6667 | 536 | 0.6786 | 0.6441 | 0.6786 | 0.8238 |
| 0.2693 | 44.8333 | 538 | 0.7086 | 0.6100 | 0.7086 | 0.8418 |
| 0.2693 | 45.0 | 540 | 0.7490 | 0.5962 | 0.7490 | 0.8655 |
| 0.2693 | 45.1667 | 542 | 0.7664 | 0.5962 | 0.7664 | 0.8754 |
| 0.2693 | 45.3333 | 544 | 0.7289 | 0.6100 | 0.7289 | 0.8538 |
| 0.2693 | 45.5 | 546 | 0.6935 | 0.6208 | 0.6935 | 0.8328 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Misted-v2-7B-GGUF | mradermacher | "2024-12-30T15:07:51Z" | 69 | 0 | transformers | [
"transformers",
"gguf",
"code",
"mistral",
"merge",
"slerp",
"en",
"es",
"base_model:Walmart-the-bag/Misted-v2-7B",
"base_model:quantized:Walmart-the-bag/Misted-v2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-30T14:29:44Z" | ---
base_model: Walmart-the-bag/Misted-v2-7B
language:
- en
- es
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- mistral
- merge
- slerp
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Walmart-the-bag/Misted-v2-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Misted-v2-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
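For a quick start, a single quant can also be fetched programmatically with `huggingface_hub` (a sketch; the filename matches the Q4_K_M row in the table below):
```python
from huggingface_hub import hf_hub_download

# Pick any filename from the "Provided Quants" table below.
path = hf_hub_download(
    repo_id="mradermacher/Misted-v2-7B-GGUF",
    filename="Misted-v2-7B.Q4_K_M.gguf",
)
print(path)  # local path to hand to your GGUF runtime (e.g. llama.cpp)
```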
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Misted-v2-7B-GGUF/resolve/main/Misted-v2-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ssmits/Falcon2-5.5B-multilingual-embed-base | ssmits | "2024-06-10T13:48:31Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"falcon",
"ssmits/Falcon2-5.5B-multilingual",
"text-classification",
"custom_code",
"es",
"fr",
"de",
"no",
"sv",
"da",
"nl",
"pt",
"pl",
"ro",
"it",
"cs",
"base_model:ssmits/Falcon2-5.5B-multilingual",
"base_model:finetune:ssmits/Falcon2-5.5B-multilingual",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-06-08T18:39:16Z" | ---
base_model:
- ssmits/Falcon2-5.5B-multilingual
library_name: sentence-transformers
tags:
- ssmits/Falcon2-5.5B-multilingual
license: apache-2.0
language:
- es
- fr
- de
- 'no'
- sv
- da
- nl
- pt
- pl
- ro
- it
- cs
pipeline_tag: text-classification
---
## Usage
Embeddings version of the base model [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual).
The 'lm_head' layer of this model has been removed, which means it can be used for embeddings. It will not perform well out of the box: as [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) shows for a pruned model like this, it needs further fine-tuning.
Additionally, instead of a normalization layer, the hidden layers are followed by both a classical weight and a bias 1-dimensional array of 4096 values.
The basic Sentence-Transformers implementation works correctly. This implies that more sophisticated embedding techniques, such as adding a custom classification head, will work correctly as well.
## Inference (sentence-transformers)
```python
from sentence_transformers import SentenceTransformer
import torch
# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Falcon2-5.5B-multilingual-embed-base") # device = "cpu" when <= 24 GB VRAM
# The sentences to encode
sentences = [
"The weather is lovely today.",
"It's so sunny outside!",
"He drove to the stadium.",
]
# 2. Calculate embeddings by calling model.encode()
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 4096)
# 3. Calculate the embedding similarities
# Using torch to compute cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# tensor([[1.0000, 0.7120, 0.5937],
# [0.7120, 1.0000, 0.5925],
# [0.5937, 0.5925, 1.0000]])
```
Note: In my tests it utilizes more than 24GB (RTX 4090), so an A100 or A6000 would be required for inference.
## Inference (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base')
model = AutoModel.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base') # device = "cpu" when <= 24 GB VRAM
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
### How to enable Multi-GPU
```python
from transformers import AutoModel
from torch.nn import DataParallel
model = AutoModel.from_pretrained("ssmits/Falcon2-5.5B-multilingual-embed-base")
for module_key, module in model._modules.items():
model._modules[module_key] = DataParallel(module)
``` |
nfliu/deberta-v3-large_boolq | nfliu | "2023-09-08T05:40:57Z" | 209,595 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:boolq",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-07T05:55:24Z" | ---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
datasets:
- boolq
metrics:
- accuracy
model-index:
- name: deberta-v3-large_boolq
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: boolq
type: boolq
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8834862385321101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large_boolq
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Accuracy: 0.8835
## Model description
More information needed
## Example
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nfliu/deberta-v3-large_boolq")
tokenizer = AutoTokenizer.from_pretrained("nfliu/deberta-v3-large_boolq")
# Each example is a (question, context) pair.
examples = [
("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
("Water is wet", "Contrary to popular belief, water is not wet.")
]
encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
model_output = model(**encoded_input)
probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()
probability_no = [round(prob[0], 2) for prob in probabilities]
probability_yes = [round(prob[1], 2) for prob in probabilities]
for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
print(f"Question: {example[0]}")
print(f"Context: {example[1]}")
print(f"p(No | question, context): {p_no}")
print(f"p(Yes | question, context): {p_yes}")
print()
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.85 | 250 | 0.5306 | 0.8823 |
| 0.1151 | 1.69 | 500 | 0.4601 | 0.8835 |
| 0.1151 | 2.54 | 750 | 0.5897 | 0.8792 |
| 0.0656 | 3.39 | 1000 | 0.6477 | 0.8804 |
| 0.0656 | 4.24 | 1250 | 0.6847 | 0.8838 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en | vocabtrimmer | "2023-05-21T13:16:57Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-21T13:14:24Z" | # `vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en`
This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-en-50000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 68.51 | 68.51 | 68.51 | 67.26 | 68.51 | 68.63 | 68.51 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en/raw/main/eval.json). |
MISHANM/Qwen-QwQ-32B.gguf | MISHANM | "2025-03-13T13:15:40Z" | 0 | 0 | transformers | [
"transformers",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"endpoints_compatible",
"region:us"
] | null | "2025-03-13T12:23:47Z" | ---
base_model:
- Qwen/QwQ-32B
library_name: transformers
---
# MISHANM/Qwen-QwQ-32B.gguf
This model is a GGUF version of the Qwen/QwQ-32B model, designed to work smoothly with the llama.cpp framework. It is built to run efficiently on CPU systems and has been tested on the AMD EPYC™ 9755 processor. The model handles a wide range of natural language processing tasks, combining fast text processing with strong reasoning, which lets it manage difficult language challenges effectively.
## Model Details
1. Language: English
2. Tasks: Text generation
3. Base Model: Qwen/QwQ-32B
## Building and Running the Model
To build and run the model using `llama.cpp`, follow these steps:
### Model
Steps to Download the Model:
1. Go to the "Files and Versions" section.
2. Click on the model.
3. Copy the download link.
4. Create a directory (e.g., for Linux: mkdir Qwen32B).
5. Navigate to that directory (cd Qwen32B).
6. Download both model parts: Qwen-QwQ-32B.gguf.part_01 and Qwen-QwQ-32B.gguf.part_02 (e.g., using wget with the copied link).
After downloading the model parts, use the following command to combine them into a complete model:
```
cat Qwen-QwQ-32B.gguf.part_01 Qwen-QwQ-32B.gguf.part_02 > Qwen-QwQ-32B.gguf
```
### Build llama.cpp Locally
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
## Run the Model
Navigate to the build directory and run the model with a prompt:
```
cd llama.cpp/build/bin
```
## Inference with llama.cpp
```
./llama-cli -m /path/to/model/ -p "Your prompt here" -n 128 --ctx-size 8192 --temp 0.6 --seed 3407
```
## Citation Information
```
@misc{MISHANM/Qwen-QwQ-32B.gguf,
author = {Mishan Maurya},
title = {Introducing Qwen QwQ-32B GGUF Model},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
}
``` |
nidek/q-FrozenLake-v1-4x4-noSlippery | nidek | "2022-12-14T14:13:58Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-14T14:13:53Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is sketched below; it fetches and unpickles the saved Q-table.
model = load_from_hub(repo_id="nidek/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
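For reference, a minimal `load_from_hub` helper in the style of the Deep RL course (a sketch; it assumes the Q-table was pushed as a pickled dictionary, as the filename above suggests):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```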
|
kazuma313/lora_model_dokter_consultasi_q4_k_m | kazuma313 | "2024-06-04T09:59:17Z" | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T06:18:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** kazuma313
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
- **Dataset from:** hermanshid/doctor-id-qa
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
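Since the weights ship as a 4-bit GGUF file (Q4_K_M), they can presumably be run with a GGUF runtime such as `llama-cpp-python` (a sketch; the local filename is an assumption):
```python
from llama_cpp import Llama

# Filename is an assumption; check this repo's file list for the exact GGUF name.
llm = Llama(model_path="lora_model_dokter_consultasi_q4_k_m.gguf", n_ctx=2048)
out = llm("Apa saja gejala umum demam berdarah?", max_tokens=128)
print(out["choices"][0]["text"])
```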
|
gremlin97/remote_sensing_gpt_expt2 | gremlin97 | "2024-04-17T06:19:45Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-1b1",
"base_model:adapter:bigscience/bloom-1b1",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-04-17T02:17:40Z" | ---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-1b1
model-index:
- name: remote_sensing_gpt_expt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# remote_sensing_gpt_expt2
This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5338
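Since this repo contains a PEFT adapter for `bigscience/bloom-1b1`, it can presumably be loaded with the standard PEFT API (a sketch; the prompt and generation settings are assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b1")
model = PeftModel.from_pretrained(base, "gremlin97/remote_sensing_gpt_expt2")

inputs = tokenizer("Remote sensing is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```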
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.661 | 1.0 | 938 | 3.5622 |
| 3.5192 | 2.0 | 1876 | 3.5396 |
| 3.4909 | 3.0 | 2814 | 3.5338 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
AndreiUrsu/TweetRobertaNewDataset | AndreiUrsu | "2024-05-03T16:59:59Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:AndreiUrsu/TweetRoberta_5epochs",
"base_model:finetune:AndreiUrsu/TweetRoberta_5epochs",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-03T16:47:22Z" | ---
base_model: AndreiUrsu/TweetRoberta_5epochs
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TweetRobertaNewDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetRobertaNewDataset
This model is a fine-tuned version of [AndreiUrsu/TweetRoberta_5epochs](https://huggingface.co/AndreiUrsu/TweetRoberta_5epochs) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0 | 1.0 | 1000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 2.0 | 2000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 3.0 | 3000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 4.0 | 4000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 5.0 | 5000 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
KatarLegacy/pinkbunny | KatarLegacy | "2023-07-31T15:57:12Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-07-31T15:56:30Z" | ---
license: creativeml-openrail-m
---
|
cackerman/ft_randalias_0to31_interleaved_both10alt7_orthrand44_mult1 | cackerman | "2025-03-21T17:22:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-21T17:14:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hkivancoral/smids_3x_beit_base_adamax_0001_fold4 | hkivancoral | "2023-12-13T07:37:52Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T07:06:28Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_adamax_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_adamax_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2272
- Accuracy: 0.8733
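A minimal inference sketch using the `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="hkivancoral/smids_3x_beit_base_adamax_0001_fold4")
print(clf("path/to/image.png"))  # placeholder image path
```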
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3102 | 1.0 | 225 | 0.3679 | 0.8633 |
| 0.1053 | 2.0 | 450 | 0.3867 | 0.87 |
| 0.0809 | 3.0 | 675 | 0.4978 | 0.8617 |
| 0.1418 | 4.0 | 900 | 0.5585 | 0.8717 |
| 0.0152 | 5.0 | 1125 | 0.6419 | 0.885 |
| 0.0232 | 6.0 | 1350 | 0.6902 | 0.8717 |
| 0.0119 | 7.0 | 1575 | 0.8503 | 0.8633 |
| 0.0116 | 8.0 | 1800 | 0.8413 | 0.8667 |
| 0.0484 | 9.0 | 2025 | 0.9018 | 0.8683 |
| 0.0101 | 10.0 | 2250 | 0.9930 | 0.855 |
| 0.0039 | 11.0 | 2475 | 1.0769 | 0.8733 |
| 0.0004 | 12.0 | 2700 | 1.0602 | 0.8717 |
| 0.0292 | 13.0 | 2925 | 1.1584 | 0.875 |
| 0.0029 | 14.0 | 3150 | 1.2271 | 0.8583 |
| 0.01 | 15.0 | 3375 | 1.1632 | 0.8733 |
| 0.0001 | 16.0 | 3600 | 1.1832 | 0.8633 |
| 0.0 | 17.0 | 3825 | 1.2281 | 0.86 |
| 0.004 | 18.0 | 4050 | 1.0844 | 0.8783 |
| 0.0003 | 19.0 | 4275 | 1.2463 | 0.8683 |
| 0.0112 | 20.0 | 4500 | 1.2122 | 0.8733 |
| 0.0013 | 21.0 | 4725 | 1.2444 | 0.8617 |
| 0.0002 | 22.0 | 4950 | 1.2159 | 0.86 |
| 0.0002 | 23.0 | 5175 | 1.2215 | 0.8667 |
| 0.0 | 24.0 | 5400 | 1.2014 | 0.8733 |
| 0.0007 | 25.0 | 5625 | 1.1844 | 0.875 |
| 0.0 | 26.0 | 5850 | 1.3054 | 0.8683 |
| 0.0 | 27.0 | 6075 | 1.3588 | 0.8583 |
| 0.0332 | 28.0 | 6300 | 1.2029 | 0.875 |
| 0.0 | 29.0 | 6525 | 1.2414 | 0.87 |
| 0.0001 | 30.0 | 6750 | 1.2400 | 0.8783 |
| 0.0005 | 31.0 | 6975 | 1.1861 | 0.87 |
| 0.0 | 32.0 | 7200 | 1.1528 | 0.8767 |
| 0.0 | 33.0 | 7425 | 1.2071 | 0.8783 |
| 0.0002 | 34.0 | 7650 | 1.2652 | 0.875 |
| 0.0 | 35.0 | 7875 | 1.2647 | 0.8783 |
| 0.0 | 36.0 | 8100 | 1.3389 | 0.865 |
| 0.0 | 37.0 | 8325 | 1.3158 | 0.8683 |
| 0.0 | 38.0 | 8550 | 1.2845 | 0.8717 |
| 0.0 | 39.0 | 8775 | 1.2211 | 0.8783 |
| 0.0383 | 40.0 | 9000 | 1.3005 | 0.865 |
| 0.0001 | 41.0 | 9225 | 1.3129 | 0.8567 |
| 0.0025 | 42.0 | 9450 | 1.2924 | 0.865 |
| 0.0 | 43.0 | 9675 | 1.2393 | 0.8667 |
| 0.0021 | 44.0 | 9900 | 1.2861 | 0.87 |
| 0.0 | 45.0 | 10125 | 1.2626 | 0.8717 |
| 0.0 | 46.0 | 10350 | 1.2383 | 0.8733 |
| 0.0 | 47.0 | 10575 | 1.2652 | 0.8717 |
| 0.0 | 48.0 | 10800 | 1.2466 | 0.8733 |
| 0.0 | 49.0 | 11025 | 1.2259 | 0.875 |
| 0.0 | 50.0 | 11250 | 1.2272 | 0.8733 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
baru98/distilbert-base-uncased-finetuned-squad | baru98 | "2022-06-03T13:54:01Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-06-03T11:00:56Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1274
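A minimal inference sketch using the `question-answering` pipeline (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="baru98/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```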
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2393 | 1.0 | 5475 | 1.1570 |
| 0.9651 | 2.0 | 10950 | 1.0903 |
| 0.7513 | 3.0 | 16425 | 1.1274 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|