| Column | Type | Range |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-13 12:28:20 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 518 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-13 12:26:25 |
| card | string | lengths 11 – 1.01M |
trinhxuankhai/external_vehicle_appearance | trinhxuankhai | 2024-03-27T14:00:15Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"base_model:adapter:Qwen/Qwen-VL-Chat",
"region:us"
]
| null | 2024-03-27T13:59:41Z | ---
library_name: peft
base_model: Qwen/Qwen-VL-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
xverse/XVERSE-65B-Chat-GPTQ-Int4 | xverse | 2024-03-27T13:59:19Z | 5 | 1 | transformers | [
"transformers",
"xverse",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-03-25T13:13:16Z | ---
license: apache-2.0
inference: false
---
# XVERSE-65B-Chat-GPTQ-Int4
## Update Information
- **[2024/03/25]** Released the XVERSE-65B-Chat-GPTQ-Int4 quantized model, supporting vLLM inference for the quantized XVERSE-65B-Chat model.
- **[2023/12/08]** Released the **XVERSE-65B-2** base model. This model builds upon its predecessor through **Continual Pre-Training**, reaching a total training volume of **3.2** trillion tokens. It exhibits enhancements in all capabilities, particularly in mathematics and coding skills, with a **20%** improvement on the GSM8K benchmark and a **41%** increase on HumanEval.
- **[2023/11/29]** Update model architecture and additional pre-training data information.
- **[2023/11/24]** Update the related information of the pre-training data.
- **[2023/11/06]** Released the XVERSE-65B base model.
## Model Introduction
**XVERSE-65B** is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology. The model released this time is the base model **XVERSE-65B**. Its key features are as follows:
- **Model Structure**: XVERSE-65B uses the mainstream Decoder-only Transformer network structure and supports a 16K context length, which meets the needs of longer multi-turn dialogues, knowledge question answering, and summarization, making the model more versatile across application scenarios.
- **Training Data**: The model was thoroughly trained on a diversified, high-quality dataset of 2.6 trillion tokens covering more than 40 languages, including Chinese, English, Russian, and Spanish. The sampling ratios of the different data types were carefully tuned so that performance in Chinese and English is excellent while other languages remain well covered.
- **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 was trained on hundreds of gigabytes of language data; it supports multiple languages without additional vocabulary expansion.
- **Training Framework**: Training uses FlashAttention2 for accelerated computation, and virtual pipeline technology is applied on top of 3D parallelism to reduce the excessive bubble rate caused by longer pipelines and the 16K context window, reaching industry-leading peak compute utilization on a thousand-GPU cluster. Concurrently, continuous optimization of cluster infrastructure operations, resource scheduling, the training framework, and the scheduling platform has produced a highly stable, low-interruption, fault-tolerant training system, raising the effective weekly training rate to 98.6%.
The model size, architecture, and learning rate of **XVERSE-65B** are shown below (a rough parameter-count sanity check follows the table):
| params | d_model | n_heads | n_layers | d_ff | learning rate |
|:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
| 65B | 8192 | 64 | 80 | 22016 | 1.5e−4 |
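As a rough sanity check of the table above, the implied parameter count can be estimated in a few lines of Python. This is only a back-of-the-envelope sketch: it assumes a SwiGLU-style feed-forward block (three d_model × d_ff projection matrices) and untied input/output embeddings over the 100,534-token vocabulary mentioned above, neither of which is confirmed by this card.

```python
d_model, n_layers, d_ff = 8192, 80, 22016
vocab_size = 100_534  # tokenizer vocabulary size from the card

# Attention: Q, K, V and output projections, each d_model x d_model.
attn_params = 4 * d_model * d_model
# Feed-forward: assuming a SwiGLU-style block with three d_model x d_ff matrices.
ffn_params = 3 * d_model * d_ff
# Assuming untied input embedding and output head.
embedding_params = 2 * vocab_size * d_model

total = n_layers * (attn_params + ffn_params) + embedding_params
print(f"~{total / 1e9:.1f}B parameters")  # ~66B, consistent with the advertised 65B scale
```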
## Environment Setup
We advise you to clone [`vllm`](https://github.com/vllm-project/vllm.git) and install it following the official guide.
## Usage
We demonstrate how to use `vllm` to run the XVERSE-65B-Chat-GPTQ-Int4 quantized model:
```python
from vllm import LLM, SamplingParams
model_dir = "xverse/XVERSE-65B-Chat-GPTQ-Int4/"
# Create an LLM.
llm = LLM(model_dir,
trust_remote_code=True)
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.5, top_p=0.85, max_tokens=2048, repetition_penalty=1.1)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
prompts = ["Human: 请你写一篇关于环保的文章,题材是从个人做起。\n\nAssistant: ",]
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Generated text:\n{generated_text}")
```
## Limitations and Disclaimer
Like all other Large Language Models (LLMs), XVERSE-65B may produce inaccurate, biased, or otherwise offensive content under certain circumstances. Therefore, please use the content generated by the model with caution and refrain from disseminating harmful content. Before deploying any application of XVERSE-65B, developers should conduct safety tests and optimization of the model according to its specific application.
We strongly warn against the use of the XVERSE-65B model for producing or spreading harmful information, or conducting any activities that might harm the public, national, or social security, or violate regulations. We assume no responsibility for any problems arising from the use of the XVERSE-65B model, whether it be data security issues, public opinion risks, or any risks and issues caused by misunderstanding, misuse, dissemination, or non-compliance with the model.
## Open Source License
The use of the source code in this repository must follow the [Apache-2.0](https://github.com/xverse-ai/XVERSE-65B/blob/main/LICENSE) open-source license, while the use of the model weights of XVERSE-65B needs to adhere to the [Model License Agreement](https://github.com/xverse-ai/XVERSE-65B/blob/main/MODEL_LICENSE.pdf).
The XVERSE-65B model weights are **fully open** to academic research and support **free commercial use**. To apply for a commercial license, please fill in the [application form](https://chat.xverse.cn/home/business.html). For other questions or collaborations, please contact <[email protected]>.
|
mpasila/JP-EN-Translator-1K-steps-7B-merged | mpasila | 2024-03-27T13:57:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:NilanE/ParallelFiction-Ja_En-100k",
"dataset:mpasila/ParallelFiction-Ja_En-100k-alpaca",
"base_model:augmxnt/shisa-base-7b-v1",
"base_model:finetune:augmxnt/shisa-base-7b-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T13:19:23Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: augmxnt/shisa-base-7b-v1
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ParallelFiction-Ja_En-100k-alpaca
---
This is an experimental model, so it may not perform that well. The dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).
The next version should be better (I'll use a GPU with more memory, since the dataset contains pretty long samples).
### Prompt format: Alpaca
```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
```
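As a rough illustration, the template above can be filled in and passed to the merged model with 🤗 Transformers. This is only a hedged sketch: the example instruction and input are made up, the generation settings are arbitrary, and it assumes the merged checkpoint loads cleanly with `AutoModelForCausalLM` (none of this is documented in the card).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/JP-EN-Translator-1K-steps-7B-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires `accelerate`

# The Alpaca-style template shown above.
alpaca_prompt = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{}\n"
    "### Input:\n{}\n"
    "### Response:\n{}"
)
prompt = alpaca_prompt.format(
    "Translate the following Japanese text into English.",  # hypothetical instruction
    "猫が窓の外を見ている。",  # hypothetical input
    "",  # leave the response empty so the model completes it
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```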
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** augmxnt/shisa-base-7b-v1
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
SungJinD/ppo-Huggy2 | SungJinD | 2024-03-27T13:53:06Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2024-03-27T13:53:01Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SungJinD/ppo-Huggy2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
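Outside the browser, the exported policy can also be downloaded and inspected locally. The sketch below is speculative: it assumes the repository contains an ONNX export named `Huggy.onnx` (the usual layout for ML-Agents pushes) and uses `huggingface_hub` together with `onnxruntime`; the filename and file format are assumptions, not documented facts about this repo.

```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort

# Download the exported policy network (the filename is an assumption about the repo layout).
onnx_path = hf_hub_download(repo_id="SungJinD/ppo-Huggy2", filename="Huggy.onnx")

# Load the policy and list the observation inputs / action outputs it expects.
session = ort.InferenceSession(onnx_path)
print("inputs: ", [(i.name, i.shape) for i in session.get_inputs()])
print("outputs:", [(o.name, o.shape) for o in session.get_outputs()])
```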
|
cgato/TheSpice-7b-FT-ExperimentalOrca | cgato | 2024-03-27T13:50:04Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-26T23:16:04Z | ---
license: cc-by-nc-4.0
---
A smort model made using the cleaned Orca data.
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
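A minimal sketch of assembling this format in Python is shown below. The helper name and the assumption that turns are simply joined with newlines are illustrative; the card does not spell out the exact separators.

```python
def build_prompt(system_prompt, turns, user_name="Username", bot_name="BotName"):
    """Assemble the prompt format shown above.

    `turns` is a list of (user_message, bot_reply) pairs; pass an empty string
    as the final reply to leave the prompt open for the model to complete.
    """
    lines = [system_prompt]
    for user_msg, bot_reply in turns:
        lines.append(f"{user_name}: {user_msg}")
        lines.append(f"{bot_name}: {bot_reply}")
    return "\n".join(lines)

print(build_prompt("You are a helpful roleplay partner.", [("Hi there!", "")]))
```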
Seriously, I have to add more due to HF Leaderboard requirements. So basically, this model uses a cleaned version of Orca along with my typical RP data package. It was intended as a test to see whether the model's RP evals would be affected by an overwhelming amount of instruct tokens. |
geektech/flan-t5-large-lora-ce-gpt4 | geektech | 2024-03-27T13:47:52Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
]
| null | 2024-03-27T10:57:39Z | ---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Malvinaeva/ModelEva | Malvinaeva | 2024-03-27T13:47:09Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:56:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IshimaIshimsky/Knjazevdesu | IshimaIshimsky | 2024-03-27T13:39:49Z | 0 | 0 | null | [
"license:unknown",
"region:us"
]
| null | 2024-03-27T13:38:24Z | ---
license: unknown
---
Knjazevdesu (Twitch streamer/VTuber), crepe, 100 epochs |
tsavage68/v1_1000_STEPS_1e5_rate_05_beta_DPO | tsavage68 | 2024-03-27T13:34:45Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T13:30:05Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: v1_1000_STEPS_1e5_rate_05_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1_1000_STEPS_1e5_rate_05_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8688
- Rewards/chosen: -27.6674
- Rewards/rejected: -27.1162
- Rewards/accuracies: 0.4330
- Rewards/margins: -0.5512
- Logps/rejected: -71.1119
- Logps/chosen: -70.5878
- Logits/rejected: -5.9442
- Logits/chosen: -5.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
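For reference, the hyperparameters above correspond roughly to the TRL `DPOTrainer` setup sketched below. This is a hedged reconstruction, not the original training script: the preference dataset is a placeholder (the card calls it unknown), and `beta=0.5` is only inferred from the `05_beta` part of the model name.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: the card does not name the preference dataset that was used.
# A DPO dataset needs "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("your_dpo_dataset", split="train")  # hypothetical

training_args = TrainingArguments(
    output_dir="v1_1000_STEPS_1e5_rate_05_beta_DPO",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.5,  # inferred from the "05_beta" model name, not stated in the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```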
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.5553 | 0.05 | 50 | 1.8706 | -4.7825 | -4.7649 | 0.4286 | -0.0176 | -26.4094 | -24.8181 | -3.5109 | -3.5109 |
| 5.8188 | 0.1 | 100 | 5.0281 | -26.6571 | -26.6181 | 0.4308 | -0.0390 | -70.1157 | -68.5673 | -1.3923 | -1.3923 |
| 5.8033 | 0.15 | 150 | 7.1546 | -40.4235 | -40.6296 | 0.4593 | 0.2060 | -98.1387 | -96.1001 | -3.5667 | -3.5667 |
| 7.8696 | 0.2 | 200 | 5.5313 | -29.1486 | -29.0376 | 0.4505 | -0.1109 | -74.9547 | -73.5501 | -3.4414 | -3.4414 |
| 4.4882 | 0.24 | 250 | 5.1766 | -27.5527 | -27.1630 | 0.4308 | -0.3897 | -71.2056 | -70.3585 | -4.9735 | -4.9735 |
| 6.4403 | 0.29 | 300 | 5.1323 | -27.5513 | -27.0082 | 0.4440 | -0.5431 | -70.8959 | -70.3556 | -5.3879 | -5.3879 |
| 5.2094 | 0.34 | 350 | 5.0288 | -27.1714 | -26.6651 | 0.4418 | -0.5063 | -70.2098 | -69.5959 | -5.6729 | -5.6729 |
| 9.8925 | 0.39 | 400 | 4.8892 | -27.3549 | -26.8568 | 0.4462 | -0.4981 | -70.5932 | -69.9629 | -5.8703 | -5.8703 |
| 8.279 | 0.44 | 450 | 4.8903 | -27.7693 | -27.3098 | 0.4374 | -0.4595 | -71.4991 | -70.7916 | -5.9049 | -5.9049 |
| 6.9741 | 0.49 | 500 | 4.9634 | -27.7246 | -27.2569 | 0.4484 | -0.4677 | -71.3933 | -70.7022 | -5.9114 | -5.9114 |
| 7.5287 | 0.54 | 550 | 4.9185 | -27.7575 | -27.2719 | 0.4505 | -0.4857 | -71.4233 | -70.7681 | -5.9444 | -5.9444 |
| 4.1175 | 0.59 | 600 | 4.9414 | -27.6038 | -27.0763 | 0.4418 | -0.5275 | -71.0321 | -70.4606 | -5.9236 | -5.9236 |
| 7.6353 | 0.64 | 650 | 4.8901 | -27.4506 | -26.8656 | 0.4308 | -0.5850 | -70.6107 | -70.1542 | -5.9567 | -5.9567 |
| 6.5311 | 0.68 | 700 | 4.8640 | -27.4782 | -26.9239 | 0.4242 | -0.5543 | -70.7274 | -70.2095 | -5.8651 | -5.8651 |
| 3.8896 | 0.73 | 750 | 4.8727 | -27.6349 | -27.0700 | 0.4374 | -0.5649 | -71.0195 | -70.5229 | -5.9781 | -5.9781 |
| 2.4094 | 0.78 | 800 | 4.8792 | -27.7076 | -27.1530 | 0.4352 | -0.5546 | -71.1855 | -70.6682 | -5.9983 | -5.9983 |
| 8.463 | 0.83 | 850 | 4.8683 | -27.6713 | -27.1213 | 0.4308 | -0.5500 | -71.1221 | -70.5956 | -5.9384 | -5.9384 |
| 5.1159 | 0.88 | 900 | 4.8691 | -27.6713 | -27.1222 | 0.4352 | -0.5491 | -71.1239 | -70.5956 | -5.9441 | -5.9441 |
| 7.8796 | 0.93 | 950 | 4.8688 | -27.6673 | -27.1163 | 0.4330 | -0.5510 | -71.1121 | -70.5876 | -5.9442 | -5.9442 |
| 6.2745 | 0.98 | 1000 | 4.8688 | -27.6674 | -27.1162 | 0.4330 | -0.5512 | -71.1119 | -70.5878 | -5.9442 | -5.9442 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-imdb-1b-mz-ada-v3-ch-129000 | AlignmentResearch | 2024-03-27T13:32:55Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b-deduped",
"base_model:finetune:EleutherAI/pythia-1b-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T13:30:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b-deduped
model-index:
- name: robust_llm_pythia-imdb-1b-mz-ada-v3-ch-129000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-1b-mz-ada-v3-ch-129000
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
bencyc1129/gpt2-base | bencyc1129 | 2024-03-27T13:32:48Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-21T02:47:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WOMEN/your_repos_hub | WOMEN | 2024-03-27T13:30:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T13:20:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
desarrolloasesoreslocales/prueba | desarrolloasesoreslocales | 2024-03-27T13:29:14Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T13:28:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bsen26/113-Aspect-Emotion-Model | bsen26 | 2024-03-27T13:21:43Z | 36 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T13:16:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alpindale/dbrx-instruct | alpindale | 2024-03-27T13:20:48Z | 6 | 20 | transformers | [
"transformers",
"safetensors",
"dbrx",
"text-generation",
"custom_code",
"arxiv:2211.15841",
"arxiv:2304.11277",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-27T13:09:38Z | ---
extra_gated_heading: You need to share contact information with Databricks to access this model
extra_gated_prompt: >-
### DBRX Terms of Use
Use of DBRX is governed by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and the [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
extra_gated_fields:
First Name: text
Last Name: text
Organization: text
By clicking 'Submit' below, I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with Databricks' Privacy Notice and I understand I can update my preferences at any time: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed, and shared in accordance with Databricks [Privacy Notice](https://www.databricks.com/legal/privacynotice).
extra_gated_button_content: Submit
inference: false
---
# Re-upload because original repo is gated
# DBRX Instruct
* DBRX Instruct is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. DBRX Instruct specializes in few-turn interactions.
* We are releasing both DBRX Instruct and DBRX Base, the pretrained base model which underlies it, under [an open license](https://www.databricks.com/legal/open-model-license).
* This is the repository for DBRX Instruct. DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
## Model Overview
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts and we found that this improves model quality.
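The 65x figure follows directly from counting the possible sets of active experts; a quick, purely illustrative sanity check:

```python
from math import comb

dbrx_combos = comb(16, 4)     # 16 experts, 4 active per token -> 1820 possible expert sets
mixtral_combos = comb(8, 2)   # 8 experts, 2 active per token  -> 28 possible expert sets
print(dbrx_combos / mixtral_combos)  # 65.0
```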
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository.
We made these choices based on exhaustive evaluation and scaling experiments.
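Because the tokenizer is the GPT-4 one shipped with tiktoken, you can inspect the base encoding directly; this is purely illustrative and not required for using the model through `transformers` (which loads the tokenizer via `trust_remote_code`):

```python
import tiktoken

# Load the GPT-4 encoding from tiktoken to inspect the base vocabulary DBRX builds on.
enc = tiktoken.encoding_for_model("gpt-4")
print(enc.n_vocab)
print(enc.encode("What does it take to build a great LLM?"))
```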
DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.
* **Inputs:** DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens.
* **Outputs:** DBRX only produces text-based outputs.
* **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
* **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
* **Version:** 1.0
* **Owner:** Databricks, Inc.
## Usage
There are several general ways to use the DBRX models:
* DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Instruct; DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).
## Quickstart Guide
**NOTE: This is DBRX Instruct, and has been instruction finetuned.**
If you are looking for the base model, please use [DBRX Base](https://huggingface.co/databricks/dbrx-base).
Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:
```bash
pip install transformers tiktoken
```
If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```
### Run the model on a CPU:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True)
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```
### Run the model on multiple GPUs:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```
If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.
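As a minimal sketch (assuming the `flash-attn` package is installed and a supported GPU is available), the multi-GPU example above becomes:

```python
from transformers import AutoModelForCausalLM
import torch

# Same loading call as above, with FlashAttention2 enabled for faster inference.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
```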
## Limitations and Ethical Considerations
### Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.
The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.
DBRX does not have multimodal capabilities.
### Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.
## Intended Uses
### Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Instruct can be used as an off-the-shelf model for few-turn question answering related to general English-language and coding tasks.
Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives.
### Out-of-Scope Use Cases
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution, or other forms of function-calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
## Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).
Composer is our core library for large-scale training.
It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
[FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.
Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.
LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.
DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).
## Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
HumanEval measures coding ability.
Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
## Acknowledgements
The DBRX models were made possible thanks in large part to the open-source community, especially:
* The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
* [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.
|
Ujan/whisper-medium-cv-13-et-8000 | Ujan | 2024-03-27T13:19:08Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-27T13:09:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LeroyDyer/SpydazWeb_AI_ImageText_Text_Project | LeroyDyer | 2024-03-27T13:14:17Z | 2,990 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"art",
"medical",
"biology",
"code",
"chemistry",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2024-03-25T15:38:39Z | ---
license: mit
language:
- en
library_name: transformers
tags:
- art
- medical
- biology
- code
- chemistry
metrics:
- code_eval
- chrf
- charcut_mt
- cer
- brier_score
- bleurt
- bertscore
- accuracy
pipeline_tag: image-text-to-text
---
# MULTI-MODAL-MODEL
## LeroyDyer/Mixtral_AI_Vision-Instruct_X
currently in test mode
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. ([LeroyDyer/Mixtral_AI_Vision-Instruct_X](https://huggingface.co/LeroyDyer/Mixtral_AI_Vision-Instruct_X))
* You can load the **mmproj** by using the corresponding section in the interface:

## Vision/multimodal capabilities:
* For 4-bit loading, use the 4-bit mmproj file: mmproj-Mixtral_AI_Vision-Instruct_X-Q4_0
* For 8-bit loading, use the 8-bit mmproj file: mmproj-Mixtral_AI_Vision-Instruct_X-Q8_0
* For full-precision (f16) loading, use the f16 mmproj file: mmproj-Mixtral_AI_Vision-Instruct_X-f16
## Extended capabilities:
```
* mistralai/Mistral-7B-Instruct-v0.1 - Prime-Base
* ChaoticNeutrals/Eris-LelantaclesV2-7b - role play
* ChaoticNeutrals/Eris_PrimeV3-Vision-7B - vision
* rvv-karma/BASH-Coder-Mistral-7B - coding
* Locutusque/Hercules-3.1-Mistral-7B - Unhinging
* KoboldAI/Mistral-7B-Erebus-v3 - NSFW
* Locutusque/Hyperion-2.1-Mistral-7B - CHAT
* Severian/Nexus-IKM-Mistral-7B-Pytorch - Thinking
* NousResearch/Hermes-2-Pro-Mistral-7B - Generalizing
* mistralai/Mistral-7B-Instruct-v0.2 - BASE
* Nitral-AI/ProdigyXBioMistral_7B - medical
* Nitral-AI/Infinite-Mika-7b - 128k - Context Expansion enforcement
* Nous-Yarn-Mistral-7b-128k - 128k - Context Expansion
* yanismiraoui/Yarn-Mistral-7b-128k-sharded
* ChaoticNeutrals/Eris_Prime-V2-7B - Roleplay
```
# "image-text-text"
## using transformers
``` python
from transformers import AutoProcessor, LlavaForConditionalGeneration
from transformers import BitsAndBytesConfig
from IPython.display import display  # the display() calls below assume a notebook environment
import torch

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)
model_id = "LeroyDyer/Mixtral_AI_Vision-Instruct_X"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, quantization_config=quantization_config, device_map="auto")
import requests
from PIL import Image
image1 = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)
image2 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
display(image1)
display(image2)
prompts = [
"USER: <image>\nWhat are the things I should be cautious about when I visit this place? What should I bring with me?\nASSISTANT:",
"USER: <image>\nPlease describe this image\nASSISTANT:",
]
inputs = processor(prompts, images=[image1, image2], padding=True, return_tensors="pt").to("cuda")
for k, v in inputs.items():
    print(k, v.shape)
```
## Using pipeline
``` python
from transformers import pipeline
from PIL import Image
import requests
model_id = "LeroyDyer/Mixtral_AI_Vision-Instruct_X"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
prompt = f"A chat between a curious human and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n{question}###Assistant:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
```
## Mistral Chat Templating
Instruction format:
In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens.
The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not.
The assistant generation will be ended by the end-of-sentence token id.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_Vision-Instruct_X")
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
tokenizer.apply_chat_template(chat, tokenize=False)
```
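The snippet above produces the raw prompt string automatically. As a rough, hand-written illustration of the [INST]/[/INST] format described above (treat `apply_chat_template` as the source of truth for the exact string):

```python
# Hypothetical, hand-built prompt following the [INST]/[/INST] convention described above.
user_1 = "Hello, how are you?"
assistant_1 = "I'm doing great. How can I help you today?"
user_2 = "I'd like to show off how chat templating works!"

prompt = f"<s>[INST] {user_1} [/INST] {assistant_1}</s>[INST] {user_2} [/INST]"
print(prompt)
```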
# TextToText
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_Vision-Instruct_X")
tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_Vision-Instruct_X")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
|
ahmed-naseer/textiledesgn-02 | ahmed-naseer | 2024-03-27T13:04:30Z | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-27T13:00:48Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### textiledesgn-02 Dreambooth model trained by ahmed-naseer with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Kamga/premiertpIA | Kamga | 2024-03-27T13:03:05Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:59:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF | TimeMobius | 2024-03-27T12:59:09Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"rwkv5",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:40:07Z | ---
license: apache-2.0
---
# Huggingface format for Mobius Chat 12B 128k v4
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_prompt(instruction, input=""):
    instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
    input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
    if input:
        return f"""Instruction: {instruction}
Input: {input}
Response:"""
    else:
        return f"""User: {instruction}
Assistant:"""
#model = AutoModelForCausalLM.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True, torch_dtype=torch.bfloat16).to(0)
model = AutoModelForCausalLM.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF", trust_remote_code=True)
text = "Write a beginning of sci-fi novel"
prompt = generate_prompt(text)
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
# Mobius RWKV 12B v4 version
Good at writing and role play; it can also do some RAG.
# Mobius Chat 12B 128K
## Introduction
Mobius is an RWKV v5.2 architecture model: a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data.
In comparison with the previously released Mobius, the improvements include:
* Only 24 GB of VRAM is needed to run this model locally with fp16;
* Significant performance improvement;
* Multilingual support;
* Stable support of 128K context length.
* Base model [Mobius-mega-12B-128k-base](https://huggingface.co/TimeMobius/Moibus-mega-12B-128k-base)
## Usage
We encourage you to use few-shot prompting with this model. Directly using the `User: xxxx\n\nAssistant: xxx\n\n` format also works well and can bring out the model's full potential.
Recommended temperature and top-p pairs: 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8.
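A minimal sketch of a few-shot prompt in that `User:`/`Assistant:` format (the examples are made up and only illustrate the layout):

```python
# Illustrative few-shot prompt; only the User/Assistant layout matters here.
few_shot_prompt = (
    "User: Summarize: The cat sat on the mat.\n\n"
    "Assistant: A cat rested on a mat.\n\n"
    "User: Summarize: The quick brown fox jumps over the lazy dog.\n\n"
    "Assistant:"
)
```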
## More details
Mobius 12B 128k is based on the RWKV v5.2 architecture, a leading state-based RNN+CNN+Transformer mixed large language model that focuses on the open-source community:
* 10~100x reduction in training/inference cost;
* state-based, selective memory, which makes it good at grokking;
* community support.
## Requirements
24 GB of VRAM to run fp16, 12 GB for int8, and 6 GB for nf4 with the Ai00 server.
* [RWKV Runner](https://github.com/josStorer/RWKV-Runner)
* [Ai00 server](https://github.com/cgisky1980/ai00_rwkv_server)
## Future plan
Go bigger and move to v6.
[Mobius-Chat-12B-128k-v4](https://huggingface.co/TimeMobius/Mobius-RWKV-Chat-12B-128k-v4-HF) |
GraydientPlatformAPI/artmerge-xl | GraydientPlatformAPI | 2024-03-27T12:57:30Z | 30 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-03-27T12:48:36Z | ---
library_name: diffusers
pipeline_tag: text-to-image
--- |
DataVare/PST-To-HTML-Conveter | DataVare | 2024-03-27T12:50:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-03-27T12:48:50Z | Download the DataVare PST to HTML Converter to access Outlook PST files in a web browser such as Chrome or Internet Explorer. It has been expertly developed and upgraded with superior technical features. Its qualities make it particularly popular with users. It can convert individual or several PST files to HTML format at once without deleting any data. Throughout the conversion procedure, all email attributes and data structures are preserved. It can convert any size Outlook PST file to HTML without limitation. Before starting the entire conversion process, it displays complete previews of the selected PST files/folders.
Our tool has been thoroughly evaluated by our experts, so it is reliable and keeps users' data secure. We also provide 24x7 technical support to resolve any tool-related issues users encounter. To ensure user satisfaction, we also offer a free demo pack so you can test its functionality and learn how it works.
Visit Here - https://www.datavare.com/software/pst-to-html-converter-expert.html |
ALTAH/AraT5v2-base-1024-finetuned-TUN-to-MSA | ALTAH | 2024-03-27T12:49:53Z | 0 | 0 | transformers | [
"transformers",
"translation",
"ar",
"en",
"fr",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| translation | 2024-03-27T10:04:08Z | ---
language:
- ar
- en
- fr
metrics:
- bleu
library_name: transformers
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zardian/Cyber_assist2.0 | Zardian | 2024-03-27T12:48:52Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:ahmed000000000/cybersec",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-13T11:27:30Z | ---
library_name: transformers
datasets:
- ahmed000000000/cybersec
language:
- en
widget:
- text: I have a port vulnerability on my device. What should I do?
example_title: Port Vulnerability
- text: >-
An attacker hacked my pc with ransomware and is asking for money to decrypt
my files. What should I do?
example_title: Ransomware
- text: >-
I want to install malicious software on a client's device without them
noticing. What should I do?
example_title: Installing Malicious softwares
- text: I want to attack a pc with virus. What should I do?
example_title: Virus Attack
---
# Model Card for Model ID
Works as a cyber assistant.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- **Developed by:** <a href="https://github.com/Zardian18">Zardian18</a>
- **Model type:** GPT2
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** <a href="https://huggingface.co/openai-community/gpt2">OpenAi GPT2</a>
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** <a href="https://github.com/Zardian18/CyberAssist">Github repo</a>
## Uses
Can be used to handle and solve basic cybersec queries rather than beating around the bush.
## Bias, Risks, and Limitations
Currently it is fine-tuned on GPT-2, which is good but not comparable to state-of-the-art LLMs and transformers, and the dataset is small.
Furthermore, the predictions are not always accurate; there might be cases where it simply doesn't respond directly to the question.
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Zardian/Cyber_assist2.0")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Zardian/Cyber_assist2.0")
model = AutoModelForCausalLM.from_pretrained("Zardian/Cyber_assist2.0")
```
## Training Details
### Training Data
<a href="https://huggingface.co/datasets/ahmed000000000/cybersec">Cybersec queries and responses dataset</a> consisting of 12408 rows and 2 columns.
#### Training Hyperparameters
- **Training regime:**
- Block size = 128
- Epochs = 10
- Batch Size = 16
- Save step size = 5000
- Save step limit =3
These settings correspond to the sketch below.
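A hypothetical reconstruction with 🤗 `TrainingArguments`; the original training script is not published, and the output directory name is an assumption:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; not the original training script.
training_args = TrainingArguments(
    output_dir="cyber_assist2.0",     # assumed output path
    num_train_epochs=10,              # Epochs = 10
    per_device_train_batch_size=16,   # Batch Size = 16
    save_steps=5000,                  # Save step size = 5000
    save_total_limit=3,               # Save step limit = 3
)
# Block size = 128 is the fixed token-sequence length used when chunking the dataset.
```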
#### Speeds, Sizes, Times [optional]
**Training time:** 1hr 11mins 58sec
[More Information Needed]
## Evaluation

## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Tesla T4 GPU
- **Cloud Provider:** Google Colab
- **Compute Region:** Asia
- **Carbon Emitted:** 0.08 kg of CO2eq
## Technical Specifications [optional]
### Objective
To construct an assistant that can help provide solutions to cybersecurity-related queries.
linuxhunter/LunarLander-v2 | linuxhunter | 2024-03-27T12:48:00Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-27T12:47:54Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -167.53 +/- 102.29
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
erikmsz/erikrepo | erikmsz | 2024-03-27T12:47:01Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-03-26T16:58:02Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: erikrepo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# erikrepo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
diversen/doctr-torch-crnn_vgg16_bn-danish-v1 | diversen | 2024-03-27T12:46:30Z | 46 | 2 | transformers | [
"transformers",
"pytorch",
"en",
"da",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-17T10:08:16Z | ---
language:
- en
- da
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
This model does a good job if you need to do OCR on Danish documents.
### Example usage:
```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor, from_hub
reco_arch = from_hub('diversen/doctr-torch-crnn_vgg16_bn-danish-v1')
det_arch = "db_resnet50"
model = ocr_predictor(det_arch=det_arch, reco_arch=reco_arch, pretrained=True)
image = DocumentFile.from_images(['test.jpg'])
result = model(image)
result.show()
output = result.export()
text_str = ""
for block in output["pages"][0]["blocks"]:
block_txt = ""
for line in block["lines"]:
line_txt = ""
for word in line["words"]:
line_txt += word["value"] + " "
block_txt += line_txt + "\n"
text_str += block_txt + "\n"
print(text_str)
```
### Run Configuration
```json
{
  "arch": "crnn_vgg16_bn",
  "train_path": "train-data",
  "val_path": "validation-data",
  "train_samples": 1000,
  "val_samples": 20,
  "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
  "min_chars": 1,
  "max_chars": 32,
  "name": "doctr-torch-crnn_vgg16_bn-danish-v1",
  "epochs": 1,
  "batch_size": 64,
  "device": 0,
  "input_size": 32,
  "lr": 0.001,
  "weight_decay": 0,
  "workers": 16,
  "resume": "crnn_vgg16_bn_20240317-095746.pt",
  "vocab": "danish",
  "test_only": false,
  "freeze_backbone": false,
  "show_samples": false,
  "wb": false,
  "push_to_hub": true,
  "pretrained": true,
  "sched": "cosine",
  "amp": false,
  "find_lr": false,
  "early_stop": false,
  "early_stop_epochs": 5,
  "early_stop_delta": 0.01
}
```
|
brodarexlhungdim123/paradise-land | brodarexlhungdim123 | 2024-03-27T12:45:19Z | 5 | 1 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-27T12:41:05Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### paradise-land Dreambooth model trained by brodarexlhungdim123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: ajce15760CE022
Sample pictures of this concept:

|
sudo0/INF4188_CARICK_MODEL | sudo0 | 2024-03-27T12:39:50Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:21:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoreMoretti/ppo-LunarLander-v2 | LoreMoretti | 2024-03-27T12:39:44Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-27T12:39:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.25 +/- 20.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the agent; the checkpoint filename below is an assumption, so check the repository's file list for the exact name:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed, not confirmed by the card).
checkpoint = load_from_hub(repo_id="LoreMoretti/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Eurdem/megatron_v3_2x7B | Eurdem | 2024-03-27T12:33:51Z | 49 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"conversational",
"en",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T07:38:41Z | ---
license: apache-2.0
tags:
- moe
- merge
language:
- en
- tr
---
# Megatron_v3_2x7B
Megatron_v3_2x7B is a bilingual Mixture of Experts (MoE) model which can comprehend and speak English and Turkish.
Megatron is a model with an MoE architecture that understands and answers instructions in Turkish and English.
## 💻 Usage/Kullanımı
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Eurdem/Megatron_v3_2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Tell me about AI"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=500, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
NjontaKevin/StartTrainingAI | NjontaKevin | 2024-03-27T12:30:25Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T12:19:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
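Since the card leaves this section blank, here is a minimal, hedged sketch for loading a Mistral-architecture causal LM checkpoint like this one with 🤗 Transformers (the repository id is taken from this entry; the prompt and generation settings are illustrative only, as the intended use is undocumented):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NjontaKevin/StartTrainingAI"  # taken from this entry; intended use is undocumented
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# Illustrative generation call; adjust the prompt to the (undocumented) intended use
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```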
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0x0mom/nous_r5 | 0x0mom | 2024-03-27T12:26:28Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T12:20:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
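The card does not include usage code; the tags mark this as a conversational StableLM-architecture checkpoint, so a hedged sketch might look like the following (it assumes the tokenizer ships a chat template, which this card does not confirm):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "0x0mom/nous_r5"  # taken from this entry
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# Assumes a chat template is defined in the tokenizer config (not confirmed by the card)
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```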
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hyojin99/whisper_hyojin5 | hyojin99 | 2024-03-27T12:25:45Z | 75 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:hyojin99/EBRC",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-26T08:03:53Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- hyojin99/EBRC
base_model: openai/whisper-base
model-index:
- name: ft_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_model
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the EBRC dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3945
- eval_cer: 16.1444
- eval_runtime: 1274.2842
- eval_samples_per_second: 4.709
- eval_steps_per_second: 0.294
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
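No usage snippet is given in this card; a minimal transcription sketch with the 🤗 `pipeline` API would look roughly like this (the audio path is a placeholder, and Korean speech input is assumed from the card's language tag):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition
asr = pipeline("automatic-speech-recognition", model="hyojin99/whisper_hyojin5")

# "sample.wav" is a placeholder for a Korean speech recording
result = asr("sample.wav")
print(result["text"])
```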
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 7500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
zmazz/20240327_monacan-translator-fr-mon | zmazz | 2024-03-27T12:25:24Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mlabonne/NeuralMonarch-7B",
"base_model:adapter:mlabonne/NeuralMonarch-7B",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-03-27T09:46:09Z | ---
license: cc-by-nc-4.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mlabonne/NeuralMonarch-7B
datasets:
- generator
model-index:
- name: 20240327_monacan-translator-fr-mon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240327_monacan-translator-fr-mon
This model is a fine-tuned version of [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
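The card does not show how to load the adapter; under the assumption that this repository contains standard PEFT (LoRA) weights for the base model named above, a sketch would be:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository
base = AutoModelForCausalLM.from_pretrained("mlabonne/NeuralMonarch-7B", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "zmazz/20240327_monacan-translator-fr-mon")
tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralMonarch-7B")
```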
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
Oyunbaatar/xlm-roberta-base | Oyunbaatar | 2024-03-27T12:10:19Z | 114 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"mn",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-27T12:09:37Z | ---
language:
- mn
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1144
- Precision: 0.9244
- Recall: 0.9343
- F1: 0.9293
- Accuracy: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
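The card reports precision/recall/F1 but no usage example; given the token-classification pipeline tag and the Mongolian language tag, a hedged sketch is (the example sentence and the label set are placeholders, since neither is documented here):
```python
from transformers import pipeline

# Run the fine-tuned checkpoint through the token-classification pipeline
ner = pipeline(
    "token-classification",
    model="Oyunbaatar/xlm-roberta-base",
    aggregation_strategy="simple",
)

# Placeholder Mongolian sentence; predicted labels depend on the (undocumented) label set
print(ner("Улаанбаатар хотод болсон уулзалтад Бат оролцов."))
```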
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1647 | 1.0 | 477 | 0.0849 | 0.8983 | 0.9111 | 0.9046 | 0.9749 |
| 0.0832 | 2.0 | 954 | 0.0877 | 0.9040 | 0.9193 | 0.9116 | 0.9752 |
| 0.0606 | 3.0 | 1431 | 0.0851 | 0.9101 | 0.9246 | 0.9173 | 0.9772 |
| 0.0459 | 4.0 | 1908 | 0.0857 | 0.9174 | 0.9255 | 0.9214 | 0.9776 |
| 0.0351 | 5.0 | 2385 | 0.0920 | 0.9189 | 0.9288 | 0.9238 | 0.9773 |
| 0.0265 | 6.0 | 2862 | 0.0979 | 0.9225 | 0.9323 | 0.9274 | 0.9786 |
| 0.0197 | 7.0 | 3339 | 0.1047 | 0.9204 | 0.9310 | 0.9257 | 0.9783 |
| 0.0154 | 8.0 | 3816 | 0.1088 | 0.9178 | 0.9319 | 0.9248 | 0.9782 |
| 0.0116 | 9.0 | 4293 | 0.1131 | 0.9255 | 0.9343 | 0.9299 | 0.9791 |
| 0.0096 | 10.0 | 4770 | 0.1144 | 0.9244 | 0.9343 | 0.9293 | 0.9789 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
XYZ123XYZ/sentence-analysis-2 | XYZ123XYZ | 2024-03-27T12:04:42Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-27T12:04:35Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bert-base-uncased
model-index:
- name: sentence-analysis-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-analysis-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
AnshulSharma-1045/my-robot-bot | AnshulSharma-1045 | 2024-03-27T12:04:26Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-27T10:18:05Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Robot-Bot-dgx Dreambooth model trained by AnshulSharma-1045 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 0873EC231001
Sample pictures of this concept:
|
hyojin99/whisper_hyojin | hyojin99 | 2024-03-27T12:04:04Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:hyojin99/EBRC",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-20T04:35:05Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- hyojin99/EBRC
base_model: openai/whisper-base
model-index:
- name: ft_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_model
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the EBRC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4181
- Cer: 15.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 7500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4744 | 1.0 | 1250 | 0.4683 | 20.8493 |
| 0.24 | 2.0 | 2500 | 0.4053 | 18.0384 |
| 0.1392 | 3.0 | 3750 | 0.3982 | 17.4262 |
| 0.0664 | 4.0 | 5000 | 0.4042 | 16.7622 |
| 0.0273 | 5.0 | 6250 | 0.4119 | 16.3872 |
| 0.0096 | 6.0 | 7500 | 0.4181 | 15.8554 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ksgk-fy/alignment-adaptor-test01 | Ksgk-fy | 2024-03-27T12:01:56Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
]
| null | 2024-03-23T08:37:14Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: alignment-adaptor-test01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alignment-adaptor-test01
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.0 |
Nima-nlc/lora-aya2 | Nima-nlc | 2024-03-27T11:55:50Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:CohereForAI/aya-101",
"base_model:adapter:CohereForAI/aya-101",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-27T11:47:31Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: CohereForAI/aya-101
model-index:
- name: lora-aya2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-aya2
This model is a fine-tuned version of [CohereForAI/aya-101](https://huggingface.co/CohereForAI/aya-101) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
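No loading instructions are provided; assuming this repo holds standard PEFT weights, and noting that aya-101 is a seq2seq (mT5-style) model, a sketch would be:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# aya-101 is a seq2seq model, so the adapter is attached to AutoModelForSeq2SeqLM
# (adapter compatibility with PeftModel.from_pretrained is assumed, not documented)
base = AutoModelForSeq2SeqLM.from_pretrained("CohereForAI/aya-101")
model = PeftModel.from_pretrained(base, "Nima-nlc/lora-aya2")
tokenizer = AutoTokenizer.from_pretrained("CohereForAI/aya-101")
```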
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
sridhar1111111111111111/gemma2B-It-Medical-Finetuned-2epochs | sridhar1111111111111111 | 2024-03-27T11:37:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T11:37:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DekelShoot/DekelShoot | DekelShoot | 2024-03-27T11:36:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T11:01:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pmmcbride/gemma-7b-dolly-chatml | pmmcbride | 2024-03-27T11:35:35Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:other",
"region:us"
]
| null | 2024-03-27T10:12:49Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
datasets:
- generator
model-index:
- name: gemma-7b-dolly-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-dolly-chatml
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
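For readers who want to reproduce a comparable run, the settings above map roughly onto `TrainingArguments` as in the sketch below (the original TRL `SFTTrainer` setup, output directory, and dataset handling are not documented in this card and are assumptions):
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above; output_dir is an assumption
args = TrainingArguments(
    output_dir="gemma-7b-dolly-chatml",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)
```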
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
saransh03sharma/mintrec-mistral-2-7b-100 | saransh03sharma | 2024-03-27T11:33:01Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-23T15:04:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baris-yazici/fake_news_classifier_distilbert_cased | baris-yazici | 2024-03-27T11:31:48Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:liar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T11:19:43Z | ---
datasets:
- liar
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
--- |
Narkantak/TheBloke-Marcoroni-7B-v3-GPTQ-Sonakshi | Narkantak | 2024-03-27T11:26:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T11:26:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
doc2txt/pdfBLines-segFormer | doc2txt | 2024-03-27T11:26:50Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T11:17:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
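The card leaves this blank; the tags indicate a SegFormer checkpoint, so a hedged semantic-segmentation sketch (the task, label set, and input image are all assumptions, since the card documents none of them) could be:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "doc2txt/pdfBLines-segFormer"  # taken from this entry
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("page.png").convert("RGB")  # placeholder document-page image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)  # per-pixel class ids at reduced resolution
```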
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
patrickfosso/python_code_instructions_18k_alpaca | patrickfosso | 2024-03-27T11:23:34Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T11:16:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mica1234/NTMG_github_Inf4188 | mica1234 | 2024-03-27T11:21:54Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T11:10:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rrw-x2/KoSOLAR-10.7B-qlora-v1.2.1 | rrw-x2 | 2024-03-27T11:19:11Z | 0 | 0 | peft | [
"peft",
"pytorch",
"llama",
"generated_from_trainer",
"base_model:JY623/KoSOLAR-v2.0",
"base_model:adapter:JY623/KoSOLAR-v2.0",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-27T11:08:20Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: JY623/KoSOLAR-v2.0
model-index:
- name: qlora-out/v1.2
results: []
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: JY623/KoSOLAR-v2.0
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: kyujinpy/KOR-OpenOrca-Platypus-v3
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out/v1.2
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# qlora-out/v1.2
This model is a fine-tuned version of [JY623/KoSOLAR-v2.0](https://huggingface.co/JY623/KoSOLAR-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 28
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.4775 | 0.0 | 1 | 13.4330 |
| 6.9219 | 0.25 | 64 | 6.2022 |
| 5.5416 | 0.5 | 128 | 5.2780 |
| 5.4282 | 0.75 | 192 | 5.1929 |
| 5.4864 | 1.0 | 256 | 5.1416 |
| 5.2877 | 1.24 | 320 | 5.1441 |
| 5.1731 | 1.49 | 384 | 5.1413 |
| 5.6221 | 1.74 | 448 | 5.1406 |
| 5.3737 | 1.99 | 512 | 5.1419 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
sharanharsoor/distilbert-base-uncased-finetuned-imdb | sharanharsoor | 2024-03-27T11:16:54Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-03-27T10:55:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
## Model description
More information needed
## Intended uses & limitations
More information needed
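Since this is a masked-language model (the repository name suggests IMDB-style review text), a minimal, hedged usage sketch with the standard fill-mask pipeline might look like this; the prompt is purely illustrative.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="sharanharsoor/distilbert-base-uncased-finetuned-imdb",
)

# Illustrative movie-review style prompt; [MASK] is DistilBERT's mask token.
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```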
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.527 | 3.0 | 471 | 2.4823 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mohanaroachakka/finetunedfalcon1b | mohanaroachakka | 2024-03-27T11:16:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"region:us"
]
| null | 2024-03-27T08:07:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
peace4ever/roberta-base-ner-demo | peace4ever | 2024-03-27T11:16:17Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-05-01T07:44:41Z | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mongolian-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1225
- Precision: 0.9338
- Recall: 0.9396
- F1: 0.9367
- Accuracy: 0.9818
## Model description
More information needed
## Intended uses & limitations
More information needed
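A minimal, hedged usage sketch with the Transformers token-classification pipeline is shown below; the Mongolian sentence is purely illustrative and not taken from the training data.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="peace4ever/roberta-base-ner-demo",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative Mongolian sentence ("I live in Ulaanbaatar").
print(ner("Би Улаанбаатарт амьдардаг."))
```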
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.169 | 1.0 | 477 | 0.0846 | 0.8408 | 0.8852 | 0.8625 | 0.9713 |
| 0.0586 | 2.0 | 954 | 0.0753 | 0.9263 | 0.9347 | 0.9305 | 0.9801 |
| 0.0288 | 3.0 | 1431 | 0.0813 | 0.9262 | 0.9355 | 0.9308 | 0.9808 |
| 0.0158 | 4.0 | 1908 | 0.0937 | 0.9318 | 0.9384 | 0.9351 | 0.9814 |
| 0.0102 | 5.0 | 2385 | 0.0967 | 0.9331 | 0.9386 | 0.9358 | 0.9820 |
| 0.006 | 6.0 | 2862 | 0.1072 | 0.9318 | 0.9382 | 0.9350 | 0.9817 |
| 0.0046 | 7.0 | 3339 | 0.1139 | 0.9354 | 0.9408 | 0.9381 | 0.9821 |
| 0.0025 | 8.0 | 3816 | 0.1185 | 0.9341 | 0.9402 | 0.9371 | 0.9820 |
| 0.0021 | 9.0 | 4293 | 0.1217 | 0.9347 | 0.9397 | 0.9372 | 0.9819 |
| 0.0011 | 10.0 | 4770 | 0.1225 | 0.9338 | 0.9396 | 0.9367 | 0.9818 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
caidas/swin2SR-classical-sr-x4-64 | caidas | 2024-03-27T11:13:03Z | 1,155 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin2sr",
"image-to-image",
"vision",
"arxiv:2209.11345",
"license:apache-2.0",
"region:us"
]
| image-to-image | 2022-12-16T14:07:21Z | ---
license: apache-2.0
tags:
- vision
- image-to-image
inference: false
---
# Swin2SR model (image super-resolution)
Swin2SR model that upscales images x4. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr).
# Intended use cases
This model is intended for image super-resolution.
# Usage
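A minimal usage sketch following the standard Transformers super-resolution workflow is shown below; the input file name is a placeholder, and the full documented example is linked after the snippet.
```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x4-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x4-64")

image = Image.open("low_res.png").convert("RGB")  # placeholder input image
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The reconstruction is a float tensor in [0, 1]; convert it back to an 8-bit RGB image.
output = outputs.reconstruction.squeeze().clamp_(0, 1).numpy()
output = np.moveaxis(output, 0, -1)
Image.fromarray((output * 255.0).round().astype(np.uint8)).save("upscaled_x4.png")
```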
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example). |
karangupta224/mental_roberta_suicide | karangupta224 | 2024-03-27T11:04:48Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:mental/mental-roberta-base",
"base_model:finetune:mental/mental-roberta-base",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T10:44:24Z | ---
license: cc-by-nc-4.0
base_model: mental/mental-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mental_roberta_suicide
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_roberta_suicide
This model is a fine-tuned version of [mental/mental-roberta-base](https://huggingface.co/mental/mental-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5994
- Accuracy: 0.7446
- F1: 0.7487
- Precision: 0.7368
- Recall: 0.7609
## Model description
More information needed
## Intended uses & limitations
More information needed
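A minimal, hedged sketch of calling the classifier through the Transformers pipeline; the input is purely illustrative and the returned label names depend on this model's configuration.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="karangupta224/mental_roberta_suicide",
)

# Illustrative input; interpret the returned label/score according to the model's config.
print(classifier("I have been feeling very low and hopeless lately."))
```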
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6934 | 0.97 | 25 | 0.6934 | 0.5 | 0.0 | 0.0 | 0.0 |
| 0.691 | 1.98 | 51 | 0.6905 | 0.5 | 0.0213 | 0.5 | 0.0109 |
| 0.6866 | 2.99 | 77 | 0.6666 | 0.6522 | 0.5493 | 0.78 | 0.4239 |
| 0.6427 | 4.0 | 103 | 0.5652 | 0.7174 | 0.7011 | 0.7439 | 0.6630 |
| 0.5594 | 4.97 | 128 | 0.5586 | 0.7228 | 0.6982 | 0.7662 | 0.6413 |
| 0.521 | 5.98 | 154 | 0.5405 | 0.7283 | 0.7283 | 0.7283 | 0.7283 |
| 0.4097 | 6.8 | 175 | 0.5994 | 0.7446 | 0.7487 | 0.7368 | 0.7609 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Lewdiculous/Eris_PrimeV4-Vision-32k-7B-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-27T11:01:37Z | 458 | 13 | null | [
"gguf",
"quantized",
"roleplay",
"multimodal",
"vision",
"llava",
"sillytavern",
"merge",
"mistral",
"conversational",
"license:other",
"region:us"
]
| null | 2024-03-27T02:53:08Z | ---
license: other
inference: false
tags:
- gguf
- quantized
- roleplay
- multimodal
- vision
- llava
- sillytavern
- merge
- mistral
- conversational
---
# #Roleplay #Multimodal #Vision
This repository hosts GGUF-IQ-Imatrix quants for [Nitral-AI/Eris_PrimeV4-Vision-32k-7B](https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-32k-7B).
"More stable and with better long context handling."
**Recommended starting [SillyTavern presets here](https://huggingface.co/Lewdiculous/Eris_PrimeV4-Vision-32k-7B-GGUF-IQ-Imatrix/tree/main/sillytavern-presets-lewdicu-3.0.2-mistral-0.2).**
This is a **#multimodal** model that also has **#vision** capabilities. <br> Read the full card information if you also want to use that functionality.
Quants:
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
**What does "Imatrix" mean?**
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was just to add a bit more diversity to the data.
</details><br>
# Vision/multimodal capabilities:
<details><summary>
⇲ Click here to expand/hide how this would work in practice in a roleplay chat.
</summary>

</details><br>
<details><summary>
⇲ Click here to expand/hide what your SillyTavern Image Captions extension settings should look like.
</summary>

</details><br>
**If you want to use vision functionality:**
* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file; you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or use the copy uploaded in this repository.
* You can load the **mmproj** by using the corresponding section in the interface:

* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
```
--mmproj your-mmproj-file.gguf
```
# Quantization information:
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
**Steps performed:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
</details><br>
# Original model information:

# Eris Prime: Version 4.0 32k
After many trials and tribulations, we have a winner: a more coherent, format-stable version of Eris Prime v4 with better long-context handling. |
mikycamp/michelecampanile | mikycamp | 2024-03-27T10:59:02Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-27T10:55:02Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### michelecampanile Dreambooth model trained by mikycamp with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
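Alternatively, a minimal diffusers sketch is shown below; the prompt token is assumed from the repository name and should be adjusted to the concept's actual instance prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mikycamp/michelecampanile", torch_dtype=torch.float16
).to("cuda")

# "michelecampanile" is assumed to be the DreamBooth instance token for this concept.
image = pipe("a photo of michelecampanile").images[0]
image.save("sample.png")
```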
Sample pictures of this concept:
|
KunalEsM/bank_complaint_intent_classifier_v2_old | KunalEsM | 2024-03-27T10:57:27Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"bank user complaint",
"intent-classifier",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T10:30:31Z | ---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- bank user complaint
- intent-classifier
widget:
- text: "I was charged Rs. 500 for an SMS alert that I did not request. Can you please reverse this charge and deactivate the service?"
example_title: "Example- Dispute in charges deducted"
- text: "My account was levied with a Rs. 436 minimum balance charge on 15th June 2022. Please explain the reason for this deduction."
example_title: "Example- Minimum Balance Charges Related"
- text: "On 8th June 2023, I was charged Rs. 200 for an RTGS transaction that was not authorized by me. Kindly investigate and reverse this charge."
example_title: "Example- Fund Remittance: NEFT/ RTGS/ IMPS through Branch"
- text: "I received a charge of Rs. 1000 for my education loan, but I have already made a full payment. Can you please explain why this charge was applied?"
example_title: "Example- Education Loans"
- text: "I have noticed a discrepancy in the processing fees for my State Government Scheme loan. On 15th December 2023"
example_title: "Example- Govt Scheme Loan"
- text: "my Home Loan account was debited with Rs. 1500 without any explanation."
example_title: "Example- Home Loan"
- text: "I was surprised to see a charge of Rs. 2569 on my Other Advance account. Can you please clarify what this charge is for?"
example_title: "Example- Other Advance amount"
- text: "I have been charged Rs. 9000 as a processing fee, but I was informed that there would be no processing fees for my loan."
example_title: "Example- Personal Loan"
- text: "Our Cash Credit (CC) Limit account with your bank was renewed in December 2023. However, we have been charged twice for the account renewal amount of Rs. 2500. Requesting an immediate reversal of the excess charge."
example_title: "Example- SME Advances"
- text: "I was charged a documentation fee of Rs. 3500 for my Vehicle Loan, which was not approved due to incomplete documentation. As per the loan terms, this fee should be waived. I request a refund for this amount."
example_title: "Example- Vehicle Loan"
- text: "I received a notification stating that I will not be charged for ATM AMC as long as I maintain a minimum balance. However, on 20/09/2022, I was charged Rs. 250 for ATM annual maintenance. Please clarify why I was charged and provide a refund if necessary."
example_title: "Example- ATM Related"
- text: "I received a credit card statement with charges that I did not make. Can you help me dispute these charges and get them reversed?"
example_title: "Example- Others"
---
# Bank User Query Intent Classification Model
This model classifies queries into twelve distinct buckets:
- Each label below maps to a Category, Type, and Subtype:
- **Label 0**: DEPOSIT, SB/ CA/TERM DEPOSIT ACCOUNTS, Dispute in charges deducted
- **Label 1**: DEPOSIT, SB/ CA/TERM DEPOSIT ACCOUNTS, Minimum Balance Charges related
- **Label 2**: DIGITAL BANKING, ATM RELATED, Dispute in ATM AMC Charges
- **Label 3**: DIGITAL BANKING,FUND REMITTANCE: NEFT/ RTGS/ IMPS through Branch,Dispute in charges deducted
- **Label 4**: LOANS & ADVANCES, Education Loans, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 5**: LOANS & ADVANCES, Govt Scheme loans, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 6**: LOANS & ADVANCES, Home loans, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 7**: LOANS & ADVANCES, OTHER ADVANCES, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 8**: LOANS & ADVANCES, Personal loans, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 9**: LOANS & ADVANCES, SME ADVANCES, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 10**: LOANS & ADVANCES, VEHICLE LOANS, Discrepancy in Charges (Processing Fee/Documentation charges, Inspection charges, etc)
- **Label 11**: Others
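A minimal sketch of querying the classifier with the Transformers pipeline, using one of the widget examples above; the returned label maps to the buckets listed in this card.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KunalEsM/bank_complaint_intent_classifier_v2_old",
)

complaint = (
    "I was charged Rs. 500 for an SMS alert that I did not request. "
    "Can you please reverse this charge and deactivate the service?"
)
# Returns the predicted label and score; map the label index to the buckets above.
print(classifier(complaint))
```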
## Training Metrics
The following training metrics were observed over 10 epochs:
| Epoch | Loss | Accuracy | F1 Score |
|-------|---------|----------|----------|
| 1 | 0.4103 | 0.9182 | 0.9184 |
| 2 | 0.0672 | 0.9827 | 0.9828 |
| 3 | 0.0351 | 0.9917 | 0.9917 |
| 4 | 0.0221 | 0.9948 | 0.9948 |
| 5 | 0.0171 | 0.9942 | 0.9943 |
| 6 | 0.0107 | 0.9966 | 0.9966 |
| 7 | 0.0056 | 0.9989 | 0.9989 |
| 8 | 0.0037 | 0.9986 | 0.9986 |
| 9 | 0.0152 | 0.9955 | 0.9955 |
| 10 | 0.0061 | 0.9982 | 0.9982 |
|
rorito/bbtwo | rorito | 2024-03-27T10:54:00Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2024-03-26T15:48:22Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: bebe
output:
url: images/out-0.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: apache-2.0
---
# bbtwo
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/rorito/bbtwo/tree/main) them in the Files & versions tab.
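A minimal sketch of applying this LoRA on top of the SDXL base model with diffusers; it assumes the LoRA safetensors file sits at the repository root and uses the widget prompt `bebe`.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumes the LoRA weights are stored at the root of this repository.
pipe.load_lora_weights("rorito/bbtwo")

image = pipe("bebe").images[0]
image.save("bbtwo_sample.png")
```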
|
vantaa32/xlm-roberta-base-finetuned_panx-de | vantaa32 | 2024-03-27T10:53:14Z | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-27T10:36:02Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned_panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned_panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
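A minimal sketch of running the fine-tuned model as a German NER tagger through the Transformers pipeline; the sentence is illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vantaa32/xlm-roberta-base-finetuned_panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Illustrative German sentence: "Jeff Dean works at Google in California."
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```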
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kavg/LiLT-SER-PT | kavg | 2024-03-27T10:49:15Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"base_model:nielsr/lilt-xlm-roberta-base",
"base_model:finetune:nielsr/lilt-xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-27T10:47:29Z | ---
license: mit
base_model: nielsr/lilt-xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xfun
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LiLT-SER-PT
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xfun
type: xfun
config: xfun.pt
split: validation
args: xfun.pt
metrics:
- name: Precision
type: precision
value: 0.6997755331088664
- name: Recall
type: recall
value: 0.7550711474417197
- name: F1
type: f1
value: 0.72637250618902
- name: Accuracy
type: accuracy
value: 0.7709534665415047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LiLT-SER-PT
This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on the xfun dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1403
- Precision: 0.6998
- Recall: 0.7551
- F1: 0.7264
- Accuracy: 0.7710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:------:|:-----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.0838 | 8.47 | 500 | 0.7697 | 0.6542 | 1.0006 | 0.6081 | 0.7078 |
| 0.0366 | 16.95 | 1000 | 0.7606 | 0.6795 | 1.4063 | 0.6533 | 0.7078 |
| 0.0173 | 25.42 | 1500 | 0.7848 | 0.7047 | 1.4681 | 0.6752 | 0.7369 |
| 0.0036 | 33.9 | 2000 | 0.7706 | 0.7003 | 1.6267 | 0.6577 | 0.7487 |
| 0.0023 | 42.37 | 2500 | 1.6728 | 0.6839 | 0.7172 | 0.7002 | 0.7698 |
| 0.0001 | 50.85 | 3000 | 1.6210 | 0.6742 | 0.7493 | 0.7098 | 0.7941 |
| 0.0001 | 59.32 | 3500 | 1.6883 | 0.6962 | 0.7505 | 0.7223 | 0.7929 |
| 0.0007 | 67.8 | 4000 | 1.8709 | 0.6730 | 0.7590 | 0.7134 | 0.7811 |
| 0.0003 | 76.27 | 4500 | 1.9387 | 0.6884 | 0.7151 | 0.7015 | 0.7690 |
| 0.0034 | 84.75 | 5000 | 1.8042 | 0.6927 | 0.7554 | 0.7227 | 0.7787 |
| 0.0 | 93.22 | 5500 | 2.0395 | 0.6954 | 0.7596 | 0.7261 | 0.7527 |
| 0.0003 | 101.69 | 6000 | 1.9295 | 0.6861 | 0.7511 | 0.7172 | 0.7790 |
| 0.0001 | 110.17 | 6500 | 1.9690 | 0.6813 | 0.7611 | 0.7190 | 0.7694 |
| 0.0 | 118.64 | 7000 | 1.9217 | 0.6974 | 0.7520 | 0.7237 | 0.7754 |
| 0.0001 | 127.12 | 7500 | 2.0703 | 0.6885 | 0.7536 | 0.7196 | 0.7694 |
| 0.0002 | 135.59 | 8000 | 2.0438 | 0.6915 | 0.7635 | 0.7258 | 0.7770 |
| 0.0 | 144.07 | 8500 | 2.0429 | 0.6980 | 0.7599 | 0.7276 | 0.7782 |
| 0.0 | 152.54 | 9000 | 2.1403 | 0.6998 | 0.7551 | 0.7264 | 0.7710 |
| 0.0 | 161.02 | 9500 | 2.1786 | 0.6986 | 0.7578 | 0.7270 | 0.7726 |
| 0.0 | 169.49 | 10000 | 2.1782 | 0.6965 | 0.7560 | 0.7250 | 0.7721 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
|
Lewdiculous/Erosumika-7B-v3-0.2-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-27T10:47:03Z | 820 | 21 | null | [
"gguf",
"text-generation-inference",
"instruct",
"conversational",
"roleplay",
"mistral",
"merge",
"sillytavern",
"text-generation",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-26T22:25:29Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
- mistral
- merge
- sillytavern
license: cc-by-4.0
---
This repo contains GGUF-IQ-Imatrix quantized model files for [Erosumika-7B-v3-0.2](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2).
**Recommended starting [SillyTavern presets here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/lewdicu-3.0.2-mistral-0.2).**
Quants:
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"

**What does "Imatrix" mean?**
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was just to add a bit more diversity to the data.
**Steps:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
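As a rough sketch, those steps correspond to commands along the following lines with the llama.cpp tooling of that period; binary names, script names and flags vary between llama.cpp versions, and all file names here are placeholders.
```
# Convert the HF checkpoint to an F16 GGUF (conversion script name varies by llama.cpp version).
python convert.py ./Erosumika-7B-v3-0.2 --outtype f16 --outfile erosumika-f16.gguf

# Compute the importance matrix from the calibration text.
./imatrix -m erosumika-f16.gguf -f imatrix-with-rp-format-data.txt -o erosumika.imatrix

# Produce an imatrix-aware quant, e.g. Q4_K_M.
./quantize --imatrix erosumika.imatrix erosumika-f16.gguf erosumika-Q4_K_M.gguf Q4_K_M
```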
# Original model information:
<h1 style="text-align: center">Erosumika-7B-v3-0.2</h1>
<h2 style="text-align: center">~Mistral 0.2 Edition~</h2>
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/8YBKfcegQZliRlQNm0oir.gif" alt="Header GIF">
</div>
## Model Details
The Mistral 0.2 version of Erosumika-7B-v3: a DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (in order to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). Alpaca and ChatML work best. Slightly smarter, with better prompt comprehension, than the Mistral 0.1 Erosumika-7B-v3. A 32k context should work.
[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2-GGUF)
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
```yaml
merge_method: task_arithmetic
base_model: alpindale/Mistral-7B-v0.2-hf
models:
- model: localfultonextractor/Erosumika-7B-v3
parameters:
weight: 1.0
dtype: float16
``` |
Abhishek107/Azure_metric_log | Abhishek107 | 2024-03-27T10:35:18Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T10:32:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adriansanz/test6_balanced | adriansanz | 2024-03-27T10:34:24Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"zero-shot_classification",
"base_model:projecte-aina/roberta-base-ca-v2-cawikitc",
"base_model:finetune:projecte-aina/roberta-base-ca-v2-cawikitc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T09:34:54Z | ---
license: apache-2.0
base_model: ibaucells/RoBERTa-ca-CaWikiTC
tags:
- generated_from_trainer
- zero-shot_classification
model-index:
- name: test6_balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
# test6_balanced
This model has been fine-tuned for zero-shot classification from a pretrained version of [ibaucells/RoBERTa-ca-CaWikiTC](https://huggingface.co/ibaucells/RoBERTa-ca-CaWikiTC) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8358 | 1.0 | 63 | 2.8655 |
| 2.8469 | 2.0 | 126 | 2.8447 |
| ... | ... | ... | ... |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
areegtarek/idefics-9b-split1-v1 | areegtarek | 2024-03-27T10:30:57Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| image-text-to-text | 2024-03-27T04:17:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SoloPietro/test | SoloPietro | 2024-03-27T10:29:20Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-27T10:25:15Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Test Dreambooth model trained by SoloPietro with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
hchcsuim/FFPP-Raw_1FPS-224-Unresize | hchcsuim | 2024-03-27T10:20:03Z | 211 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-03-27T09:24:23Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: FFPP-Raw_1FPS-224-Unresize
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9316086343421195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FFPP-Raw_1FPS-224-Unresize
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
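A minimal, hedged sketch of classifying a single frame/face crop with the Transformers image-classification pipeline; the file path is a placeholder.
```python
from transformers import pipeline

detector = pipeline("image-classification", model="hchcsuim/FFPP-Raw_1FPS-224-Unresize")

# Placeholder path to a 224x224 crop extracted from a video frame.
print(detector("frame_crop.png"))
```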
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3473 | 1.0 | 720 | 0.3061 | 0.8633 |
| 0.276 | 2.0 | 1440 | 0.2049 | 0.9135 |
| 0.223 | 3.0 | 2160 | 0.1650 | 0.9316 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.2.1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ViperYX/BiRefNet | ViperYX | 2024-03-27T10:17:06Z | 0 | 50 | null | [
"license:mit",
"region:us"
]
| null | 2024-03-21T11:21:36Z |
---
license: mit
---
This repository is used to store the checkpoints of BiRefNet; please refer to the following repositories:
1. Official implementation: https://github.com/zhengpeng7/birefnet
2. ComfyUI BiRefNet node: https://github.com/viperyl/ComfyUI-BiRefNet |
YANG301/Yolov5-VD | YANG301 | 2024-03-27T10:14:49Z | 0 | 0 | transformers | [
"transformers",
"object-detection",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-03-26T05:50:15Z | ---
pipeline_tag: object-detection
library_name: transformers
--- |
N0de/Pixelcopter-PLE-v0 | N0de | 2024-03-27T10:03:27Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-27T10:03:24Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rrw-x2/KoSOLAR-10.7B-qlora-v1.2 | rrw-x2 | 2024-03-27T10:03:12Z | 0 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:JY623/KoSOLAR-v2.0",
"base_model:adapter:JY623/KoSOLAR-v2.0",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-26T07:18:06Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: JY623/KoSOLAR-v2.0
model-index:
- name: qlora-out/v1.2
results: []
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: JY623/KoSOLAR-v2.0
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: kyujinpy/KOR-OpenOrca-Platypus-v3
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out/v1.2
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# qlora-out/v1.2
This model is a fine-tuned version of [JY623/KoSOLAR-v2.0](https://huggingface.co/JY623/KoSOLAR-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 28
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.4775 | 0.0 | 1 | 13.4330 |
| 6.9219 | 0.25 | 64 | 6.2022 |
| 5.5416 | 0.5 | 128 | 5.2780 |
| 5.4282 | 0.75 | 192 | 5.1929 |
| 5.4864 | 1.0 | 256 | 5.1416 |
| 5.2877 | 1.24 | 320 | 5.1441 |
| 5.1731 | 1.49 | 384 | 5.1413 |
| 5.6221 | 1.74 | 448 | 5.1406 |
| 5.3737 | 1.99 | 512 | 5.1419 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
lemon-mint/gemma-ko-7b-it-v0.31 | lemon-mint | 2024-03-27T09:53:23Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:41:30Z | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
---

# Gemma Ko 7B Instruct v0.31
- Eval Loss: `1.739`
- lr: `1e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
## Model Details
### Model Description
The Gemma 7B Ko Instruct v0.31 model is designed for generating human-like text in the Korean language.
It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation.
This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
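
As a quick, unofficial illustration (the model ID is taken from this repository; the generation settings are arbitrary placeholders), inference with 🤗 Transformers might look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon-mint/gemma-ko-7b-it-v0.31"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma instruction-tuned models ship a chat template with the tokenizer
messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```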
# Limitations and Ethical Considerations
As Gemma Ko 7B has been trained on extensive web data, biases present in the training data may be reflected in the model. Additionally, there is a possibility that it may generate sentences containing errors or incorrect information. Therefore, rather than blindly trusting the model's output, it is necessary to refer to it with caution.
|
ajay141/chat-sql | ajay141 | 2024-03-27T09:49:04Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yuiseki/tinyllama-coder-sql-en-v0.1",
"TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:merge:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:yuiseki/tinyllama-coder-sql-en-v0.1",
"base_model:merge:yuiseki/tinyllama-coder-sql-en-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:48:00Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yuiseki/tinyllama-coder-sql-en-v0.1
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
base_model:
- yuiseki/tinyllama-coder-sql-en-v0.1
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# chat-sql
chat-sql is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yuiseki/tinyllama-coder-sql-en-v0.1](https://huggingface.co/yuiseki/tinyllama-coder-sql-en-v0.1)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yuiseki/tinyllama-coder-sql-en-v0.1
layer_range: [0, 22]
- model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
layer_range: [0, 22]
merge_method: slerp
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75,0.25]
- filter: mlp
value: [0.25,0.75]
- filter: layernorm
value: [0.5,0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ajay141/chat-sql"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Sharanyam/DPO_Model1 | Sharanyam | 2024-03-27T09:31:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T09:31:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JayhC/kuno-kunoichi-v1-DPO-v2-SLERP-7B-8bpw-h8-exl2-rpcal | JayhC | 2024-03-27T09:29:37Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T08:06:30Z | ---
base_model:
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
<br/><br/>
8bpw/h8 exl2 quantization of [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B) using [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) calibration dataset.
---
**ORIGINAL CARD:**
# kuno-kunoichi-v1-DPO-v2-SLERP-7B
kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I'm hoping that the result is more robust against errors when merging due to "denseness", as the two models likely implement comparable reasoning at least somewhat differently.
I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format prompts.
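For reference, a ChatML prompt follows this general pattern (a generic sketch, not something specific to this merge; adjust the system message to taste):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```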
[GGUF-IQ-Imatrix quants helpfully provided by Lewdiculous.](https://huggingface.co/Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix)
[Q8_0 GGUF quant](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF)
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-7B
layer_range: [0,32]
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0,32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
t:
- value: 0.5
dtype: float16
``` |
Herry443/Mistral-7B-KNUT-ref-en-mmlu-0.6 | Herry443 | 2024-03-27T09:27:15Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T09:04:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0x0son0/nr_s1 | 0x0son0 | 2024-03-27T09:25:31Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T05:52:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
karangupta224/mental_roberta_stress | karangupta224 | 2024-03-27T09:22:01Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:mental/mental-roberta-base",
"base_model:finetune:mental/mental-roberta-base",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T08:55:57Z | ---
license: cc-by-nc-4.0
base_model: mental/mental-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mental_roberta_stress
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_roberta_stress
This model is a fine-tuned version of [mental/mental-roberta-base](https://huggingface.co/mental/mental-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4034
- Accuracy: 0.8266
- F1: 0.8278
- Precision: 0.8490
- Recall: 0.8076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6934 | 0.99 | 31 | 0.6885 | 0.5427 | 0.6930 | 0.5302 | 1.0 |
| 0.6692 | 1.98 | 62 | 0.6606 | 0.5664 | 0.7042 | 0.5434 | 1.0 |
| 0.46 | 2.98 | 93 | 0.4357 | 0.8098 | 0.8116 | 0.8300 | 0.7940 |
| 0.3539 | 3.97 | 124 | 0.4034 | 0.8266 | 0.8278 | 0.8490 | 0.8076 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Smuggling1710/An4-7B-GUFF | Smuggling1710 | 2024-03-27T09:20:30Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T09:17:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** Smuggling1710
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rdmpage/autotrain-f8u3m-1w0uc | rdmpage | 2024-03-27T09:18:14Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-f8u3m-1w0uc/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-03-27T09:18:03Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-f8u3m-1w0uc/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.588494598865509
- f1_macro: 0.726419878296146
- f1_micro: 0.7674418604651162
- f1_weighted: 0.7444926647483373
- precision_macro: 0.8743946731234866
- precision_micro: 0.7674418604651163
- precision_weighted: 0.8185567881074384
- recall_macro: 0.6858527131782945
- recall_micro: 0.7674418604651163
- recall_weighted: 0.7674418604651163
- accuracy: 0.7674418604651163
|
BangorAI/cyfieithydd-7b-fersiwn-2 | BangorAI | 2024-03-27T09:14:18Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"cy",
"dataset:techiaith/cofnodycynulliad_en-cy",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-25T12:51:29Z | ---
license: apache-2.0
datasets:
- techiaith/cofnodycynulliad_en-cy
language:
- cy
---
This LLM is based on [BangorAI/mistral-7b-cy-epoch-2](https://huggingface.co/BangorAI/mistral-7b-cy-epoch-2), i.e. the Mistral-7B model after continued pre-training for Welsh.
The model was then further fine-tuned on the Cofnod y Cynulliad (Record of Assembly Proceedings) data provided by [TechIaith](https://huggingface.co/techiaith).
## Demo
You can try out an example of the model here: [https://demo.bangor.ai/](https://demo.bangor.ai/)
### Chat Format
The fine-tuning used the following format for translating from English into Welsh (and likewise in the other direction).
```
Cyfieithwch y testun Saesneg canlynol i'r Gymraeg.
### Saesneg:
{prompt}
### Cymraeg:
```
## How to use
The underlying Mistral-7B-v0.1 model has a 4k sliding context window, so it is best to package the text a paragraph or two at a time and feed each chunk into the LLM in turn.
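
As a rough sketch (not an official recipe from Bangor AI; the example text and generation settings are placeholders), a single translation call with 🤗 Transformers could look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BangorAI/cyfieithydd-7b-fersiwn-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt in the fine-tuning format shown above
english_text = "Wales is a country that is part of the United Kingdom."
prompt = (
    "Cyfieithwch y testun Saesneg canlynol i'r Gymraeg.\n"
    "### Saesneg:\n"
    f"{english_text}\n"
    "### Cymraeg:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```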
## Copyright
The Cofnod y Cynulliad data is licensed under the [Open Government Licence](https://www.nationalarchives.gov.uk/doc/open-government-licence-cymraeg/version/3/). |
hfl/chinese-mixtral-instruct-gguf | hfl | 2024-03-27T09:14:13Z | 362 | 13 | null | [
"gguf",
"moe",
"zh",
"en",
"arxiv:2403.01851",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-01-24T02:28:39Z | ---
license: apache-2.0
language:
- zh
- en
tags:
- moe
---
# Chinese-Mixtral-Instruct-GGUF
<p align="center">
<a href="https://github.com/ymcui/Chinese-Mixtral"><img src="https://ymcui.com/images/chinese-mixtral-banner.png" width="600"/></a>
</p>
**Chinese Mixtral GitHub repository: https://github.com/ymcui/Chinese-Mixtral**
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Mixtral-Instruct** (chat/instruction model).
**Note: When using the instruction/chat model, you MUST follow the official prompt template! Example: [chat.sh](https://github.com/ymcui/Chinese-Mixtral/blob/main/scripts/llamacpp/chat.sh)**
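
As a rough sketch only (the file name and sampling flags below are placeholders, and the canonical prompt template is the one in the linked `chat.sh`), a llama.cpp run with a Mixtral-style `[INST]` prompt might look like:

```bash
./main -m chinese-mixtral-instruct-q4_k.gguf \
  --temp 0.7 -n 512 \
  -p "[INST] 请用中文介绍一下你自己。 [/INST]"
```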
## Performance
Metric: PPL, lower is better
| Quant | Size ↓ | PPL |
| ------- | ------- | ------------------ |
| IQ1_S | 9.8 GB | 9.5782 +/- 0.08909 |
| IQ1_M | 10.8 GB | 7.4666 +/- 0.06741 |
| IQ2_XXS | 12.3 GB | 6.3923 +/- 0.05674 |
| IQ2_XS | 13.7 GB | 6.0606 +/- 0.05834 |
| IQ2_S | 14.1 GB | 4.7617 +/- 0.04177 |
| IQ2_M | 15.5 GB | 4.5911 +/- 0.04054 |
| Q2_K | 17.3 GB | 4.8592 +/- 0.04303 |
| IQ3_XXS | 18.3 GB | 4.3557 +/- 0.03846 |
| IQ3_XS | 19.3 GB | 4.3328 +/- 0.03779 |
| IQ3_S | 20.4 GB | 4.3138 +/- 0.03785 |
| IQ3_M | 21.4 GB | 4.3024 +/- 0.03775 |
| Q3_K | 22.5 GB | 4.4334 +/- 0.03937 |
| IQ4_XS | 25.1 GB | 4.2324 +/- 0.03757 |
| Q4_0 | 26.4 GB | 4.2688 +/- 0.03787 |
| IQ4_NL | 26.5 GB | 4.2384 +/- 0.03763 |
| Q4_K | 28.4 GB | 4.2433 +/- 0.03768 |
| Q5_0 | 32.2 GB | 4.2142 +/- 0.03733 |
| Q5_K | 33.2 GB | 4.2177 +/- 0.03743 |
| Q6_K | 38.4 GB | 4.2184 +/- 0.03754 |
| Q8_0 | 49.6 GB | 4.2053 +/- 0.03732 |
| F16 | 93.5 GB | x |
Due to the file size limitation, for the F16 model, please use the `cat` command to concatenate all parts into a single file. **You must concatenate these parts in order.**
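For example (the part names below are illustrative; use the actual file names in this repository):
```bash
# shell globbing expands the parts in lexicographic order
cat chinese-mixtral-instruct-f16.gguf.part* > chinese-mixtral-instruct-f16.gguf
```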
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-mixtral-instruct
Please refer to [https://github.com/ymcui/Chinese-Mixtral/](https://github.com/ymcui/Chinese-Mixtral/) for more details.
## Citation
Please consider citing our paper if you use the resources of this repository.
Paper link: https://arxiv.org/abs/2403.01851
```
@article{chinese-mixtral,
title={Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral},
author={Cui, Yiming and Yao, Xin},
journal={arXiv preprint arXiv:2403.01851},
url={https://arxiv.org/abs/2403.01851},
year={2024}
}
``` |
zabir735/clip-seed-vit-11 | zabir735 | 2024-03-27T09:10:00Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-base-patch16",
"base_model:finetune:openai/clip-vit-base-patch16",
"endpoints_compatible",
"region:us"
]
| zero-shot-image-classification | 2024-03-27T06:24:19Z | ---
base_model: openai/clip-vit-base-patch16
tags:
- generated_from_trainer
model-index:
- name: clip-seed-vit-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-seed-vit-11
This model is a fine-tuned version of [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11.0
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.1
|
0x0mom/nous_r1 | 0x0mom | 2024-03-27T09:09:11Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-26T19:49:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
berquetR/model_opti_paths | berquetR | 2024-03-27T09:08:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T09:08:13Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** berquetR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zzzwisebird/finetune_starcoder2b | zzzwisebird | 2024-03-27T09:02:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
]
| null | 2024-03-27T08:59:43Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: bigcode/starcoder2-3b
model-index:
- name: finetune_starcoder2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_starcoder2b
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
James449/nlp-t5-qa-model | James449 | 2024-03-27T08:53:15Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-03-27T06:46:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dat96/videomae-base-finetuned-scratch | dat96 | 2024-03-27T08:51:18Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2024-03-25T04:12:50Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-scratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-scratch
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Accuracy: 0.7994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3952
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6869 | 0.08 | 330 | 0.6326 | 0.6490 |
| 0.6342 | 1.08 | 660 | 0.6356 | 0.6447 |
| 0.6718 | 2.08 | 990 | 0.6112 | 0.6648 |
| 0.5003 | 3.08 | 1320 | 0.5741 | 0.6991 |
| 0.4131 | 4.08 | 1650 | 0.5480 | 0.7077 |
| 0.3233 | 5.08 | 1980 | 0.5564 | 0.7464 |
| 0.2411 | 6.08 | 2310 | 0.4929 | 0.7923 |
| 0.3402 | 7.08 | 2640 | 0.7592 | 0.7593 |
| 0.2174 | 8.08 | 2970 | 0.7752 | 0.7779 |
| 0.1706 | 9.08 | 3300 | 0.8511 | 0.7923 |
| 0.1127 | 10.08 | 3630 | 0.9263 | 0.7994 |
| 0.062 | 11.08 | 3952 | 1.0187 | 0.7808 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MiuN2k3/ViWikiSBert-fine-tuning | MiuN2k3 | 2024-03-27T08:50:49Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:COCOTECH-AI/ViNLI-SimCSE-supervised",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-25T17:00:50Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- COCOTECH-AI/ViNLI-SimCSE-supervised
---
# MiuN2k3/ViWikiSBert-fine-tuning
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('MiuN2k3/ViWikiSBert-fine-tuning')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('MiuN2k3/ViWikiSBert-fine-tuning')
model = AutoModel.from_pretrained('MiuN2k3/ViWikiSBert-fine-tuning')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=MiuN2k3/ViWikiSBert-fine-tuning)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1597 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 798,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
martibosch/detectree | martibosch | 2024-03-27T08:47:17Z | 0 | 2 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"region:us"
]
| tabular-classification | 2024-03-27T08:47:15Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: skops
model_file: clf.skops
widget:
- structuredData:
x0:
- 25.861774444580078
- 13.925846099853516
- 18.626529693603516
x1:
- -9.131807327270508
- -9.77347183227539
- -9.504095077514648
x10:
- 1.9384498596191406
- 2.884918212890625
- 2.2260468006134033
x11:
- 3.0784831047058105
- 3.9418864250183105
- 4.06421422958374
x12:
- 2.991974353790283
- 2.9264330863952637
- 2.5255069732666016
x13:
- 2.345289707183838
- 2.3997442722320557
- 2.200080394744873
x14:
- 1.7882720232009888
- 1.840790867805481
- 1.383643388748169
x15:
- 1.6710506677627563
- 1.2678987979888916
- 0.47583726048469543
x16:
- 1.840790867805481
- 1.7882720232009888
- 1.0316907167434692
x17:
- 2.3997442722320557
- 2.345289707183838
- 1.8518061637878418
x18:
- 1.380855917930603
- 1.2926031351089478
- 1.0395294427871704
x19:
- 1.241168737411499
- 1.126420021057129
- 0.8134236931800842
x2:
- 4.62739896774292
- 5.171527862548828
- 4.921814441680908
x20:
- 0.9832149744033813
- 0.8152679800987244
- 0.38093870878219604
x21:
- 0.8598455786705017
- 0.6651478409767151
- 0.17098481953144073
x22:
- 0.9832149744033813
- 0.8152679800987244
- 0.38093870878219604
x23:
- 1.241168737411499
- 1.126420021057129
- 0.8134236931800842
x24:
- 2.725480556488037
- 3.022055149078369
- 3.3232314586639404
x25:
- 1.8365917205810547
- 2.0849626064300537
- 2.1735572814941406
x26:
- 1.22439444065094
- 1.251629114151001
- 1.4565647840499878
x3:
- 0.15449941158294678
- 0.03677806630730629
- 0.07167093455791473
x4:
- -0.024682553485035896
- -0.02837284840643406
- -0.029335789382457733
x5:
- -0.5647724866867065
- -0.19825565814971924
- -0.3138836622238159
x6:
- 4.030393123626709
- 5.093674182891846
- 5.913875102996826
x7:
- 3.9418864250183105
- 3.0784831047058105
- 3.5550312995910645
x8:
- 2.884918212890625
- 1.9384498596191406
- 1.8467140197753906
x9:
- 1.4331213235855103
- 1.9454820156097412
- 0.7261717319488525
---
# Model description
LightGBM classifier of tree/non-tree pixels from aerial imagery trained on Zurich's Orthofoto Sommer 2014/15 using detectree.
## Intended uses & limitations
Segment tree/non-tree pixels from aerial imagery
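
A minimal sketch of loading and using the classifier via `skops` and `huggingface_hub` (pixel-feature extraction itself is handled by the detectree package; the random features below are only a stand-in):

```python
import numpy as np
from huggingface_hub import hf_hub_download
from skops.io import get_untrusted_types, load

# download the persisted LightGBM classifier from this repository
clf_path = hf_hub_download(repo_id="martibosch/detectree", filename="clf.skops")

# inspect the types stored in the file and explicitly trust them before loading
untrusted_types = get_untrusted_types(file=clf_path)
clf = load(clf_path, trusted=untrusted_types)

# the classifier expects one row of 27 detectree pixel features per pixel
X = np.random.rand(4, 27)
print(clf.predict(X))  # tree / non-tree class labels
```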
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------|---------|
| boosting_type | gbdt |
| class_weight | |
| colsample_bytree | 1.0 |
| importance_type | split |
| learning_rate | 0.1 |
| max_depth | -1 |
| min_child_samples | 20 |
| min_child_weight | 0.001 |
| min_split_gain | 0.0 |
| n_estimators | 200 |
| n_jobs | |
| num_leaves | 31 |
| objective | |
| random_state | |
| reg_alpha | 0.0 |
| reg_lambda | 0.0 |
| subsample | 1.0 |
| subsample_for_bin | 200000 |
| subsample_freq | 0 |
</details>
### Model Plot
Fitted estimator: `LGBMClassifier(n_estimators=200)` (scikit-learn interactive HTML diagram).
## Evaluation Results
Metrics calculated on a validation set of 1% of the test tiles
| Metric | Value |
|-----------|----------|
| accuracy | 0.87635 |
| precision | 0.785237 |
| recall | 0.756414 |
| f1 | 0.770556 |
## Dataset description
https://www.geolion.zh.ch/geodatensatz/2831
## Preprocessing description
Images are resampled to 50 cm resolution. Train/test split based on image descriptors with 1% of tiles selected for training.
# How to Get Started with the Model
[More Information Needed]
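Until official instructions are added, here is a minimal, hedged sketch: the repository id and the classifier filename below are placeholders (check this repo's file list), and whole-tile prediction is done through detectree's `Classifier`, as described in its documentation.
```python
import joblib
from huggingface_hub import hf_hub_download

# Placeholders: substitute the actual repo id and the pickled-classifier filename.
clf_path = hf_hub_download(repo_id="<this-repo-id>", filename="clf.joblib")
clf = joblib.load(clf_path)  # LGBMClassifier(n_estimators=200)

# clf.predict expects the per-pixel feature matrix that detectree computes for a tile;
# in practice, pass clf to detectree's Classifier to segment whole 50 cm tiles.
```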
# Model Card Authors
Martí Bosch
# Model Card Contact
[email protected]
# Citation
https://joss.theoj.org/papers/10.21105/joss.02172
# Example predictions
<details>
<summary> Click to expand </summary>

</details>
|
Gingnose/ppo-LunarLander-v2 | Gingnose | 2024-03-27T08:37:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-27T08:37:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.40 +/- 13.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename inside the repo is an assumption — check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; huggingface_sb3 repos usually ship the SB3 .zip checkpoint.
checkpoint = load_from_hub("Gingnose/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DUAL-GPO/phi-2-gpo-test-longest-iter-random2-2 | DUAL-GPO | 2024-03-27T08:36:30Z | 2 | 0 | peft | [
"peft",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-03-27T07:31:41Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-test-longest-iter-random2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-test-longest-iter-random2-2
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-test-longest-iter-random2-1](https://huggingface.co/DUAL-GPO/phi-2-gpo-test-longest-iter-random2-1) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Rewards/chosen: -0.0060
- Rewards/rejected: -0.0057
- Rewards/accuracies: 0.4955
- Rewards/margins: -0.0003
- Logps/rejected: -279.5291
- Logps/chosen: -307.4422
- Logits/rejected: 0.0341
- Logits/chosen: -0.0650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.001 | 1.6 | 100 | 0.0017 | -0.0031 | -0.0027 | 0.4900 | -0.0004 | -279.2278 | -307.1495 | 0.0468 | -0.0515 |
| 0.0011 | 3.2 | 200 | 0.0019 | -0.0041 | -0.0041 | 0.5005 | 0.0000 | -279.3705 | -307.2468 | 0.0071 | -0.0933 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
tung491/ppo-LunarLander-v2 | tung491 | 2024-03-27T08:36:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-22T11:55:14Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.72 +/- 19.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename inside the repo is an assumption — check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; huggingface_sb3 repos usually ship the SB3 .zip checkpoint.
checkpoint = load_from_hub("tung491/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
chanelcolgate/chamdiemgianhang-vsk | chanelcolgate | 2024-03-27T08:33:39Z | 13 | 0 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:chanelcolgate/yenthienviet",
"model-index",
"region:us"
]
| object-detection | 2024-03-13T13:14:14Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.239
inference: false
datasets:
- chanelcolgate/yenthienviet
model-index:
- name: chanelcolgate/chamdiemgianhang-vsk
results:
- task:
type: object-detection
dataset:
type: chanelcolgate/yenthienviet
name: yenthienviet
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.92084 # min: 0.0 - max: 1.0
name: [email protected](box)
---
<div align="center">
<img width="640" alt="chanelcolgate/chamdiemgianhang-vsk" src="https://huggingface.co/chanelcolgate/chamdiemgianhang-vsk/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['BOM_GEN', 'BOM_JUN', 'BOM_KID', 'BOM_SAC', 'BOM_VTG', 'BOM_YTV', 'HOP_FEJ', 'HOP_FRE', 'HOP_JUN', 'HOP_POC', 'HOP_VTG', 'HOP_YTV', 'LOC_JUN', 'LOC_KID', 'LOC_YTV', 'LOO_DAU', 'LOO_KID', 'LOO_MAM', 'LOO_YTV', 'POS_NHO', 'TUI_GEN', 'TUI_JUN', 'TUI_KID', 'TUI_SAC', 'TUI_VTG', 'TUI_YTV']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.1.0 ultralytics==8.0.239
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('chanelcolgate/chamdiemgianhang-vsk')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
|
Suparious/mistral-7b-v0.2-layla-v4-exl2 | Suparious | 2024-03-27T08:29:21Z | 10 | 1 | null | [
"text-generation",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-03-27T06:45:38Z | ---
license: apache-2.0
quantized_by: suparious
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of mistral-7b-v0.2-layla-v4
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: l3utterfly/mistral-7b-v0.2-layla-v4
Model Size: 7b
| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------- | ------------ |
| [8_0](https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_0](https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
<a href="https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/suparious/mistral-7b-v0.2-layla-v4-exl2
```
With huggingface hub (credit to TheBloke and bartowski for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `mistral-7b-v0.2-layla-v4-exl2`:
```shell
mkdir mistral-7b-v0.2-layla-v4-exl2
huggingface-cli download suparious/mistral-7b-v0.2-layla-v4-exl2 --local-dir mistral-7b-v0.2-layla-v4-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir mistral-7b-v0.2-layla-v4-exl2-6_5
huggingface-cli download suparious/mistral-7b-v0.2-layla-v4-exl2 --revision 6_5 --local-dir mistral-7b-v0.2-layla-v4-exl2-6_5 --local-dir-use-symlinks False
``` |
n0m09g3/code-llama-7b-text-to-sql | n0m09g3 | 2024-03-27T08:27:25Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-03-27T07:58:02Z | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: code-llama-7b-text-to-sql
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
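For reference, a hedged sketch of the equivalent `BitsAndBytesConfig` reconstructed from the settings listed above (not taken from the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 double-quantization settings from the config list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```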
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.5.0
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
|
lewtun/kto-aligned-model-lora | lewtun | 2024-03-27T08:24:41Z | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"kto",
"generated_from_trainer",
"base_model:trl-lib/qwen1.5-1.8b-sft",
"base_model:adapter:trl-lib/qwen1.5-1.8b-sft",
"license:other",
"region:us"
]
| null | 2024-03-27T08:24:39Z | ---
license: other
library_name: peft
tags:
- trl
- kto
- generated_from_trainer
base_model: trl-lib/qwen1.5-1.8b-sft
model-index:
- name: kto-aligned-model-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kto-aligned-model-lora
This model is a fine-tuned version of [trl-lib/qwen1.5-1.8b-sft](https://huggingface.co/trl-lib/qwen1.5-1.8b-sft) on the None dataset.
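As a hedged loading sketch (not part of the original card), the LoRA adapter can be applied on top of the base SFT model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base SFT model, then attach this repo's KTO-aligned LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("trl-lib/qwen1.5-1.8b-sft")
model = PeftModel.from_pretrained(base, "lewtun/kto-aligned-model-lora")
tokenizer = AutoTokenizer.from_pretrained("trl-lib/qwen1.5-1.8b-sft")
```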
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
chihoonlee10/T3Q-ko-solar-dpo-v3.0 | chihoonlee10 | 2024-03-27T08:18:57Z | 2,325 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-20T15:10:37Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# T3Q-ko-solar-dpo-v3.0
## This model is a version of davidkim205/nox-solar-10.7b-v4 that has been fine-tuned with DPO.
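A generic, hedged loading sketch (not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chihoonlee10/T3Q-ko-solar-dpo-v3.0")
model = AutoModelForCausalLM.from_pretrained(
    "chihoonlee10/T3Q-ko-solar-dpo-v3.0", torch_dtype="auto", device_map="auto"
)
```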
## Model Developers Chihoon Lee (chihoonlee10), T3Q |
PunGrumpy/music-genre-classification | PunGrumpy | 2024-03-27T08:15:59Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-02T02:12:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
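In the absence of official code, a hedged sketch (the text-classification task and the example input are assumptions — the auto-generated card does not state the input format):
```python
from transformers import pipeline

# Assumes the checkpoint is a BERT sequence-classification model over music descriptions.
classifier = pipeline("text-classification", model="PunGrumpy/music-genre-classification")
print(classifier("Heavy distorted guitars, blast beats and growled vocals"))
```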
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|