modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-13 06:28:01) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (518 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-13 06:25:04) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Skywork/Skywork-13B-Base-8bits | Skywork | 2023-11-05T05:02:49Z | 5 | 7 | transformers | [
"transformers",
"pytorch",
"skywork",
"text-generation",
"custom_code",
"arxiv:2310.19341",
"arxiv:2310.16713",
"license:other",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2023-10-24T03:58:41Z | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
<!-- <div align="center">
<h1>
✨Skywork
</h1>
</div> -->
<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
<p align="center">
👨💻 <a href="https://github.com/SkyworkAI/Skywork" target="_blank">Github</a> • 🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a>• 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a>• 📜<a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>
<div align="center">
[🎉 The Tiangong online chat platform (天工在线对话平台) is now officially open to the public](https://sso.tiangong.cn/?redirect=https://model-platform.tiangong.cn/overview&client_id=200005)
</div>
<div align="center">
[GitHub stars](https://github.com/SkyworkAI/Skywork/stargazers)
[GitHub forks](https://github.com/SkyworkAI/Skywork/fork)
</div>
# 模型介绍(Introduction)
**Skywork-13B-Base**模型在高质量清洗过滤的3.2万亿个多语言(主要是中文和英文)和代码数据上进行预训练,它在多种评测和各种基准测试上都展现了同等规模模型的最佳效果。
**Skywork-13B-Base**: The model was trained on a high-quality, cleaned dataset of 3.2 trillion tokens of multilingual (mainly Chinese and English) and code data. It has demonstrated the best performance among models of similar scale in various evaluations and benchmark tests.
**Skywork-13B-Base-8bits**模型为**Skywork-13B-Base**的8bits量化版,支持用户在消费级显卡上进行部署和推理。
**Skywork-13B-Base-8bits** is an 8-bit quantized version of **Skywork-13B-Base** that supports deployment and inference on consumer-grade GPUs.
如果您希望了解更多的信息,如训练方案,评估方法,请参考我们的[技术报告](http://arxiv.org/abs/2310.19341),[Skymath](https://arxiv.org/abs/2310.16713)论文,[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)论文。
If you are interested in more training and evaluation details, please refer to our [technical report](http://arxiv.org/abs/2310.19341), [Skymath](https://arxiv.org/abs/2310.16713) paper and [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.
## 训练数据(Training Data)
我们精心搭建了数据清洗流程对文本中的低质量数据、有害信息、敏感信息进行清洗过滤。我们的Skywork-13B-Base模型是在清洗后的3.2TB高质量中、英、代码数据上进行训练,其中英文占比52.2%,中文占比39.6%,代码占比8%,在兼顾中文和英文上的表现的同时,代码能力也能有保证。
We have developed a data cleaning pipeline with great care to effectively clean and filter low-quality data and eliminate harmful information from text data. Our Skywork-13B-Base model is trained on 3.2 TB of thoroughly cleaned, high-quality Chinese, English, and code data. The English data comprises 52.2% of the dataset, the Chinese data accounts for 39.6%, and the code data makes up 8%. This comprehensive approach ensures optimal performance for both Chinese and English while also maintaining the ability to handle code.
| | Category | Percentage |
|-------------|------------------|------------|
| **English** | Webpages | 39.8% |
| | Books | 3.6% |
| | Academic Papers | 3.0% |
| | Encyclopedia | 0.5% |
| | Miscellany | 2.9% |
| **Chinese** | Webpages | 30.4% |
| | Social Media | 5.5% |
| | Encyclopedia | 0.8% |
| | Miscellany | 3.1% |
| **Other Lang.** | Encyclopedia | 2.4% |
| **Code** | Github | 8.0% |
## 模型结构(Model Structure)
与Llama-2-13B模型对比,天工Skywork-13B模型采用相对更加瘦长的网络结构,层数为52层,同时将FFN Dim和Hidden Dim缩小到12288和4608,从而保证模型参数量和原始Llama-2-13B模型相当。根据我们前期实验对比,相对瘦长的网络结构在大Batch Size训练下可以取得更好的泛化效果。Skywork-13B和Llama-2-13B模型的对比如下:
Compared to the Llama2-13B model, the Skywork-13B model adopts a relatively thinner and deeper network structure with 52 layers. At the same time, the FFN Dim and Hidden Dim are reduced to 12288 and 4608, respectively, to ensure that the model has a similar number of parameters as the original Llama-2-13B model. Based on our preliminary experimental results, a relatively thinner and deeper network structure can achieve better generalization performance under large-batch-size training. The detailed comparison between the Skywork-13B and Llama-2-13B models is as follows:
| Model Structure | Llama2-13B | Skywork-13B |
|----------------------|:----:|:-----------:|
| Vocab. Size | 32,000 | 65,536 |
| Hidden Dim. | 5,120 | 4,608 |
| FFN Dim. | 13,696 | 12,288 |
| Head Dim. | 128 | 128 |
| Num. Heads | 40 | 36 |
| Num. Layers | 40 | 52 |
| Seq. Len. | 4,096 | 4,096 |
| Positional Embedding | RoPE | RoPE |
## 分词器(Tokenizer)
我们使用Byte-Pair Encoding(BPE)对数据进行分词,词表大小为65536,其中拉丁字符和子词为32000个,汉字和Unicode符号8000个,汉语词语25519个,剩下的17个为保留字。
We use Byte-Pair Encoding (BPE) to tokenize the data, with a vocabulary size of 65536. Among them, there are 32000 Latin characters and subwords, 8000 Chinese characters and Unicode symbols, 25519 Chinese words, and the remaining 17 are reserved words.
| Category | Size |
|---------------------------------|--------|
| Latin based words & subwords | 32000 |
| Chinese characters & Unicode symbols | 8000 |
| Chinese words | 25519 |
| Reserved symbols | 17 |
| **Total** | **65536** |
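For reference, here is a minimal sketch (not part of the original card) of inspecting this vocabulary with the Hugging Face tokenizer. The repository id and `trust_remote_code=True` follow the Quickstart section below; treat the snippet as illustrative rather than official usage.
```python
from transformers import AutoTokenizer

# Load the Skywork BPE tokenizer (the repo ships custom code, hence trust_remote_code).
tokenizer = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)

# The vocabulary should report 65536 entries, as described in the table above.
print(tokenizer.vocab_size)

# Mixed Chinese/English text is split into Latin subwords, Chinese words, and characters.
print(tokenizer.tokenize("Skywork是一个中英双语基础大模型。"))
```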
# 模型评估(Evaluation)
## 领域数据困惑度评估(Perplexity Evaluation)
语言模型训练的本质上是让预测下一个词更准确。基于这个认知,我们认为评估基础大模型一个重要的方式是评估在各大领域上语言模型生成文章的概率。在模型训练中预测下一个词的概率一般使用Cross Entropy损失函数,整体的损失函数为每个位置预测真实词损失的平均,则有:
$$loss = -\sum^{n}_{i=1} \log(p_i) / n = -\log\left( \prod_{i=1}^n p_i \right) / n$$
其中$n$是文档的长度,即token数,$p_i$是位置i上真实词的概率,我们知道文档中每一个位置上真实词的概率的联乘则为生成该文档的概率,如此我们就将loss和生成文章的概率联系在了一起。而不同模型因为使用的分词器不同,具有不同的token数,因此对损失函数乘以token数目$n$,这样就仅考虑生成文章的概率部分,不同模型也可以进行比较。我们将标准化后loss取指数转换成perplexity,使得模型的差异更加可读。为了阅读方便后续提到的loss和ppl为模型标准化后的loss和perplexity。
The essence of language-model training is to predict the next token more accurately. Based on this, we believe that an important way to evaluate a foundation model is to measure the probability it assigns to documents from various domains. Training uses the cross-entropy loss, and the overall loss is the average, over all positions, of the loss of predicting the true token, as in the formula above. Here $n$ is the document length in tokens and $p_i$ is the probability of the true token at position $i$. Since the product of these probabilities over all positions equals the probability of generating the document, the loss is directly linked to that probability. Because different models use different tokenizers and therefore different token counts, we multiply the loss by $n$ so that only the document probability is considered, which makes different models comparable. We then exponentiate the normalized loss to obtain a perplexity, which makes model differences easier to read. For convenience, the loss and ppl mentioned below refer to the normalized loss and perplexity.
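To make the normalization above concrete, the following is a hedged sketch (not the official evaluation code) that computes a document-level negative log-likelihood with a causal LM, rescales it by a reference length, and converts it to perplexity. The choice of reference length is an assumption here; the official script referenced below (`bash_scripts/skywork_eval_loss.sh`) defines the exact procedure used for the reported numbers.
```python
import math
import torch

def normalized_ppl(model, tokenizer, text: str, ref_len: int) -> float:
    """Document-level cross-entropy rescaled by token count, then exponentiated.

    `model` is any Hugging Face causal LM and `tokenizer` its tokenizer;
    `ref_len` is a caller-chosen reference length (an assumption here)."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        # With labels=ids, a causal LM returns the mean next-token cross-entropy.
        mean_loss = model(ids, labels=ids).loss.item()
    n = ids.shape[1]
    doc_nll = mean_loss * n               # -log P(document): comparable across tokenizers
    return math.exp(doc_nll / ref_len)    # normalized perplexity
```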
基于上述分析,我们对多个领域筛选出2023年9月份新发布的几百到上千篇高质量文章,并人工进行了核对。保证所有的测试数据不在天工模型以及其他所有模型的训练集中,并且测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的ppl,模型很难作弊。
下图列出了不同开源模型,天工Skywork-13B-Base取得最优效果,证明了我们的Base模型的基础能力处于国内开源模型中文最强水平。
We have chosen several hundred to thousands of high-quality articles that were published after September 1, 2023 across various fields. We have manually verified these articles to ensure their quality. It is important to note that none of the test data used in evaluating the Skywork model or any other models is included in their training set. Furthermore, the test data is diverse and of high quality, making it challenging for the models to gain an unfair advantage.
The figure below displays the performance of different open source models. Skywork-13B-Base achieves the best results.
| | Tech | Movie | Gov. | Game | Finance | General | Average |
|------------------|-------|-------|-------|-------|---------|---------|---------|
| MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50 |
| InternLM-7B | 13.43 | 24.90 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17 |
| Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83 |
| Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41 |
| LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | 23.54 |
| Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | 10.19 |
| Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | 10.03 |
| Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | 7.24 | 9.81 |
| Qwen-14B | 11.90 | 22.43 | 4.89 | **16.94** | 5.24 | 7.03 | 9.67 |
| InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | 7.78 | 10.34 |
| Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | 8.45 | 11.73 |
| Skywork-13B-Base | **11.58** | **21.84** | **4.76** | 17.28 | **4.92** | **6.82** | **9.42** |
### 评测数据和评测脚本(Loss Evaluation)
我们将评测数据和评测脚本也进行了开源,下载github上的代码运行下面命令则可以复现我们的结果。
We have also open-sourced the data and evaluation scripts. You can reproduce our results by running the following command.
```
bash bash_scripts/skywork_eval_loss.sh
```
## Benchmark评估(Benchmark Results)
我们评估了各大权威评测benchmark上的结果作为参考,包括C-Eval,MMLU,CMMLU,GSM8K。遵循之前的评估流程,C-Eval、MMLU、CMMLU测试5-shot结果,GSM8K测试8-shot结果。可以看到Skywork-13B-Base模型在中文开源模型中处于前列,在同等参数规模下为最优水平。
We evaluated Skywork-13B-Base on several popular benchmarks, including C-Eval, MMLU, CMMLU, and GSM8K. Following the previous evaluation process, we tested the 5-shot results of C-Eval, MMLU, and CMMLU, and the 8-shot results of GSM8K. Skywork-13B-Base ranks among the top Chinese open-source models and delivers the best results at its parameter scale.
| Model | C-Eval | CMMLU | MMLU | GSM8K |
|-------------------------|:-----:|:---------------:|:----------:|:-------:|
| LLaMA-1-13B-Base | 35.5 | 31.2 | 46.9 | 17.8 |
| Open-LLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4 |
| LLaMA-2-13B-Base | 36.5 | 36.6 | 54.8 | 28.7 |
| InternLM-20B | 58.8 | - | 62.0 | 52.6 |
| Qwen-14B-Base | 72.1 | 71.0 | 66.3 | 61.3 |
| Aquila2-34B-Base | 63.1 | 71.4 | 64.2 | 58.4 |
| XVERSE-13B-Base | 54.7 | - | 55.1 | - |
| Baichuan-13B-Base | 52.4 | 55.3 | 51.6 | 26.6 |
| Baichuan-2-13B-Base | 58.1 | 62.0 | 59.2 | 52.3 |
| Skywork-13B-Base (ours) | 60.6 | 61.8 | 62.1 | 55.8 |
## Benchmark评估详细结果(Detailed Benchmark Results)
我们给出**Skywork-13B-Base**模型在C-Eval,CMMLU,MMLU上模型的详细结果。
We provide detailed results of the Skywork-13B-Base model on C-EVAL, CMMLU, and MMLU.
| Benchmark | **STEM** | **Humanities** | **Social Science** | **Other** | **China Specific** | **Hard** | **Average** |
|:-----:|:---------:|:--------:|:-------------:|:--------:|:--------:|:--------:|:--------:|
| **C-EVAL** | 51.2 | 67.8 | 74.6 | 57.5 | - | 39.4 | 60.6 |
| **CMMLU** | 49.5 | 69.3 | 65.9 | 63.3 | 64.2 | - | 61.8 |
| **MMLU** | 51.6 | 58.0 | 72.5 | 68.8 | - | - | 62.1 |
# 快速开始(Quickstart)
我们将模型参数、配置文件、tokenizer等在huggingface和modelscope上进行了开源。
We have open-sourced the model parameters, configuration files, tokenizer, and more on Huggingface and Modelscope.
## 依赖安装(Requirements)
- Python 3.8及以上版本
- Pytorch 2.0及以上版本
- CUDA建议使用11.4以上版本。
Skywork-13B-Base模型,Skywork-13B-Chat模型和Skywork-13B-Math模型运行下面的脚本进行Python依赖安装。
- Python 3.8 and above
- Pytorch 2.0 and above
- CUDA 11.4 and above are recommended.
For the Skywork-13B-Base, Skywork-13B-Chat, and Skywork-13B-Math models, run the following script to install the Python dependencies:
```shell
pip install -r requirements.txt
```
## Huggingface模型测试(Demonstration)
### Base 模型推理(Base Model Inference)
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation import GenerationConfig
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("SkyworkAI/Skywork-13B-Base", device_map="auto", trust_remote_code=True).eval()
>>> inputs = tokenizer('陕西的省会是西安', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,西安是我国著名的古都,在历史上有十三个朝代在此建都,所以西安又被称为“十三朝古都”。西安是我国著名的旅游城市,每年都有大量的游客来到西安旅游,西安的旅游资源非常丰富,有很多著名的旅游景点,比如秦始皇兵马俑、大雁塔、华清池、大唐芙蓉园、西安城墙、大明宫国家遗址公园、西安碑林博物馆、西安钟楼、西安鼓楼、西安半坡博物馆、西安大兴善寺、西安小雁塔
>>> inputs = tokenizer('陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州,湖北的省会是武汉,湖南的省会是长沙,江西的省会是南昌,安徽的省会是合肥,江苏的省会是南京,浙江的省会是杭州,福建的省会是福州,广东的省会是广州,广西的省会是南宁,海南的省会是海口,四川的省会是成都,贵州的省会是贵阳,云南的省会是昆明,西藏的省会是拉萨,青海的省会是西宁,宁夏的省会是银川,新疆的省会是乌鲁木齐。
```
# 模型微调(Fine-tuning)
## 全量微调(Full-parameter Fine-tuning)
使用Skywork-13B-Base模型进行预训练微调(Continued pre-training with the Skywork-13B-Base model)
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
## launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt.sh
```
使用Skywork-13B-Base模型进行有监督微调(SFT, Supervised Fine-tuning)
```bash
## preprocess data and launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft.sh
```
## LoRA微调(PEFT)
使用Skywork-13B-Base模型以及LoRA进行预训练微调(Continued pre-training with the Skywork-13B-Base model using LoRA)
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt_lora.sh
```
使用Skywork-13B-Base模型以及LoRA进行有监督微调(SFT, Supervised Fine-tuning with LoRA)
```bash
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft_lora.sh
```
# 量化部署(Quantization)
## 8bit量化(Int8 Quantization)
skywork 采用主流8bits量化方法:[BitsAndBytes](https://github.com/TimDettmers/bitsandbytes)。该方法量化后性能基本无损,且已经集成到transformers库中,基于BitsAndBytes,我们提供在线量化和离线8bits模型两种方式。
Skywork uses the mainstream 8-bit quantization method [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes). Quantization with this method is essentially lossless in performance, and it is already integrated into the transformers library. Based on BitsAndBytes, we provide two options: online quantization and an offline 8-bit model.
以下我们提供示例说明如何使用int8量化模型,在开始使用之前,请先安装BitsAndBytes库并安装所需依赖包,具体安装方式见[BitsAndBytes](https://github.com/TimDettmers/bitsandbytes)库。
Below are examples of how to use the int8 quantized model. Before getting started, please install the BitsAndBytes library and its required dependencies; see the [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes) repository for installation instructions.
### 在线量化(Online Quantization)
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("skywork-13B-Base", torch_dtype=torch.bfloat16, load_in_8bit=True, trust_remote_code=True).eval()
```
### 离线量化(Offline Quantization)
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("skywork-13B-Base-8bits", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True).eval()
```
### 量化效果(Evaluation)
我们对量化模型在基准评测数据集上做了测试,结果如下所示:
We evaluated the quantized model on the benchmark datasets; the results are shown below:
| Precision | C-Eval | MMLU | CMMLU |
| --------- | ------ | ----- | ----- |
| bf16 | 60.6 | 61.8 | 62.1 |
| 8bits | 58.5 | 61.8 | 61.0 |
### 显存占用(GPU Mem in GB)
| Precision | Skywork-13B |
| --------- | ----------- |
| bf16 | 25.91 |
| 8bits | 13.57 |
# 声明和协议(Declaration and License Agreement)
## 声明(Declaration)
我们在此声明,不要利用Skywork模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Skywork 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用skywork开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.
We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.
## 协议(License Agreement)
社区使用Skywork模型需要遵循[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)。Skywork模型支持商业用途,如果您计划将Skywork模型或其衍生品用于商业目的,无需再次申请, 但请您仔细阅读[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)并严格遵守相关条款。
The community usage of Skywork model requires [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf).
[《Skywork 模型社区许可协议》]: https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf
[[email protected]]: mailto:[email protected]
# 引用和联系我们(Contact Us and Citation)
如果您觉得我们的工作对您有帮助,欢迎引用我们的论文~
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{skyworkmath,
title={SkyMath: Technical Report},
author={Liu Yang and Haihua Yang and Wenjun Cheng and Lei Lin and Chenxia Li and Yifu Chen and Lunan Liu and Jianfei Pan and Tianwen Wei and Biye Li and Liang Zhao and Lijie Wang and Bo Zhu and Guoliang Li and Xuejie Wu and Xilin Luo and Rui Hu},
journal={arXiv preprint arXiv: 2310.16713},
url={https://arxiv.org/abs/2310.16713},
year={2023}
}
```
```
@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
author = {Skywork Multi-Modal Group},
month = sep,
title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
year = {2023}
}
```
|
SantiagoPG/chatbot_gen_tweets | SantiagoPG | 2023-11-05T04:52:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-01T22:39:56Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kanishka/smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4 | kanishka | 2023-11-05T04:48:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-03T21:36:31Z | ---
base_model: models/smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4
This model is a fine-tuned version of [models/smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4/config.json](https://huggingface.co/models/smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1689
- Accuracy: 0.4311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
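For illustration only, here is a hedged sketch of how the hyperparameters listed above could be expressed with `transformers.TrainingArguments`; this is not the authors' training script, the `output_dir` is hypothetical, and model/dataset setup is omitted. The listed Adam betas and epsilon are the library defaults.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; all other settings stay at their defaults.
training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-babylm-aann-counterfactual-naan-3e-4",  # hypothetical path
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=32_000,
    num_train_epochs=20.0,
)
```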
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5639 | 1.0 | 18353 | 3.6707 | 0.3689 |
| 3.2643 | 2.0 | 36706 | 3.3898 | 0.3954 |
| 3.1066 | 3.0 | 55059 | 3.2762 | 0.4100 |
| 3.0219 | 4.0 | 73412 | 3.2161 | 0.4167 |
| 2.9559 | 5.0 | 91765 | 3.1848 | 0.4207 |
| 2.9096 | 6.0 | 110118 | 3.1685 | 0.4232 |
| 2.8687 | 7.0 | 128471 | 3.1490 | 0.4256 |
| 2.8339 | 8.0 | 146824 | 3.1451 | 0.4271 |
| 2.8046 | 9.0 | 165177 | 3.1453 | 0.4280 |
| 2.7762 | 10.0 | 183530 | 3.1355 | 0.4291 |
| 2.7499 | 11.0 | 201883 | 3.1409 | 0.4296 |
| 2.7344 | 12.0 | 220236 | 3.1451 | 0.4298 |
| 2.708 | 13.0 | 238589 | 3.1444 | 0.4304 |
| 2.6808 | 14.0 | 256942 | 3.1492 | 0.4304 |
| 2.6658 | 15.0 | 275295 | 3.1495 | 0.4307 |
| 2.6461 | 16.0 | 293648 | 3.1507 | 0.4312 |
| 2.6243 | 17.0 | 312001 | 3.1567 | 0.4312 |
| 2.6017 | 18.0 | 330354 | 3.1593 | 0.4312 |
| 2.5855 | 19.0 | 348707 | 3.1647 | 0.4312 |
| 2.5643 | 20.0 | 367060 | 3.1689 | 0.4311 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
dvijay/out_opt_guanaco_1k | dvijay | 2023-11-05T04:36:39Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"region:us"
]
| null | 2023-11-02T05:24:21Z | ---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
model-index:
- name: out_opt_guanaco_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out_opt_guanaco_1k
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
omaimaam/uplimit-project-3-phi-1.5 | omaimaam | 2023-11-05T04:14:00Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:scitldr",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-11-05T04:13:58Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: uplimit-project-3-phi-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uplimit-project-3-phi-1.5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5314 | 0.1 | 200 | 2.5779 |
| 2.5486 | 0.2 | 400 | 2.5684 |
| 2.5032 | 0.3 | 600 | 2.5570 |
| 2.5614 | 0.4 | 800 | 2.5456 |
| 2.5264 | 0.5 | 1000 | 2.5365 |
| 2.5572 | 0.6 | 1200 | 2.5312 |
| 2.5268 | 0.7 | 1400 | 2.5253 |
| 2.5023 | 0.8 | 1600 | 2.5193 |
| 2.5553 | 0.9 | 1800 | 2.5129 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
colemane/rl_course_vizdoom_health_gathering_supreme | colemane | 2023-11-05T04:12:50Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T21:54:42Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.48 +/- 4.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r colemane/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
JackFram/llama-160m-base | JackFram | 2023-11-05T03:53:09Z | 29 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:wikipedia",
"arxiv:2305.09781",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-17T18:07:53Z | ---
license: other
language:
- en
datasets:
- wikipedia
pipeline_tag: text-generation
---
## Model description
This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets.
No evaluation has been conducted yet, so use it with care.
The model is mainly developed as a base Small Speculative Model in the [SpecInfer](https://arxiv.org/abs/2305.09781) paper.
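Since the card does not include a usage snippet, here is a hedged sketch of one common way to use such a small model: as the draft (assistant) model in `transformers` assisted generation, which implements a form of speculative decoding. This is illustrative only and is not the SpecInfer serving system; the target model below is an arbitrary (gated) example, and assisted generation assumes the draft and target share a tokenizer.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-2-7b-hf"   # assumed target; any LLaMA-family model with the same tokenizer works
draft_id = "JackFram/llama-160m-base"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.float16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.float16).to(target.device)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(target.device)
# Passing assistant_model enables assisted (speculative) generation.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```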
## Citation
To cite the model, please use
```bibtex
@misc{miao2023specinfer,
title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification},
author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
year={2023},
eprint={2305.09781},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
LeakyDishes/finetuning-sentiment-model-150410-samples | LeakyDishes | 2023-11-05T03:00:03Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-05T02:28:55Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-150410-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-150410-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4131
- Accuracy: 0.8273
- F1: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
|
tjake/llama2-7b-hf-jlama-Q4_0 | tjake | 2023-11-05T02:56:56Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-04T21:01:22Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
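As a concrete illustration of the formatting described above, here is a hedged sketch of a single-turn prompt in that layout. The linked `chat_completion` reference implementation remains authoritative, in particular for multi-turn conversations and for the `BOS`/`EOS` tokens, which the tokenizer normally adds.
```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Single-turn Llama-2-chat prompt using the [INST] and <<SYS>> tags."""
    return (
        f"[INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(build_llama2_prompt("You are a helpful assistant.", "What is the capital of France?"))
```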
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| |
HarrisShen/llama2-compressed-notes | HarrisShen | 2023-11-05T02:36:32Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-11-02T23:13:48Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a hedged `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
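For reference, here is a hedged sketch of expressing the same configuration with `transformers.BitsAndBytesConfig` and attaching the adapter with PEFT. This is not the authors' training code; access to the gated base model is assumed, and the remaining flags are left at their defaults.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reproduces the 4-bit settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "HarrisShen/llama2-compressed-notes")
```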
### Framework versions
- PEFT 0.6.0
|
teknium/Hermes-Trismegistus-Mistral-7B | teknium | 2023-11-05T02:31:54Z | 16 | 52 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"mistral-7b",
"instruct",
"finetune",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:teknium/trismegistus-project",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-02T12:10:29Z | ---
base_model: teknium/OpenHermes-2.5-Mistral-7B-v0.1
tags:
- mistral-7b
- instruct
- finetune
- gpt4
- synthetic data
- distillation
model-index:
- name: Hermes-Trismegistus-Mistral-7B
results: []
datasets:
- teknium/trismegistus-project
license: apache-2.0
language:
- en
---
## Model Description:

Transcendence is All You Need! Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual.
### Trismegistus evolved, trained over Hermes 2.5, the model performs far better in all tasks, including esoteric tasks!
The difference between Mistral-Trismegistus and Hermes-Trismegistus is that this version was trained over Hermes 2.5 instead of the base Mistral model. This means it is full of task capabilities that Trismegistus can utilize for all esoteric and occult tasks, and it performs them far better than ever before.
Here are some outputs:



## Acknowledgements:
Special thanks to @a16z.
## Dataset:
This model was trained on a 100% synthetic, gpt-4 generated dataset, about ~10,000 examples, on a wide and diverse set of both tasks and knowledge about the esoteric, occult, and spiritual.
The dataset will be released soon!
## Usage:
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
OR
```
<system message>
USER: <prompt>
ASSISTANT:
```
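To make the prompt format above concrete, here is a hedged sketch of loading the model with `transformers` and generating from a USER/ASSISTANT-formatted prompt. The sampling settings are illustrative, not official recommendations.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/Hermes-Trismegistus-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "USER: Explain the Hermetic principle of correspondence.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```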
## Benchmarks:
No benchmark can capture the nature and essence of spirituality and esoteric knowledge and tasks. You will have to try testing it yourself!
Training run on wandb here: https://wandb.ai/teknium1/occult-expert-mistral-7b/runs/coccult-expert-mistral-6/overview
## Licensing:
Apache 2.0
|
knguyennguyen/my_awesome_model | knguyennguyen | 2023-11-05T02:23:52Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-05T02:17:59Z | ---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.95528
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.9553
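Since the card does not include a usage example, here is a hedged inference sketch using the `transformers` text-classification pipeline; the label names depend on the fine-tuning setup and are not documented in this card.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="knguyennguyen/my_awesome_model")
print(classifier("This movie was a beautifully shot, surprisingly moving experience."))
```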
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3358 | 0.32 | 1000 | 0.2286 | 0.9349 |
| 0.2658 | 0.64 | 2000 | 0.2036 | 0.9447 |
| 0.2392 | 0.96 | 3000 | 0.2294 | 0.9504 |
| 0.1648 | 1.28 | 4000 | 0.2288 | 0.9518 |
| 0.1477 | 1.6 | 5000 | 0.2190 | 0.9532 |
| 0.1404 | 1.92 | 6000 | 0.2141 | 0.9553 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hongyin/chat-llama2-13b-80k | hongyin | 2023-11-05T02:12:36Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"arxiv:2302.13173",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-25T00:09:56Z | ---
language:
- en
- zh
pipeline_tag: text-generation
---
## hongyin/chat-informer-13b-80k
I am pleased to introduce an English-Chinese conversation assistant designed to reduce the cost of inference. It is trained based on Llama2-chat-13B, with a unique vocabulary and 13 billion parameters.
Losing fat is the only way to solve all problems.
```python
Human: Say "I love you" differently in Chinese.
Assistant: 中文:我爱你
普通话:我爱你
粤语:我爱你
闽南语:我爱你
客家语:我爱你
台湾国语:我爱你
香港国语:我爱你
上海话:我爱你
北京话:我爱你
四川话:我爱你
重庆话:我爱你
贵州话:我爱你
云南话:我爱你
```
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
|
kaliansh/AgentSoccer-Poca | kaliansh | 2023-11-05T01:54:32Z | 76 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"ML-Agents-SoccerTwos",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
]
| reinforcement-learning | 2023-11-03T15:33:02Z | ---
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kaliansh/AgentSoccer-Poca
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KevinGao99/llama2ARDS | KevinGao99 | 2023-11-05T01:27:30Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-11-05T01:27:27Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
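A minimal loading sketch, assuming the adapter lives at this card's Hub id and the base model is the one declared in the metadata (`meta-llama/Llama-2-7b-hf`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "KevinGao99/llama2ARDS"  # assumption: this card's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```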
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0
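For reference, the settings above map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch of the quantization config only, not the full training script):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```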
|
aladar/llama-2-tiny-random-GGUF | aladar | 2023-11-05T01:26:51Z | 7 | 1 | null | [
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-05T00:36:12Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
---
GGUF conversion (f16) of https://huggingface.co/yujiepan/llama-2-tiny-random
# Download
```
pip install huggingface-hub
```
From CLI:
```
huggingface-cli download \
aladar/llama-2-tiny-random-GGUF \
llama-2-tiny-random.gguf \
--local-dir . \
--local-dir-use-symlinks False
```
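A minimal `llama-cpp-python` sketch for loading the downloaded file (the weights are random, so the output will be gibberish; this is mainly useful for smoke-testing a GGUF pipeline):

```python
from llama_cpp import Llama

llm = Llama(model_path="llama-2-tiny-random.gguf")
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```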
|
dharper40/ppo-LunarLander-v2 | dharper40 | 2023-11-05T01:13:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-05T01:13:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 232.46 +/- 71.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename inside the repo is assumed
checkpoint = load_from_hub("dharper40/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
adityaaswani1/ppo-LunarLander-v2 | adityaaswani1 | 2023-11-05T00:56:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-05T00:55:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.67 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename inside the repo is assumed
checkpoint = load_from_hub("adityaaswani1/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
michakoz/ppo-LunarLander-v2 | michakoz | 2023-11-05T00:51:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-05T00:50:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.55 +/- 13.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename inside the repo is assumed
checkpoint = load_from_hub("michakoz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Aryanne/TinyLlama-MiniGuanaco-1.1B-1.5T-gguf | Aryanne | 2023-11-05T00:49:48Z | 8 | 1 | null | [
"gguf",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-05T00:19:59Z | ---
language:
- en
model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
---
Some GGUF v3 quantizations of the model [Corianas/tiny-llama-miniguanaco-1.5T](https://huggingface.co/Corianas/tiny-llama-miniguanaco-1.5T)
This TinyLlama 1.5T checkpoint was trained to answer questions, using the following prompt format:
```
f"{'prompt'}\n{'completion'}\n<END>"
```
No input/output sections, just the question, then a newline to begin the answer. |
MorganDusty/AnnaAsti | MorganDusty | 2023-11-05T00:48:08Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2023-11-05T00:47:15Z | ---
license: cc-by-nc-sa-4.0
---
|
MorganDusty/MikhailLitvin | MorganDusty | 2023-11-05T00:45:09Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2023-11-05T00:44:52Z | ---
license: cc-by-nc-sa-4.0
---
|
miittnnss/test | miittnnss | 2023-11-05T00:41:15Z | 0 | 0 | generic | [
"generic",
"pytorch",
"gan",
"unconditional-image-generation",
"region:us"
]
| unconditional-image-generation | 2023-11-05T00:32:01Z | ---
library_name: generic
tags:
- pytorch
- gan
- unconditional-image-generation
--- |
alfredowh/dqn-SpaceInvadersNoFrameskip-v4 | alfredowh | 2023-11-05T00:30:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-05T00:29:51Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alfredo-wh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alfredo-wh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alfredo-wh
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
yale-nlp/t5-large-finetuned-qtsumm | yale-nlp | 2023-11-05T00:10:29Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"table-to-text",
"summarization",
"long-form-question-answering",
"en",
"dataset:yale-nlp/QTSumm",
"arxiv:2305.14303",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-10-20T20:24:58Z | ---
license: mit
language: en
tags:
- table-to-text
- summarization
- long-form-question-answering
datasets:
- yale-nlp/QTSumm
---
# QTSumm Dataset
QTSumm is a query-focused table summarization dataset proposed in EMNLP 2023 paper [QTSUMM: Query-Focused Summarization over Tabular Data](https://arxiv.org/pdf/2305.14303.pdf). The original Github repository is [https://github.com/yale-nlp/QTSumm](https://github.com/yale-nlp/QTSumm).
## Model Description
`yale-nlp/t5-large-finetuned-qtsumm` (based on T5 architecture) is initialized with `t5-large` and finetuned on the QTSumm dataset.
## Usage
Check the github repository: [https://github.com/yale-nlp/QTSumm](https://github.com/yale-nlp/QTSumm)
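A minimal `transformers` sketch is shown below; the query-plus-linearized-table input is only an assumption here, since the exact serialization format used for fine-tuning is defined in the GitHub repository:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yale-nlp/t5-large-finetuned-qtsumm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical query + flattened table; see the repo for the exact input format
text = "What was the highest score? table: player | score row: Smith | 42 row: Jones | 37"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```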
## Reference
```bibtex
@misc{zhao2023qtsumm,
title={QTSUMM: Query-Focused Summarization over Tabular Data},
author={Yilun Zhao and Zhenting Qi and Linyong Nan and Boyu Mi and Yixin Liu and Weijin Zou and Simeng Han and Xiangru Tang and Yumo Xu and Arman Cohan and Dragomir Radev},
year={2023},
eprint={2305.14303},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
owanr/SBIC-google-t5-v1_1-large-inter_model-sorted-human_annots_str | owanr | 2023-11-05T00:08:14Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-05T00:08:12Z | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: SBIC-google-t5-v1_1-large-inter_model-sorted-human_annots_str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-google-t5-v1_1-large-inter_model-sorted-human_annots_str
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.6679 | 1.0 | 392 | 8.1733 |
| 3.1285 | 2.0 | 784 | 0.4610 |
| 0.4164 | 3.0 | 1176 | 0.3793 |
| 0.4073 | 4.0 | 1568 | 0.3457 |
| 0.3553 | 5.0 | 1960 | 0.3263 |
| 0.3684 | 6.0 | 2352 | 0.3160 |
| 0.3346 | 7.0 | 2744 | 0.3073 |
| 0.3482 | 8.0 | 3136 | 0.3045 |
| 0.3261 | 9.0 | 3528 | 0.3016 |
| 0.3255 | 10.0 | 3920 | 0.2960 |
| 0.2794 | 11.0 | 4312 | 0.2951 |
| 0.318 | 12.0 | 4704 | 0.2920 |
| 0.3072 | 13.0 | 5096 | 0.2875 |
| 0.2872 | 14.0 | 5488 | 0.2819 |
| 0.2912 | 15.0 | 5880 | 0.2774 |
| 0.2954 | 16.0 | 6272 | 0.2762 |
| 0.2792 | 17.0 | 6664 | 0.2775 |
| 0.2763 | 18.0 | 7056 | 0.2736 |
| 0.2573 | 19.0 | 7448 | 0.2715 |
| 0.3148 | 20.0 | 7840 | 0.2647 |
| 0.2599 | 21.0 | 8232 | 0.2621 |
| 0.2405 | 22.0 | 8624 | 0.2624 |
| 0.2785 | 23.0 | 9016 | 0.2609 |
| 0.2652 | 24.0 | 9408 | 0.2601 |
| 0.2977 | 25.0 | 9800 | 0.2594 |
| 0.2829 | 26.0 | 10192 | 0.2593 |
| 0.2757 | 27.0 | 10584 | 0.2593 |
| 0.264 | 28.0 | 10976 | 0.2593 |
| 0.2758 | 29.0 | 11368 | 0.2593 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
TheBloke/Naberius-7B-GGUF | TheBloke | 2023-11-05T00:02:00Z | 373 | 13 | transformers | [
"transformers",
"gguf",
"mistral",
"llama",
"uncensored",
"merge",
"mix",
"slerp",
"spherical linear interpolation merge",
"hermes",
"openhermes",
"dolphin",
"zephyr",
"naberius",
"7b",
"llama2",
"en",
"base_model:CalderaAI/Naberius-7B",
"base_model:quantized:CalderaAI/Naberius-7B",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T20:30:40Z | ---
base_model: CalderaAI/Naberius-7B
inference: false
language:
- en
license: apache-2.0
model_creator: Caldera AI
model_name: Naberius 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llama
- uncensored
- merge
- mix
- slerp
- spherical linear interpolation merge
- mistral
- hermes
- openhermes
- dolphin
- zephyr
- naberius
- 7b
- llama2
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Naberius 7B - GGUF
- Model creator: [Caldera AI](https://huggingface.co/CalderaAI)
- Original model: [Naberius 7B](https://huggingface.co/CalderaAI/Naberius-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Caldera AI's Naberius 7B](https://huggingface.co/CalderaAI/Naberius-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Naberius-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Naberius-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Naberius-7B-GGUF)
* [Caldera AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/Naberius-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [naberius-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [naberius-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [naberius-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [naberius-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [naberius-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [naberius-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [naberius-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [naberius-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [naberius-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [naberius-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [naberius-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [naberius-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Naberius-7B-GGUF/blob/main/naberius-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Naberius-7B-GGUF and below it, a specific filename to download, such as: naberius-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Naberius-7B-GGUF naberius-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Naberius-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Naberius-7B-GGUF naberius-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m naberius-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Naberius-7B-GGUF", model_file="naberius-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
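If you prefer `llama-cpp-python`, a comparable minimal sketch (the system message and generation settings here are placeholders; adjust `n_gpu_layers` for your hardware):

```python
from llama_cpp import Llama

# Load the quantised file downloaded earlier; set n_gpu_layers=0 for CPU-only inference
llm = Llama(model_path="naberius-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "<|im_start|>system\nYou are a story writing assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a story about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```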
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Caldera AI's Naberius 7B
# Naberius-7B
##### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
<hr style="margin-top: 10px; margin-bottom: 10px;">
#### Legal Notice:
<span style="font-size: 12px; line-height: 0; margin-top: 0; margin-bottom: 0;">This resulting AI model is capable of outputting what can be perceived to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated facsimiles. We expressly supersede the Apache 2.0 license to state that we do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use, personal entertainment use, so long as it follows the Apache2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult.</span>
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Composition:
Naberius-7B is a Mistral-class spherical linear interpolated merge of three high performance models.
[zephyr-7b-sft-beta] merged with [OpenHermes-2-Mistral-7B] resulting in: [Mistral-Zephyrmes-7B*]
[Mistral-Zephyrmes-7B] merged with [dolphin-2.2.1-mistral-7b] resulting in: [Naberius-7B]
*(Zephyrmes is a merge byproduct model not intended for release)
These models were hand picked after careful review of claims, datasets, and user postings.
The core elements that dictated which models to accept hinged on these values:
logic, imagination, and aversion to censorship behaviors such as railroading/gaslighting users instead of accommodating them.
## Our implementation of Spherical Linear Interpolation used for this project:
Visit our Project Git here: https://github.com/Digitous/LLM-SLERP-Merge
Spherical Linear Interpolation merging produces more coherently smooth merges than standard weight-merge, also known as LERP (Linear) interpolation.
## What Makes Naberius Special?
Combining zephyr-7b-sft-beta and OpenHermes-2-Mistral-7B, then adding dolphin-2.2.1-mistral-7b to the result using a minimally destructive merge technique, preserves a large amount of the behavior of all three models in a cohesive fashion.
Naberius can: do coherent roleplay far beyond any previous 7B parameter model, follow instructions exceptionally well for its size, and, as a bonus of being lightweight, deliver incredible inference speed. Naberius has shown some signs of spatial awareness and does adapt to nuance in conversation. All around, it is a pliable, imaginative, and logic-oriented 7B that punches upwards to what feels like a 30B or more at times.
Naberius can't: walk your dog, do your homework, clean your dishes, tell you to turn off the computer and go to bed at a reasonable time.
## Use Example
In your AI interface of choice, if there is a persistent memory section [that consistently feeds whatever you place inside it back to the AI in the background every turn or so]
place the following in there and set the interaction mode to chat, set GAME-AI and USERNAME to whatever names you want so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.
```
<|im_start|>system
You go by the name GAME-AI and your job is to function as text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.<|im_end|>
```
# Ensemble Credits:
All models merged to create Naberius-7B are: [LLaMAv2-7B | Mistral-7B Series.]
and include the following:
zephyr-7b-sft-beta; https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta
[Spherical-LI merge doesn't support safetensors yet, which the full Zephyr beta was released as.]
OpenHermes-2-Mistral-7B; https://huggingface.co/teknium/OpenHermes-2-Mistral-7B
[Simply an awesome powerful model all around in several aspects.]
dolphin-2.2.1-mistral-7b; https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b
[After reading the debates in the comments between 2.1 and 2.2.1, we bet on 2.2.1 being the better candidate.]
Thanks to Mistral AI for the amazing Mistral LM - and also thanks to Meta for LLaMAv2.
Thanks to each and every one of you for your incredible work developing some of the best things
to come out of this community.
<hr style="margin-top: 10px; margin-bottom: 10px;">
#### --Secret Rant Zone--
<span style="font-size: 12px; line-height: 0; margin-top: 0; margin-bottom: 0;">When merging, I use whatever technique from model selection to brute force randomized layer mixing with automated samples to stamp out this shit - "Everything must be positive at all times, even if the user requests a story with horrible events - end it on a positive note as if everyone being happy at all times is my obsession." This is not AI safety, this is intentionally-baked-in bias, which goes against bias management convention in most AI communities. Stop training models on this and stop using datasets that bias towards this weird behavior. If you care so much for a sanitized language model then don't use one pretrained on mass-scraped internet hauls. Put a warning on it that captures its essence. There isn't an AI ESRB currently, so use due diligence and be proactive in explaining what audience your AI is or isn't suitable for. End Rant.<span>
<!-- original-model-card end -->
|
Aryanne/OpenLlama.v2-3B-Lamini-Orca-Evol-qlora-gguf | Aryanne | 2023-11-04T23:38:13Z | 25 | 2 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-04T18:03:22Z | ---
license: apache-2.0
---
Some GGUF v3 quantizations of the model [sachith-surge/open-llama-v2-lamini-orca-evol-qlora-checkpoint-safetensors](https://huggingface.co/sachith-surge/open-llama-v2-lamini-orca-evol-qlora-checkpoint-safetensors)
# open-llama-v2-lamini-orca-evol-qlora-checkpoint-safetensors
This is an instruction-tuned model based on **Open-LLaMA-3b-v2**. It is trained on a large corpus of text from various domains and can generate realistic and coherent texts on various topics. The model, named **open-llama-v2-lamini-orca-evol-qlora-checkpoint-safetensors**, is created by **Team Indigo** and is licensed under **apache-2.0**. It has **3.43B parameters** and uses the **F32 tensor type**. The model is intended for research and educational purposes only and should not be used for any harmful or malicious purposes.
## Model description
The model is based on **Open-LLaMA-3b-v2**, a large-scale language model that can generate natural language texts given a prompt. The model is fine-tuned using the **alpaca training prompt**, a method that allows the model to learn from multiple sources of information without forgetting previous knowledge. The alpaca training prompt consists of a prefix, a query, and a suffix that guide the model to generate relevant and diverse texts.
The model is trained on custom datasets that are created using three different schemes: **LaMini scheme**, **Orca scheme**, and **evol-instruct scheme**. These schemes are designed to enhance the quality and diversity of the generated texts by providing different types of information and instructions to the model.
- The LaMini scheme uses a large and diverse corpus of text from various domains, such as news, books, blogs, social media, etc. The scheme also uses a small set of keywords to provide topical information to the model.
- The Orca scheme uses a smaller and more focused corpus of text from specific domains, such as science, technology, art, etc. The scheme also uses a longer set of keywords to provide more detailed information to the model.
- The evol-instruct scheme uses an evolutionary algorithm to generate and select the best instructions for the model. The scheme also uses a feedback mechanism to reward or penalize the model based on its performance.
## Limitations and bias
The model is trained on a large corpus of text from various sources, which may contain biases or inaccuracies. The model may also generate texts that are offensive, harmful, or misleading. The model should not be used for any critical or sensitive applications that require high accuracy or ethical standards.
The model is also limited by its size and complexity, which may affect its speed and performance. The model may not be able to handle long or complex prompts or queries, or generate long or coherent texts. The model may also produce repetitive or nonsensical texts if it encounters unfamiliar or ambiguous inputs.
The model is still a work in progress and may have bugs or errors. It is constantly being improved and updated based on feedback and evaluation. If you encounter any issues or have any suggestions for improvement, please share them with the model authors. |
yjlee1011/ncodeR_data_setfit_multilabel_128_samples | yjlee1011 | 2023-11-04T23:34:39Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T23:34:11Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_setfit_multilabel_128_samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_128_samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
swl-models/Oleander-hans-v2.1-beta2 | swl-models | 2023-11-04T23:26:42Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-04T23:18:10Z | ---
license: creativeml-openrail-m
---
|
lenssssw/roblox-clothing-ai-maker | lenssssw | 2023-11-04T23:22:00Z | 68 | 7 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-11T15:40:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
THIS MODEL NO LONGER GIVES THE SAME RESULTS AS IT USED TO (shown below)
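A minimal `diffusers` sketch for generating a clothing template (repo id from this card; as noted above, current outputs may not match the samples below):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lenssssw/roblox-clothing-ai-maker", torch_dtype=torch.float16
).to("cuda")

image = pipe("clothing template with a shirt red and a tie blue").images[0]
image.save("shirt.png")
```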
Sample Images:
clothing template with a shirt red and a tie blue:

clothing template with a suit golden:

clothing template with a shirt beige PHOTOREALISTIC:
 |
sultan/BioM-ELECTRA-Large-Discriminator | sultan | 2023-11-04T23:07:03Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance
of biomedical language models recently
has been a subject for investigation. In
this paper, we empirically study biomedical
domain adaptation with large transformer models
using different design choices. We evaluate
the performance of our pretrained models
against other existing biomedical language
models in the literature. Our results show that
we achieve state-of-the-art results on several
biomedical domain tasks despite using similar
or less computational cost compared to other
models in the literature. Our findings highlight
the significant effect of design choices on
improving the performance of biomedical language
models.
# Model Description
This model was pre-trained on PubMed Abstracts only with biomedical domain vocabulary for 434K steps with a batch size of 4096 on TPUv3-512 unit.
In order to help researchers with limited resources to fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPU units, which is provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated this repo with a couple of examples on how to fine-tune LMs on text classification and questions answering tasks such as ChemProt, SQuAD, and BioASQ.
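A minimal sketch for loading this discriminator checkpoint with `transformers` (repo id from this card; shown here only as a feature-extraction / fine-tuning starting point):

```python
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "sultan/BioM-ELECTRA-Large-Discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)  # per-token replaced-token-detection logits
```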
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)
Finetunning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resource [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
sultan/BioM-ALBERT-xxlarge-PMC | sultan | 2023-11-04T23:06:21Z | 10 | 4 | transformers | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance
of biomedical language models recently
has been a subject for investigation. In
this paper, we empirically study biomedical
domain adaptation with large transformer models
using different design choices. We evaluate
the performance of our pretrained models
against other existing biomedical language
models in the literature. Our results show that
we achieve state-of-the-art results on several
biomedical domain tasks despite using similar
or less computational cost compared to other
models in the literature. Our findings highlight
the significant effect of design choices on
improving the performance of biomedical language
models.
# Model Description
This model was pre-trained on PMC full-text articles for a further 64k steps with a batch size of 8192, initializing the weights from our BioM-ALBERT-xxlarge model. Thus, the total number of training steps for this model is 264k+64k=328k. The model is very large due to its hidden layer size (4096). In order to help researchers with limited resources fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPU units, which is provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb). In this example, we achieve an 80.74 micro F1 score on the ChemProt task with BioM-ALBERT-xxlarge. Fine-tuning takes 43 minutes for 5 epochs.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated this repo with a couple of examples on how to fine-tune LMs on text classification and questions answering tasks such as ChemProt, SQuAD, and BioASQ.
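A minimal `transformers` fill-mask sketch (repo id from this card; `[MASK]` is the standard ALBERT mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sultan/BioM-ALBERT-xxlarge-PMC")
print(fill_mask("Aspirin is used to treat [MASK]."))
```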
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)
Finetunning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resource [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
yjlee1011/ncodeR_data_setfit_multilabel_75_samples | yjlee1011 | 2023-11-04T23:06:07Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T23:05:40Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_setfit_multilabel_75_samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_75_samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
johaanm/test-grader-alpha-V1.6 | johaanm | 2023-11-04T23:01:53Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-04T23:01:49Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
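For reference, the settings listed above roughly correspond to the following `BitsAndBytesConfig`. This is only a sketch; the base checkpoint name is a placeholder, since the card does not state which model the adapter was trained on.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the listed settings: 4-bit NF4 quantization, fp16 compute, no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base checkpoint -- not specified in this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```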
### Framework versions
- PEFT 0.4.0
|
hkivancoral/hushem_40x_deit_base_n_f5 | hkivancoral | 2023-11-04T22:51:39Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-04T21:41:34Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_n_f5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8536585365853658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4453
- Accuracy: 0.8537
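A minimal inference sketch for this checkpoint is shown below; the image path is a placeholder for an image from the same domain as the training data.
```python
from transformers import pipeline

# Classify a single image with the fine-tuned DeiT checkpoint.
classifier = pipeline("image-classification", model="hkivancoral/hushem_40x_deit_base_n_f5")

# "example.jpg" is a placeholder path.
print(classifier("example.jpg"))
```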
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0209 | 1.0 | 110 | 0.5124 | 0.8049 |
| 0.0043 | 2.0 | 220 | 0.6220 | 0.8049 |
| 0.0003 | 3.0 | 330 | 0.5631 | 0.8293 |
| 0.0001 | 4.0 | 440 | 0.6476 | 0.8049 |
| 0.0001 | 5.0 | 550 | 0.4557 | 0.8293 |
| 0.0001 | 6.0 | 660 | 0.5177 | 0.8780 |
| 0.0001 | 7.0 | 770 | 0.4360 | 0.8780 |
| 0.0 | 8.0 | 880 | 0.4399 | 0.8780 |
| 0.0 | 9.0 | 990 | 0.4439 | 0.8537 |
| 0.0 | 10.0 | 1100 | 0.4453 | 0.8537 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nikoslefkos/triples_to_text | nikoslefkos | 2023-11-04T22:41:24Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-04T01:10:52Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: nikoslefkos/triples_to_text
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nikoslefkos/triples_to_text
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1659
- Validation Loss: 2.0069
- Epoch: 0
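As a usage sketch, the checkpoint can be loaded as a TensorFlow seq2seq model; the linearized triple format shown here is an assumption, since the card does not document the expected input format.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "nikoslefkos/triples_to_text"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: a linearized (subject | relation | object) triple.
triple = "Barack Obama | birth place | Honolulu"
inputs = tokenizer(triple, return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```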
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1659 | 2.0069 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
yjlee1011/ncodeR_data_setfit_multilabel_32_samples | yjlee1011 | 2023-11-04T22:34:18Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T22:33:44Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_setfit_multilabel_32_samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_32_samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
18winstonsmith97/Llama-Chat | 18winstonsmith97 | 2023-11-04T22:33:20Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-04T22:33:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
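To use the resulting adapter at inference time, it can be attached to a base model with PEFT as sketched below; the base checkpoint name is a placeholder, since this card does not state which Llama variant the adapter was trained on.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base checkpoint -- the card does not name the underlying model.
base_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights stored in this repository.
model = PeftModel.from_pretrained(base_model, "18winstonsmith97/Llama-Chat")
```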
### Framework versions
- PEFT 0.4.0
|
mi-rei/eligibilities_model | mi-rei | 2023-11-04T22:22:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-04T21:42:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: eligibilities_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eligibilities_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6845
- Accuracy: 0.5540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
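The listed hyperparameters map onto a `TrainingArguments` object roughly as follows (a sketch; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="eligibilities_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```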
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6882 | 1.0 | 1398 | 0.6888 | 0.5429 |
| 0.6753 | 2.0 | 2796 | 0.6845 | 0.5540 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
LoneStriker/HelixNet-regenerator-8.0bpw-h8-exl2 | LoneStriker | 2023-11-04T22:17:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:46:58Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = os.environ.get("DEBUG") and True or False
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
Loads a model from a directory and returns the generator and tokenizer
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule, and symbolizes the intertwined nature of the three networks working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. It includes 75K samples of Open-Orca's Chain-of-Thought data and a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the (system-context, question, response) tuples to generate critiques, using OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers, using OpenAI's GPT-4.
A third LLM was fine-tuned using the above data.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but also on the 13B and 70B SynthIA models. They seem to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`. However, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-regenerator-5.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:17:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:55:36Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = os.environ.get("DEBUG") and True or False
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
Loads a model from a directory and returns the generator and tokenizer
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule, and symbolizes the intertwined nature of the three networks working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. It includes 75K samples of Open-Orca's Chain-of-Thought data and a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the (system-context, question, response) tuples to generate critiques, using OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers, using OpenAI's GPT-4.
A third LLM was fine-tuned using the above data.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but also on the 13B and 70B SynthIA models. They seem to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`. However, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-actor-8.0bpw-h8-exl2 | LoneStriker | 2023-11-04T22:16:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:12:23Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = os.environ.get("DEBUG") and True or False
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
Loads a model from a directory and returns the generator and tokenizer
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule, and symbolizes the intertwined nature of the three networks working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. It includes 75K samples of Open-Orca's Chain-of-Thought data and a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the (system-context, question, response) tuples to generate critiques, using OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers, using OpenAI's GPT-4.
A third LLM was fine-tuned using the above data.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but also on the 13B and 70B SynthIA models. They seem to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`. However, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-actor-6.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:16:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:06:18Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = os.environ.get("DEBUG") and True or False
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
Loads a model from a directory and returns the generator and tokenizer
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule, and symbolizes the intertwined nature of the three networks working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. It includes 75K samples of Open-Orca's Chain-of-Thought data and a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the (system-context, question, response) tuples to generate critiques, using OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers, using OpenAI's GPT-4.
A third LLM was fine-tuned using the above data.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but also on the 13B and 70B SynthIA models. They seem to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`. However, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-actor-5.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:16:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:45:33Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = bool(os.environ.get("DEBUG"))
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns the generator and tokenizer.
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs: an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response in light of the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. These include 75K of Open-Orca's Chain-of-Thought data, along with a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned as the critic.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM was fine-tuned on the above data to act as the regenerator.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only with the accompanying actor model, but with the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-actor-4.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:16:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T21:39:39Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = bool(os.environ.get("DEBUG"))
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns the generator and tokenizer.
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs: an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response in light of the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. These include 75K of Open-Orca's Chain-of-Thought data, along with a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned as the critic.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM was fine-tuned on the above data to act as the regenerator.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only with the accompanying actor model, but with the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-actor-3.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:15:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T21:33:31Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = bool(os.environ.get("DEBUG"))
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns the generator and tokenizer.
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs: an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response in light of the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. These include 75K of Open-Orca's Chain-of-Thought data, along with a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned as the critic.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM was fine-tuned on the above data to act as the regenerator.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only with the accompanying actor model, but with the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-critic-8.0bpw-h8-exl2 | LoneStriker | 2023-11-04T22:13:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:16:48Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = bool(os.environ.get("DEBUG"))
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns the generator and tokenizer.
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs: an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response in light of the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network here was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. These include 75K of Open-Orca's Chain-of-Thought data, along with a mixture of Dolphin (GPT-4) and SynthIA's Tree-of-Thought data.
Here are the results for the Actor network on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned as the critic.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM was fine-tuned on the above data to act as the regenerator.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only with the accompanying actor model, but with the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Please feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example on how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-critic-6.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:13:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T16:08:40Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = bool(os.environ.get("DEBUG"))
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns the generator and tokenizer.
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on only 1,000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. This includes 75K samples of Open-Orca's Chain-of-Thought data, plus a mixture of Dolphin (GPT-4) and SynthIA Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM (the regenerator) was fine-tuned using the above data.
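The sketch below illustrates this data-generation flow, assuming the same SYSTEM/USER/RESPONSE/CRITIQUE/REGENERATOR prompt layout used in the inference examples later in this card. The `call_gpt4` helper is a hypothetical placeholder for an OpenAI GPT-4 request and is not part of the released training code.
```python
# Hedged sketch of the critique/regeneration data generation described above.
# call_gpt4() is a hypothetical placeholder; plug in your preferred GPT-4 client.

def call_gpt4(prompt: str) -> str:
    raise NotImplementedError("Send `prompt` to GPT-4 here and return the completion text.")

def build_critique_example(system_context: str, question: str, response: str) -> dict:
    # (system-context, question, response) -> critique; used to fine-tune the critic.
    prompt = f"SYSTEM: {system_context}\nUSER: {question}\nRESPONSE: {response}\nCRITIQUE: "
    return {"prompt": prompt, "completion": call_gpt4(prompt)}

def build_regeneration_example(system_context: str, question: str, response: str, critique: str) -> dict:
    # (system-context, question, response, critique) -> regenerated answer; used to fine-tune the regenerator.
    prompt = (
        f"SYSTEM: {system_context}\nUSER: {question}\nRESPONSE: {response}"
        f"\nCRITIQUE: {critique}\nREGENERATOR: "
    )
    return {"prompt": prompt, "completion": call_gpt4(prompt)}
```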
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but on the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example showing how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
LoneStriker/HelixNet-critic-3.0bpw-h6-exl2 | LoneStriker | 2023-11-04T22:11:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-04T21:36:11Z | ---
license: apache-2.0
---
# HelixNet exl2
- Model creator: [migtissera](https://huggingface.co/migtissera)
- Original model: [HelixNet](https://huggingface.co/migtissera/HelixNet)
# Sample HelixNet exl2 Code
```python
import time
import sys, os
import dataclasses
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from exllamav2 import(
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
@dataclasses.dataclass
class ModelClass:
generator: ExLlamaV2BaseGenerator
tokenizer: ExLlamaV2Tokenizer
DEBUG = os.environ.get("DEBUG") and True or False
# Initialize model and cache
def load_model(model_directory, max_seq_len=8192):
"""
    Loads a model from a directory and returns a ModelClass wrapping the generator and tokenizer
"""
config = ExLlamaV2Config()
config.model_dir = model_directory
config.max_seq_len = max_seq_len
config.prepare()
model = ExLlamaV2(config)
print("Loading model: " + model_directory)
cache = ExLlamaV2Cache(model, lazy = True, max_seq_len=max_seq_len)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
model = ModelClass(generator=generator, tokenizer=tokenizer)
generator.warmup()
return model
def generate_text(prompt, model, settings, max_new_tokens):
time_begin = time.time()
response = model.generator.generate_simple(prompt, settings, max_new_tokens)
response = response[len(prompt):]
time_end = time.time()
time_total = time_end - time_begin
tokens = model.tokenizer.encode(response)
count = tokens.shape[-1]
print(f"Response generated in {time_total:.2f} seconds, {count} tokens, {count / time_total:.2f} tokens/second, character len: {len(response)}")
return response
model_actor = load_model("/models/HelixNet-actor-6.0bpw-h6-exl2")
model_critic = load_model("/models/HelixNet-critic-6.0bpw-h6-exl2")
model_regenerator = load_model("/models/HelixNet-regenerator-6.0bpw-h6-exl2")
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
if DEBUG: print(f"{prompt_actor}\n\n")
print("ACTOR:")
response_actor = generate_text(prompt_actor, model_actor, settings, max_new_tokens)
if DEBUG: print(f"{response_actor}\n\n")
print("="*132)
prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
if DEBUG: print(f"{prompt_critic}\n\n")
print("CRITIQUE:")
response_critic = generate_text(prompt_critic, model_critic, settings, max_new_tokens)
if DEBUG: print(f"{response_critic}\n\n")
print("="*132)
prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
if DEBUG: print(f"{prompt_regenerator}\n\n")
print("REGENERATION:")
response_regenerator = generate_text(prompt_regenerator, model_regenerator, settings, max_new_tokens)
print("="*132)
conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
print(conversation)
```
# HelixNet

HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the answer with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on only 1,000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
# Training Methodology
## Phase 1: Actor
The actor network was trained with Supervised Fine-Tuning on 250K very high-quality samples. This includes 75K samples of Open-Orca's Chain-of-Thought data, plus a mixture of Dolphin (GPT-4) and SynthIA Tree-of-Thought data.
Here are the results for the Actor network on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.28|
|*hellaswag*|acc_norm|83.22|
|*mmlu*|acc_norm|63.10|
|*truthfulqa_mc*|mc2|50.10|
|**Total Average**|-|**64.675**|
## Phase 2: Critic
To train the critic, the following process was followed:
- Use the Actor to generate responses for 10K system-context and question pairs.
- Use the resulting (system-context, question, response) tuples to generate critiques with OpenAI's GPT-4.
Using the above training dataset, a Mistral-7B was fine-tuned.
## Phase 3: Regenerator
- Use the (system-context, question, response, critique) tuples to regenerate the answers with OpenAI's GPT-4.
A third LLM (the regenerator) was fine-tuned using the above data.
# Reusability of the critic and the regenerator
The `critic` and the `regenerator` were tested not only on the accompanying actor model, but on the 13B and 70B SynthIA models as well. They appear to be readily transferable, as the function they have learnt is to provide an intelligent critique and then a regeneration of the original response. Feel free to try out other models as the `actor`; however, the architecture works best with all three models as presented here in HelixNet.
# Sample Generations



# Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
# Example Usage
## Code example (Verbose):
The following is a code example showing how to use HelixNet. No special system-context messages are needed for the `critic` and the `regenerator`.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print(f"ACTOR: {actor_response}\n\n")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print(f"CRITIQUE: {critic_response}\n\n")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print(f"REGENERATION: {regenerator_response}")
```
## Code Example (Continuing a conversation)
To have a back-and-forth conversation, only carry forward the system-context, questions and regenerations as shown below.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path_actor = "/home/ubuntu/llm/HelixNet/actor"
model_path_critic = "/home/ubuntu/llm/HelixNet/critic"
model_path_regenerator = "/home/ubuntu/llm/HelixNet/regenerator"
def load_model(model_path):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="cuda",
load_in_4bit=False,
trust_remote_code=True,
)
return model
def load_tokenizer(model_path):
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
return tokenizer
model_actor = load_model(model_path_actor)
model_critic = load_model(model_path_critic)
model_regenerator = load_model(model_path_regenerator)
tokenizer_actor = load_tokenizer(model_path_actor)
tokenizer_critic = load_tokenizer(model_path_critic)
tokenizer_regenerator = load_tokenizer(model_path_regenerator)
def generate_text(instruction, model, tokenizer):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
conversation = f"SYSTEM:{system_prompt}"
while True:
user_input = input("You: ")
prompt_actor = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
actor_response = generate_text(prompt_actor, model_actor, tokenizer_actor)
print("Generated ACTOR RESPONSE")
prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
critic_response = generate_text(prompt_critic, model_critic, tokenizer_critic)
print("Generated CRITIQUE")
prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
regenerator_response = generate_text(prompt_regenerator, model_regenerator, tokenizer_regenerator)
print("Generated REGENERATION")
conversation = f"{conversation} \nUSER: {user_input} \nASSISTANT: {regenerator_response}"
print(conversation)
```
|
mmnga/llm-jp-1.3b-v1.0-ggml | mmnga | 2023-11-04T22:10:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T20:01:32Z | ---
license: apache-2.0
---
# llm-jp-1.3b-v1.0-ggml
This is a ggml-format conversion of [llm-jp-1.3b-v1.0 published by llm-jp](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0).
Note that this is not a gguf for running with llama.cpp; it is in the ggml format.
Model list
[mmnga/llm-jp-13b-v1.0-4bit-g128-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-v1.0-4bit-g128-GPTQ-calib-ja-1k)
[mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-GPTQ-calib-ja-1k)
[mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-GPTQ-calib-ja-1k)
GGUF版
[mmnga/llm-jp-13b-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-v1.0-gguf)
[mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-gguf)
[mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf)
[mmnga/llm-jp-1.3b-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-1.3b-v1.0-gguf)
GGML版
[mmnga/llm-jp-1.3b-v1.0-ggml](https://huggingface.co/mmnga/llm-jp-1.3b-v1.0-ggml)
## Convert Script
[The conversion script is available here](https://github.com/mmnga/ggml/blob/gpt2/examples/gpt-2/convert-llmjp-to-ggml.py)
## Usage
```
git clone --branch gpt2 https://github.com/mmnga/ggml.git
cd ggml
mkdir build && cd build
cmake ..
make -j
./bin/gpt-2 -m 'llm-jp-1.3b-v1.0-ggml-q4_0.bin' -n 128 -t 8 -p '今日の夕食のレシピをご紹介します。' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
``` |
syntag/testcompany-firstmodel | syntag | 2023-11-04T22:09:00Z | 3 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
]
| text-classification | 2023-11-04T21:57:37Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# testcompany-firstmodel
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("syntag/testcompany-firstmodel")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 1
* Number of training documents: 20
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | the - and - is - to - new | 20 | -1_the_and_is_to |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.24.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.0.3
* Scikit-Learn: 1.3.1
* Sentence-transformers: 2.2.2
* Transformers: 4.34.1
* Numba: 0.58.1
* Plotly: 5.17.0
* Python: 3.8.18
|
sdpkjc/Ant-v4-sac_continuous_action-seed1 | sdpkjc | 2023-11-04T21:58:17Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Ant-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T21:58:09Z | ---
tags:
- Ant-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v4
type: Ant-v4
metrics:
- type: mean_reward
value: 5701.50 +/- 75.16
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Ant-v4**
This is a trained model of a SAC agent playing Ant-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Ant-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed1/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Ant-v4 --seed 1
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Ant-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 1,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': False,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
owanr/SBIC-google-t5-v1_1-large-inter_model-sorted-model_annots_str | owanr | 2023-11-04T21:50:12Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T21:50:11Z | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: SBIC-google-t5-v1_1-large-inter_model-sorted-model_annots_str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-google-t5-v1_1-large-inter_model-sorted-model_annots_str
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
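For illustration only, the hyperparameters above map roughly onto the following `transformers` `Seq2SeqTrainingArguments`; the original training script is not included in this card, and the `output_dir` and `evaluation_strategy` values are assumptions.
```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch: the Adam betas/epsilon listed above are the library defaults, so they are not set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="SBIC-google-t5-v1_1-large-inter_model-sorted-model_annots_str",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=200,
    evaluation_strategy="epoch",  # assumed: the card reports one validation loss per epoch
)
```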
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2075 | 1.0 | 392 | 7.3148 |
| 5.9671 | 2.0 | 784 | 6.1609 |
| 0.4036 | 3.0 | 1176 | 0.3731 |
| 0.4136 | 4.0 | 1568 | 0.3687 |
| 0.3907 | 5.0 | 1960 | 0.3661 |
| 0.399 | 6.0 | 2352 | 0.3644 |
| 0.3838 | 7.0 | 2744 | 0.3664 |
| 0.3855 | 8.0 | 3136 | 0.3641 |
| 0.3764 | 9.0 | 3528 | 0.3633 |
| 0.3785 | 10.0 | 3920 | 0.3631 |
| 0.3966 | 11.0 | 4312 | 0.3633 |
| 0.3906 | 12.0 | 4704 | 0.3618 |
| 0.3935 | 13.0 | 5096 | 0.3613 |
| 0.3724 | 14.0 | 5488 | 0.3622 |
| 0.3697 | 15.0 | 5880 | 0.3597 |
| 0.3878 | 16.0 | 6272 | 0.3606 |
| 0.3825 | 17.0 | 6664 | 0.3584 |
| 0.3788 | 18.0 | 7056 | 0.3583 |
| 0.3863 | 19.0 | 7448 | 0.3593 |
| 0.3674 | 20.0 | 7840 | 0.3593 |
| 0.3771 | 21.0 | 8232 | 0.3585 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yjlee1011/ncodeR_data_setfit_multilabel_8_samples | yjlee1011 | 2023-11-04T21:46:12Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T21:45:32Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# yjlee1011/ncodeR_data_setfit_multilabel_8_samples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_8_samples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sdpkjc/Humanoid-v4-sac_continuous_action-seed1 | sdpkjc | 2023-11-04T21:38:11Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T21:38:02Z | ---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 5264.69 +/- 40.87
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Humanoid-v4**
This is a trained model of a SAC agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed1/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Humanoid-v4 --seed 1
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 1,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': False,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
mreisman/PubMed_summarization | mreisman | 2023-11-04T21:19:24Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:pubmed-summarization",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-04T21:05:17Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: PubMed_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMed_summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the pubmed-summarization dataset.
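As a usage sketch (not taken from the training code), the checkpoint should load with the standard `transformers` summarization pipeline; the `summarize:` prefix is an assumption based on the usual T5 convention, and the abstract text is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the summarization pipeline.
summarizer = pipeline("summarization", model="mreisman/PubMed_summarization")

abstract = "Replace this placeholder with a PubMed abstract to summarize."
summary = summarizer("summarize: " + abstract, max_length=64, min_length=8)
print(summary[0]["summary_text"])
```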
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 415 | 2.4180 | 0.1351 | 0.0511 | 0.1134 | 0.1134 | 19.0 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_40x_deit_base_n_f4 | hkivancoral | 2023-11-04T21:07:44Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-04T19:58:45Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_n_f4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9761904761904762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1221
- Accuracy: 0.9762
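As a usage sketch (not taken from the training code), the checkpoint can be loaded with the `transformers` image-classification pipeline; the image path below is a placeholder, and the predicted labels come from the `imagefolder` dataset used for fine-tuning.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hkivancoral/hushem_40x_deit_base_n_f4")

# "example.jpg" is a placeholder path to an input image.
for pred in classifier("example.jpg"):
    print(pred["label"], round(pred["score"], 4))
```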
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0638 | 1.0 | 109 | 0.3320 | 0.8810 |
| 0.0062 | 2.0 | 218 | 0.2092 | 0.9048 |
| 0.0052 | 2.99 | 327 | 0.0666 | 0.9762 |
| 0.0022 | 4.0 | 437 | 0.1838 | 0.9524 |
| 0.0002 | 5.0 | 546 | 0.0452 | 0.9762 |
| 0.0 | 6.0 | 655 | 0.1681 | 0.9524 |
| 0.0 | 6.99 | 764 | 0.1386 | 0.9762 |
| 0.0 | 8.0 | 874 | 0.1281 | 0.9762 |
| 0.0 | 9.0 | 983 | 0.1236 | 0.9762 |
| 0.0 | 9.98 | 1090 | 0.1221 | 0.9762 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
LaTarn/ta-density-setfit-model | LaTarn | 2023-11-04T21:06:26Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T19:54:16Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# LaTarn/ta-density-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/ta-density-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
eusojk/uplimit-project-3-phi-1.5 | eusojk | 2023-11-04T20:32:36Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:scitldr",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-11-04T20:32:34Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: uplimit-project-3-phi-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uplimit-project-3-phi-1.5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5574 | 0.1 | 200 | 2.5975 |
| 2.5754 | 0.2 | 400 | 2.5929 |
| 2.5303 | 0.3 | 600 | 2.5779 |
| 2.5865 | 0.4 | 800 | 2.5651 |
| 2.5436 | 0.5 | 1000 | 2.5575 |
| 2.5764 | 0.6 | 1200 | 2.5510 |
| 2.5465 | 0.7 | 1400 | 2.5460 |
| 2.5269 | 0.8 | 1600 | 2.5396 |
| 2.5808 | 0.9 | 1800 | 2.5341 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Bahtiyar7/Reinforce-1 | Bahtiyar7 | 2023-11-04T20:24:44Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T19:16:54Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
moonseok/AST_speechcommandsV2_final | moonseok | 2023-11-04T20:22:11Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:speech_commands",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-11-01T07:10:48Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- speech_commands
metrics:
- accuracy
model-index:
- name: AST_speechcommandsV2_final
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: speech_commands
type: speech_commands
config: v0.02
split: test
args: v0.02
metrics:
- name: Accuracy
type: accuracy
value: 0.8889570552147239
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AST_speechcommandsV2_final
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the speech_commands dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4825
- Accuracy: 0.8890
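As a usage sketch (not taken from the training code), the fine-tuned AST checkpoint can be loaded with the `transformers` audio-classification pipeline; the audio path below is a placeholder for a short 16 kHz clip like those in Speech Commands v0.02.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="moonseok/AST_speechcommandsV2_final")

# "command.wav" is a placeholder path to a ~1 s spoken-command recording.
for pred in classifier("command.wav"):
    print(pred["label"], round(pred["score"], 4))
```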
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3557 | 1.0 | 294 | 0.7017 | 0.8354 |
| 0.1948 | 2.0 | 589 | 0.6838 | 0.8397 |
| 0.1219 | 3.0 | 884 | 0.5752 | 0.8699 |
| 0.0704 | 4.0 | 1179 | 0.5554 | 0.8675 |
| 0.0404 | 5.0 | 1473 | 0.5437 | 0.8663 |
| 0.0136 | 6.0 | 1768 | 0.5247 | 0.8759 |
| 0.0072 | 7.0 | 2063 | 0.5235 | 0.8759 |
| 0.0026 | 8.0 | 2358 | 0.5035 | 0.8859 |
| 0.0007 | 9.0 | 2652 | 0.4800 | 0.8896 |
| 0.0005 | 9.97 | 2940 | 0.4825 | 0.8890 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Odiurd/uplimit-project-3-phi-1.5 | Odiurd | 2023-11-04T20:15:46Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:scitldr",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-11-04T20:15:43Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: uplimit-project-3-phi-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uplimit-project-3-phi-1.5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 0.1 | 200 | 0.0000 |
| 0.0 | 0.2 | 400 | 0.0000 |
| 0.0 | 0.3 | 600 | 0.0000 |
| 0.0 | 0.4 | 800 | 0.0000 |
| 0.0 | 0.5 | 1000 | 0.0000 |
| 0.0 | 0.6 | 1200 | 0.0000 |
| 0.0 | 0.7 | 1400 | 0.0000 |
| 0.0 | 0.8 | 1600 | 0.0000 |
| 0.0 | 0.9 | 1800 | 0.0000 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
owanr/SBIC-google-t5-v1_1-large-inter_model-shuffle-human_annots_str | owanr | 2023-11-04T20:09:25Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T20:09:24Z | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: SBIC-google-t5-v1_1-large-inter_model-shuffle-human_annots_str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-google-t5-v1_1-large-inter_model-shuffle-human_annots_str
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.7989 | 1.0 | 392 | 8.3528 |
| 0.6988 | 2.0 | 784 | 0.5020 |
| 0.5046 | 3.0 | 1176 | 0.4843 |
| 0.5099 | 4.0 | 1568 | 0.4706 |
| 0.4526 | 5.0 | 1960 | 0.4291 |
| 0.469 | 6.0 | 2352 | 0.4223 |
| 0.4408 | 7.0 | 2744 | 0.4136 |
| 0.4763 | 8.0 | 3136 | 0.4142 |
| 0.446 | 9.0 | 3528 | 0.4116 |
| 0.4315 | 10.0 | 3920 | 0.4035 |
| 0.3821 | 11.0 | 4312 | 0.4077 |
| 0.4399 | 12.0 | 4704 | 0.4009 |
| 0.4285 | 13.0 | 5096 | 0.3962 |
| 0.3908 | 14.0 | 5488 | 0.3931 |
| 0.3887 | 15.0 | 5880 | 0.3922 |
| 0.398 | 16.0 | 6272 | 0.3862 |
| 0.3964 | 17.0 | 6664 | 0.3931 |
| 0.3939 | 18.0 | 7056 | 0.3892 |
| 0.372 | 19.0 | 7448 | 0.3824 |
| 0.4364 | 20.0 | 7840 | 0.3779 |
| 0.3722 | 21.0 | 8232 | 0.3760 |
| 0.343 | 22.0 | 8624 | 0.3804 |
| 0.3946 | 23.0 | 9016 | 0.3756 |
| 0.3899 | 24.0 | 9408 | 0.3756 |
| 0.4195 | 25.0 | 9800 | 0.3762 |
| 0.4072 | 26.0 | 10192 | 0.3758 |
| 0.379 | 27.0 | 10584 | 0.3758 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Baghdad99/saad-hausa-text-to-english-text | Baghdad99 | 2023-11-04T19:53:10Z | 21 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-ha-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ha-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-02T18:45:47Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ha-en
tags:
- generated_from_keras_callback
model-index:
- name: Baghdad99/saad-opus-ha-to-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Baghdad99/saad-opus-ha-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ha-en](https://huggingface.co/Helsinki-NLP/opus-mt-ha-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9691
- Validation Loss: 3.9168
- Epoch: 24
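As a usage sketch (not taken from the training code), the checkpoint can be loaded with the `transformers` translation pipeline; `framework="tf"` is an assumption based on the repository shipping Keras/TensorFlow weights, and the Hausa sentence is only an example input.
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="Baghdad99/saad-hausa-text-to-english-text",
    framework="tf",  # assumed: the repo contains TensorFlow (Keras) weights
)

print(translator("Ina kwana?")[0]["translation_text"])
```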
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.3200 | 4.9969 | 0 |
| 4.7301 | 4.7199 | 1 |
| 4.3658 | 4.5332 | 2 |
| 4.0897 | 4.3940 | 3 |
| 3.8568 | 4.2969 | 4 |
| 3.6818 | 4.2143 | 5 |
| 3.5195 | 4.1530 | 6 |
| 3.3799 | 4.1085 | 7 |
| 3.2595 | 4.0679 | 8 |
| 3.1400 | 4.0438 | 9 |
| 3.0337 | 4.0185 | 10 |
| 2.9335 | 3.9988 | 11 |
| 2.8354 | 3.9766 | 12 |
| 2.7461 | 3.9684 | 13 |
| 2.6650 | 3.9585 | 14 |
| 2.5827 | 3.9477 | 15 |
| 2.5062 | 3.9422 | 16 |
| 2.4289 | 3.9306 | 17 |
| 2.3549 | 3.9208 | 18 |
| 2.2882 | 3.9205 | 19 |
| 2.2195 | 3.9183 | 20 |
| 2.1492 | 3.9157 | 21 |
| 2.0888 | 3.9163 | 22 |
| 2.0302 | 3.9160 | 23 |
| 1.9691 | 3.9168 | 24 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_40x_deit_base_n_f3 | hkivancoral | 2023-11-04T19:42:39Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-04T18:33:10Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_n_f3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9069767441860465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1057 | 1.0 | 108 | 0.5226 | 0.8372 |
| 0.0057 | 2.0 | 217 | 0.4792 | 0.9070 |
| 0.0066 | 3.0 | 325 | 0.2934 | 0.9302 |
| 0.0008 | 4.0 | 434 | 0.2441 | 0.9535 |
| 0.0001 | 5.0 | 542 | 0.3621 | 0.9070 |
| 0.0 | 6.0 | 651 | 0.3864 | 0.9070 |
| 0.0 | 7.0 | 759 | 0.3930 | 0.9070 |
| 0.0 | 8.0 | 868 | 0.3984 | 0.9070 |
| 0.0 | 9.0 | 976 | 0.4017 | 0.9070 |
| 0.0 | 9.95 | 1080 | 0.4024 | 0.9070 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ssuzana/finetuningLLMs-project-phi-1.5 | ssuzana | 2023-11-04T19:38:03Z | 0 | 1 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:scitldr",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-11-04T19:38:00Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: finetuningLLMs-project-phi-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuningLLMs-project-phi-1.5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5536 | 0.1 | 200 | 2.5953 |
| 2.5736 | 0.2 | 400 | 2.5918 |
| 2.5299 | 0.3 | 600 | 2.5771 |
| 2.5857 | 0.4 | 800 | 2.5693 |
| 2.545 | 0.5 | 1000 | 2.5584 |
| 2.5732 | 0.6 | 1200 | 2.5525 |
| 2.5481 | 0.7 | 1400 | 2.5461 |
| 2.5276 | 0.8 | 1600 | 2.5396 |
| 2.578 | 0.9 | 1800 | 2.5345 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
perilli/ap_eyes | perilli | 2023-11-04T19:26:38Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-04T19:26:32Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ap_eyes
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.800000011920929
---
# ap_eyes
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
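A minimal inference sketch, assuming the repository includes the usual preprocessing config pushed by HuggingPics (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("perilli/ap_eyes")
model = AutoModelForImageClassification.from_pretrained("perilli/ap_eyes")

image = Image.open("your_photo.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```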
## Example Images
#### eye

#### eyes
 |
deinferno/taesd-onnx | deinferno | 2023-11-04T19:26:11Z | 0 | 3 | null | [
"onnx",
"text-to-image",
"license:mit",
"region:us"
]
| text-to-image | 2023-11-04T18:29:42Z | ---
license: mit
pipeline_tag: text-to-image
tags:
- onnx
- text-to-image
inference: false
---
## Model Descriptions:
This repo contains ONNX model files for [madebyollin's Tiny AutoEncoder for Stable Diffusion](https://huggingface.co/madebyollin/taesd).
## Using in 🧨 diffusers
To install the requirements for this demo, run `pip install optimum["onnxruntime"]`.
```python
from huggingface_hub import snapshot_download
from diffusers.pipelines import OnnxRuntimeModel
from optimum.onnxruntime import ORTStableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
taesd_dir = snapshot_download(repo_id="deinferno/taesd-onnx")
pipeline = ORTStableDiffusionPipeline.from_pretrained(
model_id,
vae_decoder_session = OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_decoder"),
vae_encoder_session = OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_encoder"),
revision="onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
image.save("result.png")
```
|
maddes8cht/jondurbin-airoboros-mpt-30b-gpt4-1p4-five-epochs-gguf | maddes8cht | 2023-11-04T19:17:39Z | 220 | 0 | null | [
"gguf",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-03T19:59:12Z | ---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# airoboros-mpt-30b-gpt4-1p4-five-epochs - GGUF
- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [airoboros-mpt-30b-gpt4-1p4-five-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
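For instance, one of the files in this repo could be loaded with llama-cpp-python roughly like this (the quantization filename is an assumption, so pick an actual .gguf file from the repo's file list; the prompt format follows the original model card below):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Filename is an assumption -- choose one of the .gguf files in this repository
llm = Llama(model_path="airoboros-mpt-30b-gpt4-1p4-five-epochs.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input. USER: Tell me a joke about bananas. ASSISTANT:"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```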
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K you will likely find it hard to discern any quality difference from the original model - asking the model the same question twice can produce bigger differences than the quantization itself.
---
# Original Model Card:
## Technical info
This is a test of qlora fine-tuning of the mpt-30b model, __with 5 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This doesn't seem as high quality as the llama-33b versions unfortunately, with one-off tests, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
## Overview
This is mostly an extension of the previous gpt-4 series, but it's the first with mpt-30b as the base, and with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-mpt-30b-gpt4-1p4-five-epochs \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise unallowably licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
chahatraj/contact-base-6biasdimensions-13b-en | chahatraj | 2023-11-04T19:08:08Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-04T13:09:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
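As a sketch, the settings above correspond to a `BitsAndBytesConfig` along these lines:
```python
import torch
from transformers import BitsAndBytesConfig

# Equivalent of the quantization values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```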
### Framework versions
- PEFT 0.4.0
|
Adminhuggingface/OUTPUTA_2 | Adminhuggingface | 2023-11-04T19:01:37Z | 6 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-11-02T06:01:18Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Adminhuggingface/OUTPUTA_2
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Adminhuggingface/new_data dataset. You can find some example images in the following.




|
bernams/finetuning-sentiment-model-3000-samples | bernams | 2023-11-04T18:59:40Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-04T18:07:12Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8794788273615636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3070
- Accuracy: 0.8767
- F1: 0.8795
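A minimal inference sketch (the repo id is taken from this card; the review text is a placeholder):
```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned on IMDB reviews
classifier = pipeline("text-classification", model="bernams/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a complete waste of time."))
```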
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Bahtiyar7/dqn-SpaceInvadersNoFrameskip-v4 | Bahtiyar7 | 2023-11-04T18:58:48Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T18:58:16Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 416.50 +/- 155.39
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bahtiyar7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bahtiyar7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bahtiyar7
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kornpp/sillyfituneKorn | kornpp | 2023-11-04T18:12:13Z | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"region:us"
]
| null | 2023-11-04T18:10:53Z | ---
library_name: peft
base_model: databricks/dolly-v2-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
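In the absence of author-provided code, here is a minimal loading sketch, assuming this repository holds a PEFT adapter for the base model named in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b")

# Attach the adapter weights stored in this repository
model = PeftModel.from_pretrained(base, "kornpp/sillyfituneKorn")
```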
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.7.0.dev0
|
alosof/LunarLander-v2 | alosof | 2023-11-04T18:11:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T18:11:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.49 +/- 19.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename is assumed -- verify it against the files in this repository
checkpoint = load_from_hub("alosof/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YusuphaJuwara/q-FrozenLake-v1-4x4-noSlippery | YusuphaJuwara | 2023-11-04T18:09:05Z | 0 | 1 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T14:31:55Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
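# Note: `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (not a library import), and `gym` is assumed to be imported there as well.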
model = load_from_hub(repo_id="YusuphaJuwara/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
marcelloc/ppo-Huggy | marcelloc | 2023-11-04T18:08:03Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-04T18:07:57Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: marcelloc/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tidaltune/JuggernautXLSD | Tidaltune | 2023-11-04T18:02:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-04T18:02:19Z | ---
license: creativeml-openrail-m
---
|
Bahtiyar7/q-FrozenLake-v1-4x4-noSlippery | Bahtiyar7 | 2023-11-04T18:01:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T18:01:50Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Bahtiyar7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Pclanglais/Brahe-AWQ | Pclanglais | 2023-11-04T17:58:54Z | 16 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:Pclanglais/Brahe-Novels",
"license:cc0-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-02T19:13:31Z | ---
license: cc0-1.0
datasets:
- Pclanglais/Brahe-Novels
pipeline_tag: text-generation
widget:
- text: "Text:\nIt was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness. There were a king with a large jaw and a queen with a plain face, on the throne of England; there were a king with a large jaw and a queen with a fair face, on the throne of France. In both countries it was clearer than crystal to the lords of the State preserves of loaves and fishes, that things in general were settled for ever. It was the year of Our Lord one thousand seven hundred and seventy-five. Spiritual revelations were conceded to England at that favoured period, as at this\n\n\nAnalysis: \n"
example_title: "Text analysis"
inference:
parameters:
temperature: 0
repetition_penalty: 1.2
---
<div style="text-align: right;font-size:.7em;margin-left:50%"><em>Per un attimo Brahe cercò le parole, le immagini, le analogie; pensò perfino i gesti della mano e delle dita, come un attore che si prepari a rendere fisico un sentimento. Ma appena cominciò a dire "come", a dare solidità a ciò che non aveva, a rendere visibile ciò che non lo era, a collocare, nello spazio ciò che era pura probabilità, e a cercare una qualsiasi cosa tra le forme del mondo cui paragonarlo, Epstein lo interruppe.</em><br>Daniele del Giudice, <em>Atlante occidentale</em></div>
*Brahe-AWQ* is the lightweight quantized version of *Brahe*, an analytical LLM for multilingual literature fine-tuned from llama-13B. Given any text, Brahe generates a list of up to twenty annotations. Brahe is intended to be used by computational humanities projects, similarly to BookNLP.
*Brahe* has been trained on 8,000 excerpts of literature in the public domain and on a set of synthetic and manual annotations. Half of the excerpts are in English and half in other languages (mostly French, German, Italian…).
Thanks to the native multilingual capacity of llama-13B, *Brahe-AWQ* has been shown to work on languages that were not part of its original corpus, such as the Gascon variety of Occitan.
*Brahe* is a reversed companion of *[Epstein](https://huggingface.co/Pclanglais/epstein)*, a generative AI model that creates new literary texts from annotated prompts (for now English-only). Both models are named after the protagonists of the philosophical novel *Atlante occidentale* by Daniele del Giudice: Brahe is a scientist working at CERN on quantum physics, Epstein is a novelist, and the two confront their different views of reality.
## Running Brahe
The best way to test Brahe-AWQ is to use the [official demo](https://colab.research.google.com/drive/1VTi6Z51X6_nAlfSYmIqU7ZnTRVJIjv4j?usp=sharing) on Google Colab.
In contrast with Brahe, it is recommended to use deterministic text generation (temperature = 0). Otherwise the annotations may not be structured as expected.
Prompts are currently constructed in this way:
```"Text:\n" + text + "\n\n\nAnalysis: \n"```
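Put together, a generation sketch could look like the following (loading an AWQ checkpoint this way assumes a recent transformers with AutoAWQ installed; the decoding settings follow the recommendation above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pclanglais/Brahe-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = "It was the best of times, it was the worst of times..."
prompt = "Text:\n" + text + "\n\n\nAnalysis: \n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Deterministic decoding with a repetition penalty, as recommended above
output = model.generate(**inputs, max_new_tokens=300, do_sample=False, repetition_penalty=1.2)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```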
## Annotations
In its current version, *Brahe* may generate the following annotations.
* Summary: short summary
* Tone: general tonality of the text (humoristic, tragic, scholarly…)
* Speech standard: the specific social/literary level of the text (poetic, dialectical, vulgar…)
* Intertextuality: non-literary writing forms that may be similar to this text (red tape, scientific article, case law…)
* Genre: a specific literary genre that would be used in bookshops such as detective fiction, science-fiction, romance, historical novel, young adult…
* Literary movement: aesthetic movement the text seems to embody (does not work so well)
* Literary form: whether it's the description of a place, a conversation, a stream of consciousness
* Trope: a trope or literary cliché (a fuzzy definition but works surprisingly well)
* Enunciation: who is speaking in the text (first-person narrative, dialog, third-person narrative, omniscient narrator)
* Narrative arc: how is the action unfolding (suspense, dramatic tension, comic relief…)
* Active character: the list of characters that have an active involvement in the story.
* Mentioned characters: the list of characters only mentioned, with no active involvement in the story
* Quoted works: another text mentioned or quoted in the text.
* Absolute place: a precise place with a proper name such as Paris, Sesame Street, Lisbonne Airport.
* Fuzzy place: an unnamed place where the story happens, such as a field, an apartment, a church (does not work so well…)
* Fuzzy time: a nonspecific moment when the action occurs, such as Monday, yesterday, a week after.
* Time setting: historical period where the action seems to occur such as the 1960s, the Renaissance, the Victorian period…
* Diegetic time: a very approximate number of minutes/hours/days that have elapsed between the beginning and the end of the text (5 minutes, 35 minutes, 2 hours, 3 days).
* Absolute time: a precise date where the action occurs, such as January 15, 1845, 23rd century…
The annotations are not generated systematically but only whenever the model is confident enough.
## Examples
With the incipit of Henry James' *The Ambassadors*:
>*Strether's first question, when he reached the hotel, was about his friend; yet on his learning that Waymarsh was apparently not to arrive till evening he was not wholly disconcerted. A telegram from him bespeaking a room "only if not noisy," with the answer paid, was produced for the inquirer at the office, so that the understanding that they should meet at Chester rather than at Liverpool remained to that extent sound. The same secret principle, however, that had prompted Strether not absolutely to desire Waymarsh's presence at the dock, that had led him thus to postpone for a few hours his enjoyment of it, now operated to make him feel that he could still wait without disappointment. They would dine together at the worst, and, with all respect to dear old Waymarsh—if not even, for that matter, to himself—there was little fear that in the sequel they should not see enough of each other. The principle I have just mentioned as operating had been, with the most newly-disembarked of the two men, wholly instinctive—the fruit of a sharp sense that, delightful as it would be to find himself looking, after so much separation, into his comrade's face, his business would be a trifle bungled should he simply arrange that this countenance should present itself to the nearing steamer as the first "note," for him, of Europe. Mixed with everything was the apprehension, already, on Strether's part, that he would, at best, throughout, prove the note of Europe in quite a sufficient degree.*
Output of Brahe:
> Summary: Strether is waiting for his friend Waymarsh to arrive at the hotel, but is not completely disappointed when he learns that Waymarsh will not arrive until evening. Strether has a secret principle that led him to postpone meeting Waymarsh at the dock, but now feels that they will see enough of each other. Strether is concerned that his presence in Europe will be enough of a note for his friend.<br>
> Trope: The anticipation of meeting a friend<br>
> Narrative arc: Suspense<br>
> Enunciation: Third-person narrative<br>
> Tone: Anticipatory<br>
> Genre: Literary fiction<br>
> Intertextuality: Travelogue<br>
> Speech standard: Standard<br>
> Literary form: Stream of consciousness<br>
> Literary movement: Realism<br>
> Active character: Strether, Waymarsh<br>
> Time setting: Contemporary<br>
> Fuzzy time: Evening<br>
> Diegetic time: A few hours<br>
> Absolute place: The hotel<br>
> Fuzzy place: The hotel
With the incipit of Proust's *Swann's Way*:
> Combray at a distance, from a twenty-mile radius, as we used to see it from the railway when we arrived there every year in Holy Week, was no more than a church epitomising the town, representing it, speaking of it and for it to the horizon, and as one drew near, gathering close about its long, dark cloak, sheltering from the wind, on the open plain, as a shepherd gathers his sheep, the woolly grey backs of its flocking houses, which a fragment of its mediaeval ramparts enclosed, here and there, in an outline as scrupulously circular as that of a little town in a primitive painting. To live in, Combray was a trifle depressing, like its streets, whose houses, built of the blackened stone of the country, fronted with outside steps, capped with gables which projected long shadows downwards, were so dark that one had, as soon as the sun began to go down, to draw back the curtains in the sitting-room windows; streets with the solemn names of Saints, not a few of whom figured in the history of the early lords of Combray, such as the Rue Saint-Hilaire, the Rue Saint-Jacques, in which my aunt's house stood, the Rue Sainte-Hildegarde, which ran past her railings, and the Rue du Saint-Esprit, on to which the little garden gate opened; and these Combray streets exist in so remote a quarter of my memory, painted in colours so different from those in which the world is decked for me to-day, that in fact one and all of them, and the church which towered above them in the Square, seem to me now more unsubstantial than the projections of my magic-lantern; while at times I feel that to be able to cross the Rue Saint-Hilaire again, to engage a room in the Rue de l'Oiseau, in the old hostelry of the Oiseau Flesché, from whose windows in the pavement used to rise a smell of cooking which rises still in my mind, now and then, in the same warm gusts of comfort, would be to secure a contact with the unseen world more marvellously supernatural than it would be to make Golo's acquaintance and to chat with Geneviève de Brabant.*
Output of Brahe:
> Summary: The narrator reflects on their childhood memories of Combray, a town with dark and depressing streets.<br>
> Trope: The town as a character<br>
> Narrative arc: Reflective<br>
> Enunciation: First-person narrative<br>
> Tone: Nostalgic<br>
> Genre: Memoir<br>
> Intertextuality: Historical accounts of Combray<br>
> Speech standard: Poetic<br>
> Literary form: Description of a place<br>
> Literary movement: Symbolism<br>
> Active character: The narrator, the aunt<br>
> Time setting: Childhood<br>
> Absolute place: Combray<br>
> Fuzzy place: Combray
With an incipit of War and Peace in both French and Russian. The Russian language is currently hardly included in *Brahe* training corpus.
> — Eh bien, mon prince. Gênes et Lucques ne sont plus que des apanages, des поместья, de la famille Buonaparte. Non, je vous préviens que si vous ne me dites pas que nous avons la guerre, si vous vous permettez encore de pallier toutes les infamies, toutes les atrocités de cet Antichrist (ma parole, j’y crois) — je ne vous connais plus, vous n’êtes plus mon ami, vous n’êtes plus мой верный раб, comme vous dites. Ну, здравствуйте, здравствуйте. Je vois que je vous fais peur, садитесь и рассказывайте.<br>
> Так говорила в июле 1805 года известная Анна Павловна Шерер, фрейлина и приближенная императрицы Марии Феодоровны, встречая важного и чиновного князя Василия, первого приехавшего на ее вечер. Анна Павловна кашляла несколько дней, у нее был грипп, как она говорила (грипп был тогда новое слово, употреблявшееся только редкими). В записочках, разосланных утром с красным лакеем, было написано без различия во всех:
Output of Brahe:
> Summary: A conversation between two characters, Anna Pavlovna and Vasily, about the Bonaparte family and war.<br>
> Enunciation: Dialog<br>
> Tone: Serious, argumentative<br>
> Genre: Historical fiction<br>
> Intertextuality: Political speeches or documents<br>
> Speech standard: Formal, elevated language<br>
> Literary form: Conversation/dialogue |
GuysTrans/bart-base-vn-ehealth-tokenizer | GuysTrans | 2023-11-04T17:49:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-29T17:03:19Z | ---
base_model: vinai/bartpho-syllable
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-vn-ehealth-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-vn-ehealth-tokenizer
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5573
- Rouge1: 19.5871
- Rouge2: 9.8761
- Rougel: 16.1785
- Rougelsum: 18.0628
- Bleu-1: 0.0074
- Bleu-2: 0.0054
- Bleu-3: 0.0039
- Bleu-4: 0.0028
- Gen Len: 20.0
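A minimal usage sketch for this seq2seq checkpoint (the Vietnamese input sentence is only a placeholder):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "GuysTrans/bart-base-vn-ehealth-tokenizer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Vietnamese input -- replace with your own text
inputs = tokenizer("Tôi bị đau đầu và sốt nhẹ, tôi nên làm gì?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```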
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:------:|:------:|:------:|:-------:|
| 1.6631 | 1.0 | 17418 | 1.5573 | 19.5871 | 9.8761 | 16.1785 | 18.0628 | 0.0074 | 0.0054 | 0.0039 | 0.0028 | 20.0 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
maxmendola/QACS271302LLAMA2 | maxmendola | 2023-11-04T17:46:37Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:finetune:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
]
| null | 2023-11-04T17:26:39Z | ---
base_model: TinyPixel/Llama-2-7B-bf16-sharded
tags:
- generated_from_trainer
model-index:
- name: QACS271302LLAMA2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QACS271302LLAMA2
This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sirenstitches/uplimit-project-3-phi-1.5 | sirenstitches | 2023-11-04T17:46:16Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:scitldr",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-11-04T16:59:04Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: uplimit-project-3-phi-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uplimit-project-3-phi-1.5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5904 | 0.4 | 800 | 2.5820 |
| 2.541 | 0.8 | 1600 | 2.5608 |
| 2.4456 | 1.2 | 2400 | 2.5628 |
| 2.3709 | 1.61 | 3200 | 2.5507 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Bahtiyar7/ppo-LunarLander-v2 | Bahtiyar7 | 2023-11-04T17:44:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T17:40:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.49 +/- 13.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Download checkpoint
checkpoint = load_from_hub("araffin/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
# Load the model
model = PPO.load(checkpoint)
env = make_vec_env("LunarLander-v2", n_envs=1)
# Evaluate
print("Evaluating model")
mean_reward, std_reward = evaluate_policy(
model,
env,
n_eval_episodes=20,
deterministic=True,
)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
# Start a new episode
obs = env.reset()
try:
while True:
action, _states = model.predict(obs, deterministic=True)
obs, rewards, dones, info = env.step(action)
env.render()
except KeyboardInterrupt:
pass
```
## Training code (with SB3)
```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.callbacks import EvalCallback
# Create the environment
env_id = "LunarLander-v2"
n_envs = 16
env = make_vec_env(env_id, n_envs=n_envs)
# Create the evaluation envs
eval_envs = make_vec_env(env_id, n_envs=5)
# Adjust evaluation interval depending on the number of envs
eval_freq = int(1e5)
eval_freq = max(eval_freq // n_envs, 1)
# Create evaluation callback to save best model
# and monitor agent performance
eval_callback = EvalCallback(
eval_envs,
best_model_save_path="./logs/",
eval_freq=eval_freq,
n_eval_episodes=10,
)
# Instantiate the agent
# Hyperparameters from https://github.com/DLR-RM/rl-baselines3-zoo
model = PPO(
"MlpPolicy",
env,
n_steps=1024,
batch_size=64,
gae_lambda=0.98,
gamma=0.999,
n_epochs=4,
ent_coef=0.01,
verbose=1,
)
# Train the agent (you can kill it before using ctrl+c)
try:
model.learn(total_timesteps=int(5e6), callback=eval_callback)
except KeyboardInterrupt:
pass
# Load best model
model = PPO.load("logs/best_model.zip")
```
|
learn3r/longt5_xl_gov_memsum_bp_20 | learn3r | 2023-11-04T17:44:27Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/gov_report_memsum_bp",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-02T20:58:24Z | ---
base_model: /exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_memsum_bp_15/checkpoint-1360
tags:
- generated_from_trainer
datasets:
- learn3r/gov_report_memsum_bp
metrics:
- rouge
model-index:
- name: longt5_xl_gov_memsum_bp_20
results:
- task:
name: Summarization
type: summarization
dataset:
name: learn3r/gov_report_memsum_bp
type: learn3r/gov_report_memsum_bp
metrics:
- name: Rouge1
type: rouge
value: 42.5601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_gov_memsum_bp_20
This model is a fine-tuned version of [/exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_memsum_bp_15/checkpoint-1360](https://huggingface.co//exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_memsum_bp_15/checkpoint-1360) on the learn3r/gov_report_memsum_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6259
- Rouge1: 42.5601
- Rouge2: 14.1791
- Rougel: 17.9691
- Rougelsum: 40.487
- Gen Len: 1510.8695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:---------:|
| 0.1591 | 1.0 | 136 | 3.6259 | 42.5601 | 14.1791 | 17.9691 | 40.487 | 1510.8695 |
| 0.1186 | 1.99 | 272 | 3.7885 | 39.795 | 13.2493 | 17.3095 | 37.9065 | 1707.0401 |
| 0.097 | 3.0 | 409 | 4.0192 | 41.4441 | 13.4026 | 17.8804 | 39.4502 | 1442.7729 |
| 0.0818 | 4.0 | 545 | 4.1699 | 40.2374 | 13.5869 | 17.364 | 38.2969 | 1741.0236 |
| 0.0786 | 4.99 | 680 | 4.3339 | 39.5612 | 13.4283 | 17.3666 | 37.6526 | 1710.4111 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yilong2001/bert_cls_example | yilong2001 | 2023-11-04T17:38:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T15:01:32Z | ---
license: apache-2.0
---
This is an example of a BertForSequenceClassification model with a ggml implementation. |
minhnb/ssbc_model_spearman_rounded | minhnb | 2023-11-04T17:35:20Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-04T14:36:56Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: minhnb/ssbc_model_spearman_rounded
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# minhnb/ssbc_model_spearman_rounded
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4747
- Validation Loss: 0.7845
- Train Spearmanr: -0.0138
- Epoch: 4
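A minimal loading sketch for this TensorFlow checkpoint (the input sentence is a placeholder):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "minhnb/ssbc_model_spearman_rounded"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to score.", return_tensors="tf")
print(model(inputs).logits)
```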
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2170, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Spearmanr | Epoch |
|:----------:|:---------------:|:---------------:|:-----:|
| 1.0340 | 0.8086 | -0.3120 | 0 |
| 0.7528 | 0.7463 | -0.2250 | 1 |
| 0.6238 | 0.7471 | -0.0633 | 2 |
| 0.5334 | 0.7504 | -0.0711 | 3 |
| 0.4747 | 0.7845 | -0.0138 | 4 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
deinferno/taesd-openvino | deinferno | 2023-11-04T17:27:01Z | 159 | 1 | transformers | [
"transformers",
"openvino",
"text-to-image",
"license:mit",
"region:us"
]
| text-to-image | 2023-11-04T16:53:11Z | ---
license: mit
pipeline_tag: text-to-image
tags:
- openvino
- text-to-image
inference: false
---
## Model Descriptions:
This repo contains OpenVINO model files for [madebyollin's Tiny AutoEncoder for Stable Diffusion](https://huggingface.co/madebyollin/taesd).
## Using in 🧨 diffusers:
To install the requirements for this demo, run `pip install "optimum-intel[openvino,diffusers]"`.
```python
from huggingface_hub import snapshot_download
from optimum.intel.openvino import OVStableDiffusionPipeline
from optimum.intel.openvino.modeling_diffusion import OVModelVaeDecoder, OVModelVaeEncoder, OVBaseModel
# Create class wrappers which allow us to specify model_dir of TAESD instead of original pipeline dir
class CustomOVModelVaeDecoder(OVModelVaeDecoder):
    def __init__(
        self, model, parent_model, ov_config = None, model_dir = None,
    ):
        super(OVModelVaeDecoder, self).__init__(model, parent_model, ov_config, "vae_decoder", model_dir)
class CustomOVModelVaeEncoder(OVModelVaeEncoder):
    def __init__(
        self, model, parent_model, ov_config = None, model_dir = None,
    ):
        super(OVModelVaeEncoder, self).__init__(model, parent_model, ov_config, "vae_encoder", model_dir)
pipe = OVStableDiffusionPipeline.from_pretrained("OpenVINO/stable-diffusion-1-5-fp32", compile=False)
# Inject TAESD
taesd_dir = snapshot_download(repo_id="deinferno/taesd-openvino")
pipe.vae_decoder = CustomOVModelVaeDecoder(model = OVBaseModel.load_model(f"{taesd_dir}/vae_decoder/openvino_model.xml"), parent_model = pipe, model_dir = taesd_dir)
pipe.vae_encoder = CustomOVModelVaeEncoder(model = OVBaseModel.load_model(f"{taesd_dir}/vae_encoder/openvino_model.xml"), parent_model = pipe, model_dir = taesd_dir)
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()
prompt = "plant pokemon in jungle"
output = pipe(prompt, num_inference_steps=50, output_type="pil")
output.images[0].save("result.png")
```
|
hkivancoral/hushem_40x_deit_base_n_f1 | hkivancoral | 2023-11-04T17:10:55Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-04T16:00:10Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_n_f1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7205
- Accuracy: 0.8
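A minimal inference sketch with the 🤗 Transformers pipeline, assuming the image processor and label names are stored in the repo; the image path below is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hkivancoral/hushem_40x_deit_base_n_f1")
predictions = classifier("example.jpg")  # hypothetical path to an input image
print(predictions)
```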
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0686 | 1.0 | 107 | 0.6578 | 0.8222 |
| 0.006 | 2.0 | 214 | 1.2420 | 0.8 |
| 0.0236 | 2.99 | 321 | 0.9909 | 0.7778 |
| 0.0001 | 4.0 | 429 | 0.7158 | 0.7333 |
| 0.0001 | 5.0 | 536 | 0.7001 | 0.7778 |
| 0.0001 | 6.0 | 643 | 0.7068 | 0.8 |
| 0.0 | 6.99 | 750 | 0.7117 | 0.8 |
| 0.0 | 8.0 | 858 | 0.7163 | 0.8 |
| 0.0 | 9.0 | 965 | 0.7199 | 0.8 |
| 0.0 | 9.98 | 1070 | 0.7205 | 0.8 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Nana/Plantaide | Nana | 2023-11-04T16:59:09Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-10-29T20:32:26Z | ---
license: mit
---
This repository hosts machine learning models for plant disease detection. |
owanr/ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str | owanr | 2023-11-04T16:59:04Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-04T16:59:03Z | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
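A minimal sketch of how this fine-tuned model could be used for generation, assuming the fine-tuned weights and tokenizer are actually pushed to this repo (the card does not confirm this) and that inputs follow the prompt format used during training.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "owanr/ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # if tokenizer files are missing, google/t5-v1_1-large may be needed instead
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text.", return_tensors="pt")  # the training prompt format is undocumented
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```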
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.3989 | 1.0 | 689 | 5.7253 |
| 0.3833 | 2.0 | 1378 | 0.3076 |
| 0.3352 | 3.0 | 2067 | 0.2992 |
| 0.3233 | 4.0 | 2756 | 0.2986 |
| 0.3203 | 5.0 | 3445 | 0.2983 |
| 0.3409 | 6.0 | 4134 | 0.3035 |
| 0.3078 | 7.0 | 4823 | 0.2952 |
| 0.3021 | 8.0 | 5512 | 0.2973 |
| 0.3136 | 9.0 | 6201 | 0.2968 |
| 0.3297 | 10.0 | 6890 | 0.2979 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.6.1
- Tokenizers 0.14.1
|
rangoiv/distilbert-base-uncased-finetuned-cola | rangoiv | 2023-11-04T16:55:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-04T16:16:41Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5288024095653031
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8280
- Matthews Correlation: 0.5288
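A minimal sketch of using this checkpoint for CoLA-style acceptability classification via the 🤗 Transformers pipeline; note that the label names depend on the repo's config (by default `LABEL_0`/`LABEL_1`), which this card does not document.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rangoiv/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the whole class."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- the label-to-meaning mapping is an assumption unless id2label is set
```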
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5263 | 1.0 | 535 | 0.4682 | 0.4926 |
| 0.3581 | 2.0 | 1070 | 0.4954 | 0.4849 |
| 0.2394 | 3.0 | 1605 | 0.5446 | 0.5249 |
| 0.1755 | 4.0 | 2140 | 0.7565 | 0.5286 |
| 0.1398 | 5.0 | 2675 | 0.8280 | 0.5288 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.13.3
|
sdpkjc/Walker2d-v4-sac_continuous_action-seed1 | sdpkjc | 2023-11-04T16:38:25Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T16:38:16Z | ---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 5192.59 +/- 72.92
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Walker2d-v4**
This is a trained model of a SAC agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Walker2d-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed1/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Walker2d-v4 --seed 1
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Walker2d-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 1,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': False,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
hariram344/art-khk | hariram344 | 2023-11-04T16:30:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-04T16:25:37Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Art-KHK Dreambooth model trained by harikeshava1223 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MITS-1329
Sample pictures of this concept:

|
kunalwahurwagh/my-pet-cat-akw | kunalwahurwagh | 2023-11-04T16:27:41Z | 0 | 0 | null | [
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-11-04T16:25:30Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-akw Dreambooth model trained by kunalwahurwagh following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
Flo2000/RevRealistic | Flo2000 | 2023-11-04T16:16:55Z | 0 | 0 | null | [
"text2text-generation",
"region:us"
]
| text2text-generation | 2023-11-04T16:15:35Z | ---
pipeline_tag: text2text-generation
--- |
LaTarn/ta-activity-setfit-model | LaTarn | 2023-11-04T16:08:54Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-04T16:08:34Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# LaTarn/ta-activity-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/ta-activity-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
alfredowh/q-Taxi-v3 | alfredowh | 2023-11-04T16:04:34Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-04T16:04:01Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on which version your course utilities expect

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="alfredowh/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CiroN2022/tape-people | CiroN2022 | 2023-11-04T15:52:37Z | 49 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"concept",
"tape",
"caution",
"keep out",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-11-04T15:52:24Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- concept
- tape
- caution
- keep out
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: caution tape
widget:
- text: "fashion photo of a woman wearing a white dress and a flower headpiece covered of tape, caution tape, keep out "
- text: "fashion photo of a woman in a red and gold outfit and a hat covered of tape, caution tape, keep out "
- text: "fashion photo of a woman with a silver jacket and a necklace covered of tape, caution tape, keep out "
- text: "fashion photo of a man in a suit and tie poses for a picture covered of tape, caution tape, keep out "
- text: "fashion photo of a man with a turban on his head covered of tape, caution tape, keep out "
- text: "fashion photo of a woman with colorful hair and a bow tie covered of tape, caution tape, keep out "
- text: "fashion photo of a woman with a silver jacket on posing for a picture covered of tape, caution tape, keep out "
- text: "fashion photo of a man with a white shirt and a vest covered of tape, caution tape, keep out "
- text: "fashion photo of a woman with a big afro wearing a necklace and earrings covered of tape, caution tape, keep out "
- text: "AI-generated scent diffuser, releasing personalized fragrances based on mood. , caution tape, keep out "
---
# Tape People

> fashion photo of a woman wearing a white dress and a flower headpiece covered of tape, caution tape, keep out
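A minimal sketch of loading this LoRA on top of SDXL with 🧨 diffusers, assuming the LoRA weights in this repo use the standard diffusers/safetensors layout; the prompt is adapted from the widget examples above.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("CiroN2022/tape-people")  # assumes standard LoRA weight naming in the repo

prompt = "fashion photo of a woman covered of tape, caution tape, keep out"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("tape_people_sample.png")
```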
## Image examples for the model:

> fashion photo of a woman in a red and gold outfit and a hat covered of tape, caution tape, keep out

> fashion photo of a woman with a silver jacket and a necklace covered of tape, caution tape, keep out

> fashion photo of a man in a suit and tie poses for a picture covered of tape, caution tape, keep out

> fashion photo of a man with a turban on his head covered of tape, caution tape, keep out

> fashion photo of a woman with colorful hair and a bow tie covered of tape, caution tape, keep out

> fashion photo of a woman with a silver jacket on posing for a picture covered of tape, caution tape, keep out

> fashion photo of a man with a white shirt and a vest covered of tape, caution tape, keep out

> fashion photo of a woman with a big afro wearing a necklace and earrings covered of tape, caution tape, keep out

> AI-generated scent diffuser, releasing personalized fragrances based on mood. , caution tape, keep out
|