Dataset columns:

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-02 18:27:22 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (464 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-02 18:27:15 |
| card | string (length) | 11 | 1.01M |
aroot/eng-fra-simcse_longestplus_usbbu
aroot
2023-07-07T01:55:49Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T01:40:35Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longestplus_usbbu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longestplus_usbbu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1385 - Bleu: 32.2838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
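A quick way to sanity-check a checkpoint like the one above is a single translation call through Transformers. The sketch below is only an assumption-laden example: the repo id comes from this row, and the `en_XX`/`fr_XX` language codes follow the usual mBART-50 convention rather than anything stated in the card.

```python
# Minimal sketch: English -> French translation with the fine-tuned mBART-50 checkpoint above.
# Assumes the repo ships standard tokenizer/model files; language codes are an assumption.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "aroot/eng-fra-simcse_longestplus_usbbu"  # repo id from this row
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language code (assumed mBART-50 convention)
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French output (assumption)
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```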
aroot/eng-fra-simcse_longestplus_ssbbu
aroot
2023-07-07T01:55:42Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T01:40:30Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longestplus_ssbbu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longestplus_ssbbu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1376 - Bleu: 32.3517 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ
bhenrym14
2023-07-07T01:52:30Z
9
7
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "arxiv:2306.15595", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T01:13:36Z
--- datasets: - jondurbin/airoboros-gpt4-1.4.1 --- # NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (GPTQ) LoRA Weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-LoRA fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16 Analogue with RoPE Position Interpolation (PI) technique: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ ## Overview This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model with GPTQ Quantization) with several key modifications: - Context length extended to 16384 by NTK-Aware Scaled RoPE Embeddings, but NOT via the superHOT LoRA. I started with base Llama-33b. - Training sequences beyond 2048 have the target truncated to equal 2048. - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4 Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA) It was trained on 1x RTX 6000 Ada for ~43 hours. ## How to Use The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set `alpha_value` to 8. ## Motivation Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k), [meta AI)](https://arxiv.org/abs/2306.15595)), and [(reddit thread)](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context, at least for the linear position interpolation technique. The superHOT LoRA is an adapter that has been finetuned on longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it's impressive this adapter is so flexible, how much does performance suffer relative to a model that has been finetuned with the scaled embeddings from the start? How does the NTK aware approach perform after finetuning? This is an experiment to explore this. ## Relative Performance (perplexity) | Model | Context (tokens) | Perplexity | | ---------------------------------------------------- | ----------- | ---------- | | TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 512 | 8.24 | | bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 512 | 6.80 | | **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **512** | **6.23** | | ---------------------------------------------------- | ----------- | ---------- | | TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 2048 | 5.15 | | bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 2048 | 4.32 | | **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **2048** | **4.16** | | --------------------------------------------------- | ------------| ----------- | | TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | 3072 | 5.04 | | bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ | 3072 | 4.26 | | **bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ** | **3072** | **4.12** | - The NTK variant (this model) outperforms both SuperHOT and the PI variants in terms of perplexity at all context lengths evaluated here. - How does this reduction in perplexity translate into actual performance lift on downstream tasks? 
I haven't used models with the SuperHOT LoRA enough to have any sense of performance differences, but feedback on the PI variant suggests it is particularly noticable at longer context lengths - This comparison isn't perfect. I did use the 1.4.1 dataset, the quantization method is slightly different. ## Quantization: The merged model was quantized with AutoGPTQ (bits = 4, group_size = -1, desc_act = True). ## Prompting: See original model card below. # Original model card: Jon Durbin's Airoboros 33B GPT4 1.4 __not yet tested!__ ## Overview This is a qlora fine-tune 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-33b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. 
Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? 
A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMa, which has it's own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai So, to reiterate: this model (and datasets) cannot be used commercially.
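The card above extends context with NTK-aware scaled RoPE and recommends `alpha_value = 8` in ExLlama. As a rough illustration of what that setting does, the sketch below uses the commonly cited NTK-aware base adjustment (enlarging the rotary base by alpha^(d/(d-2)) instead of compressing positions); it is a conceptual example, not the exact code used for this fine-tune.

```python
# Sketch of NTK-aware RoPE scaling as commonly implemented (not the exact code used here):
# the rotary base is enlarged so low frequencies stretch to the longer context while high
# frequencies stay close to the original.
import torch

def ntk_scaled_inv_freq(dim: int = 128, base: float = 10000.0, alpha: float = 8.0):
    """Return rotary inverse frequencies with the NTK-aware base adjustment."""
    scaled_base = base * alpha ** (dim / (dim - 2))  # widely used NTK-aware formula
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2).float() / dim))

inv_freq = ntk_scaled_inv_freq(dim=128, alpha=8.0)   # alpha_value=8 per the card above
positions = torch.arange(16384).float()              # extended 16k context
angles = torch.outer(positions, inv_freq)            # (seq_len, dim/2) rotation angles
cos, sin = angles.cos(), angles.sin()                # tables consumed by attention
print(cos.shape, sin.shape)
```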
OubeidAllahjb/u
OubeidAllahjb
2023-07-07T01:48:54Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-07-07T01:45:39Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YakovElm/Jira_10_BERT_More_Properties
YakovElm
2023-07-07T01:44:59Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T01:44:24Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira_10_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira_10_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3867 - Train Accuracy: 0.8248 - Validation Loss: 0.4420 - Validation Accuracy: 0.7760 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5406 | 0.7786 | 0.8600 | 0.4921 | 0 | | 0.4617 | 0.7828 | 0.6068 | 0.4921 | 1 | | 0.3867 | 0.8248 | 0.4420 | 0.7760 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
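Since the card above documents training but not inference, here is a minimal sketch of running this TensorFlow BERT classifier; the label semantics are not documented, so the printed class index is left uninterpreted.

```python
# Minimal inference sketch for this TensorFlow BERT classifier (label meanings are an unknown).
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

model_id = "YakovElm/Jira_10_BERT_More_Properties"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example Jira issue text", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1)
print(int(tf.argmax(probs, axis=-1)[0]), probs.numpy())
```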
manosp/textual_inversion_moto_toy
manosp
2023-07-07T01:37:44Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T22:25:35Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - manosp/textual_inversion_moto_toy These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
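The card above does not show how to use the learned embedding, so here is a hedged sketch: load the base Stable Diffusion 1.5 pipeline and attach the textual-inversion weights with `load_textual_inversion`. The placeholder token `<moto-toy>` is only a guess for illustration; the real token name lives in the repo's learned embeddings file.

```python
# Sketch: use these textual-inversion weights with the base Stable Diffusion 1.5 pipeline.
# "<moto-toy>" is a guessed placeholder token -- check the repo for the actual token name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("manosp/textual_inversion_moto_toy")

image = pipe("a photo of <moto-toy> on a wooden desk", num_inference_steps=30).images[0]
image.save("moto_toy.png")
```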
Aeala/Enterredaas-33b-QLoRA
Aeala
2023-07-07T01:34:06Z
0
4
null
[ "region:us" ]
null
2023-07-07T01:26:05Z
## LoRA Info: Please note that this is a highly experimental LoRA model. It may do some good stuff, it might do some undesirable stuff. Training is complete. Feel free to try it!~ **Important Note**: This was trained in the *Alpaca* format, so prompting should be something like: ``` ### Instruction: <system prompt> (without the <>, this works like telling the AI what it is/purpose. i.e. like ChatGPT API's system prompt) ### Input: <prompt> (without the <>) ### Response: ``` Current upload: *possibly* final checkpoint ## Benchmarks **wikitext2:** Coming soon... **ptb-new:** Coming soon... **c4-new:** Coming soon...
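Because the card above stresses the Alpaca format, a tiny helper that assembles that exact prompt layout may be useful; it is purely illustrative and introduces no names beyond the format shown in the card.

```python
# Tiny helper that assembles the Alpaca-style prompt described in the card above.
def build_alpaca_prompt(instruction: str, user_input: str) -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("You are a helpful assistant.", "Summarize the plot of Hamlet."))
```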
Yangdf/moss_lora_guanaco_belle-merge_r-32_lr_1e-4_1w-steps
Yangdf
2023-07-07T01:32:01Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-07T01:30:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
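The card above lists a bitsandbytes 4-bit configuration but no loading code. A minimal sketch that recreates that config and attaches the adapter follows; the base model is not named in the card, so `BASE_MODEL_ID` is a placeholder you must fill in yourself.

```python
# Sketch: recreate the 4-bit quantization config listed above and attach the PEFT adapter.
# BASE_MODEL_ID is a placeholder -- the base checkpoint is not stated in the card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

BASE_MODEL_ID = "..."  # placeholder: base model the LoRA was trained on
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "Yangdf/moss_lora_guanaco_belle-merge_r-32_lr_1e-4_1w-steps"
)
```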
YakovElm/Jira_5_BERT_More_Properties
YakovElm
2023-07-07T01:29:08Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T01:28:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira_5_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira_5_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3855 - Train Accuracy: 0.8311 - Validation Loss: 0.4479 - Validation Accuracy: 0.8360 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5306 | 0.7681 | 0.6994 | 0.4858 | 0 | | 0.4213 | 0.7796 | 0.6106 | 0.6625 | 1 | | 0.3855 | 0.8311 | 0.4479 | 0.8360 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
ayresflesch/ppo-Huggy
ayresflesch
2023-07-07T01:23:40Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-07T01:23:34Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ayresflesch/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
owanr/r1_iterater
owanr
2023-07-07T01:18:02Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "src", "tgt", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T01:06:29Z
--- language: - src - tgt license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: output_r1_iter_wo_p results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_r1_iter_wo_p This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1334 - Bleu: 0.0 - Gen Len: 2.432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:| | No log | 1.0 | 27 | 0.2728 | 0.0 | 2.9953 | | No log | 2.0 | 54 | 0.2650 | 0.0 | 2.6791 | | No log | 3.0 | 81 | 0.2637 | 0.0 | 2.1874 | | No log | 4.0 | 108 | 0.2418 | 0.0 | 2.2973 | | No log | 5.0 | 135 | 0.2738 | 0.0 | 2.2494 | | No log | 6.0 | 162 | 0.1914 | 0.0 | 2.3812 | | No log | 7.0 | 189 | 0.1641 | 0.0 | 2.3983 | | No log | 8.0 | 216 | 0.1695 | 0.0 | 2.3995 | | No log | 9.0 | 243 | 0.1521 | 0.0 | 2.4167 | | No log | 10.0 | 270 | 0.1569 | 0.0 | 2.4167 | | No log | 11.0 | 297 | 0.1615 | 0.0 | 2.4137 | | No log | 12.0 | 324 | 0.1473 | 0.0 | 2.4238 | | No log | 13.0 | 351 | 0.1376 | 0.0 | 2.4255 | | No log | 14.0 | 378 | 0.1495 | 0.0 | 2.419 | | No log | 15.0 | 405 | 0.1334 | 0.0 | 2.432 | | No log | 16.0 | 432 | 0.1474 | 0.0 | 2.4214 | | No log | 17.0 | 459 | 0.1484 | 0.0 | 2.4291 | | No log | 18.0 | 486 | 0.1407 | 0.0 | 2.4297 | | 0.1905 | 19.0 | 513 | 0.1568 | 0.0 | 2.4208 | | 0.1905 | 20.0 | 540 | 0.1631 | 0.0 | 2.4261 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
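The card above reports metrics but no usage. A minimal sketch of running this t5-large fine-tune follows; the expected input format is not documented (the BLEU of 0.0 and very short generations suggest short label-like outputs), so the input string is only a placeholder.

```python
# Minimal sketch for running this t5-large fine-tune; the input format is undocumented,
# so the string below is only a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "owanr/r1_iterater"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("placeholder input sentence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```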
Karum/Baji_keisuke
Karum
2023-07-07T01:13:30Z
0
0
nemo
[ "nemo", "audio-to-audio", "es", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:mozilla-foundation/common_voice_11_0", "license:creativeml-openrail-m", "region:us" ]
audio-to-audio
2023-07-07T01:11:06Z
--- license: creativeml-openrail-m datasets: - fka/awesome-chatgpt-prompts - mozilla-foundation/common_voice_11_0 language: - es - en library_name: nemo pipeline_tag: audio-to-audio ---
AustinCarthy/Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5
AustinCarthy
2023-07-07T01:11:37Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-06T21:45:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_benign_95K_top_p_0.75suffix dataset. It achieves the following results on the evaluation set: - Loss: 0.0623 - Accuracy: 0.9881 - F1: 0.8818 - Precision: 0.8347 - Recall: 0.9344 - Roc Auc Score: 0.9626 - Tpr At Fpr 0.01: 0.7796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:| | 0.0915 | 1.0 | 35625 | 0.0478 | 0.9880 | 0.8761 | 0.8605 | 0.8922 | 0.9425 | 0.7968 | | 0.0797 | 2.0 | 71250 | 0.0386 | 0.9897 | 0.8959 | 0.8654 | 0.9286 | 0.9607 | 0.8378 | | 0.0622 | 3.0 | 106875 | 0.0459 | 0.9876 | 0.8760 | 0.8335 | 0.923 | 0.9569 | 0.77 | | 0.0398 | 4.0 | 142500 | 0.0544 | 0.9882 | 0.8828 | 0.8370 | 0.9338 | 0.9624 | 0.8568 | | 0.0298 | 5.0 | 178125 | 0.0623 | 0.9881 | 0.8818 | 0.8347 | 0.9344 | 0.9626 | 0.7796 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
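The card above reports a "Tpr At Fpr 0.01" metric. As an illustration of how such a number can be computed from predicted scores (this is not the authors' evaluation code), a small scikit-learn sketch with toy data:

```python
# Illustration of a "TPR at FPR 0.01" metric computed from predicted phishing probabilities.
# Toy data only; not the authors' evaluation code.
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    valid = np.where(fpr <= target_fpr)[0]        # operating points within the FPR budget
    return float(tpr[valid[-1]]) if len(valid) else 0.0

y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])       # toy labels (1 = phishing)
y_score = np.array([0.1, 0.4, 0.02, 0.9, 0.8, 0.3, 0.05, 0.95])
print(tpr_at_fpr(y_true, y_score))
```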
bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-LoRA
bhenrym14
2023-07-07T01:00:27Z
0
1
null
[ "dataset:jondurbin/airoboros-gpt4-1.4.1", "region:us" ]
null
2023-07-06T14:40:53Z
--- datasets: - jondurbin/airoboros-gpt4-1.4.1 --- # RoPE Scaled QLoRA Fine-tune of Llama-13b on airoboros-gpt4-1.4.1 (LoRA) Full model card with merged GPTQ 4bit quantized weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ fp16 merged weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16 ## Overview This is [Jon Durbin's Airoboros 13B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) (LoRA weights) with several key modifications: - Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA. I started with base Llama-13b. - Training sequences beyond 2048 have the target truncated to equal 2048. - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4 - **This is a QLoRA fine-tune**. The original 13b model is a full fine-tune. It was trained on 1x RTX 6000 Ada for ~17 hours.
YakovElm/IntelDAOS_15_BERT_More_Properties
YakovElm
2023-07-07T00:57:23Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T00:56:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS_15_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS_15_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2132 - Train Accuracy: 0.9460 - Validation Loss: 0.4208 - Validation Accuracy: 0.8859 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2853 | 0.9200 | 0.3557 | 0.8859 | 0 | | 0.2168 | 0.9460 | 0.3869 | 0.8859 | 1 | | 0.2132 | 0.9460 | 0.4208 | 0.8859 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ
bhenrym14
2023-07-07T00:56:05Z
7
4
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "arxiv:2306.15595", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T13:11:14Z
--- datasets: - jondurbin/airoboros-gpt4-1.4.1 --- # RoPE Scaled QLoRA Fine-tune of Llama-13b on airoboros-gpt4-1.4.1 (GPTQ) LoRA Weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-LoRA fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16 ## Overview This is [Jon Durbin's Airoboros 13B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) (merged model with GPTQ Quantization) with several key modifications: - Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA. I started with base Llama-13b. - Training sequences beyond 2048 have the target truncated to equal 2048. - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4 - **This is a QLoRA fine-tune**. The original 13b model is a full fine-tune. It was trained on 1x RTX 6000 Ada for ~17 hours. ## How to Use The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set max_seq_len to 8192 and compress_pos_emb to 4. If you wish to use AutoGPTQ/GPTQ-for-Llama instead, you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch.py) ## Motivation Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [meta AI)](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context. The superHOT LoRA is an adapter that has been fine-tuned on longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it's impressive this adapter is so flexible, how much does performance suffer relative to a model that has been fine-tuned with the scaled embeddings from the start? This is an experiment to explore this. ## Relative Performance (perplexity) | Model | Context (tokens) | Perplexity | | ---------------------------------------------------- | ----------- | ---------- | | TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 512 | **7.42** | | TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 512 | 8.86 | | **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 512 | 7.94 | | ---------------------------------------------------- | ----------- | ---------- | | TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 2048 | **5.02** | | TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 2048 | 5.98 | | **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 2048 | 5.28 | | ---------------------------------------------------- | ----------- | ---------- | | TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 4096 | 9848.0 | | TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 4096 | 5.80 | | **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 4096 | **5.15** | - For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point. - In terms of perplexity, this model outperforms the SuperHOT variant at all tested context lengths. 
I haven't used models with the SuperHOT LoRA enough to have any sense of performance differences, but feedback on the 33b variant suggests it is particularly noticable at longer context lengths. - This comparison isn't perfect. I did use the 1.4.1 dataset, the quantization method is slightly different, and the finetuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences. This model could be a little undertrained. I'll update the weights if I end up training it longer and/or with better hyperparameters ## Quantization: The merged model was quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True). ## Prompting: See original model card below. # Original model card: Jon Durbin's Airoboros 13B GPT4 1.4 ## Overview This is a __full__ (not qlora) fine-tune 13b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-13b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." 
to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? 
A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMa, which has it's own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai So, to reiterate: this model (and datasets) cannot be used commercially.
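The PI-8192 card above points to a RoPE-scaling monkey patch and recommends `compress_pos_emb = 4` in ExLlama. The sketch below is a conceptual illustration of linear position interpolation (compressing positions by 2048/8192 = 0.25), not the exact patch linked in the card.

```python
# Conceptual sketch of linear RoPE position interpolation (the "PI" in this model's name):
# positions are compressed by original_ctx / extended_ctx so the extended window maps back
# into the range the base model was trained on. Mirrors compress_pos_emb=4, not the exact patch.
import torch

def interpolated_rope_angles(seq_len=8192, dim=128, base=10000.0,
                             original_ctx=2048, extended_ctx=8192):
    scale = original_ctx / extended_ctx                   # 0.25 for 2048 -> 8192
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale     # compressed positions
    return torch.outer(positions, inv_freq)               # (seq_len, dim/2) rotation angles

angles = interpolated_rope_angles()
cos, sin = angles.cos(), angles.sin()
print(cos.shape)
```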
jjhonny/Reinforce-PixelCopter
jjhonny
2023-07-07T00:41:43Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-02T00:06:48Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 20.50 +/- 13.12 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
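For readers unfamiliar with the algorithm behind agents like the one above, here is a hedged sketch of the core REINFORCE update (discounted returns followed by a negative log-probability-weighted loss). It is an illustration of the technique, not the author's training script.

```python
# Sketch of the core REINFORCE update: compute discounted returns, then a loss that raises
# log-probabilities of actions in proportion to those returns. Illustration only.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t|s_t) tensors; rewards: list of floats for one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):                 # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)   # variance reduction
    return -(torch.stack(log_probs) * returns).sum()

# toy usage with fake data
fake_log_probs = [torch.tensor(-0.5, requires_grad=True) for _ in range(5)]
loss = reinforce_loss(fake_log_probs, [1.0, 0.0, 1.0, 1.0, 0.0])
loss.backward()
print(loss.item())
```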
YakovElm/Hyperledger_20_BERT_More_Properties
YakovElm
2023-07-07T00:08:37Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T00:08:02Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger_20_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger_20_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2814 - Train Accuracy: 0.9149 - Validation Loss: 0.3208 - Validation Accuracy: 0.8983 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3024 | 0.9070 | 0.3293 | 0.8983 | 0 | | 0.2946 | 0.9149 | 0.3301 | 0.8983 | 1 | | 0.2814 | 0.9149 | 0.3208 | 0.8983 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
happyduck/alcafa_5.8b_10000
happyduck
2023-07-07T00:03:03Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-07T00:02:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
raghav-gaggar/stable-diffusion-thumbs-up
raghav-gaggar
2023-07-06T23:57:55Z
6
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T02:25:26Z
--- tags: - text-to-image - stable-diffusion --- Stable Diffusion model, fine-tuned for generating images of people with their thumbs up. How to use it: ```py from diffusers import StableDiffusionPipeline import torch from torchmetrics.functional.multimodal import clip_score from functools import partial model_ckpt = "raghav-gaggar/stable-diffusion-thumbs-up" sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") prompts = [ "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", "thumbs up", ] images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="numpy").images print(images.shape) clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") def calculate_clip_score(images, prompts): images_int = (images * 255).astype("uint8") clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() return round(float(clip_score), 4) sd_clip_score = calculate_clip_score(images, prompts) print(f"CLIP score: {sd_clip_score}") ``` Sample pictures of this concept: ![0](https://huggingface.co/raghav-gaggar/stable-diffusion-thumbs-up/resolve/main/sample_images/00002-1618841423.png) ![1](https://huggingface.co/raghav-gaggar/stable-diffusion-thumbs-up/resolve/main/sample_images/00004-877035622.png) ![2](https://huggingface.co/raghav-gaggar/stable-diffusion-thumbs-up/resolve/main/sample_images/00001-72164288.png) ![3](https://huggingface.co/raghav-gaggar/stable-diffusion-thumbs-up/resolve/main/sample_images/00003-1610016206.png)
garrettbaber/autotrain-twitter-gpt-fear-intensity-72781139029
garrettbaber
2023-07-06T23:54:17Z
119
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "autotrain", "text-regression", "unk", "dataset:garrettbaber/autotrain-data-twitter-gpt-fear-intensity", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T23:53:39Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain" datasets: - garrettbaber/autotrain-data-twitter-gpt-fear-intensity co2_eq_emissions: emissions: 0.03442887920623126 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 72781139029 - CO2 Emissions (in grams): 0.0344 ## Validation Metrics - Loss: 0.035 - MSE: 0.035 - MAE: 0.154 - R2: 0.108 - RMSE: 0.188 - Explained Variance: 0.109 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/garrettbaber/autotrain-twitter-gpt-fear-intensity-72781139029 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/autotrain-twitter-gpt-fear-intensity-72781139029", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("garrettbaber/autotrain-twitter-gpt-fear-intensity-72781139029", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
jordyvl/vit-base_tobacco
jordyvl
2023-07-06T23:51:02Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T13:12:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base_tobacco results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_tobacco This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7442 - Accuracy: 0.815 - Brier Loss: 0.3076 - Nll: 1.1877 - F1 Micro: 0.815 - F1 Macro: 0.7942 - Ece: 0.2072 - Aurc: 0.0734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 0.96 | 6 | 2.3082 | 0.085 | 0.9012 | 6.2672 | 0.085 | 0.0735 | 0.1625 | 0.9316 | | No log | 1.96 | 12 | 2.2872 | 0.14 | 0.8970 | 4.8533 | 0.14 | 0.0885 | 0.1958 | 0.8912 | | No log | 2.96 | 18 | 2.2562 | 0.225 | 0.8906 | 4.5559 | 0.225 | 0.1319 | 0.2527 | 0.8101 | | No log | 3.96 | 24 | 2.2107 | 0.265 | 0.8808 | 4.3151 | 0.265 | 0.1614 | 0.2710 | 0.6990 | | No log | 4.96 | 30 | 2.1433 | 0.3 | 0.8654 | 4.1825 | 0.3 | 0.1615 | 0.2943 | 0.6102 | | No log | 5.96 | 36 | 2.0764 | 0.325 | 0.8493 | 3.6715 | 0.325 | 0.1696 | 0.3160 | 0.4502 | | No log | 6.96 | 42 | 2.0012 | 0.375 | 0.8287 | 3.5534 | 0.375 | 0.1901 | 0.3542 | 0.3791 | | No log | 7.96 | 48 | 1.9197 | 0.41 | 0.8041 | 3.3582 | 0.41 | 0.2136 | 0.3528 | 0.3342 | | No log | 8.96 | 54 | 1.8379 | 0.45 | 0.7767 | 3.1997 | 0.45 | 0.2279 | 0.3709 | 0.2872 | | No log | 9.96 | 60 | 1.7538 | 0.535 | 0.7475 | 2.9586 | 0.535 | 0.3755 | 0.4024 | 0.2508 | | No log | 10.96 | 66 | 1.6634 | 0.57 | 0.7132 | 2.6969 | 0.57 | 0.4025 | 0.4182 | 0.2183 | | No log | 11.96 | 72 | 1.5952 | 0.61 | 0.6842 | 2.4519 | 0.61 | 0.4427 | 0.4153 | 0.1882 | | No log | 12.96 | 78 | 1.5205 | 0.655 | 0.6554 | 1.9703 | 0.655 | 0.5306 | 0.4572 | 0.1651 | | No log | 13.96 | 84 | 1.4566 | 0.67 | 0.6308 | 1.7832 | 0.67 | 0.5458 | 0.4240 | 0.1514 | | No log | 14.96 | 90 | 1.4009 | 0.685 | 0.6074 | 1.8217 | 0.685 | 0.5641 | 0.4221 | 0.1406 | | No log | 15.96 | 96 | 1.3520 | 0.7 | 0.5866 | 1.6223 | 0.7 | 0.5896 | 0.4107 | 0.1304 | | No log | 16.96 | 102 | 1.3220 | 0.7 | 0.5741 | 1.4452 | 0.7 | 0.5865 | 0.4029 | 0.1225 | | No log | 17.96 | 108 | 1.2764 | 0.705 | 0.5522 | 1.4534 | 0.705 | 0.6076 | 0.3805 | 0.1269 | | No log | 18.96 | 114 | 1.2448 | 0.72 | 0.5378 | 1.4843 | 0.72 | 0.6321 | 0.3724 | 0.1193 | | No log | 19.96 | 120 | 1.2049 | 0.74 | 0.5210 | 1.2527 | 0.74 | 0.6471 | 0.3947 | 0.1039 | | No log | 20.96 | 126 | 1.1712 | 0.74 | 0.5057 | 1.1657 | 0.74 | 0.6464 | 0.3833 | 0.0955 | | No log | 21.96 | 132 | 1.1453 | 0.735 | 0.4936 | 1.0277 | 0.735 | 0.6597 | 0.3628 | 0.1015 | | No log | 22.96 | 138 | 1.1094 | 
0.745 | 0.4771 | 1.0003 | 0.745 | 0.6667 | 0.3841 | 0.0938 | | No log | 23.96 | 144 | 1.0803 | 0.75 | 0.4628 | 1.0334 | 0.75 | 0.6972 | 0.3490 | 0.0891 | | No log | 24.96 | 150 | 1.0658 | 0.755 | 0.4559 | 1.0092 | 0.755 | 0.6937 | 0.3536 | 0.0925 | | No log | 25.96 | 156 | 1.0345 | 0.765 | 0.4423 | 0.9971 | 0.765 | 0.7356 | 0.3661 | 0.0852 | | No log | 26.96 | 162 | 1.0133 | 0.76 | 0.4323 | 0.9206 | 0.76 | 0.7302 | 0.3343 | 0.0791 | | No log | 27.96 | 168 | 0.9927 | 0.775 | 0.4225 | 0.9015 | 0.775 | 0.7433 | 0.3457 | 0.0794 | | No log | 28.96 | 174 | 0.9789 | 0.765 | 0.4152 | 0.8946 | 0.765 | 0.7282 | 0.3337 | 0.0818 | | No log | 29.96 | 180 | 0.9509 | 0.78 | 0.4025 | 0.9323 | 0.78 | 0.7565 | 0.3135 | 0.0733 | | No log | 30.96 | 186 | 0.9388 | 0.79 | 0.3968 | 0.8616 | 0.79 | 0.7642 | 0.3353 | 0.0694 | | No log | 31.96 | 192 | 0.9316 | 0.78 | 0.3927 | 0.8636 | 0.78 | 0.7588 | 0.3426 | 0.0739 | | No log | 32.96 | 198 | 0.9197 | 0.79 | 0.3876 | 0.8581 | 0.79 | 0.7656 | 0.3042 | 0.0800 | | No log | 33.96 | 204 | 0.9020 | 0.775 | 0.3792 | 0.8458 | 0.775 | 0.7543 | 0.2872 | 0.0744 | | No log | 34.96 | 210 | 0.8833 | 0.785 | 0.3694 | 0.8288 | 0.785 | 0.7619 | 0.3305 | 0.0663 | | No log | 35.96 | 216 | 0.8684 | 0.795 | 0.3624 | 0.8462 | 0.795 | 0.7779 | 0.3184 | 0.0690 | | No log | 36.96 | 222 | 0.8608 | 0.79 | 0.3584 | 0.8860 | 0.79 | 0.7707 | 0.2790 | 0.0709 | | No log | 37.96 | 228 | 0.8586 | 0.79 | 0.3587 | 0.8954 | 0.79 | 0.7724 | 0.3153 | 0.0754 | | No log | 38.96 | 234 | 0.8470 | 0.79 | 0.3515 | 0.8822 | 0.79 | 0.7684 | 0.3075 | 0.0726 | | No log | 39.96 | 240 | 0.8288 | 0.79 | 0.3434 | 0.8192 | 0.79 | 0.7700 | 0.2700 | 0.0648 | | No log | 40.96 | 246 | 0.8255 | 0.8 | 0.3426 | 0.8191 | 0.8000 | 0.7808 | 0.2760 | 0.0727 | | No log | 41.96 | 252 | 0.8247 | 0.8 | 0.3411 | 0.8876 | 0.8000 | 0.7737 | 0.2903 | 0.0701 | | No log | 42.96 | 258 | 0.8196 | 0.8 | 0.3389 | 0.8841 | 0.8000 | 0.7786 | 0.2768 | 0.0727 | | No log | 43.96 | 264 | 0.8118 | 0.805 | 0.3351 | 0.9510 | 0.805 | 0.7806 | 0.2620 | 0.0685 | | No log | 44.96 | 270 | 0.8127 | 0.795 | 0.3352 | 1.0119 | 0.795 | 0.7705 | 0.2650 | 0.0707 | | No log | 45.96 | 276 | 0.7968 | 0.8 | 0.3285 | 1.0041 | 0.8000 | 0.7788 | 0.2734 | 0.0665 | | No log | 46.96 | 282 | 0.7946 | 0.81 | 0.3274 | 1.0647 | 0.81 | 0.7921 | 0.2765 | 0.0703 | | No log | 47.96 | 288 | 0.7996 | 0.805 | 0.3298 | 1.0108 | 0.805 | 0.7867 | 0.2772 | 0.0714 | | No log | 48.96 | 294 | 0.7971 | 0.805 | 0.3283 | 1.0728 | 0.805 | 0.7816 | 0.2756 | 0.0732 | | No log | 49.96 | 300 | 0.7950 | 0.8 | 0.3278 | 1.0694 | 0.8000 | 0.7758 | 0.2540 | 0.0750 | | No log | 50.96 | 306 | 0.7826 | 0.8 | 0.3222 | 1.0211 | 0.8000 | 0.7784 | 0.2596 | 0.0643 | | No log | 51.96 | 312 | 0.7933 | 0.795 | 0.3273 | 1.0680 | 0.795 | 0.7712 | 0.2619 | 0.0764 | | No log | 52.96 | 318 | 0.7883 | 0.805 | 0.3247 | 1.0730 | 0.805 | 0.7834 | 0.2426 | 0.0712 | | No log | 53.96 | 324 | 0.7811 | 0.815 | 0.3219 | 1.0623 | 0.815 | 0.7913 | 0.2259 | 0.0716 | | No log | 54.96 | 330 | 0.7784 | 0.815 | 0.3203 | 1.0657 | 0.815 | 0.7917 | 0.2797 | 0.0690 | | No log | 55.96 | 336 | 0.7827 | 0.81 | 0.3219 | 1.0770 | 0.81 | 0.7885 | 0.2491 | 0.0752 | | No log | 56.96 | 342 | 0.7701 | 0.815 | 0.3166 | 1.0614 | 0.815 | 0.7913 | 0.2664 | 0.0689 | | No log | 57.96 | 348 | 0.7748 | 0.815 | 0.3187 | 1.0699 | 0.815 | 0.7913 | 0.2487 | 0.0722 | | No log | 58.96 | 354 | 0.7669 | 0.815 | 0.3155 | 1.0607 | 0.815 | 0.7919 | 0.2482 | 0.0685 | | No log | 59.96 | 360 | 0.7721 | 0.81 | 0.3180 | 1.0746 | 0.81 | 0.7859 | 0.2385 | 0.0730 | | No log | 
60.96 | 366 | 0.7645 | 0.815 | 0.3145 | 1.0650 | 0.815 | 0.7913 | 0.2468 | 0.0688 | | No log | 61.96 | 372 | 0.7672 | 0.815 | 0.3157 | 1.0782 | 0.815 | 0.7913 | 0.2228 | 0.0728 | | No log | 62.96 | 378 | 0.7625 | 0.82 | 0.3139 | 1.0673 | 0.82 | 0.8025 | 0.2323 | 0.0688 | | No log | 63.96 | 384 | 0.7627 | 0.81 | 0.3144 | 1.1893 | 0.81 | 0.7892 | 0.2236 | 0.0710 | | No log | 64.96 | 390 | 0.7629 | 0.815 | 0.3141 | 1.1934 | 0.815 | 0.7972 | 0.2277 | 0.0707 | | No log | 65.96 | 396 | 0.7569 | 0.81 | 0.3118 | 1.1003 | 0.81 | 0.7866 | 0.2577 | 0.0696 | | No log | 66.96 | 402 | 0.7619 | 0.815 | 0.3136 | 1.1365 | 0.815 | 0.7919 | 0.2562 | 0.0732 | | No log | 67.96 | 408 | 0.7565 | 0.815 | 0.3114 | 1.1325 | 0.815 | 0.7919 | 0.2467 | 0.0694 | | No log | 68.96 | 414 | 0.7558 | 0.815 | 0.3117 | 1.1895 | 0.815 | 0.7972 | 0.2453 | 0.0705 | | No log | 69.96 | 420 | 0.7550 | 0.815 | 0.3111 | 1.1924 | 0.815 | 0.7972 | 0.2107 | 0.0709 | | No log | 70.96 | 426 | 0.7573 | 0.805 | 0.3123 | 1.1886 | 0.805 | 0.7795 | 0.2476 | 0.0737 | | No log | 71.96 | 432 | 0.7521 | 0.81 | 0.3099 | 1.1911 | 0.81 | 0.7866 | 0.2117 | 0.0698 | | No log | 72.96 | 438 | 0.7542 | 0.81 | 0.3112 | 1.1878 | 0.81 | 0.7827 | 0.2332 | 0.0726 | | No log | 73.96 | 444 | 0.7509 | 0.815 | 0.3096 | 1.1880 | 0.815 | 0.7899 | 0.2364 | 0.0709 | | No log | 74.96 | 450 | 0.7526 | 0.81 | 0.3105 | 1.1889 | 0.81 | 0.7827 | 0.2453 | 0.0724 | | No log | 75.96 | 456 | 0.7488 | 0.81 | 0.3090 | 1.1869 | 0.81 | 0.7827 | 0.2285 | 0.0699 | | No log | 76.96 | 462 | 0.7506 | 0.815 | 0.3097 | 1.1901 | 0.815 | 0.7934 | 0.2547 | 0.0721 | | No log | 77.96 | 468 | 0.7505 | 0.81 | 0.3098 | 1.1876 | 0.81 | 0.7827 | 0.2110 | 0.0724 | | No log | 78.96 | 474 | 0.7487 | 0.815 | 0.3089 | 1.1885 | 0.815 | 0.7934 | 0.2319 | 0.0715 | | No log | 79.96 | 480 | 0.7472 | 0.81 | 0.3083 | 1.1877 | 0.81 | 0.7827 | 0.2310 | 0.0714 | | No log | 80.96 | 486 | 0.7494 | 0.81 | 0.3094 | 1.1877 | 0.81 | 0.7827 | 0.2462 | 0.0738 | | No log | 81.96 | 492 | 0.7466 | 0.815 | 0.3082 | 1.1888 | 0.815 | 0.7922 | 0.2181 | 0.0709 | | No log | 82.96 | 498 | 0.7467 | 0.81 | 0.3083 | 1.1874 | 0.81 | 0.7827 | 0.2454 | 0.0714 | | 0.7129 | 83.96 | 504 | 0.7479 | 0.815 | 0.3088 | 1.1888 | 0.815 | 0.7922 | 0.2272 | 0.0741 | | 0.7129 | 84.96 | 510 | 0.7456 | 0.81 | 0.3080 | 1.1853 | 0.81 | 0.7847 | 0.2358 | 0.0719 | | 0.7129 | 85.96 | 516 | 0.7465 | 0.815 | 0.3082 | 1.1908 | 0.815 | 0.7922 | 0.2322 | 0.0721 | | 0.7129 | 86.96 | 522 | 0.7454 | 0.805 | 0.3081 | 1.1848 | 0.805 | 0.7819 | 0.2262 | 0.0719 | | 0.7129 | 87.96 | 528 | 0.7471 | 0.815 | 0.3086 | 1.1894 | 0.815 | 0.7922 | 0.2351 | 0.0741 | | 0.7129 | 88.96 | 534 | 0.7459 | 0.815 | 0.3082 | 1.1885 | 0.815 | 0.7922 | 0.2159 | 0.0726 | | 0.7129 | 89.96 | 540 | 0.7435 | 0.815 | 0.3072 | 1.1861 | 0.815 | 0.7922 | 0.2291 | 0.0712 | | 0.7129 | 90.96 | 546 | 0.7454 | 0.81 | 0.3080 | 1.1876 | 0.81 | 0.7847 | 0.2180 | 0.0733 | | 0.7129 | 91.96 | 552 | 0.7461 | 0.815 | 0.3083 | 1.1883 | 0.815 | 0.7942 | 0.2308 | 0.0743 | | 0.7129 | 92.96 | 558 | 0.7451 | 0.815 | 0.3079 | 1.1883 | 0.815 | 0.7922 | 0.2330 | 0.0734 | | 0.7129 | 93.96 | 564 | 0.7434 | 0.815 | 0.3073 | 1.1863 | 0.815 | 0.7942 | 0.2217 | 0.0720 | | 0.7129 | 94.96 | 570 | 0.7446 | 0.815 | 0.3077 | 1.1882 | 0.815 | 0.7942 | 0.2400 | 0.0731 | | 0.7129 | 95.96 | 576 | 0.7450 | 0.815 | 0.3079 | 1.1882 | 0.815 | 0.7942 | 0.2144 | 0.0735 | | 0.7129 | 96.96 | 582 | 0.7440 | 0.815 | 0.3075 | 1.1871 | 0.815 | 0.7942 | 0.2348 | 0.0731 | | 0.7129 | 97.96 | 588 | 0.7441 | 0.815 | 0.3076 | 1.1876 | 0.815 | 0.7942 | 
0.2225 | 0.0732 | | 0.7129 | 98.96 | 594 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 | | 0.7129 | 99.96 | 600 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
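For convenience, a minimal inference sketch (not from the original card), assuming the image processor was saved alongside the checkpoint; the image path is a placeholder:

```python
from transformers import pipeline

# Fine-tuned ViT checkpoint for (Tobacco-style) document image classification
classifier = pipeline("image-classification", model="jordyvl/vit-base_tobacco")

# "document.png" is a placeholder -- point it at your own page image
preds = classifier("document.png")
print(preds)  # list of {label, score} dicts, highest score first
```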
GeorgeDam/ppo-Huggy
GeorgeDam
2023-07-06T23:44:03Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T23:43:59Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: GeorgeDam/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Raizel123/Feliclora
Raizel123
2023-07-06T23:40:44Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T23:32:38Z
--- license: creativeml-openrail-m ---
zwpython/zw-chinese-vicuna-7B-v1.3
zwpython
2023-07-06T23:17:43Z
0
1
null
[ "region:us" ]
null
2023-07-06T23:12:58Z
World-first release: vicuna-7B-v1.3 with Chinese support; the base is the official vicuna-7B-v1.3 release. For more, see: https://github.com/ziwang-com/chinese-StableVicuna and the zw official WeChat account. To support the national AI strategy and keep domestic AI/GPT startup teams competitive from the starting line, the zw-vicuna series of zw Chinese-localized models is offered through a free download channel for the first time. Baidu Netdisk extraction code: hiks Link: https://pan.baidu.com/s/1EH19ablXVLYQP1f-IaPS-Q?pwd=hiks If the link changes, the latest download address can be found in the QQ group files: 655402626 (GPT+ QQ group, 1000+ members). The zw-vicuna Chinese-localized model files are in GGML format (CPU+GPU), run with llama.cpp, and work on Windows, Linux and macOS alike. For details, see: https://github.com/ggerganov/llama.cpp Prompt template: A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input USER: prompt ASSISTANT: For more details and technical parameters, see: Official original model: https://huggingface.co/lmsys/vicuna-7b-v1.3 GitHub project: https://github.com/ziwang-com/chinese-StableVicuna
garrettbaber/twitter-roberta-base-sadness-intensity
garrettbaber
2023-07-06T23:08:47Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "text-regression", "sadness", "emotion", "emotion intensity", "unk", "dataset:SemEval-2018-Task-1-Text-Regression-Task", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T23:04:50Z
--- tags: - text-regression - sadness - emotion - emotion intensity language: - unk widget: - text: I'm feeling down datasets: - SemEval-2018-Task-1-Text-Regression-Task co2_eq_emissions: emissions: 0.025884770512937715 --- # twitter-roberta-base-sadness-intensity This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 - Task 1 Affect in Tweets (subtask: EI-reg / text regression). Warning: The hosted Inference API produces inaccurate values. # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 72772139027 - CO2 Emissions (in grams): 0.0259 ## Validation Metrics - Loss: 0.011 - MSE: 0.011 - MAE: 0.079 - R2: 0.726 - RMSE: 0.103 - Explained Variance: 0.727 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I'\''m feeling down"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-sadness-intensity ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity") tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity") inputs = tokenizer("I'm feeling down", return_tensors="pt") outputs = model(**inputs) ```
andmusician/WizardLM-7B-GPTQ
andmusician
2023-07-06T23:01:47Z
77
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-04T11:47:08Z
--- license: apache-2.0 datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered inference: false --- # WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored' version of WizardLM](https://huggingface.co/ehartford/WizardLM-7B-Uncensored). It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to remove all the "I'm sorry.." type ChatGPT responses](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered). ## Other repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ) * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML) * [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) ## How to easily download and use this model in text-generation-webui Open the text-generation-webui UI as normal. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-7B-uncensored-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. Click the **Refresh** icon next to **Model** in the top left. 6. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-7B-uncensored-GPTQ`. 7. If you see an error in the bottom right, ignore it - it's temporary. 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` 9. Click **Save settings for this model** in the top right. 10. Click **Reload the Model** in the top right. 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! ## Provided files **Compatible file - WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors** In the `main` branch - the default one - you will find `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors` This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility. It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to act-order files, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui. * `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors` * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches * Works with text-generation-webui one-click-installers * Parameters: Groupsize = 128g. No act-order. * Command used to create the GPTQ: ``` python llama.py models/ehartford_WizardLM-7B-Uncensored c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/eric-gptq/WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors ``` # Eric's original model card This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out, including Rohan, TheBloke, and Caseus # WizardLM's original model card Overview of Evol-Instruct Evol-Instruct is a novel method using LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skills range, to improve the performance of LLMs. ![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_overall.png) ![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)
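Outside text-generation-webui, the 4-bit file described in this card can also be loaded from Python with AutoGPTQ. This is an untested sketch, not an official recipe; the repo id and file basename follow the card's own download instructions, and a `quantize_config.json` is assumed to be resolvable (otherwise pass a `BaseQuantizeConfig` with bits=4, group_size=128):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/WizardLM-7B-uncensored-GPTQ"  # repo named in the card's instructions
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```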
Meina/MeinaUnreal_V4
Meina
2023-07-06T23:01:40Z
31
3
diffusers
[ "diffusers", "safetensors", "art", "anime", "meina", "unreal", "semirealistic", "2.5d", "sexy", "fantasy", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T22:44:41Z
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - art - anime - meina - unreal - semirealistic - 2.5d - sexy - fantasy --- MeinaUnreal's objective is to produce anime art with a 2.5D feeling. (The VAE is already baked into the model.) For examples and prompts, please check out: https://civitai.com/models/18798/meinaunreal I have a discord server where you can post images that you generated, discuss prompts and/or ask for help. https://discord.gg/XC9nGZNDUd If you like one of my models and want to support their updates I've made a ko-fi page; https://ko-fi.com/meina where you can pay me a coffee <3 And a Patreon page; https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models! You may also try this model using Sinkin.ai: https://sinkin.ai/m/PREaKGN Recommendations of use: Enable Quantization in K samplers. Hires.fix is needed for prompts where the character is far away in order to make decent images, it drastically improves the quality of faces and eyes! Recommended parameters: Sampler: DPM++ 2M Karras: 20 to 40 steps. Sampler: DPM++ SDE Karras: 20 to 30 steps. CFG Scale: 7. Resolutions: 512x768, 512x1024 for Portrait! Resolutions: 768x512, 1024x512, 1536x512 for Landscape! Hires.fix: R-ESRGAN 4x+Anime6b, with 15 steps at 0.3 denoising. Clip Skip: 2. Negatives: ' (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers), '
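A minimal `diffusers` sketch that mirrors the recommended settings above (DPM++ 2M Karras, ~30 steps, CFG 7, portrait resolution). The prompt is only an illustration, not the author's reference workflow:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaUnreal_V4", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras equivalent in diffusers
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, fantasy knight, intricate armor, detailed face",  # example prompt
    negative_prompt="(worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers)",
    num_inference_steps=30,
    guidance_scale=7,
    width=512,
    height=768,
).images[0]
image.save("meinaunreal.png")
```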
yongsun-yoon/minilmv2-bertscore-distilled
yongsun-yoon
2023-07-06T23:00:56Z
140
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-06-30T22:04:37Z
This is a distilled BERTScore model. Please read [this post](https://medium.com/@yongsun.yoon/bertscore-knowledge-distillation-42721b3508e2) for details. ```python from bert_score import BERTScorer texts1 = ['This is a text.'] texts2 = ['This is another text.'] scorer = BERTScorer(model_type='yongsun-yoon/minilmv2-bertscore-distilled', num_layers=6) P, R, F = scorer.score(texts1, texts2) ```
TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML
TheBloke
2023-07-06T22:51:43Z
0
2
null
[ "license:other", "region:us" ]
null
2023-07-06T22:46:13Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Chaoyi Wu's PMC_LLAMA 7B 10 Epoch GGML These files are GGML format model files for [Chaoyi Wu's PMC_LLAMA 7B 10 Epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. 
#### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Chaoyi Wu's PMC_LLAMA 7B 10 Epoch This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset. Notably, different from `chaoyi-wu/PMC_LLAMA_7B`, this model is further trained for 10 epochs. The model was trained with the following hyperparameters: * Epochs: **10** * Batch size: 128 * Cutoff length: 512 * Learning rate: 2e-5 Each epoch we sample 512 tokens per paper for training. The model can be loaded as follows: ``` import transformers import torch tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch') model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch') sentence = 'Hello, doctor' batch = tokenizer( sentence, return_tensors="pt", add_special_tokens=False ) with torch.no_grad(): generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50) print('model predict: ',tokenizer.decode(generated[0])) ```
cactusfriend/nightmare-promptgen-XL
cactusfriend
2023-07-06T22:16:34Z
127
5
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neo", "text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-26T14:07:40Z
--- license: openrail pipeline_tag: text-generation library_name: transformers widget: - text: "a photograph of" example_title: "photo" - text: "a bizarre cg render" example_title: "render" - text: "the spaghetti" example_title: "meal?" - text: "a (detailed+ intricate)+ picture" example_title: "weights" - text: "photograph of various" example_title: "variety" inference: parameters: temperature: 2.6 max_new_tokens: 250 --- Experimental 'XL' version of [Nightmare InvokeAI Prompts](https://huggingface.co/cactusfriend/nightmare-invokeai-prompts). Very early version and may be deleted.
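A minimal generation sketch using the sampling settings from the widget parameters above (temperature 2.6, 250 new tokens):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="cactusfriend/nightmare-promptgen-XL")

outputs = generator(
    "a photograph of",
    do_sample=True,
    temperature=2.6,
    max_new_tokens=250,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"], "\n")
```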
Shularp/Helsinki_en-mul_test
Shularp
2023-07-06T22:13:45Z
23
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-04T07:48:06Z
--- tags: - generated_from_trainer model-index: - name: Helsinki_en-mul_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Helsinki_en-mul_test This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.4474 | 1.0 | 22199 | 1.6242 | | 1.8308 | 2.0 | 44398 | 1.4888 | | 1.3957 | 3.0 | 66597 | 1.4573 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
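A minimal inference sketch (not part of the autogenerated card). The `>>tha<<` target-language prefix is an assumption carried over from the Helsinki-NLP en-mul base models; swap it for your target language code, or drop it if this fine-tune no longer needs one:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Shularp/Helsinki_en-mul_test"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = ">>tha<< Hello, how are you today?"  # ">>tha<<" is an assumed language token
batch = tokenizer(text, return_tensors="pt")
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```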
sd-concepts-library/dongqichang
sd-concepts-library
2023-07-06T22:07:36Z
0
1
null
[ "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:mit", "region:us" ]
null
2023-07-06T22:07:35Z
--- license: mit base_model: stabilityai/stable-diffusion-2 --- ### dongqichang on Stable Diffusion This is the `<dqc>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<dqc> 0](https://huggingface.co/sd-concepts-library/dongqichang/resolve/main/concept_images/0.jpeg) ![<dqc> 1](https://huggingface.co/sd-concepts-library/dongqichang/resolve/main/concept_images/3.jpeg) ![<dqc> 2](https://huggingface.co/sd-concepts-library/dongqichang/resolve/main/concept_images/1.jpeg) ![<dqc> 3](https://huggingface.co/sd-concepts-library/dongqichang/resolve/main/concept_images/2.jpeg)
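Besides the notebooks above, the embedding can be pulled straight into `diffusers`; a sketch, assuming the base model from the card metadata (stabilityai/stable-diffusion-2):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
# Load the learned <dqc> embedding from this concept repo
pipe.load_textual_inversion("sd-concepts-library/dongqichang")

image = pipe("a mountain landscape painting in the style of <dqc>").images[0]
image.save("dqc.png")
```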
jaderfigueredo/distilbert-base-uncased-finetuned-cola
jaderfigueredo
2023-07-06T22:03:25Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T12:22:59Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: jaderfigueredo/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jaderfigueredo/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1906 - Validation Loss: 0.5356 - Train Matthews Correlation: 0.5288 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5205 | 0.4630 | 0.4449 | 0 | | 0.3218 | 0.4463 | 0.5481 | 1 | | 0.1906 | 0.5356 | 0.5288 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
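Since the repo ships TensorFlow weights (see the `tf` tag), a minimal Keras-side inference sketch could look like this; the example sentence is illustrative only:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "jaderfigueredo/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The books was on the table.", return_tensors="tf")
logits = model(**inputs).logits
# In the GLUE CoLA dataset, label 1 = acceptable, label 0 = unacceptable
print(int(tf.argmax(logits, axis=-1)[0]))
```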
juancopi81/lmd-8bars-2048-epochs20_v3
juancopi81
2023-07-06T21:58:13Z
136
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T23:21:51Z
--- tags: - generated_from_trainer model-index: - name: lmd-8bars-2048-epochs20_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmd-8bars-2048-epochs20_v3 This model is a fine-tuned version of [juancopi81/lmd-8bars-2048-epochs20_v2](https://huggingface.co/juancopi81/lmd-8bars-2048-epochs20_v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 4 - seed: 1 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0054 | 0.5 | 4994 | 0.9774 | | 0.9784 | 1.0 | 9988 | 0.9563 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
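A minimal generation sketch with `transformers`. The `"PIECE_START"` seed is an assumption about the MIDI-token vocabulary used by this model family — inspect the tokenizer's vocabulary to confirm the real start token:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="juancopi81/lmd-8bars-2048-epochs20_v3")

# "PIECE_START" is an assumed start-of-piece token; check
# generator.tokenizer.get_vocab() for the actual token names.
result = generator("PIECE_START", do_sample=True, max_new_tokens=256)
print(result[0]["generated_text"])
```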
bfriederich/distilbert-base-uncased-finetuned-emotion
bfriederich
2023-07-06T21:57:04Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T17:37:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9228964935077866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2191 - Accuracy: 0.923 - F1: 0.9229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7996 | 1.0 | 250 | 0.3156 | 0.9025 | 0.8987 | | 0.2491 | 2.0 | 500 | 0.2191 | 0.923 | 0.9229 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
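A quick inference sketch with the `transformers` pipeline (example text is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bfriederich/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every emotion label
)
print(classifier("I can't wait to see you this weekend!"))
```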
aroot/eng-fra-simcse_central_usrb
aroot
2023-07-06T21:50:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T21:31:40Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central_usrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central_usrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1496 - Bleu: 31.8498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
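A minimal English→French inference sketch following the standard mBART-50 usage pattern; if the repo does not ship its own tokenizer files, fall back to the base facebook/mbart-large-50-many-to-many-mmt tokenizer:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo = "aroot/eng-fra-simcse_central_usrb"
tokenizer = MBart50TokenizerFast.from_pretrained(repo)
model = MBartForConditionalGeneration.from_pretrained(repo)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```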
JoshELambert/weakgov
JoshELambert
2023-07-06T21:47:24Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T21:21:31Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # JoshELambert/weakgov This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("JoshELambert/weakgov") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
nkpz/bayling-13b-v1.1-gptq-32g
nkpz
2023-07-06T21:40:09Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T21:11:59Z
--- license: gpl-3.0 --- 4-bit (32 groupsize) quantized files for [ICTNLP/bayling-13b-v1.1](https://huggingface.co/ICTNLP/bayling-13b-v1.1) `BayLing (百聆, bǎi líng) is an instruction-following LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.` Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). Command used to quantize: ``` python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors ```
TheBloke/Selfee-13B-SuperHOT-8K-GGML
TheBloke
2023-07-06T21:38:08Z
0
1
null
[ "license:other", "region:us" ]
null
2023-07-06T17:16:57Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 13B GGML These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/selfee-13b-fp16). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/selfee-13b-delta) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. 
Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | selfee-13b-superhot-8k.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | selfee-13b-superhot-8k.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | selfee-13b-superhot-8k.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | selfee-13b-superhot-8k.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | selfee-13b-superhot-8k.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 selfee-13b-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. 
Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. 
#### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Kaist AI's Selfee 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 13B GGML This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta). It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ) * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! 
<!-- footer end --> # Original model card: Kaist AI's Selfee 13B <p align="center" width="100%"> <a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a> </p> # SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE) [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) ## News [May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details. ## Overview This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly has five contents: - The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)). - The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)). - The training process for the model ([detail](#training), [code](train)). - The inference process for the model ([detail](#inference), [code](inference)). - The evaluation method and dataset ([detail](#evaluation), [code](evaluation)). This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌 **We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.** ## Data Release For data collection, we collected datasets from five different fields. These are the Stanford Alpaca dataset, math collection, code collection, Flan collection, and ShareGPT. We provide code that we used to make a dataset for training. We also provide code how we preprocessed ShareGPT. For ShareGPT, we only use the first (question, answer) pair from human and GPT, respectively. We only use instances which are classified as english,and filter instance which is not a form of question. For other datsets, we do not need special data collection method. ## Data Generation Process To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br> Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br> Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br> Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair. We repeated the process until we received feedback that required no further revision or hit the maximum iteration. 
However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br> You can see the details with command [here](data_augmentation/README.md).<br> *We provide the whole dataset after collection and augmentation using huggingface([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use any of them! ## Training We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br> To reproduce the training procedure, here are the steps. <br> ``` pip install -r requirements.txt ``` ``` torchrun --nproc_per_node=4 train/train_mem.py \ --model_name_or_path llama-7b \ --data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \ --bf16 True \ --output_dir ckpt/selfee-7b \ --num_train_epochs 3 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 2 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 5000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "shard_grad_op auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --lazy_preprocess True \ --training_objective full \ ``` The hyperparameters are as follows, following Vicuna and Alpaca. | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | | --- | ---: | ---: | ---: | ---: | ---: | | SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 | ## Inference <b>Restoring checkpoint using diff</b><br> We provide diff weight and code which can restore the same model with SelFee. To restore the original SelFee weight, you first need to convert the Meta's original LLAMA checkpoint into huggingface format into your local machine. Once you are done, you can restore the same checkpoint of our model by using the following command ``` python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta ``` <b>Autonomous Inference Mode</b><br> Because SelFee is trained to generate iterative feedback and revisions until the response is satisfying, it automatically generates iterative feedback and revisions on a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with sequences like `Revision is not needed.`, the model autonomously terminates generation. <br> For autonomous inference mode, ``` python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl" ``` <b>Revision Enforce Inference Mode</b><br> We observed that increasing the minimum number of required revisions corresponds to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` into `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence of `Revision is needed.`, the model would continually revise the answer. 
For revision enforce inference mode, use the `max-num-revision` argument. ``` python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3 ``` ## Evaluation Following evaluation setting of Vicuna, we evaluate on 80 diverse queries and utilize GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One of the difference with Vicuna evaluation is that due to positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is inferred twice, depending on its position.<br> We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br> ### GPT-4 Automatic Evaluation First, you need to get your API key to get access to the GPT-4 API. ``` export OPENAI_API_KEYS={personal_key} ``` To compare the performance of a generation result (for example, located on `evaluation/answer/file_A.jsonl`) with another generation result (located on `evaluation/anwer/file_B.jsonl`), ``` python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl ``` To mitigate the positional bias of GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with opposite position is also needed. ``` python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl ``` ## Limitations Similar to other LLaMA-finetuned models, SelFee also make some mistakes especially for math, reasoning, factuality, and coding tasks. Although our performance outperforms ChatGPT on Vicuna setting, the evaluation setting contains some limitations in terms of comprehension (limited to 80 queries), inconsistency, and unreliability. Therefore, further research for a better evaluation setting is needed. Please take these claims with a grain of salt. ## Online demo Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>! #### How to launch the demo yourself To serve the web demo yourself, run the following commands: 1. Run the controller ``` python3 -m serve.controller ``` 2. Run the model worker ``` python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b ``` 3. Run the web server ``` python3 -m serve.gradio_web_server --share ``` You can find the serving code [here](serve). ### Team members <a href="https://seonghyeonye.github.io/)">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/> (* denotes equal contribution) ### Release We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee is released on huggingface. 
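To combine the two directional review files produced by `gpt4_automatic_evaluation.py` into a single score per system, something like the sketch below can be used. It is not part of the SelFee repo, and the `"score"` field name is an assumption about the JSONL layout that may need adjusting:

```python
# Illustrative aggregation sketch: average the two orderings to offset GPT-4's positional bias.
import json

def mean_scores(review_path):
    totals, n = [0.0, 0.0], 0
    with open(review_path) as f:
        for line in f:
            s1, s2 = json.loads(line)["score"]  # assumed field name
            totals[0] += s1
            totals[1] += s2
            n += 1
    return totals[0] / n, totals[1] / n

a1, b1 = mean_scores("evaluation/review/A_vs_B.jsonl")  # file A shown in position 1
b2, a2 = mean_scores("evaluation/review/B_vs_A.jsonl")  # file B shown in position 1
print(f"A: {(a1 + a2) / 2:.2f}  B: {(b1 + b2) / 2:.2f}")
```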
### License The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us. ### Citation Please cite if you use the data or code in this repo. ``` @misc{selfee2023, author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon}, title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation}, url = {https://kaistai.github.io/SelFee/}, month = {May}, year = {2023}, howpublished = {Blog post} } ```
ahmedALM1221/convnextv2-large-1k-224-finetuned-eurosat-50
ahmedALM1221
2023-07-06T21:34:41Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-04T14:11:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnextv2-large-1k-224-finetuned-eurosat-50 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Augmented-Final split: train args: Augmented-Final metrics: - name: Accuracy type: accuracy value: 0.959917780061665 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-large-1k-224-finetuned-eurosat-50 This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1615 - Accuracy: 0.9599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.9 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9003 | 1.0 | 122 | 1.8892 | 0.3587 | | 1.6055 | 2.0 | 244 | 1.5936 | 0.5766 | | 1.2982 | 3.0 | 366 | 1.2568 | 0.6639 | | 1.0273 | 4.0 | 488 | 0.9558 | 0.7431 | | 0.7863 | 5.0 | 610 | 0.7440 | 0.7965 | | 0.8142 | 6.0 | 732 | 0.6206 | 0.8304 | | 0.6118 | 7.0 | 854 | 0.4672 | 0.8684 | | 0.4776 | 8.0 | 976 | 0.3756 | 0.9075 | | 0.4176 | 9.0 | 1098 | 0.3065 | 0.9301 | | 0.282 | 10.0 | 1220 | 0.2586 | 0.9404 | | 0.306 | 11.0 | 1342 | 0.2098 | 0.9476 | | 0.2006 | 12.0 | 1464 | 0.1615 | 0.9599 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
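Since the usage sections above are still placeholders, here is a minimal inference sketch; the image path is illustrative, and the label set comes from whatever classes the `imagefolder` dataset contained:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ahmedALM1221/convnextv2-large-1k-224-finetuned-eurosat-50",
)
print(classifier("example.jpg"))  # local path or URL; returns the top labels with scores
```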
am-infoweb/question-answering-roberta-anu-SQuAD-v2
am-infoweb
2023-07-06T21:32:52Z
106
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "Question Answering", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-06T21:27:17Z
--- license: apache-2.0 tags: - Question Answering metrics: - squad model-index: - name: anuragsingh28/question-answering-roberta-anu-s-v2 results: [] --- # Question Answering The model is intended to be used for Q&A task, given the question & context, the model would attempt to infer the answer text, answer span & confidence score.<br> Model is encoder-only (deepset/roberta-base-squad2) with QuestionAnswering LM Head, fine-tuned on SQUADx dataset with **exact_match:** 84.83 & **f1:** 91.80 performance scores. Please follow this link for [Encoder based Question Answering V1](https://huggingface.co/anuragsingh28/question-answering-roberta-anu-s-v2/) Example code: ``` from transformers import pipeline model_checkpoint = "anuragsingh28/question-answering-roberta-anu-s-v2" context = """ 🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. """ question = "Which deep learning libraries back 🤗 Transformers?" question_answerer = pipeline("question-answering", model=model_checkpoint) question_answerer(question=question, context=context) ``` ## Training and evaluation data SQUAD Split ## Training procedure Preprocessing: 1. SQUAD Data longer chunks were sub-chunked with input context max-length 384 tokens and stride as 128 tokens. 2. Target answers readjusted for sub-chunks, sub-chunks with no-answers or partial answers were set to target answer span as (0,0) Metrics: 1. Adjusted accordingly to handle sub-chunking. 2. n best = 20 3. skip answers with length zero or higher than max answer length (30) ### Training hyperparameters Custom Training Loop: The following hyperparameters were used during training: - learning_rate: 2e-5 - train_batch_size: 32 - eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results {'exact_match': 84.83443708609272, 'f1': 91.79987545811638} ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.0
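The sub-chunking described in the training procedure (384-token windows with a 128-token stride) can be reproduced with the tokenizer's overflow support. A rough sketch using the base checkpoint named in the card; the sample question and context are illustrative only:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

question = "Which deep learning libraries back 🤗 Transformers?"
context = "🤗 Transformers is backed by Jax, PyTorch and TensorFlow. " * 50  # long context to force overflow

features = tokenizer(
    question,
    context,
    max_length=384,
    stride=128,
    truncation="only_second",        # only the context gets windowed
    return_overflowing_tokens=True,  # one feature per 384-token window
    return_offsets_mapping=True,     # used later to map predictions back to character spans
)
print(len(features["input_ids"]), "sub-chunks")
```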
YakovElm/Apache_20_BERT_More_Properties
YakovElm
2023-07-06T21:18:12Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T21:17:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache_20_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache_20_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1597 - Train Accuracy: 0.9624 - Validation Loss: 0.3500 - Validation Accuracy: 0.9055 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1743 | 0.9589 | 0.3407 | 0.9055 | 0 | | 0.1613 | 0.9624 | 0.3694 | 0.9055 | 1 | | 0.1597 | 0.9624 | 0.3500 | 0.9055 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
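The card leaves usage undocumented; below is a hedged sketch that assumes this is a standard single-sequence classifier with TensorFlow weights and that tokenizer files are present in the repo (the label meanings are not described in the card):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "YakovElm/Apache_20_BERT_More_Properties"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="tf", truncation=True)
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class meanings are not documented in this card
```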
aroot/eng-mya-simcse_random_ssrb
aroot
2023-07-06T21:15:22Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T20:54:29Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random_ssrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random_ssrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8934 - Bleu: 4.1639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
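No usage example is given above; here is a hedged sketch that assumes the mBART-50 language codes of the base checkpoint (`en_XX` to `my_MM` for this English-Burmese direction):

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

repo = "aroot/eng-mya-simcse_random_ssrb"
tokenizer = MBart50TokenizerFast.from_pretrained(repo, src_lang="en_XX", tgt_lang="my_MM")
model = MBartForConditionalGeneration.from_pretrained(repo)

batch = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**batch, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```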
rodrigoclira/rl_course_vizdoom_health_gathering_supreme
rodrigoclira
2023-07-06T21:11:06Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T21:10:59Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 8.25 +/- 3.07 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r rodrigoclira/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
JoshELambert/fishpop
JoshELambert
2023-07-06T20:51:49Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T20:00:19Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# JoshELambert/fishpop

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/fishpop")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

espnet/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw
espnet
2023-07-06T20:46:31Z
1
0
espnet
[ "espnet", "audio", "self-supervised-learning", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2023-01-04T13:59:15Z
--- tags: - espnet - audio - self-supervised-learning language: en datasets: - librispeech license: cc-by-4.0 --- ## ESPnet2 SSL model ### `simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw` This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 753f40d61813436d4e76660904d02eaed7a6649e pip install -e . cd egs2/librispeech/ssl1 ./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw ``` ## SSL config <details><summary>expand</summary> ``` config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it1.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw ngpu: 1 seed: 0 num_workers: 64 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 8 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 49251 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 250 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 45000000 valid_batch_bins: null train_shape_file: - exp/hubert_iter1_stats_raw/train/speech_shape - exp/hubert_iter1_stats_raw/train/text_shape.word valid_shape_file: - exp/hubert_iter1_stats_raw/valid/speech_shape - exp/hubert_iter1_stats_raw/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 400 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_960/wav.scp - speech - sound - - dump/raw/train_960/text.km.kmeans_iter1_hubert_train_960_portion0.1 - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text.km.kmeans_iter1_hubert_train_960_portion0.1 - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 32000 token_list: - '386' - '160' - '89' - '3' - '448' - '431' - '319' - '247' - '256' - '23' - '267' - '274' - '479' - '227' - '197' - '74' - '362' - '159' - '190' - '275' - '241' - '147' - '242' - '105' - '7' - '320' - '311' - '327' - '130' - '485' - '427' - '22' - '493' - '254' - '451' - '399' - '342' - '443' - '38' - '33' - '53' - 
'238' - '86' - '61' - '263' - '218' - '316' - '350' - '96' - '492' - '341' - '496' - '325' - '462' - '24' - '328' - '133' - '407' - '41' - '304' - '373' - '167' - '352' - '456' - '149' - '279' - '84' - '217' - '494' - '139' - '381' - '416' - '305' - '446' - '337' - '228' - '35' - '372' - '55' - '237' - '66' - '13' - '188' - '291' - '43' - '132' - '232' - '144' - '497' - '318' - '0' - '31' - '49' - '400' - '10' - '406' - '398' - '154' - '300' - '226' - '93' - '348' - '82' - '2' - '423' - '113' - '395' - '92' - '394' - '293' - '62' - '137' - '476' - '216' - '432' - '155' - '29' - '369' - '64' - '163' - '389' - '278' - '25' - '164' - '310' - '213' - '126' - '331' - '414' - '11' - '404' - '185' - '365' - '484' - '409' - '17' - '193' - '178' - '273' - '37' - '390' - '128' - '170' - '203' - '298' - '229' - '383' - '67' - '27' - '118' - '72' - '142' - '73' - '65' - '231' - '104' - '124' - '428' - '345' - '230' - '287' - '175' - '294' - '184' - '97' - '48' - '457' - '288' - '204' - '379' - '107' - '200' - '99' - '269' - '442' - '353' - '129' - '445' - '51' - '360' - '80' - '83' - '201' - '223' - '312' - '69' - '30' - '202' - '70' - '286' - '236' - '50' - '123' - '88' - '205' - '151' - '127' - '186' - '367' - '299' - '313' - '220' - '206' - '297' - '422' - '71' - '44' - '281' - '91' - '57' - '408' - '112' - '26' - '145' - '16' - '75' - '235' - '183' - '222' - '171' - '121' - '250' - '472' - '195' - '94' - '357' - '393' - '380' - '370' - '363' - '103' - '396' - '468' - '346' - '40' - '180' - '42' - '351' - '450' - '477' - '239' - '143' - '361' - '314' - '392' - '161' - '473' - '198' - '194' - '371' - '433' - '56' - '444' - '138' - '157' - '245' - '140' - '165' - '412' - '354' - '9' - '333' - '85' - '176' - '323' - '301' - '215' - '264' - '434' - '489' - '355' - '488' - '382' - '177' - '268' - '290' - '114' - '266' - '334' - '356' - '90' - '244' - '259' - '368' - '6' - '303' - '478' - '199' - '376' - '480' - '401' - '1' - '168' - '453' - '19' - '54' - '221' - '100' - '4' - '495' - '77' - '240' - '45' - '481' - '224' - '20' - '120' - '58' - '162' - '12' - '109' - '491' - '115' - '397' - '340' - '196' - '68' - '34' - '415' - '429' - '421' - '475' - '335' - '338' - '172' - '39' - '258' - '330' - '246' - '425' - '296' - '125' - '60' - '52' - '271' - '173' - '469' - '289' - '439' - '207' - '487' - '272' - '332' - '284' - '308' - '388' - '95' - '248' - '101' - '36' - '14' - '315' - '262' - '146' - '343' - '79' - '426' - '21' - '253' - '63' - '292' - '81' - '385' - '309' - '366' - '116' - '131' - '87' - '449' - '283' - '214' - '474' - '329' - '471' - '225' - '108' - '136' - '148' - '306' - '150' - '378' - '460' - '307' - '141' - '98' - '436' - '402' - '192' - '8' - '483' - '440' - '47' - '466' - '486' - '5' - '257' - '447' - '377' - '111' - '251' - '490' - '265' - '438' - '158' - '384' - '135' - '102' - '276' - '211' - '219' - '187' - '347' - '32' - '182' - '169' - '410' - '455' - '461' - '482' - '374' - '463' - '452' - '59' - '152' - '174' - '418' - '166' - '470' - '459' - '153' - '179' - '498' - '430' - '419' - '467' - '208' - '326' - '210' - '270' - '243' - '255' - '233' - '261' - '336' - '282' - '234' - '464' - '181' - '156' - '359' - '454' - '420' - '28' - '249' - '106' - '302' - '191' - '209' - '46' - '117' - '403' - '280' - '324' - '458' - '134' - '122' - '212' - '18' - '437' - '78' - '375' - '252' - '405' - '295' - '435' - '317' - '260' - '364' - '322' - '15' - '339' - '413' - '465' - '285' - '189' - '417' - '344' - '110' - '119' - '277' - '499' - '358' - '411' - '387' - '349' - '424' - '391' - '76' 
- '441' - '321' - <unk> - <sos/eos> init: null collate_fn_conf: label_downsampling: 1 pad: false rand_crop: true input_size: 1 num_classes: 500 use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' pred_masked_weight: 1.0 pred_nomask_weight: 0.0 loss_weights: 0.0 frontend: null frontend_conf: {} specaug: null specaug_conf: {} normalize: null normalize_conf: {} preencoder: null preencoder_conf: {} encoder: torchaudio_hubert encoder_conf: encoder_projection_dropout: 0.1 encoder_attention_dropout: 0.1 encoder_ff_interm_dropout: 0.0 encoder_dropout: 0.1 encoder_layer_drop: 0.05 model: torchaudio model_conf: {} required: - output_dir - token_list version: '202209' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
aroot/eng-fra-simcse_random_usrb
aroot
2023-07-06T20:43:01Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T20:22:39Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_random_usrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_usrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1494 - Bleu: 32.0417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
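As with the other checkpoints in this series, usage is undocumented; a short hedged sketch using the translation pipeline with the base model's mBART-50 codes (`en_XX` to `fr_XX`):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="aroot/eng-fra-simcse_random_usrb",
    src_lang="en_XX",
    tgt_lang="fr_XX",
)
print(translator("The weather is nice today."))
```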
RomaNest/ppo-LunarLander-v2
RomaNest
2023-07-06T20:41:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T20:41:16Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 218.04 +/- 72.77
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the usual "<algo>-<env>.zip" convention; adjust if this repo differs.
checkpoint = load_from_hub(repo_id="RomaNest/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
espnet/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
espnet
2023-07-06T20:38:53Z
1
0
espnet
[ "espnet", "audio", "self-supervised-learning", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-12-31T03:54:13Z
--- tags: - espnet - audio - self-supervised-learning language: en datasets: - librispeech license: cc-by-4.0 --- ## ESPnet2 SSL model ### `simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw` This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 753f40d61813436d4e76660904d02eaed7a6649e pip install -e . cd egs2/librispeech/ssl1 ./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw ``` ## SSL config <details><summary>expand</summary> ``` config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it0.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw ngpu: 1 seed: 0 num_workers: 64 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 8 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 45091 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 250 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 48000000 valid_batch_bins: null train_shape_file: - exp/hubert_iter0_stats_raw/train/speech_shape - exp/hubert_iter0_stats_raw/train/text_shape.word valid_shape_file: - exp/hubert_iter0_stats_raw/valid/speech_shape - exp/hubert_iter0_stats_raw/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 400 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_960/wav.scp - speech - sound - - dump/raw/train_960/text.km.kmeans_iter0_mfcc_train_960_portion0.1 - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text.km.kmeans_iter0_mfcc_train_960_portion0.1 - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 32000 token_list: - '81' - '5' - '79' - '84' - '27' - '35' - '67' - '56' - '10' - '99' - '24' - '3' - '48' - '8' - '42' - '16' - '32' - '31' - '47' - '43' - '20' - '73' - '49' - '86' - '18' - '64' - '34' - '59' - '95' - '0' - '52' - '44' - '61' - '57' - '30' - '1' - '93' - '6' - '69' - '19' - '7' - '65' - '28' - '89' - '2' - '96' - '91' - '72' 
- '38' - '78' - '26' - '13' - '39' - '94' - '4' - '88' - '85' - '51' - '82' - '41' - '50' - '21' - '80' - '97' - '87' - '25' - '54' - '12' - '40' - '60' - '29' - '11' - '53' - '71' - '83' - '74' - '68' - '55' - '62' - '76' - '45' - '75' - '92' - '46' - '36' - '66' - '22' - '77' - '23' - '63' - '37' - '58' - '33' - '15' - '17' - '90' - '98' - '14' - '70' - '9' - <unk> - <sos/eos> init: null collate_fn_conf: label_downsampling: 2 pad: false rand_crop: true input_size: 1 num_classes: 100 use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' pred_masked_weight: 1.0 pred_nomask_weight: 0.0 loss_weights: 0.0 frontend: null frontend_conf: {} specaug: null specaug_conf: {} normalize: null normalize_conf: {} preencoder: null preencoder_conf: {} encoder: torchaudio_hubert encoder_conf: encoder_projection_dropout: 0.1 encoder_attention_dropout: 0.1 encoder_ff_interm_dropout: 0.0 encoder_dropout: 0.1 encoder_layer_drop: 0.05 model: torchaudio model_conf: {} required: - output_dir - token_list version: '202209' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
WALIDALI/oumadvenly
WALIDALI
2023-07-06T20:38:46Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T20:33:28Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### oumadvenly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
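The card only links the A1111 notebook; since the repo is tagged as a diffusers `StableDiffusionPipeline`, standard diffusers loading should also work. The prompt token is a guess, because the card does not state the instance prompt used for training:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("WALIDALI/oumadvenly", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "oumadvenly" as the subject token is an assumption taken from the model name.
image = pipe("a photo of oumadvenly").images[0]
image.save("oumadvenly-sample.png")
```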
TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML
TheBloke
2023-07-06T20:38:02Z
0
4
null
[ "license:other", "region:us" ]
null
2023-07-06T18:47:36Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Eric Hartford's WizardLM-7B-V1.0-Uncensored GGML These files are GGML format model files for [Eric Hartford's WizardLM-7B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | wizardlm-7b-v1.0-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. 
#### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Eric Hartford's WizardLM-7B-V1.0-Uncensored This is a retraining of https://huggingface.co/WizardLM/WizardLM-7B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias. Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. Unlike WizardLM/WizardLM-7B-V1.0, but like WizardLM/WizardLM-13B-V1.0 and WizardLM/WizardLM-33B-V1.0, this model is trained with Vicuna-1.1 style prompts. ``` You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
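For scripting the prompt format from Python, a small helper that follows the Vicuna-1.1 template quoted above (single turn only; a multi-turn history would simply append further USER/ASSISTANT pairs):

```python
def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Vicuna-1.1 style this model was trained on."""
    system = "You are a helpful AI assistant."
    return f"{system}\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Explain what the q4_K_M quantization trades off."))
```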
cjohlmacher/unit2-taxi-overly-confident
cjohlmacher
2023-07-06T20:24:09Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T20:24:07Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-overly-confident
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-overly-confident", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
whywynn/ppo-Huggy
whywynn
2023-07-06T20:22:34Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T20:22:25Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: whywynn/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
UGARIT/flair_grc_multi_ner
UGARIT
2023-07-06T20:21:08Z
7
1
flair
[ "flair", "pytorch", "token-classification", "ner", "grc", "region:us" ]
token-classification
2022-11-05T08:42:03Z
---
language:
- grc
tags:
- flair
- token-classification
- ner
widget:
- text: "ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς ."
---

# Named Entity Recognition for Ancient Greek

Pretrained NER tagging model for ancient Greek.

# Scores & Tagset

<details>

### Training

|      | Precision | Recall | F1-score | Support |
|------|:---------:|:------:|:--------:|:-------:|
| PER  | 93.39%    | 96.33% | 94.84%   | 2127    |
| MISC | 84.69%    | 92.50% | 88.42%   | 933     |
| LOC  | 89.55%    | 77.32% | 82.99%   | 388     |

### Evaluation

|      | Precision | Recall | F1-score | Support |
|------|:---------:|:------:|:--------:|:-------:|
| PER  | 90.48%    | 91.94% | 91.20%   | 124     |
| MISC | 89.29%    | 94.34% | 91.74%   | 159     |
| LOC  | 82.69%    | 65.15% | 72.88%   | 66      |

</details>

# Usage

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("UGARIT/flair_grc_multi_ner")

sentence = Sentence('ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .')
tagger.predict(sentence)

for entity in sentence.get_spans('ner'):
    print(entity)
```

# Citation

*If you use this model, please consider citing [this work](https://www.researchgate.net/publication/365131651_Transformer-Based_Named_Entity_Recognition_for_Ancient_Greek):*

```latex
@unpublished{yousefetal22,
  author = {Yousef, Tariq and Palladino, Chiara and Jänicke, Stefan},
  title  = {Transformer-Based Named Entity Recognition for Ancient Greek},
  year   = {2022},
  month  = {11},
  doi    = {10.13140/RG.2.2.34846.61761},
  url    = {https://www.researchgate.net/publication/365131651_Transformer-Based_Named_Entity_Recognition_for_Ancient_Greek}
}
```
cjohlmacher/unit2-taxi-2
cjohlmacher
2023-07-06T20:20:42Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T20:18:26Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.74
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
PhysHunter/bert-finetuned-ner
PhysHunter
2023-07-06T20:15:38Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-06T14:17:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.928983358049102 - name: Recall type: recall value: 0.9488387748232918 - name: F1 type: f1 value: 0.9388060944134544 - name: Accuracy type: accuracy value: 0.9858568316948254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0658 - Precision: 0.9290 - Recall: 0.9488 - F1: 0.9388 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0863 | 1.0 | 1756 | 0.0697 | 0.9110 | 0.9317 | 0.9212 | 0.9815 | | 0.0327 | 2.0 | 3512 | 0.0690 | 0.9297 | 0.9482 | 0.9388 | 0.9858 | | 0.0164 | 3.0 | 5268 | 0.0658 | 0.9290 | 0.9488 | 0.9388 | 0.9859 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
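A minimal usage sketch (not part of the generated card): the checkpoint can be queried through the `transformers` token-classification pipeline.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PhysHunter/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```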
aroot/eng-fra-simcse_central_ssrb
aroot
2023-07-06T20:12:02Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T19:47:43Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central_ssrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central_ssrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1471 - Bleu: 31.8498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
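A minimal inference sketch (not part of the generated card), assuming the fine-tune keeps the mBART-50 tokenizer and its language codes (`en_XX`, `fr_XX`):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aroot/eng-fra-simcse_central_ssrb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # target language: French
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```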
YakovElm/Apache_15_BERT_More_Properties
YakovElm
2023-07-06T20:11:25Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T20:10:50Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache_15_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache_15_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1866 - Train Accuracy: 0.9542 - Validation Loss: 0.3421 - Validation Accuracy: 0.8924 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1956 | 0.9535 | 0.3893 | 0.8924 | 0 | | 0.1874 | 0.9542 | 0.3804 | 0.8924 | 1 | | 0.1866 | 0.9542 | 0.3421 | 0.8924 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
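A minimal TensorFlow inference sketch (not part of the generated card). It assumes the tokenizer was pushed alongside the weights (otherwise fall back to `bert-base-uncased`); the meaning of the output labels is not documented in the card.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "YakovElm/Apache_15_BERT_More_Properties"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example issue text to score.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```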
igoroliveira/distilbert-base-uncased-finetuned-cola
igoroliveira
2023-07-06T20:09:07Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T19:11:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: igoroliveira/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # igoroliveira/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1959 - Validation Loss: 0.5357 - Train Matthews Correlation: 0.5177 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5247 | 0.4570 | 0.4887 | 0 | | 0.3259 | 0.4597 | 0.5101 | 1 | | 0.1959 | 0.5357 | 0.5177 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
aroot/eng-mya-simcse_random_usblu
aroot
2023-07-06T20:03:33Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T19:41:57Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random_usblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random_usblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8860 - Bleu: 4.2989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-fra-simcse_central_usblu
aroot
2023-07-06T19:53:25Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T19:34:40Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central_usblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central_usblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1457 - Bleu: 32.1118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
nkpz/Lawyer-Vicuna-200-gptq-32g
nkpz
2023-07-06T19:53:24Z
5
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T19:35:31Z
---
license: other
---

4-bit (32 groupsize) quantized files for [Devden/Lawyer-Vicuna-200](https://huggingface.co/Devden/Lawyer-Vicuna-200)

Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

Command used to quantize:

```
python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors
```
andressrg/textual_inversion_cat
andressrg
2023-07-06T19:48:48Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T18:08:33Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - andressrg/textual_inversion_cat These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
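A minimal usage sketch (not part of the generated card), assuming the repository contains the standard `learned_embeds.bin` produced by the textual-inversion training script; the placeholder token in the prompt is a guess and should be replaced with the token the embedding was actually trained under.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("andressrg/textual_inversion_cat")  # registers the learned token

image = pipe("a photo of <cat-toy> sitting on a beach").images[0]  # token name is a guess
image.save("textual_inversion_example.png")
```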
JoshELambert/poverty
JoshELambert
2023-07-06T19:40:16Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T19:12:23Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# JoshELambert/poverty

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/poverty")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
andressrg/textual_inversion_meal
andressrg
2023-07-06T19:28:50Z
16
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T18:45:40Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
RogerB/afriberta_small-finetuned-kintweetsB
RogerB
2023-07-06T19:25:00Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T19:16:16Z
--- tags: - generated_from_trainer model-index: - name: afriberta_small-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta_small-finetuned-kintweetsB This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6414 | 1.0 | 900 | 3.3043 | | 3.3705 | 2.0 | 1800 | 3.1980 | | 3.3101 | 3.0 | 2700 | 3.1867 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
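A minimal usage sketch (not part of the generated card): the checkpoint can be queried with the fill-mask pipeline; the Kinyarwanda sentence below is an arbitrary example.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="RogerB/afriberta_small-finetuned-kintweetsB")
# Use the tokenizer's own mask token so the example works regardless of mask formatting.
print(fill(f"Kigali ni umujyi {fill.tokenizer.mask_token}."))
```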
Jinouga/sagiri-yamada-asaemon-v1
Jinouga
2023-07-06T19:13:58Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-05T18:27:28Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### sagiri-yamada-asaemon-v1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
heka-ai/gpl-140k
heka-ai
2023-07-06T19:09:09Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-06T19:09:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # heka-ai/gpl-140k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('heka-ai/gpl-140k') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('heka-ai/gpl-140k') model = AutoModel.from_pretrained('heka-ai/gpl-140k') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/gpl-140k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
KevinQuijano/model-dreambooth-chair-2.0
KevinQuijano
2023-07-06T19:07:15Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T18:13:41Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a sennagamer chair tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - KevinQuijano/model-dreambooth-chair-2.0 This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a sennagamer chair using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
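A minimal usage sketch (not part of the generated card), reusing the instance prompt the weights were trained on:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "KevinQuijano/model-dreambooth-chair-2.0", torch_dtype=torch.float16
).to("cuda")

image = pipe("a sennagamer chair in a bright studio", num_inference_steps=50).images[0]
image.save("sennagamer_chair.png")
```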
kimnguyenwork/ppo-LunarLander-v2
kimnguyenwork
2023-07-06T19:07:15Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T19:06:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.51 +/- 17.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
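A filled-in sketch of the template above (the checkpoint filename follows the usual course convention and is an assumption; check the repository's file list if it differs):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="kimnguyenwork/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```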
chh6/ppo-LunarLander-v2
chh6
2023-07-06T19:03:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T18:53:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.77 +/- 12.60 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
RogerB/afriberta_large-finetuned-kintweetsB
RogerB
2023-07-06T19:02:49Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T18:50:03Z
--- license: mit tags: - generated_from_trainer model-index: - name: afriberta_large-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta_large-finetuned-kintweetsB This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4115 | 1.0 | 900 | 3.1168 | | 3.1329 | 2.0 | 1800 | 3.0010 | | 3.0656 | 3.0 | 2700 | 2.9788 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
YakovElm/Apache_10_BERT_More_Properties
YakovElm
2023-07-06T19:02:27Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T19:01:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache_10_BERT_More_Properties results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache_10_BERT_More_Properties This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2335 - Train Accuracy: 0.9383 - Validation Loss: 0.4310 - Validation Accuracy: 0.8644 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2462 | 0.9333 | 0.4480 | 0.8644 | 0 | | 0.2321 | 0.9383 | 0.4521 | 0.8644 | 1 | | 0.2335 | 0.9383 | 0.4310 | 0.8644 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-guj-simcse_central_ssblu
aroot
2023-07-06T19:00:09Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T18:38:04Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central_ssblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central_ssblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2711 - Bleu: 2.6084 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
JoshELambert/marginalization
JoshELambert
2023-07-06T19:00:00Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T18:48:46Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# JoshELambert/marginalization

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/marginalization")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
jjhonny/rl_course_vizdoom_health_gathering_supreme
jjhonny
2023-07-06T18:57:20Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T18:57:13Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.13 +/- 4.89 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r jjhonny/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Yati12/PPO_LunarLander-v2
Yati12
2023-07-06T18:57:16Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T18:56:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.17 +/- 23.77 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ALM-AHME/beit-large-patch16-224-finetuned-eurosat-50
ALM-AHME
2023-07-06T18:56:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T17:08:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: beit-large-patch16-224-finetuned-eurosat-50 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Augmented-Final split: train args: Augmented-Final metrics: - name: Accuracy type: accuracy value: 0.9856115107913669 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-large-patch16-224-finetuned-eurosat-50 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0568 - Accuracy: 0.9856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.9 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7148 | 1.0 | 122 | 1.6402 | 0.3916 | | 1.1543 | 2.0 | 244 | 1.0718 | 0.6208 | | 0.8948 | 3.0 | 366 | 0.7228 | 0.7564 | | 0.6348 | 4.0 | 488 | 0.5327 | 0.8160 | | 0.647 | 5.0 | 610 | 0.4081 | 0.8551 | | 0.3244 | 6.0 | 732 | 0.2965 | 0.9096 | | 0.305 | 7.0 | 854 | 0.2515 | 0.9342 | | 0.3522 | 8.0 | 976 | 0.1667 | 0.9568 | | 0.1782 | 9.0 | 1098 | 0.1494 | 0.9568 | | 0.1849 | 10.0 | 1220 | 0.0972 | 0.9712 | | 0.1814 | 11.0 | 1342 | 0.0559 | 0.9846 | | 0.1682 | 12.0 | 1464 | 0.0568 | 0.9856 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
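A minimal inference sketch (not part of the generated card); the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-eurosat-50",
)
print(classifier("path/to/image.png"))  # placeholder path to a local image
```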
hopkins/eng-kor-common.simcse.roberta-large
hopkins
2023-07-06T18:50:41Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T18:33:09Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-kor-common.simcse.roberta-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9976 - Bleu: 7.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
EllaHong/datamap_kullm-polyglot-5.8b_exp2
EllaHong
2023-07-06T18:49:54Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-06T18:49:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
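A minimal loading sketch (not part of the generated card). The base checkpoint is not stated in the card; the id below is a guess from the repository name and must be replaced with the model the adapter was actually trained on. The quantization settings mirror the bitsandbytes config listed above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "nlpai-lab/kullm-polyglot-5.8b-v2"  # assumption, not confirmed by the card

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "EllaHong/datamap_kullm-polyglot-5.8b_exp2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```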
RogerB/afro-xlmr-small-finetuned-kintweetsB
RogerB
2023-07-06T18:48:30Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T18:33:31Z
--- license: afl-3.0 tags: - generated_from_trainer model-index: - name: afro-xlmr-small-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-small-finetuned-kintweetsB This model is a fine-tuned version of [Davlan/afro-xlmr-small](https://huggingface.co/Davlan/afro-xlmr-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5494 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.808 | 1.0 | 900 | 1.6132 | | 1.7073 | 2.0 | 1800 | 1.5754 | | 1.6585 | 3.0 | 2700 | 1.5900 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ibibek/vicuna-13B-gptq-new
ibibek
2023-07-06T18:48:21Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-06T18:41:05Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
aroot/eng-guj-simcse_random_ssblu
aroot
2023-07-06T18:45:32Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T18:24:22Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_random_ssblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_random_ssblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2774 - Bleu: 2.7054 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
RogerB/afro-xlmr-mini-finetuned-kintweetsB
RogerB
2023-07-06T18:32:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T18:16:39Z
--- license: afl-3.0 tags: - generated_from_trainer model-index: - name: afro-xlmr-mini-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-mini-finetuned-kintweetsB This model is a fine-tuned version of [Davlan/afro-xlmr-mini](https://huggingface.co/Davlan/afro-xlmr-mini) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1709 | 1.0 | 900 | 2.8941 | | 3.0063 | 2.0 | 1800 | 2.8361 | | 2.9479 | 3.0 | 2700 | 2.7810 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
BrainTheos/whisper-tiny-en-us
BrainTheos
2023-07-06T18:27:29Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-06T17:32:37Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-en-us results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.36046511627906974 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-en-us This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.7173 - Wer Ortho: 36.1373 - Wer: 0.3605 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0007 | 17.86 | 500 | 0.7173 | 36.1373 | 0.3605 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
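A minimal transcription sketch (not part of the generated card); the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="BrainTheos/whisper-tiny-en-us")
print(asr("path/to/audio.wav"))  # placeholder path to an English audio clip
```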
aroot/eng-fra-simcse_central_ssblu
aroot
2023-07-06T18:24:41Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T18:06:23Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central_ssblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central_ssblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1498 - Bleu: 31.6893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
hopkins/eng-ind-common.simcse.roberta-large
hopkins
2023-07-06T18:21:32Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T18:03:59Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-ind-common.simcse.roberta-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-ind-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7793 - Bleu: 22.3669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
Panchovix/airoboros-33b-gpt4-1.2-PI-8192-LoRA-4bit-32g
Panchovix
2023-07-06T18:11:35Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-04T18:43:19Z
---
license: other
---

[airoboros-13b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2) merged with bhenrym14's [airoboros-33b-gpt4-1.4.1-PI-8192-LoRA](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA), quantized to 4-bit.

More info about the LoRA is available [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16). This is an alternative to the SuperHOT 8k LoRA, trained with LoRA rank 64 and the airoboros 1.4.1 dataset.

It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to minimize the perplexity gap versus the FP16 model.

I HIGHLY suggest using exllama to avoid VRAM issues.

Use compress_pos_emb = 4 for any context length up to 8192.

If you have 2x24 GB VRAM GPUs, use gpu_split: 9,21 to avoid out-of-memory errors at 8192 context.
aroot/eng-fra-simcse_random_ssblu
aroot
2023-07-06T18:11:02Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T17:52:40Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_random_ssblu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_ssblu This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1512 - Bleu: 31.7456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
Panchovix/h2ogpt-research-oig-oasst1-512-30b-SuperHOT-8k-4bit-32g
Panchovix
2023-07-06T18:10:00Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-26T07:26:29Z
--- license: other --- [h2ogpt-research-oig-oasst1-512-30b](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/tree/main) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit. It was quantized with GPTQ-for-LLaMA using group size 32 and act-order true, to get the best perplexity possible compared to the FP16 model. I HIGHLY suggest using exllama to avoid VRAM issues. Use compress_pos_emb = 4 for any context length up to 8192. If you have two 24 GB VRAM GPUs, use gpu_split: 9,21 to avoid out-of-memory errors at 8192 context.
Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k-4bit-32g
Panchovix
2023-07-06T18:09:53Z
4
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-26T04:45:45Z
--- license: other --- [WizardLM-33B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit. It was quantized with GPTQ-for-LLaMA using group size 32 and act-order true, to get the best perplexity possible compared to the FP16 model. I HIGHLY suggest using exllama to avoid VRAM issues. Use compress_pos_emb = 4 for any context length up to 8192. If you have two 24 GB VRAM GPUs, use gpu_split: 9,21 to avoid out-of-memory errors at 8192 context.
Panchovix/tulu-30b-SuperHOT-8K-4bit-32g
Panchovix
2023-07-06T18:09:41Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-27T00:59:40Z
--- license: other --- [Tulu-30B-SuperHOT-8K-GPTQ by TheBloke](https://huggingface.co/TheBloke/Tulu-30B-SuperHOT-8K-fp16), quantized at 4 bit. It was quantized with GPTQ-for-LLaMA using group size 32 and act-order true, to get the best perplexity possible compared to the FP16 model. I HIGHLY suggest using exllama to avoid VRAM issues. Use compress_pos_emb = 4 for any context length up to 8192. If you have two 24 GB VRAM GPUs, use gpu_split: 9,21 to avoid out-of-memory errors at 8192 context.
RogerB/afro-xlmr-large-finetuned-kintweetsB
RogerB
2023-07-06T18:03:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T17:18:14Z
--- license: mit tags: - generated_from_trainer model-index: - name: afro-xlmr-large-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-large-finetuned-kintweetsB This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1027 | 1.0 | 3000 | 1.9350 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
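The fill-mask card above likewise omits usage; the following is a minimal sketch of querying such a checkpoint with the 🤗 `pipeline` API. The repo id comes from the card and assumes the weights were pushed to the Hub; the example sentence is an arbitrary English placeholder, and a Kinyarwanda tweet-style sentence matching the fine-tuning data would be the more natural probe.

```python
# Minimal sketch: masked-token prediction with an XLM-R-based checkpoint.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="RogerB/afro-xlmr-large-finetuned-kintweetsB",  # assumes the checkpoint is on the Hub
)

# XLM-RoBERTa-derived tokenizers use "<mask>" as the mask token.
for prediction in unmasker("Kigali is the capital of <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```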
hopkins/eng-fra-common.simcse.roberta-large
hopkins
2023-07-06T18:02:47Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T17:44:05Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-common.simcse.roberta-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1339 - Bleu: 33.2260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
jcramirezpr/sd-class-butterflies-32-small
jcramirezpr
2023-07-06T18:01:33Z
31
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-07-06T18:01:24Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('jcramirezpr/sd-class-butterflies-32-small') image = pipeline().images[0] image ```
hopkins/eng-deu-common.simcse.roberta-large
hopkins
2023-07-06T17:51:37Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T17:37:45Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-deu-common.simcse.roberta-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6605 - Bleu: 21.3413 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
WaefreBeorn/AVGN_James_Rolfe
WaefreBeorn
2023-07-06T17:49:37Z
0
1
null
[ "region:us" ]
null
2023-07-06T17:41:14Z
Here is a multi-stage trained model of AVGN (James Rolfe), trained on the "Hudson Hawk" episode, which provides about 16 minutes of speaking and a variety of yelling. It works well for singing. A 150-epoch checkpoint of v1 is also included, which sometimes sounds better than the 1,000-epoch v2. --- license: mit ---
hopkins/eng-mya-common
hopkins
2023-07-06T17:48:57Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T17:28:32Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-common results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-common This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8424 - Bleu: 4.9087 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
mucktiymuck/treacefalcon-instruct
mucktiymuck
2023-07-06T17:46:53Z
12
0
transformers
[ "transformers", "pytorch", "coreml", "RefinedWebModel", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T17:44:21Z
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: true widget: - text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?" example_title: "Abu Dhabi Trip" - text: "What's the Everett interpretation of quantum mechanics?" example_title: "Q/A: Quantum & Answers" - text: "Give me a list of the top 10 dive sites you would recommend around the world." example_title: "Diving Top 10" - text: "Can you tell me more about deep-water soloing?" example_title: "Extreme sports" - text: "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?" example_title: "Twitter Helper" - text: "What are the responsibilities of a Chief Llama Officer?" example_title: "Trendy Jobs" license: apache-2.0 --- # ✨ Falcon-7B-Instruct **Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.** *Paper coming soon 😊.* 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-7B-Instruct? * **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).** * **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). 💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). 🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct. # Model Card for Falcon-7B-Instruct ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0; - **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets. ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets. | **Data source** | **Fraction** | **Tokens** | **Description** | |--------------------|--------------|------------|-----------------------------------| | [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat | | [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct | | [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct | | [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl | The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer. ## Evaluation *Paper coming soon.* See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results. Note that this model variant is not optimized for NLP benchmarks. ## Technical Specifications For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). ### Model Architecture and Objective Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences: * **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864)); * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)); * **Decoder-block:** parallel attention/MLP with a single layer norm. | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 32 | | | `d_model` | 4544 | Increased to compensate for multiquery | | `head_dim` | 64 | Reduced to optimise for FlashAttention | | Vocabulary | 65024 | | | Sequence length | 2048 | | ### Compute Infrastructure #### Hardware Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances. #### Software Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.). ## Citation *Paper coming soon* 😊. In the meantime, you can use the following information to cite: ``` @article{falcon40b, title={{Falcon-40B}: an open large language model with state-of-the-art performance}, author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme}, year={2023} } ``` To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116). ``` @article{refinedweb, title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only}, author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay}, journal={arXiv preprint arXiv:2306.01116}, eprint={2306.01116}, eprinttype = {arXiv}, url={https://arxiv.org/abs/2306.01116}, year={2023} } ``` ## License Falcon-7B-Instruct is made available under the Apache 2.0 license. ## Contact [email protected]