| Column | Type | Range / Cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-02 18:27:22 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 464 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-02 18:27:15 |
| card | string | length 11 to 1.01M |
Graphcore/distilroberta-base-ipu
Graphcore
2023-07-07T10:48:00Z
2
0
null
[ "optimum_graphcore", "arxiv:1907.11692", "license:apache-2.0", "region:us" ]
null
2023-03-29T12:19:25Z
--- license: apache-2.0 --- # Graphcore/distilroberta-base-ipu Optimum Graphcore is an open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency for training and running models on Graphcore’s IPUs - a completely new kind of massively parallel processor built to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files that make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug in any public dataset, and it provides seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description This model is a distilled version of the [RoBERTa-base model](https://arxiv.org/abs/1907.11692). ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [distilroberta-base](https://huggingface.co/distilroberta-base) model on Graphcore IPUs. ## Usage ```python from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/distilroberta-base-ipu") ```
Graphcore/vit-base-ipu
Graphcore
2023-07-07T10:47:23Z
13
1
null
[ "optimum_graphcore", "arxiv:2010.11929", "region:us" ]
null
2022-03-02T23:29:04Z
# Graphcore/vit-base-ipu Optimum Graphcore is an open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency for training and running models on Graphcore’s IPUs - a completely new kind of massively parallel processor built to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files that make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug in any public dataset, and it provides seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description The Vision Transformer (ViT) is a model for image recognition that applies a Transformer architecture - of the kind widely used for NLP pretraining - to patches of an image. It uses a standard Transformer encoder as used in NLP, and this simple yet scalable strategy works surprisingly well when coupled with pre-training on large amounts of data and transferred to image recognition benchmarks of multiple sizes, while requiring substantially fewer computational resources to train. Paper link: [AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE](https://arxiv.org/pdf/2010.11929.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the ViT base model (e.g. [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) or [deit-base-patch16-384](https://huggingface.co/facebook/deit-base-patch16-384)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ```python from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/vit-base-ipu") ```
xian79/ml-agent-SnowballTarget
xian79
2023-07-07T10:44:04Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-07T10:44:02Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: xian79/ml-agent-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
xian79/ppo-SnowballTarget
xian79
2023-07-07T10:42:34Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-07T10:38:58Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: xian79/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Chickenfish/MonicaA
Chickenfish
2023-07-07T10:29:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T10:28:02Z
--- license: creativeml-openrail-m ---
Binaryy/xlm-roberta-large-finetuned-cola
Binaryy
2023-07-07T10:20:49Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T09:19:37Z
--- license: mit tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: xlm-roberta-large-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-cola This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1456 - Matthews Correlation: 0.9419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4465 | 1.0 | 606 | 0.4478 | 0.5033 | | 0.364 | 2.0 | 1212 | 0.2318 | 0.8500 | | 0.2294 | 3.0 | 1818 | 0.1767 | 0.9045 | | 0.16 | 4.0 | 2424 | 0.1353 | 0.9343 | | 0.0739 | 5.0 | 3030 | 0.1456 | 0.9419 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
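A minimal inference sketch for this checkpoint; by the usual CoLA convention index 1 would be the "acceptable" class, but the card does not document the label mapping, so that reading is an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint for sequence classification.
tokenizer = AutoTokenizer.from_pretrained("Binaryy/xlm-roberta-large-finetuned-cola")
model = AutoModelForSequenceClassification.from_pretrained("Binaryy/xlm-roberta-large-finetuned-cola")

inputs = tokenizer("The book was read by the student.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)

# Assumption: index 1 ≈ probability the sentence is grammatically acceptable,
# following the standard CoLA label order; the card does not confirm this.
print(probs)
```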
Intel/whisper-small-int8-static-inc
Intel
2023-07-07T10:17:34Z
5
0
transformers
[ "transformers", "onnx", "whisper", "automatic-speech-recognition", "int8", "ONNX", "PostTrainingStatic", "Intel® Neural Compressor", "neural-compressor", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-07T10:06:37Z
--- license: apache-2.0 datasets: - librispeech_asr metrics: - wer pipeline_tag: automatic-speech-recognition tags: - automatic-speech-recognition - int8 - ONNX - PostTrainingStatic - Intel® Neural Compressor - neural-compressor library_name: transformers --- ## Model Details: INT8 Whisper small Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning. This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below: ```shell optimum-cli export onnx --model openai/whisper-small whisper-small-with-past/ --task automatic-speech-recognition-with-past --opset 13 ``` | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel | | Date | May 15, 2022 | | Version | 1 | | Type | Speech Recognition | | Paper or Other Resources | - | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-small-int8-static/discussions)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the raw model for automatic speech recognition inference | | Primary intended users | Anyone doing automatic speech recognition inference | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ### How to use Download the model by cloning the repository: ```shell git clone https://huggingface.co/Intel/whisper-small-int8-static ``` Evaluate the model with the code below: ```python import os from evaluate import load from datasets import load_dataset from transformers import WhisperForConditionalGeneration, WhisperProcessor, AutoConfig model_name = 'openai/whisper-small' model_path = 'whisper-small-int8-static' processor = WhisperProcessor.from_pretrained(model_name) model = WhisperForConditionalGeneration.from_pretrained(model_name) config = AutoConfig.from_pretrained(model_name) wer = load("wer") librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") from optimum.onnxruntime import ORTModelForSpeechSeq2Seq from transformers import PretrainedConfig model_config = PretrainedConfig.from_pretrained(model_name) predictions = [] references = [] sessions = ORTModelForSpeechSeq2Seq.load_model( os.path.join(model_path, 'encoder_model.onnx'), os.path.join(model_path, 'decoder_model.onnx'), os.path.join(model_path, 'decoder_with_past_model.onnx')) model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2]) for idx, batch in enumerate(librispeech_test_clean): audio = batch["audio"] input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features reference = processor.tokenizer._normalize(batch['text']) references.append(reference) predicted_ids = model.generate(input_features)[0] transcription = processor.decode(predicted_ids) prediction = processor.tokenizer._normalize(transcription) predictions.append(prediction) wer_result = wer.compute(references=references, predictions=predictions) print(f"Result wer: {wer_result * 100}") accuracy = 1 - wer_result print("Accuracy: %.5f" % accuracy) ``` ## Metrics (Model Performance): | Model | Model Size (GB) | wer | |---|:---:|:---:| | FP32 |2.4|3.45| | INT8 |0.6|3.44|
BadreddineHug/donut-base-ocr11
BadreddineHug
2023-07-07T10:12:25Z
74
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T09:28:43Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-ocr11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-ocr11 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
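A minimal generation sketch for this checkpoint; the task prompt is a hypothetical placeholder, since Donut checkpoints are driven by a task-specific start token that this card does not document (and if the repo does not include a processor, the base `naver-clova-ix/donut-base` processor can be substituted):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Load the fine-tuned Donut encoder-decoder and its processor.
processor = DonutProcessor.from_pretrained("BadreddineHug/donut-base-ocr11")
model = VisionEncoderDecoderModel.from_pretrained("BadreddineHug/donut-base-ocr11")

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: "<s>" stands in for the task start token used at training time.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```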
shishir-ml/my_awesome_qa_model
shishir-ml
2023-07-07T10:01:55Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T07:06:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: shishir-ml/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # shishir-ml/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6237 - Validation Loss: 1.8651 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5474 | 2.3653 | 0 | | 1.9089 | 1.8651 | 1 | | 1.6237 | 1.8651 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
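A minimal inference sketch; the repo is tagged with TensorFlow weights, so the pipeline is pinned to the TF framework (the question and context below are placeholders):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT QA checkpoint with its TensorFlow weights.
qa = pipeline("question-answering", model="shishir-ml/my_awesome_qa_model", framework="tf")

result = qa(
    question="What framework was used for training?",
    context="The model was fine-tuned with TensorFlow using a Keras callback.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```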
nomsgadded/textual_inversion_shark
nomsgadded
2023-07-07T10:01:05Z
36
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T08:40:14Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - nomsgadded/textual_inversion_shark These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
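A minimal sketch for generating images with these weights; the repository is tagged as a full `StableDiffusionPipeline`, and the placeholder token `<shark>` is an assumption (use whatever token was configured during textual-inversion training):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the repository directly as a text-to-image pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "nomsgadded/textual_inversion_shark", torch_dtype=torch.float16
).to("cuda")

# "<shark>" is a hypothetical placeholder token for the learned concept.
image = pipe("a photo of <shark> swimming in a coral reef").images[0]
image.save("shark.png")
```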
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went
jordyvl
2023-07-07T09:52:44Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T07:43:27Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0783 - Accuracy: 0.71 - Exit 0 Accuracy: 0.115 - Exit 1 Accuracy: 0.1575 - Exit 2 Accuracy: 0.185 - Exit 3 Accuracy: 0.0875 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7602 | 0.1125 | 0.0925 | 0.0675 | 0.0875 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7309 | 0.115 | 0.1175 | 0.0675 | 0.1075 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6967 | 0.1325 | 0.095 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6631 | 0.17 | 0.085 | 0.0575 | 0.1275 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6242 | 0.205 | 0.085 | 0.0575 | 0.1225 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5736 | 0.2175 | 0.0875 | 0.0825 | 0.12 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5410 | 0.215 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5229 | 0.2325 | 0.1 | 0.0925 | 0.13 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.4841 | 0.2525 | 0.1 | 0.1 | 0.1325 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4382 | 0.29 | 0.1 | 0.1025 | 0.1325 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.3823 | 0.3 | 0.1 | 0.1275 | 0.1325 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3389 | 0.3275 | 0.1 | 0.1175 | 0.1225 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.3002 | 0.35 | 0.0975 | 0.12 | 0.1225 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.2421 | 0.36 | 0.0975 | 0.125 | 0.1275 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.2026 | 0.3575 | 0.1025 | 0.13 | 0.125 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.1712 | 0.375 | 0.105 | 0.1375 | 0.125 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.0999 | 0.4075 | 0.1 | 0.145 | 0.125 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0414 | 0.4225 | 0.1025 | 0.145 | 0.1275 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 1.9981 | 0.4375 | 0.0975 | 0.1425 | 0.13 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.9369 | 0.4575 | 0.1025 | 0.14 | 0.1425 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.8903 | 0.4975 | 0.1025 | 0.14 | 0.145 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.8242 | 0.525 | 0.1025 | 0.1425 | 0.15 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.7520 | 0.5325 | 0.11 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No 
log | 23.72 | 48 | 1.7203 | 0.5525 | 0.1125 | 0.1475 | 0.1525 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.6753 | 0.565 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.6245 | 0.575 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.5832 | 0.61 | 0.11 | 0.15 | 0.1525 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.5404 | 0.61 | 0.11 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.4958 | 0.6125 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.4613 | 0.6325 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 30.72 | 62 | 1.4479 | 0.63 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 | | No log | 31.72 | 64 | 1.4101 | 0.64 | 0.1125 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 32.72 | 66 | 1.3699 | 0.655 | 0.1125 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 33.72 | 68 | 1.3427 | 0.6725 | 0.115 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 34.72 | 70 | 1.3161 | 0.6825 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 35.72 | 72 | 1.2896 | 0.7025 | 0.115 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 36.72 | 74 | 1.2720 | 0.705 | 0.11 | 0.1525 | 0.185 | 0.0625 | 0.0625 | | No log | 37.72 | 76 | 1.2471 | 0.71 | 0.11 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 38.72 | 78 | 1.2307 | 0.71 | 0.11 | 0.155 | 0.1775 | 0.0625 | 0.0625 | | No log | 39.72 | 80 | 1.2174 | 0.7175 | 0.1125 | 0.155 | 0.1825 | 0.0625 | 0.0625 | | No log | 40.72 | 82 | 1.1991 | 0.705 | 0.1125 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 41.72 | 84 | 1.1867 | 0.71 | 0.1175 | 0.1525 | 0.18 | 0.065 | 0.0625 | | No log | 42.72 | 86 | 1.1764 | 0.7025 | 0.115 | 0.1525 | 0.18 | 0.0675 | 0.0625 | | No log | 43.72 | 88 | 1.1601 | 0.715 | 0.115 | 0.1525 | 0.1825 | 0.0725 | 0.0625 | | No log | 44.72 | 90 | 1.1410 | 0.7175 | 0.115 | 0.1525 | 0.18 | 0.075 | 0.0625 | | No log | 45.72 | 92 | 1.1408 | 0.71 | 0.115 | 0.155 | 0.1825 | 0.075 | 0.0625 | | No log | 46.72 | 94 | 1.1443 | 0.7075 | 0.115 | 0.155 | 0.1825 | 0.0775 | 0.0625 | | No log | 47.72 | 96 | 1.1364 | 0.705 | 0.115 | 0.155 | 0.1775 | 0.0825 | 0.0625 | | No log | 48.72 | 98 | 1.1251 | 0.71 | 0.115 | 0.155 | 0.175 | 0.085 | 0.0625 | | No log | 49.72 | 100 | 1.1113 | 0.7175 | 0.115 | 0.155 | 0.1775 | 0.085 | 0.0625 | | No log | 50.72 | 102 | 1.1040 | 0.7175 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 51.72 | 104 | 1.0972 | 0.715 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 52.72 | 106 | 1.0938 | 0.7175 | 0.115 | 0.1575 | 0.1825 | 0.0875 | 0.0625 | | No log | 53.72 | 108 | 1.0931 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 54.72 | 110 | 1.0887 | 0.7075 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 55.72 | 112 | 1.0865 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 56.72 | 114 | 1.0828 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 57.72 | 116 | 1.0801 | 0.7075 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 58.72 | 118 | 1.0786 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 59.72 | 120 | 1.0783 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
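For readers who want to try the checkpoint, a minimal document-classification sketch; this assumes the checkpoint loads as a plain `LayoutLMv3ForSequenceClassification` (the early-exit heads in the table above may require the author's custom training code), and `apply_ocr=True` needs Tesseract installed:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForSequenceClassification

# Use the base processor with built-in OCR to turn a page image into inputs.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForSequenceClassification.from_pretrained(
    "jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went"
)

image = Image.open("scanned_page.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
print(model.config.id2label[logits.argmax(-1).item()])
```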
hafeezmhk6/mt5-base-ver6.15
hafeezmhk6
2023-07-07T09:50:03Z
48
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T09:48:16Z
--- metrics: - bleu - character - chrf pipeline_tag: text-classification ---
metafresh89/qr-code
metafresh89
2023-07-07T09:48:46Z
16
4
diffusers
[ "diffusers", "safetensors", "ctrl", "stable-diffusion", "controlnet", "image-to-image", "en", "license:openrail++", "endpoints_compatible", "region:us" ]
image-to-image
2023-07-07T09:24:46Z
--- tags: - stable-diffusion - controlnet - image-to-image license: openrail++ language: - en library_name: diffusers pipeline_tag: image-to-image duplicated_from: DionTimmer/controlnet_qrcode-control_v1p_sd15 --- # QR Code Conditioned ControlNet Models for Stable Diffusion 1.5 ![1](https://www.dropbox.com/s/fxyuqpot2z2ftty/5.png?raw=1) ## Model Description This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v1.5. The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, this 1.5 version model was also trained on the same dataset for those who are using the older version. ## How to use with Diffusers ```bash pip -q install diffusers transformers accelerate torch xformers ``` ```python import torch from PIL import Image from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler from diffusers.utils import load_image controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v1p_sd15", torch_dtype=torch.float16) pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16 ) pipe.enable_xformers_memory_efficient_attention() pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() def resize_for_condition_image(input_image: Image, resolution: int): input_image = input_image.convert("RGB") W, H = input_image.size k = float(resolution) / min(H, W) H *= k W *= k H = int(round(H / 64.0)) * 64 W = int(round(W / 64.0)) * 64 img = input_image.resize((W, H), resample=Image.LANCZOS) return img # play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image # qr code image source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png") # initial image, anything init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg") condition_image = resize_for_condition_image(source_image, 768) init_image = resize_for_condition_image(init_image, 768) generator = torch.manual_seed(123121231) image = pipe(prompt="a billboard in NYC with a qrcode", negative_prompt="ugly, disfigured, low quality, blurry, nsfw", image=init_image, control_image=condition_image, width=768, height=768, guidance_scale=20, controlnet_conditioning_scale=1.5, generator=generator, strength=0.9, num_inference_steps=150, ) image.images[0] ``` ## Performance and Limitations These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).** To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork. ## Installation The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application. For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the controlnet webui extension, which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your controlnet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base stable diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail. Make sure to look up additional info on how to use controlnet if you get stuck; once you have the webui up and running, it's really easy to install the controlnet extension as well.
Shridipta-06/a2c-AntBulletEnv-v0
Shridipta-06
2023-07-07T09:47:46Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-25T20:42:39Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1587.94 +/- 648.76 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) The snippet below is a minimal sketch; the checkpoint filename follows huggingface_sb3's usual naming convention and is an assumption. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import A2C checkpoint = load_from_hub(repo_id="Shridipta-06/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip") model = A2C.load(checkpoint) ```
digiplay/SoapMix2.5D_v1
digiplay
2023-07-07T09:46:19Z
324
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-20T08:41:17Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/29862?modelVersionId=35949 Sample images I made: ![3e6c93ee-a0c2-47fd-8324-799dc675f7b0.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/ms6ieqyPX8eeAhJr26CWh.jpeg) ![3c96f464-e830-42d1-ad9b-66423c9ce8e9.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/Tu-zAWf-Qg0leoA_2bSvH.jpeg) Original author's DEMO image: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/88a992a8-37b2-487b-f0e7-8e5c97b9cf00/width=1024/719830179c49570eb4c0c8eb7a4f118728b9964217fd971aa5474dd854d04af3.jpeg)
Arup-Dutta-Bappy/bert-large-uncased-finetuned-squad
Arup-Dutta-Bappy
2023-07-07T09:42:01Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-04T10:31:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-large-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-squad This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
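A minimal extractive-QA sketch for this checkpoint (the question and context are placeholders):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned BERT-large QA checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Arup-Dutta-Bappy/bert-large-uncased-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("Arup-Dutta-Bappy/bert-large-uncased-finetuned-squad")

question = "What dataset was the model fine-tuned on?"
context = "This model is a fine-tuned version of bert-large-uncased on the SQuAD dataset."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs.input_ids[0][start:end]))
```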
KPrashanth/Pixelcopter
KPrashanth
2023-07-07T09:33:18Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T09:33:14Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 43.70 +/- 32.93 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
KINGeorge2000/sentiment_roberta_yu
KINGeorge2000
2023-07-07T09:31:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-23T05:49:16Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: sentiment_roberta_yu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_roberta_yu This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2580 - Accuracy: 0.6668 - F1: 0.6668 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
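A minimal inference sketch; the card does not document the label mapping, so the returned `LABEL_*` names are whatever the fine-tuning run configured:

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier; label semantics are undocumented
# in the card, so interpret the output labels with care.
classifier = pipeline("text-classification", model="KINGeorge2000/sentiment_roberta_yu")
print(classifier("I really enjoyed this movie!"))
```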
RHP27042002/AI_NFT_generator
RHP27042002
2023-07-07T09:23:52Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "text-generation", "en", "dataset:OpenAssistant/oasst1", "license:mit", "region:us" ]
text-generation
2023-07-07T09:09:28Z
--- license: mit datasets: - OpenAssistant/oasst1 language: - en metrics: - character pipeline_tag: text-generation tags: - code library_name: adapter-transformers ---

```solidity
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/utils/Counters.sol";
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";

contract NFT is ERC721URIStorage {
    using Counters for Counters.Counter;

    Counters.Counter private _tokenIds;
    address public owner;
    uint256 public cost;

    constructor(string memory _name, string memory _symbol, uint256 _cost) ERC721(_name, _symbol) {
        owner = msg.sender;
        cost = _cost;
    }

    // Mint a new token; the caller must pay at least `cost`.
    function mint(string memory tokenURI) public payable {
        require(msg.value >= cost);
        _tokenIds.increment();
        uint256 newItemId = _tokenIds.current();
        _mint(msg.sender, newItemId);
        _setTokenURI(newItemId, tokenURI);
    }

    function totalSupply() public view returns (uint256) {
        return _tokenIds.current();
    }

    // Send the full contract balance to the owner.
    function withdraw() public {
        require(msg.sender == owner);
        (bool success, ) = owner.call{value: address(this).balance}("");
        require(success);
    }
}
```
Uminosachi/realisticVisionV30_v30VAE-inpainting
Uminosachi
2023-07-07T09:15:20Z
35
2
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T23:54:35Z
--- license: creativeml-openrail-m --- This is an inpainting model, converted from [realisticVisionV30_v30VAE-inpainting](https://civitai.com/models/4201?modelVersionId=105723).
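A minimal usage sketch, assuming the converted checkpoint loads as a standard diffusers inpainting pipeline; `image.png` and `mask.png` are placeholder files, with white mask pixels marking the region to repaint:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Load the converted inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/realisticVisionV30_v30VAE-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("image.png")   # placeholder source image
mask = load_image("mask.png")     # placeholder mask (white = repaint)

result = pipe(prompt="a photorealistic portrait", image=image, mask_image=mask).images[0]
result.save("inpainted.png")
```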
Uminosachi/Deliberate-inpainting
Uminosachi
2023-07-07T09:13:29Z
30
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-03T12:23:02Z
--- license: creativeml-openrail-m --- This is an inpainting model, converted from [Deliberate-inpainting](https://huggingface.co/XpucT/Deliberate).
Abzu/mpt-30b-chat-q8
Abzu
2023-07-07T09:09:05Z
19
1
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2010.04245", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2023-07-07T09:02:25Z
--- license: cc-by-nc-sa-4.0 datasets: - camel-ai/code - ehartford/wizard_vicuna_70k_unfiltered - anon8231489123/ShareGPT_Vicuna_unfiltered - teknium1/GPTeacher/roleplay-instruct-v2-final - teknium1/GPTeacher/codegen-isntruct - timdettmers/openassistant-guanaco - camel-ai/math - project-baize/baize-chatbot/medical_chat_data - project-baize/baize-chatbot/quora_chat_data - project-baize/baize-chatbot/stackoverflow_chat_data - camel-ai/biology - camel-ai/chemistry - camel-ai/ai_society - jondurbin/airoboros-gpt4-1.2 - LongConversations - camel-ai/physics tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-30B-Chat MPT-30B-Chat is a chatbot-like model for dialogue generation. It was built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai), [GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets. * License: _CC-By-NC-SA-4.0_ (non-commercial use only) * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat) This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date June 22, 2023 ## Model License _CC-By-NC-SA-4.0_ (non-commercial use only) ## Documentation * [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ### Example Dialogue **ksreenivasan**: > I want you to come up with a tweet based on this summary of the article: > > We are excited to extend the MosaicML Foundation Series with MPT-30B, a new commercially licensed open-source model that is significantly more powerful, and outperforms the > original GPT-3. In addition, we are releasing two fine-tuned models, MPT-30B-Instruct and MPT-30B-Chat, that are built on top of MPT-30B and excel at short-form instruction > following and multi-turn conversations, respectively. > All MPT-30B models come with special features that differentiate them from other LLMs, including an 8k token context window (which can be further extended via finetuning; > see MPT-7B-StoryWriter), support for context-length extrapolation via ALiBi, and efficient inference + training performance via FlashAttention. It also has strong coding > abilities thanks to its pretraining data mixture. **MPT-30B-Chat**: > "Revolutionize your AI capabilities with MosaicML's latest addition, MPT-30B! Outperforming GPT-3, this open-source model is significantly more powerful and comes with > finely-tuned models for exceptional instruction following and multi-turn conversations. Unleash the potential of an 8k token context window, ALiBi support, and FlashAttention. > Experience the future of AI with MosaicML! #AI #ML #innovation" **ksreenivasan**: > Is there anything you want to say to MosaicML about this exciting news? **MPT-30B-Chat**: > As your AI marketing assistant, I would say that this is truly exciting news for the AI community! 
MosaicML's latest addition, MPT-30B, is poised to revolutionize the industry with its powerful open-source capabilities and finely-tuned models for instruction following and multi-turn conversations. > The special features, including the 8k token context window, ALiBi support, and FlashAttention, are sure to set MosaicML's MPT-30B apart from other LLMs. > Congratulations to the MosaicML team on this incredible achievement! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-30b-chat', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-30b-chat' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-30b-chat' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-30B tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). 
```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 29.95B | |n_layers | 48 | | n_heads | 64 | | d_model | 7168 | | vocab size | 50432 | | sequence length | 8192 | ## Data Mix The model was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | |-------------|----------------------------|------------| | Airoboros/GPT4-1.2 | 26.4M | 1.71% | | Baize | 55.0M | 3.57% | | Camel | 301M | 19.54% | | GPTeacher | 7.56M | 0.49% | | Guanaco | 15.6M | 1.02% | | LongConversations | 18.4M | 1.19% | | ShareGPT | 821M | 53.24% | | WizardLM | 297M | 19.23% | "LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date. ### Training Configuration This model was trained on 64 H100s for about 7.6 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-30B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B-Chat was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens and the MosaicML NLP team ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-30B: Raising the bar for open-source foundation models}, year = {2023}, url = {www.mosaicml.com/blog/mpt-30b}, note = {Accessed: 2023-06-22}, urldate = {2023-06-22} } ```
digiplay/SoapMix2.5D_v2
digiplay
2023-07-07T09:04:51Z
285
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-20T08:41:36Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/29862?modelVersionId=39125 Original author's DEMO image: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d0e364b4-3a53-4c8f-d248-3335dc23bd00/width=1024/00015-3123836998.jpeg)
Uminosachi/revAnimated_v121Inp-inpainting
Uminosachi
2023-07-07T08:59:43Z
383
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-02T02:48:40Z
--- license: creativeml-openrail-m --- This is an inpainting model, converted from [ReV Animated v1.2.1-inp](https://civitai.com/models/7371?modelVersionId=43978).
aroot/eng-mya-simcse_longestplus_usrl
aroot
2023-07-07T08:55:32Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T08:34:40Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_longestplus_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_longestplus_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8757 - Bleu: 4.1877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
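A minimal English-to-Burmese translation sketch; the mBART-50 language codes (`en_XX` source, `my_MM` target) follow the base model's convention and are not stated in the card itself:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned mBART-50 checkpoint with English as the source language.
tokenizer = AutoTokenizer.from_pretrained("aroot/eng-mya-simcse_longestplus_usrl", src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained("aroot/eng-mya-simcse_longestplus_usrl")

inputs = tokenizer("How are you today?", return_tensors="pt")

# Force the decoder to start with the Burmese language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```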
TheBloke/falcon-40b-sft-top1-560-GGML
TheBloke
2023-07-07T08:45:54Z
4
6
transformers
[ "transformers", "falcon", "sft", "en", "de", "es", "fr", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "region:us" ]
null
2023-07-04T23:29:19Z
--- license: apache-2.0 language: - en - de - es - fr tags: - sft inference: false datasets: - OpenAssistant/oasst1 --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Open Assistant's Falcon 40B SFT OASST-TOP1 GGML These files are GGCC format model files for [Open Assistant's Falcon 40B SFT OASST-TOP1](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560). These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp. GGCC is a new format created in a new fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp). Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon. ## Repositories available * [2, 3, 4, 5, 6, 8-bit GGCC models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-40b-sft-top1-560-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560) ## Prompt template ``` <|prompter|>prompt<|endoftext|><|assistant|> ``` <!-- compatibility_ggml start --> ## Compatibility To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps: ``` git clone https://github.com/cmp-nct/ggllm.cpp cd ggllm.cpp rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release ``` Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.' Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example: ``` bin/falcon_main -t 8 -ngl 100 -b 1 -m falcon-40b-top1-560.ggccv1.q4_K.bin -p "<|prompter|>write a story about llamas<|endoftext|><|assistant|>" ``` You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used. Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have. `-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model onto your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter. Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | falcon-40b-top1-560.ggccv1.q2_K.bin | q2_K | 2 | 13.74 GB | 16.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. 
| | falcon-40b-top1-560.ggccv1.q3_K.bin | q3_K_S | 3 | 17.98 GB | 20.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | falcon-40b-top1-560.ggccv1.q4_K.bin | q4_K_S | 4 | 23.54 GB | 26.04 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | falcon-40b-top1-560.ggccv1.q5_K.bin | q5_K_S | 5 | 28.77 GB | 31.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | falcon-40b-top1-560.ggccv1.q6_K.bin | q6_K | 6 | 34.33 GB | 36.83 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | falcon-40b-top1-560.ggccv1.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Spiking Neurons AB, Kevin Schuppel, Cory Kujawski, senxiiz, Luke Pendergrass, John Villwock, Ghost , Alex , Sean Connelly, Space Cruiser, Eugene Pentland, Pyrater, Matthew Berman, Dave, Derek Yates, Jonathan Leane, Viktor Bowallius, Michael Levine, Joseph William Delisle, Fred von Graf, Asp the Wyvern, Nikolai Manek, Pierre Kircher, webtim, K, RoA, Karl Bernard, Artur Olbinski, Rainer Wilmers, Ai Maven, Nathan LeClaire, Ajan Kanaga, Stephen Murray, Edmond Seymore, zynix , Imad Khwaja, John Detwiler, Randy H, subjectnull, Alps Aficionado, Greatston Gnanesh, Trenton Dambrowitz, Junyu Yang, Raven Klaugh, biorpg, Deep Realms, vamX, Talal Aujan, Johann-Peter Hartmann, WelcomeToTheClub, Chris McCloskey, Luke, chris gileta, terasurfer , Iucharbius , Preetika Verma, Willem Michiel, Fen Risland, SuperWojo, Khalefa Al-Ahmad, Daniel P. Andersen, Gabriel Puliatti, Illia Dulskyi, Willian Hasse, Oscar Rangel, ya boyyy, Mano Prime, Lone Striker, Kalila Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OpenAssistant's Falcon 40B SFT OASST-TOP1 # Open-Assistant Falcon 40B SFT OASST-TOP1 Model This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM. It was trained with top-1 (high-quality) demonstrations of the OASST data set (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs with LIMA style dropout (p=0.3) and a context-length of 2048 tokens. 
## Model Details

- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-03_OpenAssistant_falcon-40b-sft-top1-560_sampling_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/3lr77x4h) (Checkpoint: 560 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)

## Prompting

Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.

Input prompt example:

```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```

The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply.

## Configuration Details

Model:

```
falcon-40b:
  dtype: bf16
  log_dir: "falcon_log_40b"
  learning_rate: 5e-6
  model_name: "tiiuae/falcon-40b"
  deepspeed_config: configs/zero3_config_falcon.json
  output_dir: falcon
  weight_decay: 0.0
  max_length: 2048
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 1
  per_device_train_batch_size: 18
  per_device_eval_batch_size: 10
  eval_steps: 80
  save_steps: 80
  num_train_epochs: 8
  save_total_limit: 4
  use_flash_attention: false
  residual_dropout: 0.3
  residual_dropout_lima: true
  sort_by_length: false
  save_strategy: steps
```

Dataset:

```
oasst-top1:
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
        input_file_path: 2023-05-06_OASST_labels.jsonl.gz
        val_split: 0.05
        top_k: 1
```
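A minimal generation sketch (not part of the original card), assuming the unquantised HF checkpoint, a recent `transformers`, `accelerate` for `device_map="auto"`, and enough GPU memory for the 40B weights:

```python
# Hedged sketch: builds the prompt template described above and generates a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "OpenAssistant/falcon-40b-sft-top1-560"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```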
Abzu/mpt-30b-q8
Abzu
2023-07-07T08:41:54Z
21
3
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:allenai/c4", "dataset:mc4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack-dedup", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2023-07-07T08:35:33Z
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---

# MPT-30B

MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com).

MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.

MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU: either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.

This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML's NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.

### How is this model different?

MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)).
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry).

### Models finetuned off MPT-30B:

The following models are finetuned on MPT-30B:

* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following. Built by finetuning MPT-30B on several carefully curated datasets.
  * License: _CC-BY-SA-3.0_

* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai), [GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
  * License: _CC-By-NC-SA-4.0_
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)

## Model Date

June 22, 2023

## Model License

Apache-2.0

## Documentation

* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-30b',
  trust_remote_code=True
)
```

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:

```python
import torch
import transformers

name = 'mosaicml/mpt-30b'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
  trust_remote_code=True
)
```

The model was trained initially with a sequence length of 2048 with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-30b'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)
```

This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```

The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline

# uses the `model` and `tokenizer` loaded in the snippets above
with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |

## Training Data

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.

### Data Mix

The model was trained for 1T tokens on the following data mix:

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B | 0.68 |

Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).

### Training Configuration

The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.

The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-30B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent.

MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-30B: Raising the bar for open-source foundation models},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-30b},
    note      = {Accessed: 2023-06-22},
    urldate   = {2023-06-22}
}
```
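The card above mentions deployment on a single A100-40GB in 8-bit precision. A minimal sketch of that loading path (an assumption: it uses the base repo id and requires `bitsandbytes` and `accelerate`; the pre-quantized safetensors in this `Abzu/mpt-30b-q8` repository may ship with their own loading instructions):

```python
import transformers

# 8-bit loading via bitsandbytes, shown with the base MosaicML repo id.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b',
    load_in_8bit=True,       # quantize weights to int8 at load time
    device_map='auto',       # let accelerate place layers on the GPU
    trust_remote_code=True,  # custom MPT architecture
)
```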
KJan05/Pixelcopter-PLE-v0
KJan05
2023-07-07T08:31:22Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T08:30:39Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 11.70 +/- 11.12
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
soduhh/mt5-small-finetuned-amazon-en-fr
soduhh
2023-07-07T08:30:20Z
5
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T07:02:53Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: soduhh/mt5-small-finetuned-amazon-en-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # soduhh/mt5-small-finetuned-amazon-en-fr This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.9132 - Validation Loss: 3.2661 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 11184, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.1676 | 4.1323 | 0 | | 5.6798 | 3.6659 | 1 | | 4.9731 | 3.5322 | 2 | | 4.5665 | 3.4177 | 3 | | 4.2967 | 3.3513 | 4 | | 4.1126 | 3.3000 | 5 | | 3.9828 | 3.2671 | 6 | | 3.9132 | 3.2661 | 7 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
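A minimal inference sketch (not part of the auto-generated card; the training set is listed as unknown, so the summarization use below is inferred from the repo name):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

name = "soduhh/mt5-small-finetuned-amazon-en-fr"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)

text = "I loved this book. The characters are vivid and the plot never drags."
inputs = tokenizer(text, return_tensors="tf")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```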
tyavika/LR1E5_BS32_Distilbert-QA-Pytorch-FULL
tyavika
2023-07-07T08:21:29Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T05:05:25Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: LR1E5_BS32_Distilbert-QA-Pytorch-FULL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LR1E5_BS32_Distilbert-QA-Pytorch-FULL This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6135 | 1.0 | 1645 | 1.3826 | | 1.2998 | 2.0 | 3290 | 1.2342 | | 1.11 | 3.0 | 4935 | 1.1911 | | 0.9527 | 4.0 | 6580 | 1.1765 | | 0.8626 | 5.0 | 8225 | 1.1848 | | 0.7854 | 6.0 | 9870 | 1.2043 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
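A minimal usage sketch (not part of the auto-generated card), assuming the checkpoint exposes the standard extractive question-answering interface:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR1E5_BS32_Distilbert-QA-Pytorch-FULL")
result = qa(
    question="What was the learning rate?",
    context="The model was trained for 10 epochs with a learning rate of 1e-05.",
)
print(result["answer"], result["score"])
```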
poopostresearch/dark_elf_test
poopostresearch
2023-07-07T08:19:20Z
0
0
null
[ "region:us" ]
null
2023-07-07T08:14:45Z
RVC model trained on Dunmer voices from Morrowind. 300 epochs
vibhav18/InsuranceMicroLLM
vibhav18
2023-07-07T08:14:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-07T08:10:58Z
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0.dev0
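A sketch of how the quantization config above maps onto `transformers`/`peft` (the card does not name the base model, so the placeholder id below is hypothetical):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstructs the 4-bit nf4 config listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # hypothetical: the card does not state the base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "vibhav18/InsuranceMicroLLM")
```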
DXD-FYP/Covid-19
DXD-FYP
2023-07-07T08:11:35Z
0
0
fastai
[ "fastai", "image-classification", "region:us" ]
image-classification
2023-07-07T07:38:02Z
--- pipeline_tag: image-classification library_name: fastai ---
Abzu/mpt-7b-instruct-q8
Abzu
2023-07-07T08:10:56Z
148
2
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "dataset:mosaicml/dolly_hhrlhf", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2010.04245", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2023-07-07T08:07:38Z
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---

# MPT-7B-Instruct

MPT-7B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.

* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

May 5, 2023

## Model License

CC-By-SA-3.0

## Documentation

* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

### Example Question/Instruction

**Longboi24**:

> What is a quoll?

**MPT-7B-Instruct**:

> A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America

## How to Use

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  trust_remote_code=True
)
```

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0'  # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
  trust_remote_code=True
)
```

Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference.
For example:

```python
import transformers

name = 'mosaicml/mpt-7b-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096  # (input + output) tokens can now be up to 4096

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)
```

This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```

The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

# uses the `model` and `tokenizer` loaded in the snippets above
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

### Formatting

This model was trained on data formatted in the dolly-15k format:

```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
    intro=INTRO_BLURB,
    instruction_key=INSTRUCTION_KEY,
    instruction="{instruction}",
    response_key=RESPONSE_KEY,
)

example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```

In the above example, `fmt_ex` is ready to be tokenized and sent through the model.

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |

## PreTraining Data

For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

### Training Configuration

This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements

This model was finetuned by Sam Havens and the MosaicML NLP team.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```
sdadas/polish-distilroberta
sdadas
2023-07-07T08:05:48Z
129
1
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "pl", "license:lgpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: lgpl-3.0 language: - pl ---
KJan05/ppo-SnowballTarget
KJan05
2023-07-07T07:59:35Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-06T10:37:37Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: KJan05/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
ce-dric/Reinforce-cartpole-01
ce-dric
2023-07-07T07:49:58Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T07:49:50Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-01
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
SATOU0ZHU/anythingv5-Prt-RE
SATOU0ZHU
2023-07-07T07:46:43Z
31
1
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T04:57:51Z
Diffusers version of Anything V5.
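A minimal usage sketch (an assumption: the repo's tags indicate a standard `StableDiffusionPipeline`):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SATOU0ZHU/anythingv5-Prt-RE", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, masterpiece, best quality", num_inference_steps=25).images[0]
image.save("sample.png")
```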
kmariunas/2023-07-05-cased
kmariunas
2023-07-07T07:44:43Z
103
0
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-07T06:47:56Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 108 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 40,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 429.20000000000005,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
nolanaatama/shrkmfbkhllv1stgnrvcv2300pchsyy5
nolanaatama
2023-07-07T07:43:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T07:40:09Z
--- license: creativeml-openrail-m ---
AntonyG/fine-tune-wav2vec2-large-xls-r-1b-sw
AntonyG
2023-07-07T07:35:56Z
26
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-03-20T06:25:23Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: fine-tune-wav2vec2-large-xls-r-1b-sw
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_11_0
      type: common_voice_11_0
      config: sw
      split: test[:1%]
      args: sw
    metrics:
    - name: Wer
      type: wer
      value: 0.5834348355663824
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# fine-tune-wav2vec2-large-xls-r-300m-sw

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 Swahili dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2834
- Wer: 0.5834

## Model description

This model is fine-tuned for general Swahili speech recognition tasks. You can watch our hour-long [webinar](https://drive.google.com/file/d/1OkLx3d9xivdyxH8yYsZtwObhEX5Ptn5y/view?usp=drive_link) and see the [slides](https://docs.google.com/presentation/d/1sExJLwZLMNMKGnpuxy-ttF5KqDXJyKK2jNNTUabo5_Q/edit?usp=sharing) on this work.

## Intended uses & limitations

The model is intended to transcribe general Swahili speech. With further development, we'll fine-tune the model for domain-specific Swahili conversations (our focus is hospital tasks).

## Training and evaluation data

To appreciate the transformation we did on the data, you can read our [blog on data preparation](https://medium.com/@gitau_am/from-raw-data-to-accurate-speech-recognition-asr-my-journey-of-data-preparation-df3a1b0dee3a).

## Training procedure

We also [documented](https://medium.com/@gitau_am/exploring-asr-model-development-fine-tuning-xls-r-wav2vec2-model-with-swahili-data-b95134d116b8) some lessons from the fine-tuning exercise.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 9
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.72 | 200 | 3.0092 | 1.0 |
| 4.1305 | 3.43 | 400 | 2.9159 | 1.0 |
| 4.1305 | 5.15 | 600 | 1.4301 | 0.7040 |
| 0.9217 | 6.87 | 800 | 1.3143 | 0.6529 |
| 0.9217 | 8.58 | 1000 | 1.2834 | 0.5834 |

### Framework versions

- Transformers 4.27.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
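A minimal transcription sketch (not part of the card), assuming a standard CTC checkpoint and a 16 kHz mono recording; the file name is hypothetical:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AntonyG/fine-tune-wav2vec2-large-xls-r-1b-sw",
)
print(asr("swahili_sample.wav")["text"])  # hypothetical local audio file
```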
aroot/eng-fra-simcse_longest_usrl
aroot
2023-07-07T07:35:54Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:16:50Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longest_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longest_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1250 - Bleu: 32.6481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
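A minimal inference sketch (not in the auto-generated card) that applies to this and the sibling `aroot` mBART-50 finetunes below; mBART-50 checkpoints need explicit language codes, and the `en_XX` to `fr_XX` direction is inferred from the repo name:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "aroot/eng-fra-simcse_longest_usrl"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```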
aroot/eng-fra-simcse_longestplus_ssrl
aroot
2023-07-07T07:35:27Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:16:34Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longestplus_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longestplus_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1355 - Bleu: 32.4402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-fra-simcse_longest_ssrl
aroot
2023-07-07T07:32:30Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:13:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longest_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longest_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1282 - Bleu: 32.2561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
KJan05/Pyramids-Training-v1
KJan05
2023-07-07T07:32:21Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T07:32:15Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: KJan05/Pyramids-Training-v1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Propofol/0707_2_finetuned-finetuned-localization
Propofol
2023-07-07T07:23:46Z
103
0
transformers
[ "transformers", "pytorch", "esm", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T05:36:20Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: 0707_2_finetuned-finetuned-localization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0707_2_finetuned-finetuned-localization This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.1445 - Accuracy: 0.4167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9296 | 1.0 | 2500 | 1.2921 | 0.4267 | | 0.6704 | 2.0 | 5000 | 1.6807 | 0.432 | | 0.3695 | 3.0 | 7500 | 2.3376 | 0.4187 | | 0.1416 | 4.0 | 10000 | 3.6342 | 0.424 | | 0.031 | 5.0 | 12500 | 4.1445 | 0.4167 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Bugsys0302/POVBGV2
Bugsys0302
2023-07-07T07:03:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T06:59:06Z
--- license: creativeml-openrail-m ---
AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5
AustinCarthy
2023-07-07T06:59:41Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-07T03:33:31Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9976
- F1: 0.9748
- Precision: 0.9962
- Recall: 0.9542
- Roc Auc Score: 0.9770
- Tpr At Fpr 0.01: 0.9358

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.008 | 1.0 | 35625 | 0.0214 | 0.9961 | 0.9572 | 0.9983 | 0.9194 | 0.9597 | 0.9208 |
| 0.0059 | 2.0 | 71250 | 0.0239 | 0.9959 | 0.9557 | 0.9963 | 0.9182 | 0.9590 | 0.8816 |
| 0.0041 | 3.0 | 106875 | 0.0247 | 0.9968 | 0.9651 | 0.9955 | 0.9364 | 0.9681 | 0.9088 |
| 0.0001 | 4.0 | 142500 | 0.0260 | 0.9971 | 0.9687 | 0.9962 | 0.9426 | 0.9712 | 0.9298 |
| 0.0011 | 5.0 | 178125 | 0.0229 | 0.9976 | 0.9748 | 0.9962 | 0.9542 | 0.9770 | 0.9358 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
squeeze-ai-lab/sq-opt-6.7b-w4-s50
squeeze-ai-lab
2023-07-07T06:58:29Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-07T05:50:45Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.

**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices.

With this approach, we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).

## Model description

4-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).

* **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%

## Links

* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)

---
license: other
---
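A conceptual illustration (not the SqueezeLLM implementation) of the Dense-and-Sparse split described above: the largest-magnitude outliers are kept sparse in full precision, and the dense remainder is quantized. SqueezeLLM itself uses sensitivity-based non-uniform quantization rather than the uniform grid shown here.

```python
import numpy as np

W = np.random.randn(512, 512).astype(np.float32)  # stand-in weight matrix

# Keep the largest-magnitude 0.5% of entries as the sparse outlier part.
cutoff = np.quantile(np.abs(W), 1 - 0.005)
sparse_mask = np.abs(W) > cutoff
W_sparse = np.where(sparse_mask, W, 0.0)

# Uniformly quantize the dense remainder to 4 bits.
W_dense = np.where(sparse_mask, 0.0, W)
scale = np.abs(W_dense).max() / 7  # signed 4-bit range is [-8, 7]
W_dense_q = np.clip(np.round(W_dense / scale), -8, 7) * scale

W_approx = W_dense_q + W_sparse  # dense (quantized) + sparse (fp32)
print("mean abs error:", np.abs(W - W_approx).mean())
```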
aroot/eng-mya-simcse_longestplus_usrb
aroot
2023-07-07T06:58:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T06:37:16Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_longestplus_usrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_longestplus_usrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8846 - Bleu: 4.2095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-mya-simcse_longestplus_ssrb
aroot
2023-07-07T06:58:00Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T06:36:57Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_longestplus_ssrb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_longestplus_ssrb This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8875 - Bleu: 4.1475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
IIC/xlm-roberta-large-socialdisner
IIC
2023-07-07T06:43:47Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "biomedical", "clinical", "spanish", "xlm-roberta-large", "token-classification", "es", "dataset:IIC/socialdisner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-26T08:01:49Z
---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "IIC/socialdisner"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-socialdisner
  results:
  - task:
      type: token-classification
    dataset:
      name: socialdisner
      type: IIC/socialdisner
      split: test
    metrics:
    - name: f1
      type: f1
      value: 0.941
pipeline_tag: token-classification
---

# xlm-roberta-large-socialdisner

This model is a finetuned version of xlm-roberta-large for the socialdisner dataset used in a benchmark in the paper TODO. The model has an F1 of 0.941.

Please refer to the original publication for more information: TODO LINK

## Parameters used

| parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 3e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |

## BibTeX entry and citation info

```bibtex
TODO
```
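A minimal usage sketch (not part of the card), assuming a standard token-classification head for disease mentions in Spanish text:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="IIC/xlm-roberta-large-socialdisner",
    aggregation_strategy="simple",  # merge sub-word pieces into spans
)
print(ner("Mi madre fue diagnosticada con diabetes tipo 2."))
```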
IIC/XLM_R_Galen-socialdisner
IIC
2023-07-07T06:43:35Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "biomedical", "clinical", "spanish", "XLM_R_Galen", "token-classification", "es", "dataset:IIC/socialdisner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-26T08:06:59Z
---
language: es
tags:
- biomedical
- clinical
- spanish
- XLM_R_Galen
license: mit
datasets:
- "IIC/socialdisner"
metrics:
- f1
model-index:
- name: IIC/XLM_R_Galen-socialdisner
  results:
  - task:
      type: token-classification
    dataset:
      name: socialdisner
      type: IIC/socialdisner
      split: test
    metrics:
    - name: f1
      type: f1
      value: 0.919
pipeline_tag: token-classification
---

# XLM_R_Galen-socialdisner

This model is a finetuned version of XLM_R_Galen for the socialdisner dataset used in a benchmark in the paper TODO. The model has an F1 of 0.919.

Please refer to the original publication for more information: TODO LINK

## Parameters used

| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |

## BibTeX entry and citation info

```bibtex
TODO
```
squeeze-ai-lab/sq-opt-13b-w3-s50
squeeze-ai-lab
2023-07-07T06:42:22Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-07T05:51:11Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.

**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices.

With this approach, we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).

## Model description

3-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).

* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.5%

## Links

* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)

---
license: other
---
02shanky/test_model_graphics_classification
02shanky
2023-07-07T06:30:07Z
289
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T05:38:39Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: test_model_graphics_classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9842271293375394
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test_model_graphics_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1381
- Accuracy: 0.9842

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1521 | 0.98 | 44 | 0.1381 | 0.9842 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
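A minimal inference sketch (assumed usage, not part of the auto-generated card): the fine-tuned ViT checkpoint can be run through the standard `transformers` image-classification pipeline. The input file name is hypothetical.

```python
# Assumed usage sketch; the repo ID comes from the card.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="02shanky/test_model_graphics_classification",
)

# Accepts a local path, a PIL image, or an image URL
preds = classifier("example.png")  # hypothetical input file
print(preds)  # list of {label, score} dicts, highest score first
```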
aroot/eng-guj-simcse_longest_usrb
aroot
2023-07-07T06:21:26Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T05:59:25Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usrb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-guj-simcse_longest_usrb

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2361
- Bleu: 2.8995

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
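These auto-generated translation cards omit a usage snippet, so here is an assumed inference sketch following standard MBart-50 conventions. Assumptions: the fine-tune kept the base model's tokenizer and language codes, and `gu_IN` is the intended Gujarati target.

```python
# Assumed usage sketch based on standard MBart-50 inference; not part of the card.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "aroot/eng-guj-simcse_longest_usrb"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],  # Gujarati target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

The same pattern should apply to the other `aroot/eng-*` fine-tunes in this dump, swapping the repo ID and the target language code.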
YakovElm/Qt_15_BERT_More_Properties
YakovElm
2023-07-07T06:19:38Z
66
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T06:19:03Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_More_Properties
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt_15_BERT_More_Properties

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2332
- Train Accuracy: 0.9367
- Validation Loss: 0.1937
- Validation Accuracy: 0.9505
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2409 | 0.9367 | 0.2001 | 0.9505 | 0 |
| 0.2357 | 0.9367 | 0.1992 | 0.9505 | 1 |
| 0.2332 | 0.9367 | 0.1937 | 0.9505 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
squeeze-ai-lab/sq-opt-2.7b-w4-s50
squeeze-ai-lab
2023-07-07T06:14:28Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-07T05:50:23Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.

**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.

Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.

For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).

## Model description

4-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).

* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%

## Links

* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)

---
license: other
---
sinny/dqn-SpaceInvadersNoFrameskip-v4
sinny
2023-07-07T05:54:45Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T05:54:24Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 1135.50 +/- 198.46
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sinny -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sinny -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sinny
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 10000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
j-hartmann/MindMiner-Binary
j-hartmann
2023-07-07T05:44:00Z
109
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: "en"
tags:
- roberta
widget:
- text: "Alexa is part of our family. She is simply amazing!"
- text: "I use my smart assistant for many things. It's incredibly useful."
---

This RoBERTa-based model ("MindMiner") can classify the degree of mind perception in English-language text into 2 classes:

- high mind perception 👩
- low mind perception 🤖

The model was fine-tuned on 997 manually annotated open-ended survey responses. The hold-out accuracy is 75.5% (vs. a balanced 50% random-chance baseline).

Hartmann, J., Bergner, A., & Hildebrand, C. (2023). MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer-Smart Object Relationships. Journal of Consumer Psychology, Forthcoming.
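A minimal usage sketch (assumed, not from the original card; the example sentence is taken from the card's own widget):

```python
# Assumed usage via the standard transformers text-classification pipeline.
from transformers import pipeline

mindminer = pipeline("text-classification", model="j-hartmann/MindMiner-Binary")
print(mindminer("Alexa is part of our family. She is simply amazing!"))
# -> a {label, score} dict indicating high vs. low mind perception
```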
remshu-inc/mmark
remshu-inc
2023-07-07T05:43:28Z
3
0
tf-keras
[ "tf-keras", "license:mit", "region:us" ]
null
2023-07-05T11:06:24Z
---
license: mit
---

The model is intended for the task of determining the grade for a learner text in German.

The model is a fully connected neural network with 10 input neurons, 25 neurons in the first hidden layer, 11 neurons in the second hidden layer, 4 neurons in the third hidden layer, and 1 neuron in the output layer. The inputs are the following values, normalized by the number of tokens in the text:

* number of grammatical errors in the text;
* number of lexical errors in the text;
* number of syntactic errors in the text;
* number of spelling errors in the text;
* number of discourse errors in the text;
* number of omitted words in the text;
* number of extra words in the text;
* number of errors of severity level 1;
* number of errors of severity level 2;
* number of errors of severity level 3.

The model outputs a grade for the text on a 12-point scale, where 1 is the minimum grade and 12 the maximum. A sketch of the described architecture is shown below.

To work with the model, it is recommended to use the [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools) library.
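An illustrative Keras reconstruction of the 10-25-11-4-1 network described above (layer sizes come from the card; the activations, optimizer, and loss are assumptions, and this sketch does not load the shipped weights):

```python
# Illustrative reconstruction of the described fully connected network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu", input_shape=(10,)),  # 10 normalized error counts
    tf.keras.layers.Dense(11, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),  # grade on the 12-point scale
])
model.compile(optimizer="adam", loss="mse")  # assumed regression setup
model.summary()
```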
remshu-inc/mencoder
remshu-inc
2023-07-07T05:42:25Z
108
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-05T10:42:07Z
---
license: mit
---

The model is intended for the task of determining the severity of an error in a sentence of a learner text in German. The model was obtained by fine-tuning the [dbmdz/convbert-base-german-europeana-cased](https://huggingface.co/dbmdz/convbert-base-german-europeana-cased) model on data from the [PAKT](https://pact.ai.petrsu.ru/app) corpus.

The model takes two German sentences as input: the first sentence contains an error, the second contains the corrected version. The model outputs a closeness value for the two sentences. If the value is close to 0.98, the error is considered not to affect the meaning of the sentence (severity level 1); if the value is close to 0.93, the error is considered to impair understanding of the sentence (severity level 2); if the value is close to 0.87, the meaning of the sentence is considered unclear or distorted (severity level 3).

To work with the model, it is recommended to use the [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools) library.
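A small helper sketch for mapping the model's closeness score onto the three severity levels described above. The anchor values come from the card; the nearest-anchor decision rule is an assumption, since the card only says "close to".

```python
# Maps a closeness score to an error severity level using the card's anchors.
ANCHORS = {1: 0.98, 2: 0.93, 3: 0.87}  # severity level -> expected closeness

def severity_from_score(score: float) -> int:
    # Assumed rule: pick the severity level whose anchor is nearest to the score.
    return min(ANCHORS, key=lambda level: abs(ANCHORS[level] - score))

print(severity_from_score(0.95))  # -> 2: the error impairs understanding
```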
aroot/eng-fra-simcse_longestplus_ssrb
aroot
2023-07-07T05:41:54Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T05:23:01Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_ssrb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longestplus_ssrb

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1362
- Bleu: 32.1757

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-fra-simcse_longest_ssrb
aroot
2023-07-07T05:37:44Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T05:19:00Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssrb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longest_ssrb

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1262
- Bleu: 32.1631

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
Propofol/_finetuned-finetuned-localization
Propofol
2023-07-07T05:31:05Z
103
0
transformers
[ "transformers", "pytorch", "esm", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T04:41:17Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: _finetuned-finetuned-localization
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# _finetuned-finetuned-localization

This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4382
- Accuracy: 0.436

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1122 | 1.0 | 2500 | 1.1513 | 0.4287 |
| 1.0035 | 2.0 | 5000 | 1.2395 | 0.4507 |
| 0.7167 | 3.0 | 7500 | 1.4382 | 0.436 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
YakovElm/Qt_10_BERT_More_Properties
YakovElm
2023-07-07T05:23:48Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T05:23:07Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_10_BERT_More_Properties
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt_10_BERT_More_Properties

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2782
- Train Accuracy: 0.9210
- Validation Loss: 0.2251
- Validation Accuracy: 0.9416
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2894 | 0.9186 | 0.2234 | 0.9416 | 0 |
| 0.2786 | 0.9210 | 0.2266 | 0.9416 | 1 |
| 0.2782 | 0.9210 | 0.2251 | 0.9416 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
Shularp/Helsinki_en-mul_test_01
Shularp
2023-07-07T05:12:24Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T03:13:03Z
---
tags:
- generated_from_trainer
model-index:
- name: Helsinki_en-mul_test_01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Helsinki_en-mul_test_01

This model is a fine-tuned version of [Shularp/Helsinki_en-mul_test](https://huggingface.co/Shularp/Helsinki_en-mul_test) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2373 | 1.0 | 4777 | 1.1392 |
| 1.1799 | 2.0 | 9554 | 1.0504 |
| 0.984 | 3.0 | 14331 | 1.0276 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
irfan62622/dqn-SpaceInvadersNoFrameskip-v4
irfan62622
2023-07-07T05:11:20Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T05:10:41Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 561.50 +/- 151.81
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga irfan62622 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga irfan62622 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga irfan62622
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
aroot/eng-mya-simcse_longest_usblu
aroot
2023-07-07T05:09:11Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T04:47:40Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_usblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-mya-simcse_longest_usblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8459
- Bleu: 4.4306

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-mya-simcse_longestplus_usblu
aroot
2023-07-07T05:08:26Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T04:47:11Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_usblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-mya-simcse_longestplus_usblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8787
- Bleu: 4.3723

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
TeaTM/DialoGPT-large-bushcat
TeaTM
2023-07-07T04:53:42Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "DialoGPT", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-21T17:31:21Z
---
tags:
- conversational
- DialoGPT
language:
- en
---

# Bushcat DialoGPT-Large Model

A personified DialoGPT fork for a side project: a conversational model for an entertainment chatbot. This large, "smarter" model is based on DialoGPT-Large. If you use this character, this is the recommended version (compared to **TeaTM/DialoGPT-small-bushcat**).

The character plays the persona of a cat in a bush that is overly positive. Just for fun. Works great in Transformers & PyTorch.

# NOTE: This model is no longer being updated. There are better models and frameworks for custom, smarter characters.
# This is mostly "for fun" and is fairly lightweight compared to larger models. Good for small test projects.
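An assumed chat-loop sketch following the standard DialoGPT usage recipe (only the repo ID is taken from the card; the loop is the documented DialoGPT pattern, not something shipped with this model):

```python
# Assumed usage following the standard DialoGPT chat recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TeaTM/DialoGPT-large-bushcat")
model = AutoModelForCausalLM.from_pretrained("TeaTM/DialoGPT-large-bushcat")

chat_history_ids = None
for user_text in ["Hello there!", "How is life in the bush?"]:
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    bot_input = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input.shape[-1]:][0], skip_special_tokens=True)
    print("Bushcat:", reply)
```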
TeaTM/DialoGPT-small-bushcat
TeaTM
2023-07-07T04:52:37Z
131
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "DialoGPT", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-19T22:25:09Z
---
tags:
- conversational
- DialoGPT
language:
- en
---

# Bushcat DialoGPT-small Model

A smaller personified DialoGPT fork for a side project: a conversational model for an entertainment chatbot. This smaller model is based on DialoGPT-small. It is recommended to use the **TeaTM/DialoGPT-large-bushcat** model on my Hugging Face page instead; the large model is bigger in size but also significantly smarter.

The character plays the persona of a cat in a bush that is overly positive. Just for fun. Has high perplexity, be warned. Works great in Transformers & PyTorch.

# NOTE: This model is no longer being updated. There are better models and frameworks for custom, smarter characters.
# This is mostly "for fun" and is fairly lightweight compared to larger models. Good for small test projects.
pundapog/DialoGPT-medium-ethanbot
pundapog
2023-07-07T04:45:16Z
131
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T03:52:57Z
---
tags:
- conversational
library_name: transformers
---
nomsgadded/textual_inversion
nomsgadded
2023-07-07T04:31:51Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T03:42:52Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - nomsgadded/textual_inversion

These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
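An assumed loading sketch using the standard diffusers textual-inversion API (the placeholder token `<nomsgadded>` is hypothetical; check the repo for the actual learned token):

```python
# Assumed usage via the standard diffusers textual-inversion loader.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Loads the learned embedding from the repo into the pipeline's tokenizer/text encoder.
pipe.load_textual_inversion("nomsgadded/textual_inversion")

image = pipe("a photo of <nomsgadded> on a beach").images[0]  # hypothetical token
image.save("out.png")
```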
YakovElm/Qt_5_BERT_More_Properties
YakovElm
2023-07-07T04:28:37Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T04:27:58Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_5_BERT_More_Properties
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt_5_BERT_More_Properties

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3382
- Train Accuracy: 0.8943
- Validation Loss: 0.2633
- Validation Accuracy: 0.9294
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3488 | 0.8862 | 0.2583 | 0.9294 | 0 |
| 0.3401 | 0.8943 | 0.2680 | 0.9294 | 1 |
| 0.3382 | 0.8943 | 0.2633 | 0.9294 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-guj-simcse_longestplus_ssblu
aroot
2023-07-07T04:23:37Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T04:01:39Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_ssblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-guj-simcse_longestplus_ssblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2846
- Bleu: 2.6912

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
l3cube-pune/MarathiSentiment
l3cube-pune
2023-07-07T04:01:02Z
118
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "albert", "text-classification", "mr", "dataset:L3CubeMahaSent", "arxiv:2103.11408", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---

## MarathiSentiment

** An updated and better version of this model covering multiple domains is shared here: <a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> marathi-sentiment-md </a> ** <br>

MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset. [dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408)

```
@inproceedings{kulkarni2021l3cubemahasent,
  title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
  author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
  booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
  pages={213--220},
  year={2021}
}
```
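A minimal usage sketch (assumed, not from the original card; the Marathi input sentence is a hypothetical example):

```python
# Assumed usage via the standard transformers text-classification pipeline.
from transformers import pipeline

sentiment = pipeline("text-classification", model="l3cube-pune/MarathiSentiment")
print(sentiment("हा चित्रपट खूप छान आहे"))  # hypothetical Marathi input ("this movie is very nice")
```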
Dynosaur/dynosaur-llama-7b-superni
Dynosaur
2023-07-07T03:57:41Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T23:19:48Z
---
license: apache-2.0
---

This repo contains the weight difference for dynosaur-llama-7b-superni that can be used to reconstruct the original model weights when applied to Meta's LLaMA weights.

To recover the full dynosaur-llama-7b-superni weights, follow these steps:

```
1. Convert Meta's released weights into huggingface format. Follow this guide:
   https://huggingface.co/docs/transformers/main/model_doc/llama
   You may refer to https://huggingface.co/huggyllama/llama-7b if you run into trouble during the conversion.
   (You should only use this repository if you have been granted access to the llama model.)

2. Make sure you cloned the released weight diff into your local machine. The weight diff is located at:
   https://huggingface.co/Dynosaur/dynosaur-llama-7b-superni

3. Run this function with the correct paths. E.g.,
   python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
```
aroot/eng-fra-simcse_longest_usblu
aroot
2023-07-07T03:51:39Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T03:32:13Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longest_usblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1221
- Bleu: 32.5700

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-fra-simcse_longestplus_ssblu
aroot
2023-07-07T03:47:27Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T03:28:36Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_ssblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longestplus_ssblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1389
- Bleu: 32.4429

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
chasmiccoder/ppo-LunarLander-v2
chasmiccoder
2023-07-07T03:47:17Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T03:46:56Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 262.82 +/- 17.14
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
aroot/eng-fra-simcse_longest_ssblu
aroot
2023-07-07T03:46:56Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T03:27:46Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssblu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longest_ssblu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1296
- Bleu: 32.4007

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
mitra-mir/setfit_model_Independence_labelindepandance_epochs2
mitra-mir
2023-07-07T03:44:35Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-07T03:44:24Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 20 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 2,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 40,
    "warmup_steps": 4,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
YakovElm/MariaDB_20_BERT_More_Properties
YakovElm
2023-07-07T03:33:21Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T03:32:46Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_20_BERT_More_Properties
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB_20_BERT_More_Properties

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2058
- Train Accuracy: 0.9356
- Validation Loss: 0.1361
- Validation Accuracy: 0.9698
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2792 | 0.9180 | 0.1586 | 0.9698 | 0 |
| 0.2219 | 0.9356 | 0.1362 | 0.9698 | 1 |
| 0.2058 | 0.9356 | 0.1361 | 0.9698 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
AustinCarthy
2023-07-07T03:33:18Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-07T01:19:42Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
- Accuracy: 0.9971
- F1: 0.9693
- Precision: 0.9939
- Recall: 0.9458
- Roc Auc Score: 0.9728
- Tpr At Fpr 0.01: 0.9312

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0153 | 1.0 | 21554 | 0.0251 | 0.9950 | 0.9443 | 0.9980 | 0.896 | 0.9480 | 0.8982 |
| 0.0084 | 2.0 | 43108 | 0.0175 | 0.9970 | 0.9675 | 0.9914 | 0.9448 | 0.9722 | 0.9184 |
| 0.0041 | 3.0 | 64662 | 0.0135 | 0.9975 | 0.9737 | 0.9873 | 0.9606 | 0.9800 | 0.904 |
| 0.0013 | 4.0 | 86216 | 0.0210 | 0.9969 | 0.9668 | 0.9922 | 0.9426 | 0.9711 | 0.9174 |
| 0.0015 | 5.0 | 107770 | 0.0248 | 0.9971 | 0.9693 | 0.9939 | 0.9458 | 0.9728 | 0.9312 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
nesanchezo/model_prueba
nesanchezo
2023-07-07T03:28:51Z
162
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T20:13:39Z
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: model_prueba
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model_prueba

This model is a fine-tuned version of [farleyknight-org-username/vit-base-mnist](https://huggingface.co/farleyknight-org-username/vit-base-mnist) on the handwriten-Numbers dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1889
- Accuracy: 0.9606

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
Aeala/Enterredaas-33b-4bit
Aeala
2023-07-07T03:28:32Z
9
4
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T00:18:50Z
4-bit GPTQ quantization of [Enterredaas-33b](https://huggingface.co/Aeala/Enterredaas-33b-QLoRA)

**Important Note**: This was trained in the *Alpaca* format, so prompting should be something like:

```
### Instruction:
<system prompt> (without the <>; this works like telling the AI what it is/its purpose, i.e. like the ChatGPT API's system prompt)

### Input:
<prompt> (without the <>)

### Response:
```
PixelPerfect/PixelPerfect
PixelPerfect
2023-07-07T03:24:19Z
31
1
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-23T18:38:48Z
PixelPerfect Text-to-Image Model!
aroot/eng-mya-simcse_longest_ssbbu
aroot
2023-07-07T03:13:56Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:52:38Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_ssbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-mya-simcse_longest_ssbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8530
- Bleu: 4.2452

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-mya-simcse_longest_usbbu
aroot
2023-07-07T03:13:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:52:34Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_usbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-mya-simcse_longest_usbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8564
- Bleu: 4.1828

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-mya-simcse_longestplus_usbbu
aroot
2023-07-07T03:09:49Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:48:19Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_usbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-mya-simcse_longestplus_usbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8896
- Bleu: 4.1199

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
hoanghoavienvo/roberta-base-detect-depression-large-dataset
hoanghoavienvo
2023-07-07T02:37:46Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T04:05:59Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-depression-large-dataset
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-detect-depression-large-dataset

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5713
- Accuracy: 0.785
- F1: 0.8432

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6291 | 1.0 | 1157 | 0.6365 | 0.675 | 0.7860 |
| 0.6281 | 2.0 | 2314 | 0.6803 | 0.602 | 0.7509 |
| 0.6344 | 3.0 | 3471 | 0.6679 | 0.612 | 0.7557 |
| 0.6367 | 4.0 | 4628 | 0.6746 | 0.6 | 0.7500 |
| 0.6193 | 5.0 | 5785 | 0.5713 | 0.785 | 0.8432 |

### Framework versions

- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
YakovElm/MariaDB_5_BERT_More_Properties
YakovElm
2023-07-07T02:35:57Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T02:34:44Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_5_BERT_More_Properties
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# MariaDB_5_BERT_More_Properties

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2840
- Train Accuracy: 0.8946
- Validation Loss: 0.2635
- Validation Accuracy: 0.9322
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3608     | 0.8912         | 0.2538          | 0.9322              | 0     |
| 0.3038     | 0.8954         | 0.2533          | 0.9322              | 1     |
| 0.2840     | 0.8946         | 0.2635          | 0.9322              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
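As a hedged illustration only: a TensorFlow checkpoint saved through `generated_from_keras_callback` can usually be loaded as below. Falling back to the `bert-base-uncased` tokenizer and the meaning of the output classes are both assumptions, since the card documents neither.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "YakovElm/MariaDB_5_BERT_More_Properties"
# Assumption: tokenizer files may be absent from the repo; use the stated base model's.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Example issue description", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class semantics are undocumented in the card
```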
aroot/eng-guj-simcse_longest_usbbu
aroot
2023-07-07T02:32:34Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:10:09Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# eng-guj-simcse_longest_usbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2487
- Bleu: 2.8287

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-guj-simcse_longest_ssbbu
aroot
2023-07-07T02:32:31Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:10:14Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_ssbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# eng-guj-simcse_longest_ssbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2332
- Bleu: 2.7555

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
aroot/eng-guj-simcse_longestplus_usbbu
aroot
2023-07-07T02:27:38Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T02:09:52Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_usbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# eng-guj-simcse_longestplus_usbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2984
- Bleu: 2.6234

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
JLuisVM/eye3
JLuisVM
2023-07-07T02:14:46Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-07-07T02:14:29Z
---
license: bigscience-openrail-m
---
bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-LoRA
bhenrym14
2023-07-07T02:07:42Z
0
2
null
[ "dataset:jondurbin/airoboros-gpt4-1.4.1", "region:us" ]
null
2023-07-07T01:47:41Z
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---

# NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (LoRA)

GPTQ quantized weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ

fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16

An analogous model trained with the RoPE Position Interpolation (PI) technique: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA

## Overview

This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (LoRA) with several key modifications:
- Context length extended to 16384 via NTK-Aware Scaled RoPE embeddings, NOT via the SuperHOT LoRA. I started with base Llama-33b.
- Training sequences beyond 2048 have the target truncated to equal 2048.
- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.

Otherwise, I emulated the training process as closely as possible (rank-64 QLoRA). It was trained on 1x RTX 6000 Ada for ~43 hours.

## NTK Patch

To use with HF transformers, AutoGPTQ, etc., see the [NTK monkey patch](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_ntk_monkey_patch.py).
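For readers who don't want to open the linked patch, the core of the NTK-aware trick can be sketched in a few lines: rather than interpolating position ids (as PI does), the rotary base is rescaled so that low-frequency dimensions stretch more than high-frequency ones. This is a condensed sketch of the widely circulated formulation, not the linked patch itself; `alpha` is the assumed context-extension factor (8 would map 2048 to 16384).

```python
import torch

def ntk_scaled_inv_freq(dim: int, base: float = 10000.0, alpha: float = 8.0) -> torch.Tensor:
    # NTK-aware scaling: grow the RoPE base by alpha**(dim / (dim - 2)) so the
    # lowest frequencies are stretched ~alpha-fold while the highest stay nearly intact.
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2).float() / dim))
```

A monkey patch like the one linked above would swap an `inv_freq` computed this way into the model's rotary-embedding module before loading the weights.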
digiplay/helloworld_v3
digiplay
2023-07-07T02:03:57Z
521
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T18:45:45Z
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/23168/hello-world

Original author's demo image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f9359968-8a3f-4512-228c-5bb95f4c5d00/304962.jpeg)
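No usage snippet is provided; given the `diffusers:StableDiffusionPipeline` tag in the record above, loading presumably follows the standard pipeline, as in this sketch (the prompt and the fp16/CUDA settings are illustrative, not from the card):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the repo's safetensors weights load through the standard SD pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/helloworld_v3", torch_dtype=torch.float16
).to("cuda")
image = pipe("a cozy cabin in a snowy forest, golden hour").images[0]
image.save("helloworld_v3_sample.png")
```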
aroot/eng-fra-simcse_longest_usbbu
aroot
2023-07-07T01:56:04Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T01:40:36Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usbbu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_longest_usbbu

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1308
- Bleu: 32.3213

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
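The hyperparameter list above translates almost one-to-one into a `Seq2SeqTrainingArguments` object; the sketch below is a reconstruction for reference, with a hypothetical output path (the card names neither the training dataset nor the training script):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="eng-fra-simcse_longest_usbbu",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "mixed_precision_training: Native AMP"
    predict_with_generate=True,  # assumption: required to compute BLEU at eval time
)
```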