| Column | Type | Min | Max |
|---------------|------------------------|---------------------|---------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 18:27:59 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (520 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 18:27:48 |
| card | string (length) | 11 | 1.01M |
carbon225/vit-base-patch16-224-hentai
carbon225
2023-07-04T14:50:00Z
225
19
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "art", "anime", "visual-novel", "nsfw", "dataset:carbon225/vndb_img", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-30T12:06:40Z
--- license: cc0-1.0 widget: - src: >- https://huggingface.co/carbon225/vit-base-patch16-224-hentai/resolve/main/samples/1.jpeg - src: >- https://huggingface.co/carbon225/vit-base-patch16-224-hentai/resolve/main/samples/2.jpeg datasets: - carbon225/vndb_img tags: - art - anime - visual-novel - nsfw --- # ViT for NSFW classification ## Model info This is Google's [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) finetuned for flagging images according to [vndb.org](https://vndb.org/d19) with 3 classes: - safe - suggestive - explicit ## Training data The model was trained on the vndb.org [database dump](https://vndb.org/d14) using full size screenshots (`sf` in the database dump). The dataset can be loaded from [carbon225/vndb_img](https://huggingface.co/datasets/carbon225/vndb_img). ## Intended use The model can be used for flagging anime-style images for sexual content. It can also be finetuned on other tasks related to anime images.
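A minimal usage sketch for `carbon225/vit-base-patch16-224-hentai` (not part of the original card): run the classifier through the `transformers` image-classification pipeline. The image path is a placeholder.

```python
from transformers import pipeline

# Hypothetical example: score an anime-style image against the safe / suggestive / explicit classes.
classifier = pipeline("image-classification", model="carbon225/vit-base-patch16-224-hentai")
print(classifier("screenshot.jpg"))  # "screenshot.jpg" is a placeholder path
```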
leofn3/autotrain-racismo
leofn3
2023-07-04T14:43:08Z
81
0
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "autotrain", "unk", "dataset:leofn3/autotrain-data-racismo-sandbox", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T14:37:11Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "Negra melodia que vem do sangue do coração" datasets: - leofn3/autotrain-data-racismo-sandbox co2_eq_emissions: emissions: 0.9388908689973346 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 72132138873 - CO2 Emissions (in grams): 0.9389 ## Validation Metrics - Loss: 0.562 - Accuracy: 0.833 - Precision: 1.000 - Recall: 0.667 - AUC: 0.901 - F1: 0.800 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/leofn3/autotrain-racismo-sandbox-72132138873 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("leofn3/autotrain-racismo-sandbox-72132138873", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("leofn3/autotrain-racismo-sandbox-72132138873", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
dcarpintero/ppo-Pyramids
dcarpintero
2023-07-04T14:39:33Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-04T14:39:30Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: dcarpintero/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
osunlp/BioVocabBERT
osunlp
2023-07-04T14:26:56Z
117
3
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "arxiv:2306.17649", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-05T17:57:26Z
--- license: apache-2.0 --- This biomedical language model uses a specialized biomedical tokenizer that is more closely aligned with human morphological judgements than previous biomedical tokenizers such as PubMedBERT. Details about our tokenizer design, pre-training procedure and downstream results can be found in our [BioNLP @ ACL 2023 paper](http://arxiv.org/pdf/2306.17649.pdf).
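A minimal usage sketch for `osunlp/BioVocabBERT` (not part of the original card): query the model through the `transformers` fill-mask pipeline. The example sentence is illustrative only.

```python
from transformers import pipeline

# BioVocabBERT is a BERT-style masked language model, so [MASK] is the mask token.
fill_mask = pipeline("fill-mask", model="osunlp/BioVocabBERT")
print(fill_mask("The patient was treated with [MASK] for the infection."))
```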
Apoorvakoira/wizabc
Apoorvakoira
2023-07-04T14:23:44Z
8
1
transformers
[ "transformers", "gpt_bigcode", "text-generation", "arxiv:2306.08568", "license:bigcode-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-04T13:45:23Z
--- license: bigcode-openrail-m --- This is the full-weight version of WizardCoder. **Repository**: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder **Twitter**: https://twitter.com/WizardLM_AI/status/1669109414559911937 **Paper**: [WizardCoder: Empowering Code Large Language Models with Evol-Instruct](https://arxiv.org/abs/2306.08568) # WizardCoder: Empowering Code Large Language Models with Evol-Instruct To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set. ## News - 🔥 Our **WizardCoder-15B-v1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs. - 🔥 We released **WizardCoder-15B-v1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), and [Paper](). - 📣 Please refer to our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM . We will use them to announce any new releases first. ## Comparing WizardCoder with the Closed-Source Models. 🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a> </p> ❗**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our **WizardCoder** generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).** ## Comparing WizardCoder with the Open-Source Models. The following table clearly demonstrates that our **WizardCoder** exhibits a substantial performance advantage over all the open-source models. ❗**If you are confused by the different scores of our model (57.3 and 59.8), please check the Notes.** | Model | HumanEval Pass@1 | MBPP Pass@1 | |------------------|------------------|-------------| | CodeGen-16B-Multi| 18.3 |20.9 | | CodeGeeX | 22.9 |24.4 | | LLaMA-33B | 21.7 |30.2 | | LLaMA-65B | 23.7 |37.7 | | PaLM-540B | 26.2 |36.8 | | PaLM-Coder-540B | 36.0 |47.0 | | PaLM 2-S | 37.6 |50.0 | | CodeGen-16B-Mono | 29.3 |35.3 | | Code-Cushman-001 | 33.5 |45.9 | | StarCoder-15B | 33.6 |43.6* | | InstructCodeT5+ | 35.0 |-- | | WizardLM-30B 1.0| 37.8 |-- | | WizardCoder-15B 1.0 | **57.3** |**51.8** | ❗**Note: The reproduced result of StarCoder on MBPP.** ❗**Note: The above table conducts a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. 
We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (these may be from early versions of GPT-4 and GPT-3.5).** ## Call for Feedback We welcome everyone to use your professional and difficult instructions to evaluate WizardCoder, and show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are focusing on improving the Evol-Instruct now and hope to relieve existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you together to improve it. ## Contents 1. [Online Demo](#online-demo) 2. [Fine-tuning](#fine-tuning) 3. [Inference](#inference) 4. [Evaluation](#evaluation) 5. [Citation](#citation) 6. [Disclaimer](#disclaimer) ## Online Demo We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems that you encounter in your work and life as possible. We will continue to evolve our models with your feedback. ## Fine-tuning We fine-tune WizardCoder using the modified code `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X). We fine-tune StarCoder-15B with the following hyperparameters: | Hyperparameter | StarCoder-15B | |----------------|---------------| | Batch size | 512 | | Learning rate | 2e-5 | | Epochs | 3 | | Max length | 2048 | | Warmup step | 30 | | LR scheduler | cosine | To reproduce our fine-tuning of WizardCoder, please follow these steps: 1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`) 2. Replace the `train.py` with the `train_wizardcoder.py` in our repo (`src/train_wizardcoder.py`) 3. Log in to Hugging Face: ```bash huggingface-cli login ``` 4. Execute the following training command: ```bash deepspeed train_wizardcoder.py \ --model_name_or_path "bigcode/starcoder" \ --data_path "/your/path/to/code_instruction_data.json" \ --output_dir "/your/path/to/ckpt" \ --num_train_epochs 3 \ --model_max_length 2048 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 50 \ --save_total_limit 2 \ --learning_rate 2e-5 \ --warmup_steps 30 \ --logging_steps 2 \ --lr_scheduler_type "cosine" \ --report_to "tensorboard" \ --gradient_checkpointing True \ --deepspeed configs/deepspeed_config.json \ --fp16 True ``` ## Inference We provide the decoding script for WizardCoder, which reads an input file, generates corresponding responses for each sample, and finally consolidates them into an output file. You can specify `base_model`, `input_data_path` and `output_data_path` in `src\inference_wizardcoder.py` to set the decoding model, the path of the input file and the path of the output file. 
```bash pip install jsonlines ``` The decoding command is: ``` python src\inference_wizardcoder.py \ --base_model "/your/path/to/ckpt" \ --input_data_path "/your/path/to/input/data.jsonl" \ --output_data_path "/your/path/to/output/result.jsonl" ``` The format of `data.jsonl` should be: ``` {"idx": 11, "Instruction": "Write a Python code to count 1 to 10."} {"idx": 12, "Instruction": "Write a Java code to sum 1 to 10."} ``` The prompt for our WizardCoder in `src\inference_wizardcoder.py` is: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: ``` ## Evaluation We provide the evaluation script on HumanEval for WizardCoder. 1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment. 2. Run the following script to generate the answer. ```bash model="/path/to/your/model" temp=0.2 max_len=2048 pred_num=200 num_seqs_per_iter=2 output_path=preds/T${temp}_N${pred_num} mkdir -p ${output_path} echo 'Output path: '$output_path echo 'Model to eval: '$model # 164 problems, 21 per GPU if GPU=8 index=0 gpu_num=8 for ((i = 0; i < $gpu_num; i++)); do start_index=$((i * 21)) end_index=$(((i + 1) * 21)) gpu=$((i)) echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu} ((index++)) ( CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \ --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \ --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} ) & if (($index % $gpu_num == 0)); then wait; fi done ``` 3. Run the post processing code `src/process_humaneval.py` to collect the code completions from all answer files. ```bash output_path=preds/T${temp}_N${pred_num} echo 'Output path: '$output_path python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt evaluate_functional_correctness ${output_path}.jsonl ``` ## Citation Please cite the repo if you use the data or code in this repo. ``` @misc{luo2023wizardcoder, title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct}, author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang}, year={2023}, } ``` ## Disclaimer The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
Graphcore/sentence-t5-base
Graphcore
2023-07-04T14:05:42Z
1
0
null
[ "optimum_graphcore", "license:apache-2.0", "region:us" ]
null
2023-07-04T13:53:20Z
--- license: apache-2.0 --- # Graphcore/sentence-t5-base Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description (source: https://huggingface.co/sentence-transformers/sentence-t5-base) Sentence-t5 is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks. This model was converted from the Tensorflow model st5-base-1 to PyTorch. When using this model, have a look at the publication: Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models. The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results. The model uses only the encoder from a T5-base model. The weights are stored in FP16. ## Intended uses & limitations This model contains just the `IPUConfig` files for running the `sentence-t5-base` model (e.g. [sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig from transformers import T5EncoderModel ipu_config = IPUConfig.from_pretrained("Graphcore/sentence-t5-base") model = T5EncoderModel.from_pretrained("sentence-transformers/sentence-t5-base") ```
idealflaw/ppo-Huggy
idealflaw
2023-07-04T14:01:19Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-04T14:01:15Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: idealflaw/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
lucas-valenzuela-everke/BETO-chile-politico-1990-2019
lucas-valenzuela-everke
2023-07-04T13:57:32Z
112
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "legal", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-04T04:59:50Z
--- language: - es tags: - legal --- This BETO was fine-tuned using 196,063 speeches made by legislators from the Chilean Chamber of Deputies and the Senate. Only 5 words were added to the tokenizer: pinochet, aylwin, frei, bachelet and piñera.
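A minimal usage sketch for `lucas-valenzuela-everke/BETO-chile-politico-1990-2019` (not part of the original card): query the fine-tuned BETO through the fill-mask pipeline. The Spanish example sentence is illustrative only.

```python
from transformers import pipeline

# BETO is a BERT-style model, so [MASK] is the mask token.
fill_mask = pipeline(
    "fill-mask",
    model="lucas-valenzuela-everke/BETO-chile-politico-1990-2019",
)
print(fill_mask("El presidente [MASK] anunció la reforma en el congreso."))
```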
maxkskhor/q-FrozenLake-v1-4x4-noSlippery
maxkskhor
2023-07-04T13:44:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T13:44:57Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="maxkskhor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
apechliva/code-summ_v2
apechliva
2023-07-04T13:36:17Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-04T13:36:15Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
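A minimal sketch (not part of the original card) showing how the 8-bit quantization settings listed above map onto `transformers`' `BitsAndBytesConfig`. The base model for this adapter is not stated in the card, so it is left as a placeholder.

```python
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                        # load_in_8bit: True
    llm_int8_threshold=6.0,                   # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,               # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,   # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,           # llm_int8_has_fp16_weight: False
)
# The adapter would then be loaded on top of its (unstated) base model, e.g.:
# base = AutoModelForCausalLM.from_pretrained("<base-model-id>", quantization_config=bnb_config)
# model = PeftModel.from_pretrained(base, "apechliva/code-summ_v2")
```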
Collab-uniba/github-issues-preprocessed-mpnet-st-e10
Collab-uniba
2023-07-04T13:28:35Z
5
1
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-04T13:22:12Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # GitHub Issues Preprocessed MPNet Sentence Transformer (10 Epochs) This is a [sentence-transformers](https://www.SBERT.net) model, specific for GitHub Issue data. ## Dataset For training, we used the [NLBSE22 dataset](https://nlbse2022.github.io/tools/), after removing issues with empty body and duplicates. Similarity between title and body was used to train the sentence embedding model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Collab-uniba/github-issues-preprocessed-mpnet-st-e10') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Collab-uniba/github-issues-preprocessed-mpnet-st-e10') model = AutoModel.from_pretrained('Collab-uniba/github-issues-preprocessed-mpnet-st-e10') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 43709 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 43709, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
hezheop/q-Taxi-v3
hezheop
2023-07-04T13:22:30Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T13:22:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="hezheop/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jimregan/BERTreach
jimregan
2023-07-04T13:18:51Z
175
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "fill-mask", "irish", "ga", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 language: ga tags: - irish --- ## BERTreach ([beirtreach](https://www.teanglann.ie/en/fgb/beirtreach) means 'oyster bed') **Model size:** 84M **Training data:** * [PARSEME 1.2](https://gitlab.com/parseme/parseme_corpus_ga/-/blob/master/README.md) * Newscrawl 300k portion of the [Leipzig Corpora](https://wortschatz.uni-leipzig.de/en/download/irish) * Private news corpus crawled with [Corpus Crawler](https://github.com/google/corpuscrawler) (2125804 sentences, 47419062 tokens, as reckoned by wc) ``` from transformers import pipeline fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach") ```
jimregan/psst-partial-timit
jimregan
2023-07-04T13:14:23Z
18
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "dataset:jimregan/psst", "dataset:timit_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-06T08:30:28Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition datasets: - jimregan/psst - timit_asr --- This repository contains a number of experiments for the [PSST Challenge](https://psst.study/). As the test set is unavailable, all numbers are based on the validation set. The models in the tables below were finetuned from [Wav2vec 2.0 Base, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). Our overall best performing model (**FER:** 9\.2%, **PER:** 21\.0%) was based on [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) (git tag: `larger-rir`), with the TIMIT subset augmented with Room Impulse Response, based on the experiments below, on the base model. ## Augmented TIMIT subset Using a subset of TIMIT that could map easily to the phoneset used by the PSST Challenge data (a list of IDs is in the repository), we experimented with augmenting the data to better match the PSST data. The best results were obtained using Room Impulse Response (tag: `rir`). | **Augmentation** | **FER** | **PER** | **Git tag** | | :----------------------------------------------- | :-------- | :--------- | :---------------------------------- | | unaugmented | 10\.2% | 22\.5% | huggingface-unaugmented | | Gaussian noise | 10\.0% | 22\.1% | gaussian | | Pitchshift | 9\.6% | 22\.9% | pitchshift | | RIR | **9\.6%** | **21\.8%** | rir | | Time stretch | 10\.1% | 22\.8% | timestretch | | Gaussian noise + RIR | 10\.0% | 23\.4% | gaussian-rir | | Pitchshift + Gaussian noise | 9\.9% | 22\.9% | pitchshift-gaussian | | Pitchshift + RIR | 9\.9% | 22\.8% | pitchshift-rir | | Time stretch + Gaussian noise | 10\.2% | 22\.8% | timestretch-gaussian | | Time stretch + Pitchshift | 9\.8% | 22\.0% | timestretch-pitchshift | | Time stretch + RIR | 9\.7% | 22\.2% | timestretch-rir | | Pitchshift + Gaussian noise + RIR | 10\.1% | 23\.5% | pitchshift-gaussian-rir | | Time stretch + Gaussian noise + RIR | 9\.7% | 22\.3% | timestretch-gaussian-rir | | Time stretch + Pitchshift + Gaussian noise | 10\.2% | 22\.9% | timestretch-pitchshift-gaussian | | Time stretch + Pitchshift + RIR | 10\.2% | 22\.5% | timestretch-pitchshift-rir | | Time stretch + Pitchshift + Gaussian noise + RIR | 10\.9% | 24\.1% | timestretch-pitchshift-gaussian-rir | ## LM experiments We experimented with a number of language model configurations, combining the data from the PSST challenge, the subset of TIMIT we used, and CMUdict. We tried combining CMUdict data in a number of ways: unmodified, with a silence token added at the start of the pronunciation, at the end, and at both the start and the end. The best result was from a 5-gram model, with silences added at the end of the CMUdict data (git tag: `lm-nosil-cmudict-sile.5`). Evaluation was performed using scripts provided by the PSST Challenge's organisers, so there are no scripts in place to automatically use the LM with the transformers library. 
| | **n-gram** | **FER** | **PER** | **Tag** | | :----------------------------- | :--------- | :--------- | :--------- | :--------- | | Baseline + TIMIT | --- | **10\.2%** | 22\.5% | huggingface-unaugmented | | All silences | 4 | 10\.5% | 23\.0% | lm-allsil.4 | | | 5 | 10\.5% | 22\.6% | lm-allsil.5 | | | 6 | 10\.3% | 22\.3% | lm-allsil.6 | | No silences | 4 | 10\.3% | 22\.6% | lm-nosil.4 | | | 5 | **10\.2%** | 22\.2% | lm-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-nosil.6 | | PSST and TIMIT without silence | | | | | | Unmodified CMUdict | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-nosil.4 | | | 5 | 10\.2% | 22\.2% | lm-nosil-cmudict-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-nosil-cmudict-nosil.6 | | CMUdict-end | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-sile.4 | | | 5 | **10\.2%** | **22\.1%** | lm-nosil-cmudict-sile.5 | | | 6 | **10\.2%** | 22\.3% | lm-nosil-cmudict-sile.6 | | CMUdict-start | 4 | 10\.4% | 22\.6% | lm-nosil-cmudict-sils.4 | | | 5 | 10\.3% | 22\.4% | lm-nosil-cmudict-sils.5 | | | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-sils.6 | | CMUdict-both | 4 | 10\.4% | 22\.7% | lm-nosil-cmudict-silb.4 | | | 5 | 10\.4% | 22\.3% | lm-nosil-cmudict-silb.5 | | | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-silb.6 | | Unmodified PSST and TIMIT | | | | | | Unmodified CMUdict | 4 | 10\.3% | 22\.8% | lm-orig-cmudict-nosil.4 | | | 5 | 10\.3% | 22\.4% | lm-orig-cmudict-nosil.5 | | | 6 | **10\.2%** | 22\.4% | lm-orig-cmudict-nosil.6 | | CMUdict-end | 4 | 10\.3% | 22\.7% | lm-orig-cmudict-sile.4 | | | 5 | **10\.2%** | 22\.2% | lm-orig-cmudict-sile.5 | | | 6 | **10\.2%** | 22\.3% | lm-orig-cmudict-sile.6 | | CMUdict-start | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-sils.4 | | | 5 | 10\.4% | 22\.5% | lm-orig-cmudict-sils.5 | | | 6 | 10\.3% | 22\.4% | lm-orig-cmudict-sils.6 | | CMUdict-both | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-silb.4 | | | 5 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.5 | | | 6 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.6 |
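A minimal usage sketch for `jimregan/psst-partial-timit` (not part of the original card), assuming the checkpoint is a wav2vec 2.0 ASR model as its tags indicate: run it through the `transformers` automatic-speech-recognition pipeline. The audio path is a placeholder, the PSST task targets phoneme transcription rather than ordinary text, and this does not apply the n-gram LMs discussed above.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jimregan/psst-partial-timit")
# "utterance.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("utterance.wav"))
```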
pratikg123/finetunned_falcon-7b
pratikg123
2023-07-04T13:10:35Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-04T12:45:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
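A minimal loading sketch (not part of the original card), assuming this repository holds a LoRA adapter for `tiiuae/falcon-7b`; the base-model id is an assumption inferred from the repo name, not stated in the card. The `BitsAndBytesConfig` mirrors the 4-bit settings listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,        # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,   # bnb_4bit_compute_dtype: float16
)
# Assumed base model; Falcon checkpoints of this vintage may need trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "pratikg123/finetunned_falcon-7b")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```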
tanmayyyj/dqn-SpaceInvadersNoFrameskip-v4
tanmayyyj
2023-07-04T13:09:53Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T13:09:15Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 627.00 +/- 271.64 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tanmayyyj -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tanmayyyj -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tanmayyyj ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.1
PraveenJesu
2023-07-04T13:01:53Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-04T13:01:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
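A minimal loading sketch (not part of the original card), assuming this repository holds a LoRA adapter for `openai/whisper-medium`; the base-model id is an assumption inferred from the repo name, not stated in the card.

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Assumed base model (inferred from the repo name).
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.1")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```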
dcarpintero/ppo-SnowballTarget
dcarpintero
2023-07-04T13:01:15Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-04T13:01:12Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: dcarpintero/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
velascoluis/falcon7b-instruct-database-ft-5-epochs
velascoluis
2023-07-04T12:48:16Z
0
0
null
[ "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-04T12:48:03Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: falcon7b-instruct-database-ft-5-epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7b-instruct-database-ft-5-epochs This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
Arindam75/ppo-Pyramids
Arindam75
2023-07-04T12:40:57Z
25
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-04T08:03:09Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Arindam75/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ahmedALM1221/convnextv2-tiny-1k-224-finetuned-eurosat-50
ahmedALM1221
2023-07-04T12:40:43Z
190
0
transformers
[ "transformers", "pytorch", "tensorboard", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-04T11:41:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnextv2-tiny-1k-224-finetuned-eurosat-50 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Skin_Dataset split: train args: Skin_Dataset metrics: - name: Accuracy type: accuracy value: 0.7762711864406779 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-tiny-1k-224-finetuned-eurosat-50 This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2472 - Accuracy: 0.7763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9434 | 0.97 | 18 | 1.8549 | 0.2847 | | 1.7722 | 2.0 | 37 | 1.6757 | 0.3661 | | 1.5502 | 2.97 | 55 | 1.4652 | 0.4339 | | 1.2595 | 4.0 | 74 | 1.1916 | 0.6068 | | 0.9304 | 4.97 | 92 | 1.0282 | 0.6576 | | 0.7333 | 6.0 | 111 | 0.8574 | 0.7051 | | 0.6015 | 6.97 | 129 | 0.8427 | 0.6983 | | 0.4617 | 8.0 | 148 | 0.7682 | 0.7458 | | 0.3162 | 8.97 | 166 | 0.7453 | 0.7559 | | 0.2249 | 10.0 | 185 | 0.7475 | 0.7661 | | 0.1667 | 10.97 | 203 | 0.7677 | 0.7492 | | 0.091 | 12.0 | 222 | 1.0114 | 0.7220 | | 0.0783 | 12.97 | 240 | 1.0206 | 0.7186 | | 0.0613 | 14.0 | 259 | 0.8466 | 0.7492 | | 0.0703 | 14.97 | 277 | 1.1067 | 0.7119 | | 0.0335 | 16.0 | 296 | 1.0117 | 0.7390 | | 0.0171 | 16.97 | 314 | 0.9367 | 0.7525 | | 0.0253 | 18.0 | 333 | 1.3196 | 0.7153 | | 0.0201 | 18.97 | 351 | 1.0530 | 0.7525 | | 0.0041 | 20.0 | 370 | 1.0523 | 0.7729 | | 0.0154 | 20.97 | 388 | 1.1311 | 0.7661 | | 0.0025 | 22.0 | 407 | 1.1477 | 0.7729 | | 0.0036 | 22.97 | 425 | 1.1309 | 0.7627 | | 0.002 | 24.0 | 444 | 1.1399 | 0.7729 | | 0.0014 | 24.97 | 462 | 1.1543 | 0.7797 | | 0.0011 | 26.0 | 481 | 1.1799 | 0.7763 | | 0.0011 | 26.97 | 499 | 1.1579 | 0.7661 | | 0.0009 | 28.0 | 518 | 1.1907 | 0.7627 | | 0.0009 | 28.97 | 536 | 1.1878 | 0.7661 | | 0.0008 | 30.0 | 555 | 1.1986 | 0.7661 | | 0.0008 | 30.97 | 573 | 1.2051 | 0.7661 | | 0.0007 | 32.0 | 592 | 1.2073 | 0.7661 | | 0.0007 | 32.97 | 610 | 1.2156 | 0.7661 | | 0.0007 | 34.0 | 629 | 1.2218 | 0.7627 | | 0.0007 | 34.97 | 647 | 1.2173 | 0.7661 | | 0.0006 | 36.0 | 666 | 1.2217 | 0.7729 | | 0.0006 | 36.97 | 684 | 1.2272 | 0.7695 | | 0.0006 | 38.0 | 703 | 1.2261 | 0.7763 | | 0.0006 | 38.97 | 721 | 1.2305 | 0.7763 | | 0.0006 | 40.0 | 740 | 1.2325 | 0.7763 | | 0.0005 | 40.97 | 758 | 1.2362 | 0.7763 | | 0.0005 | 42.0 | 777 | 1.2409 | 0.7763 | | 0.0005 | 42.97 | 795 | 1.2422 | 0.7763 | | 0.0005 | 44.0 | 814 | 1.2429 | 0.7729 | | 0.0005 | 44.97 | 832 | 1.2434 | 0.7763 | | 0.0005 | 46.0 | 851 | 1.2458 | 0.7763 | | 0.0005 | 46.97 | 869 | 1.2468 | 0.7763 | | 
0.0005 | 48.0 | 888 | 1.2471 | 0.7763 | | 0.0005 | 48.65 | 900 | 1.2472 | 0.7763 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
AlpacaSundae/full_totakeke
AlpacaSundae
2023-07-04T12:35:10Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-04T12:11:15Z
--- license: openrail --- Trained using all 73 songs from acww. (Overkill, but it seems to work better than when I hand-selected 5 songs; maybe the next model will just be scales etc. from the sound bank instead.) I muted the instrument tracks in the sf2 and converted to wav in python. I left the whistling in as I thought it would be ok, but it gets weird with silences, so maybe it will be remade without whistling one day. I typically just use mangio-crepe set to 64 hop length and I set the pitch down an octave. For some songs I'll generate two octaves to get clarity in higher and lower parts of the song. It seems that too high or too low a pitch often lets words slip through too much. Usually need to cut bits where the input was silence after generation as well, due to the weird artefacts mentioned. idk what im doing
ccattomio/q-Taxi-v3
ccattomio
2023-07-04T12:25:37Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T12:04:19Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="ccattomio/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ccattomio/q-FrozenLake-v1-4x4-noSlippery
ccattomio
2023-07-04T12:25:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T09:57:07Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="ccattomio/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
juancopi81/lmd-8bars-2048-epochs10
juancopi81
2023-07-04T12:23:11Z
127
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-01T23:26:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: lmd-8bars-2048-epochs10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmd-8bars-2048-epochs10 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 4 - seed: 1 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.4182 | 0.5 | 4994 | 1.4933 | | 1.4626 | 1.0 | 9988 | 1.3082 | | 1.3176 | 1.5 | 14982 | 1.2276 | | 1.2604 | 2.0 | 19976 | 1.1815 | | 1.2101 | 2.5 | 24970 | 1.1499 | | 1.1804 | 3.0 | 29964 | 1.1260 | | 1.1517 | 3.5 | 34958 | 1.1043 | | 1.1349 | 4.0 | 39952 | 1.0887 | | 1.1133 | 4.5 | 44946 | 1.0762 | | 1.0995 | 5.0 | 49940 | 1.0618 | | 1.0824 | 5.5 | 54934 | 1.0507 | | 1.0713 | 6.0 | 59928 | 1.0423 | | 1.0552 | 6.5 | 64922 | 1.0328 | | 1.0505 | 7.0 | 69916 | 1.0279 | | 1.0365 | 7.5 | 74910 | 1.0217 | | 1.0307 | 8.0 | 79904 | 1.0153 | | 1.022 | 8.5 | 84898 | 1.0107 | | 1.0189 | 9.0 | 89892 | 1.0090 | | 1.0129 | 9.5 | 94886 | 1.0084 | | 1.0139 | 10.0 | 99880 | 1.0086 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
maxkskhor/ppo-Huggy
maxkskhor
2023-07-04T12:20:09Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-04T12:20:04Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: maxkskhor/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BadreddineHug/donut-base-ocr3
BadreddineHug
2023-07-04T12:09:53Z
72
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-04T11:22:07Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-ocr3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-ocr3 This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ddoc/adt
ddoc
2023-07-04T12:02:45Z
0
1
null
[ "region:us" ]
null
2023-07-04T12:02:27Z
# !After Detailer !After Detailer is an extension for the stable diffusion webui, similar to Detection Detailer, except it uses ultralytics instead of mmdet. ## Install (from Mikubill/sd-webui-controlnet) 1. Open "Extensions" tab. 2. Open "Install from URL" tab in the tab. 3. Enter `https://github.com/Bing-su/adetailer.git` to "URL for extension's git repository". 4. Press "Install" button. 5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart". 6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.) 7. Completely restart A1111 webui including your terminal. (If you do not know what is a "terminal", you can reboot your computer: turn your computer off and turn it on again.) You can now install it directly from the Extensions tab. ![image](https://i.imgur.com/g6GdRBT.png) You **DON'T** need to download any model from huggingface. ## Options | Model, Prompts | | | | --------------------------------- | ------------------------------------- | ------------------------------------------------- | | ADetailer model | Determine what to detect. | `None` = disable | | ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. | | Detection | | | | ------------------------------------ | -------------------------------------------------------------------------------------------- | --- | | Detection model confidence threshold | Only objects with a detection model confidence above this threshold are used for inpainting. | | | Mask min/max ratio | Only use masks whose area is between those ratios for the area of the entire image. | | If you want to exclude objects in the background, try setting the min ratio to around `0.01`. | Mask Preprocessing | | | | ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | | Mask x, y offset | Moves the mask horizontally and vertically by | | | Mask erosion (-) / dilation (+) | Enlarge or reduce the detected mask. | [opencv example](https://docs.opencv.org/4.7.0/db/df6/tutorial_erosion_dilatation.html) | | Mask merge mode | `None`: Inpaint each mask<br/>`Merge`: Merge all masks and inpaint<br/>`Merge and Invert`: Merge all masks and Invert, then inpaint | | Applied in this order: x, y offset → erosion/dilation → merge/invert. #### Inpainting ![image](https://i.imgur.com/wyWlT1n.png) Each option corresponds to a corresponding option on the inpaint tab. ## ControlNet Inpainting You can use the ControlNet extension if you have ControlNet installed and ControlNet models. Support `inpaint, scribble, lineart, openpose, tile` controlnet models. Once you choose a model, the preprocessor is set automatically. 
## Model | Model | Target | mAP 50 | mAP 50-95 | | --------------------- | --------------------- | ----------------------------- | ----------------------------- | | face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 | | face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 | | hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 | | person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox)<br/>0.761 (mask) | 0.555 (bbox)<br/>0.460 (mask) | | person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox)<br/>0.809 (mask) | 0.605 (bbox)<br/>0.508 (mask) | | mediapipe_face_full | realistic face | - | - | | mediapipe_face_short | realistic face | - | - | | mediapipe_face_mesh | realistic face | - | - | The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer). ### User Model Put your [ultralytics](https://github.com/ultralytics/ultralytics) model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`. It must be a bbox detection or segmentation model and use all labels. ### Dataset Datasets used for training the yolo models are: #### Face - [Anime Face CreateML](https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml) - [xml2txt](https://universe.roboflow.com/0oooooo0/xml2txt-njqx1) - [AN](https://universe.roboflow.com/sed-b8vkf/an-lfg5i) - [wider face](http://shuoyang1213.me/WIDERFACE/index.html) #### Hand - [AnHDet](https://universe.roboflow.com/1-yshhi/anhdet) - [hand-detection-fuao9](https://universe.roboflow.com/catwithawand/hand-detection-fuao9) #### Person - [coco2017](https://cocodataset.org/#home) (only person) - [AniSeg](https://github.com/jerryli27/AniSeg) - [skytnt/anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation) ## Example ![image](https://i.imgur.com/38RSxSO.png) ![image](https://i.imgur.com/2CYgjLx.png) [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/F1F1L7V2N)
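A minimal sketch (not part of the extension's README), assuming the `ultralytics` and `huggingface_hub` packages are installed: it downloads one of the YOLO detectors listed above from `Bingsu/adetailer` and runs it directly, applying a confidence threshold analogous to the extension's "Detection model confidence threshold" option. The image path is a placeholder.

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Download the face detector weights referenced in the model table above.
weights = hf_hub_download("Bingsu/adetailer", "face_yolov8n.pt")
model = YOLO(weights)

# Keep only detections above 30% confidence; "portrait.png" is a placeholder path.
results = model.predict("portrait.png", conf=0.3)
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))  # bounding box coordinates and confidence
```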
HilbertS/ppo-CartPole-v1
HilbertS
2023-07-04T11:56:04Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T11:55:56Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 179.30 +/- 76.48 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'first-run' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'f': '/root/.local/share/jupyter/runtime/kernel-10ad5965-bc3b-4029-b8a5-74b58d83db89.json' 'repo_id': 'HilbertS/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
fatcat22/rl_course_vizdoom_health_gathering_supreme
fatcat22
2023-07-04T11:52:55Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T11:52:52Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 7.46 +/- 2.25 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r fatcat22/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
cv43/distilbert-base-uncased-finetuned-squad
cv43
2023-07-04T11:51:02Z
133
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-03T12:52:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.5644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 190 | 2.0763 | | No log | 2.0 | 380 | 1.6763 | | 2.3144 | 3.0 | 570 | 1.5644 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
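## Example usage

Since the intended-uses section above is still empty, here is a minimal usage sketch with the 🤗 Transformers `pipeline` API. Only the repo id comes from this card; the question and context are made up for illustration.

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="cv43/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The checkpoint was fine-tuned for three epochs on the SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```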
LarryAIDraw/CHAR-Kord
LarryAIDraw
2023-07-04T11:47:18Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-04T11:32:25Z
--- license: creativeml-openrail-m --- https://civitai.com/models/100517/kord-or-girls-frontline
Bilgilice/bilgilice35
Bilgilice
2023-07-04T11:46:09Z
0
0
null
[ "arxiv:1703.10135", "arxiv:1712.05884", "arxiv:2005.11129", "arxiv:2008.03802", "arxiv:2003.01950", "arxiv:2006.06873", "arxiv:1905.09263", "arxiv:2006.04558", "arxiv:2104.05557", "arxiv:1906.03402", "arxiv:2211.06892", "arxiv:2108.13320", "arxiv:2106.06103", "arxiv:2112.02418", "arxiv:1710.08969", "arxiv:1907.09006", "arxiv:1910.10288", "arxiv:2108.10447", "arxiv:1710.10467", "arxiv:2003.11982", "arxiv:1910.06711", "arxiv:2005.05106", "arxiv:1910.11480", "arxiv:1909.11646", "arxiv:2009.00713", "arxiv:2010.05646", "arxiv:2106.07889", "arxiv:2210.15418", "region:us" ]
null
2023-07-04T11:44:42Z
## 🐸Coqui.ai News

- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
- 📣 The **Coqui Studio API** has landed in 🐸TTS. - [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
- 📣 The [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)!! - [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).

## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>

🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in **20+ languages** for products and research projects.

[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)
[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)
[![PyPI version](https://badge.fury.io/py/TTS.svg)](https://badge.fury.io/py/TTS)
[![Covenant](https://camo.githubusercontent.com/7d620efaa3eac1c5b060ece5d6aacfcc8b81a74a04d05cd0398689c01c4463bb/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d76322e3025323061646f707465642d6666363962342e737667)](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[![Downloads](https://pepy.tech/badge/tts)](https://pepy.tech/project/tts)
[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)

![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/aux_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/data_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/docker.yaml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/inference_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/style_check.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/text_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/tts_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/vocoder_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests0.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests1.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests2.yml/badge.svg)
[![Docs](<https://readthedocs.org/projects/tts/badge/?version=latest&style=plastic>)](https://tts.readthedocs.io/en/latest/)

📰 [**Subscribe to 🐸Coqui.ai Newsletter**](https://coqui.ai/?subscription=true)
๐Ÿ“ข [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2) ๐Ÿ“„ [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers) <img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" /> ## ๐Ÿ’ฌ Where to ask questions Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it. | Type | Platforms | | ------------------------------- | --------------------------------------- | | ๐Ÿšจ **Bug Reports** | [GitHub Issue Tracker] | | ๐ŸŽ **Feature Requests & Ideas** | [GitHub Issue Tracker] | | ๐Ÿ‘ฉโ€๐Ÿ’ป **Usage Questions** | [GitHub Discussions] | | ๐Ÿ—ฏ **General Discussion** | [GitHub Discussions] or [Discord] | [github issue tracker]: https://github.com/coqui-ai/tts/issues [github discussions]: https://github.com/coqui-ai/TTS/discussions [discord]: https://discord.gg/5eXr5seRrv [Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials ## ๐Ÿ”— Links and Resources | Type | Links | | ------------------------------- | --------------------------------------- | | ๐Ÿ’ผ **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/) | ๐Ÿ’พ **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)| | ๐Ÿ‘ฉโ€๐Ÿ’ป **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)| | ๐Ÿ“Œ **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378) | ๐Ÿš€ **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)| ## ๐Ÿฅ‡ TTS Performance <p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p> Underlined "TTS*" and "Judy*" are **internal** ๐ŸธTTS models that are not released open-source. They are here to show the potential. ## Features - High-performance Deep Learning models for Text2Speech tasks. - Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). - Speaker Encoder to compute speaker embeddings efficiently. - Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN) - Fast and efficient model training. - Detailed training logs on the terminal and Tensorboard. - Support for Multi-speaker TTS. - Efficient, flexible, lightweight but feature complete `Trainer API`. - Released and ready-to-use models. - Tools to curate Text2Speech datasets under```dataset_analysis```. - Utilities to use and test your models. - Modular (but not too much) code base enabling easy implementation of new ideas. 
## Implemented Models

### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)

### End-to-End Models
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark)

### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)

### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)

### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)

### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)

You can also help us implement more models.

## Install TTS

🐸TTS is tested on Ubuntu 18.04 with **python >= 3.7, < 3.11**.

If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.

```bash
pip install TTS
```

If you plan to code or train models, clone 🐸TTS and install it locally.

```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras
```

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```

If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).

## Docker Image

You can also try TTS without installing it by using the docker image. Simply run the following command to start a container and use TTS inside it.
```bash docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu python3 TTS/server/server.py --list_models #To get the list of available models python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server ``` You can then enjoy the TTS server [here](http://[::1]:5002/) More details about the docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html) ## Synthesizing speech by ๐ŸธTTS ### ๐Ÿ Python API ```python from TTS.api import TTS # Running a multi-speaker and multi-lingual model # List available ๐ŸธTTS models and choose the first one model_name = TTS.list_models()[0] # Init TTS tts = TTS(model_name) # Run TTS # โ— Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language # Text to speech with a numpy output wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0]) # Text to speech to a file tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav") # Running a single speaker model # Init TTS with the target model name tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False) # Run TTS tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH) # Example voice cloning with YourTTS in English, French and Portuguese tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True) tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav") tts.tts_to_file("Isso รฉ clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav") # Example voice conversion converting speaker of the `source_wav` to the speaker of the `target_wav` tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True) tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav") # Example voice cloning by a single speaker TTS model combining with the voice conversion model. This way, you can # clone voices by using any model in ๐ŸธTTS. tts = TTS("tts_models/de/thorsten/tacotron2-DDC") tts.tts_with_vc_to_file( "Wie sage ich auf Italienisch, dass ich dich liebe?", speaker_wav="target/speaker.wav", file_path="output.wav" ) # Example text to speech using [๐ŸธCoqui Studio](https://coqui.ai) models. # You can use all of your available speakers in the studio. # [๐ŸธCoqui Studio](https://coqui.ai) API token is required. You can get it from the [account page](https://coqui.ai/account). # You should set the `COQUI_STUDIO_TOKEN` environment variable to use the API token. # If you have a valid API token set you will see the studio speakers as separate models in the list. # The name format is coqui_studio/en/<studio_speaker_name>/coqui_studio models = TTS().list_models() # Init TTS with the target studio speaker tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False, gpu=False) # Run TTS tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH) # Run TTS with emotion and speed control tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5) #Example text to speech using **Fairseq models in ~1100 languages** ๐Ÿคฏ. 
#For these models use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`. #You can find the list of language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms). # TTS with on the fly voice conversion api = TTS("tts_models/deu/fairseq/vits") api.tts_with_vc_to_file( "Wie sage ich auf Italienisch, dass ich dich liebe?", speaker_wav="target/speaker.wav", file_path="output.wav" ) ``` ### Command line `tts` #### Single Speaker Models - List provided models: ``` $ tts --list_models ``` - Get model info (for both tts_models and vocoder_models): - Query by type/name: The model_info_by_name uses the name as it from the --list_models. ``` $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>" ``` For example: ``` $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts ``` ``` $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2 ``` - Query by type/idx: The model_query_idx uses the corresponding idx from --list_models. ``` $ tts --model_info_by_idx "<model_type>/<model_query_idx>" ``` For example: ``` $ tts --model_info_by_idx tts_models/3 ``` - Run TTS with default models: ``` $ tts --text "Text for TTS" --out_path output/path/speech.wav ``` - Run a TTS model with its default vocoder model: ``` $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav ``` For example: ``` $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav ``` - Run with specific TTS and vocoder models from the list: ``` $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav ``` For example: ``` $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav ``` - Run your own TTS model (Using Griffin-Lim Vocoder): ``` $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav ``` - Run your own TTS and Vocoder models: ``` $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json ``` #### Multi-speaker Models - List the available speakers and choose a <speaker_id> among them: ``` $ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs ``` - Run the multi-speaker TTS model with the target speaker ID: ``` $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id> ``` - Run your own multi-speaker TTS model: ``` $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id> ``` ## Directory Structure ``` |- notebooks/ (Jupyter Notebooks for model evaluation, parameter selection and data analysis.) |- utils/ (common utilities.) |- TTS |- bin/ (folder for all the executables.) |- train*.py (train your target model.) |- ... 
|- tts/ (text to speech models) |- layers/ (model layer definitions) |- models/ (model definitions) |- utils/ (model specific utilities.) |- speaker_encoder/ (Speaker Encoder models.) |- (same) |- vocoder/ (Vocoder models.) |- (same) ```
Word2vec/nlpl_7
Word2vec
2023-07-04T11:45:15Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_February_2017", "license:cc-by-4.0", "region:us" ]
null
2023-07-04T10:02:23Z
---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---

## Information

A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 273930, corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`. The corpus was lemmatized and POS-tagged, and the model was trained with the Global Vectors algorithm using a window of 5 and a dimension of 300.

## How to use?

```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_7", filename="model.bin"), binary=True, unicode_errors="ignore")
```

## Citation

Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7

This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/7.zip
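## Example query (sketch)

Building on the loading snippet above, a quick nearest-neighbour query might look like the following. Since the corpus was lemmatized and POS-tagged, the vocabulary keys are assumed to follow a `lemma_POS` convention (e.g. `house_NOUN`); check `model.index_to_key` for the exact format.

```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

model = KeyedVectors.load_word2vec_format(
    hf_hub_download(repo_id="Word2vec/nlpl_7", filename="model.bin"),
    binary=True,
    unicode_errors="ignore",
)

# Inspect a few vocabulary entries to confirm the token format, then query neighbours.
print(model.index_to_key[:10])
print(model.most_similar("house_NOUN", topn=5))  # assumes lemma_POS keys
```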
nolanaatama/tny
nolanaatama
2023-07-04T11:43:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-04T11:40:16Z
--- license: creativeml-openrail-m ---
Allenpai/alpacaRec
Allenpai
2023-07-04T11:43:15Z
0
0
null
[ "region:us" ]
null
2023-07-04T11:42:16Z
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.4.0.dev0
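### Example loading (sketch)

The card does not name the base model, so the sketch below reads it from the adapter's own config. It assumes the repository contains a standard PEFT adapter (`adapter_config.json` plus weights) and that the base model fits in 8-bit, as configured above.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "Allenpai/alpacaRec"

# The base model name is taken from the adapter config (assumption: it is present there).
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA/PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base, adapter_id)
```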
dcarpintero/Reinforce-Pixelcopter-PLE-v1
dcarpintero
2023-07-04T11:41:06Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T11:41:02Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 28.70 +/- 22.43 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
iammartian0/whisper-tiny-finetuned-gtzan
iammartian0
2023-07-04T11:08:08Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-04T10:40:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-tiny-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.4342 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7087 | 0.99 | 56 | 1.6682 | 0.53 | | 1.0139 | 2.0 | 113 | 1.1272 | 0.64 | | 0.8057 | 2.99 | 169 | 0.7579 | 0.79 | | 0.393 | 4.0 | 226 | 0.5791 | 0.86 | | 0.3414 | 4.99 | 282 | 0.5055 | 0.86 | | 0.1083 | 6.0 | 339 | 0.4109 | 0.9 | | 0.0783 | 6.99 | 395 | 0.4297 | 0.87 | | 0.0998 | 8.0 | 452 | 0.4627 | 0.87 | | 0.0119 | 8.99 | 508 | 0.4410 | 0.87 | | 0.0095 | 9.91 | 560 | 0.4342 | 0.87 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
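## Example usage

A minimal inference sketch with the 🤗 Transformers `pipeline` API; the audio file path is a placeholder.

```python
from transformers import pipeline

# Music-genre classification with the fine-tuned checkpoint.
classifier = pipeline("audio-classification", model="iammartian0/whisper-tiny-finetuned-gtzan")

predictions = classifier("some_song.wav")  # placeholder path to a local audio clip
print(predictions[0])
```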
heybezayb/ppo-LunarLander-v2
heybezayb
2023-07-04T11:01:18Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T11:00:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.41 +/- 16.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
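As a starting point for the TODO above, a hedged loading sketch is shown below. The archive filename inside the repo is an assumption, so check the repository's file list before running it.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; adjust it to match the actual file in the repo.
checkpoint = load_from_hub(repo_id="heybezayb/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```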
sazyou-roukaku/LittleStepMix
sazyou-roukaku
2023-07-04T10:47:46Z
248
33
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-25T06:57:42Z
--- license: creativeml-openrail-m language: - ja library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image --- License:[CreativeML Open RAIL-M](https://huggingface.co/sazyou-roukaku/LittleStepMix/blob/main/license_v1.txt)<br> Additional Copyright: sazyou_roukaku (TwitterID [@sazyou_roukaku](https://twitter.com/sazyou_roukaku)) as of June 25, 2023<br> ใ“ใฎใƒขใƒ‡ใƒซใฏใ€ŽCreativeML Open RAIL-Mใ€ใงLicenseใใฎใ‚‚ใฎใซๅค‰ๆ›ดใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚<br> ใ—ใ‹ใ—่ฟฝๅŠ ่‘—ไฝœ่€…ใจใ—ใฆไฝๅŸŽ้ƒŽ็”ปใฎๅๅ‰ใŒ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใพใ™ใ€‚<br> ใชใŠใ€ŽCreativeML Open RAIL-Mใ€ใซ่จ˜่ผ‰ใ•ใ‚Œใฆใ„ใ‚‹้€šใ‚Šใ€<br> ๆœฌใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใฎ็”Ÿๆˆ็‰ฉใซ้–ขใ—ใฆใฏLicenseใฎไฝฟ็”จๅˆถ้™Aใฎไบ‹ไพ‹ใ‚’้™คใใ€ๅฝ“ๆ–นใฏไธ€ๅˆ‡้–ขไธŽ่‡ดใ—ใพใ›ใ‚“ใ€‚<br> ็Šฏ็ฝช็›ฎ็š„ๅˆฉ็”จใ‚„ๅŒป็™‚็”จ็”ปๅƒใชใฉ็‰นๅฎšๅฐ‚้–€็š„ใช็”จ้€”ใงใฎๅˆฉ็”จใฏไฝฟ็”จๅˆถ้™Aใง็ฆๆญขใ•ใ‚Œใฆใ„ใพใ™ใ€‚<br> ๅฟ…ใš็ขบ่ชใ—ใ”ๅˆฉ็”จใใ ใ•ใ„ใ€‚<br> ใพใŸๅฝ“ๆ–นใฏไธ€ๅˆ‡่ฒฌไปปใ‚’ๆŒใกใพใ›ใ‚“ใ€‚ๅ…่ฒฌใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’ใ”ไบ†ๆ‰ฟใฎไธŠใ€ใ”ไฝฟ็”จใใ ใ•ใ„ใ€‚<br> <br> ใ“ใฎCheckPointใฎใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใƒปไฝฟ็”จใฏไธŠ่จ˜CreativeML Open RAIL-M Licenseใ‚’็ขบ่ชใฎไธŠใ€<br> ๅŒๆ„ใ—ใŸใจใ„ใ†ๅ‰ๆๅŠใณๅฅ‘็ด„ใซๅŸบใฅใใ‚‚ใฎใจๅˆคๆ–ญใ•ใ‚Œใพใ™ใ€‚<br> <h4>ๆ›ดๆ–ฐๅฑฅๆญด</h4> <ul> <li>6/25 LittleStepMix_v1ๅ…ฌ้–‹</li> <li>7/1 LittleStepMix_AใƒปBใƒปCๅ…ฌ้–‹</li> <li>7/3 LittleStepMix_Aใ€Textencoderๅค‰ๆ›ดๅ‰ใ‚’ใ‚ขใƒƒใƒ—ใ—ใฆใ„ใŸ็‚บใ€ๅ‰Š้™คใ—ๅค‰ๆ›ดๆธˆ็‰ˆใ‚’ๅ†ๅ…ฌ้–‹</li> </ul> <h4>ๅˆถ้™</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ่‘—ไฝœ่€…่กจ่จ˜ใ‚’ๅ…ฅใ‚Œใšใซใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ใ“ใฎใƒขใƒ‡ใƒซใง็”Ÿๆˆใ—ใŸ็”ปๅƒใ‚’ๅ•†็”จๅˆฉ็”จใ™ใ‚‹<br> Sell images they generate </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ๅ•†็”จ็”ปๅƒ็”Ÿๆˆใ‚ตใƒผใƒ“ใ‚นใซใ€ใ“ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹<br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ใ“ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใŸใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใƒป้…ๅธƒใ™ใ‚‹<br> Share merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ใ“ใฎใƒขใƒ‡ใƒซใ€ใพใŸใฏๆดพ็”Ÿใƒขใƒ‡ใƒซใ‚’่ฒฉๅฃฒใ™ใ‚‹<br> Sell this model or merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> ใ“ใฎใƒขใƒ‡ใƒซใ‚’ใƒžใƒผใ‚ธใ—ใŸใƒขใƒ‡ใƒซใซ็•ฐใชใ‚‹ๆจฉ้™ใ‚’่จญๅฎšใ™ใ‚‹<br> Have different permissions when sharing merges </td> </tr> </table> </div> ใชใŠใ€ไธŠ่จ˜ใฎใƒขใƒ‡ใƒซใใฎใ‚‚ใฎใฎ่ฒฉๅฃฒใ‚„ๅ•†็”จ็”ปๅƒ็”Ÿๆˆใ‚ตใƒผใƒ“ใ‚นใธใฎๅˆฉ็”จใฏใ€<br> ใ€ŽCreativeML Open RAIL-Mใ€ใฎLicenseไธŠใ€ไฝฟ็”จๅˆถ้™Aใซ่ฟฝ่จ˜่จ˜่ผ‰ใ—ใชใ„้™ใ‚Šใ€<br> ๅˆถ้™ใ™ใ‚‹ใ“ใจใŒๆœฌๆฅใงใใชใ„็‚บใ€ใƒžใƒผใ‚ธ่€…ใธใฎ่ฒ ๆ‹…ใ‚‚่€ƒๆ…ฎใ—ใ€civitaiๅˆถ้™่กจ่จ˜ไธŠOKใจใ—ใฆใ„ใ‚‹ใ ใ‘ใงใ‚ใ‚Šใ€<br> ็ฉๆฅต็š„ใชๆŽจๅฅจใฏ่กŒใฃใฆใŠใ‚‰ใšใ€ใพใŸใใ‚Œใซใ‚ˆใ‚Šไฝ•ใ‚‰ใ‹ใฎๅ•้กŒใŒ็”Ÿใ˜ใฆใ‚‚ๅฝ“ๆ–นใฏไธ€ๅˆ‡่ฒฌไปปใ‚’ๆŒใกใพใ›ใ‚“ใ€‚<br> ใใฎ็‚นใ€ใ”็•™ๆ„ใ„ใŸใ ใใ‚ˆใ†ใŠ้ก˜ใ„ใ„ใŸใ—ใพใ™ใ€‚<br> <br> <h2>LittleStepMix_v1 
ใƒžใƒผใ‚ธๅˆฉ็”จใƒขใƒ‡ใƒซไธ€่ฆง</h2> <ul> <li><a href="https://civitai.com/models/4384">dreamshaper_6BakedVae</a> ยฉLykon</li> <li><a href="https://civitai.com/models/25694">epicrealism_newAge</a> ยฉepinikion</li> <li><a href="https://civitai.com/models/1169">sxd_v10</a> ยฉizuek</li> <li><a href="https://huggingface.co/haor/Evt_V4-preview">Evt_V4_e04_ema</a> ยฉhaor</li> <li><a href="https://huggingface.co/Crosstyan/BPModel">bp_mk5</a> ยฉCrosstyan</li> <li><a href="https://huggingface.co/naclbit/trinart_characters_19.2m_stable_diffusion_v1">trinart_characters_it4_v1</a> ยฉSta, AI Novelist Dev <a href="https://ai-novel.com/">(https://ai-novel.com/)</a> @ Bit192, Inc.</li> </ul> <h2>LLittleStepMix_AใƒปBใƒปC่ฟฝๅŠ ใƒžใƒผใ‚ธๅˆฉ็”จใƒขใƒ‡ใƒซ</h2> <ul> <li><a href="https://huggingface.co/Ai-tensa/FlexWaifu">FlexWaifuRainbow</a> <a href="https://twitter.com/Ai_tensa">ยฉAi-tensa</a></li> <li><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-3">wd-v1-3-float16</a> developed by Anthony Mercurio, Salt, and Cafe</a></li> </ul> <p></p> -------------------------------------------------------------------------- <h4>ใ‚ตใƒณใƒ—ใƒซ</h4> <img src="https://huggingface.co/sazyou-roukaku/LittleStepMix/resolve/main/sample/002.jpg" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> (gyaru:1.3),high resolution,ultra-detail,solo,short shirt and short shorts,locker room, (cowboy shot:1.2),sexy smile,blonde long hair, Negative prompt: (worst quality:2),(low quality:1.4),(manicure:1.5),(long neck:2),lip Steps: 30 Sampler: DPM++ 2M Karras CFG scale: 7 Seed: 3358380436 </pre> <img src="https://huggingface.co/sazyou-roukaku/LittleStepMix/resolve/main/sample/001.jpg" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl,handsome face,cool beauty,high resolution,ultra-detail,solo,punk tee and cargo pants, london street, (cowboy shot:1.2),happy smile,black short hair, Negative prompt: (worst quality:2),(low quality:1.4),(manicure:1.5),(long neck:2),lip Steps: 30 Sampler: DPM++ 2M Karras CFG scale: 7 Seed: 269540596 </pre> -------------------------------------------------------------------------- <div> <h3>่ฉณ็ดฐ</h3> <p> <div class="px-2"> <div class="border p-2"> <details> <summary><h4>LittleStepMix_AใƒปBใƒปC</h4></summary> CLIP่จญๅฎš/clip skip:2<br> ๆŽจๅฅจVAE/mse840000_klf8anime_klf8anime2.vae<br> ใ‚‚ใ—ใใฏใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซใ‚ใ‚‹sr_SDv2vae_kl-f8anime2.safetensors<br> sr_SDv2vae_kl-f8anime2.safetensorsใฏSD2VAEใจkl-f8anime2ใ‚’็งใŒใƒžใƒผใ‚ธใ—ใŸVAEใงใ™ใ€‚<br> LittleStepMix_Aใ€LittleStepMix_Bใ€LittleStepMix_Cใฏ็„ผใ่พผใฟใชใ—ใฎNoVAEใงใ™ใ€‚<br> ClearVAEใฏ1.0ใŒNAIVAEใฎๅฝฑ้ŸฟใŒใ‚ใ‚‹ใจ่จ˜่ผ‰ใŒใ‚ใ‚Šใ€ใใ‚Œไปฅ้™ใฎVersionใ‚‚ๅ‡บๆ‰€ไธๆ˜Žใฎ็‚บใ€ใ‚ณใƒณใ‚ปใƒ—ใƒˆ็š„ใซๆŽจๅฅจใ—ใฆใ„ใพใ›ใ‚“ใ€‚<br> <br> 1ไบบใฎๆ™‚ใฏsoloใ‚’ใƒ—ใƒญใƒณใƒ—ใƒˆใงๅ…ฅใ‚Œใชใ„ใจใ€ๅค‰ใชใ‚ณใƒžๅ‰ฒใ‚Š็”ปๅƒใฎใ‚ˆใ†ใช่กจ็คบใซใชใ‚Šใ‚„ใ™ใ„ๅ‚พๅ‘ใŒใ‚ใ‚Šใพใ™ใ€‚ SD1.4ใ‹ใ‚‰็ขบ่ชใ•ใ‚Œใฆใ„ใ‚‹ใฎใงใ™ใŒใ€ACertainty็ณปใฏ็‰นใซใ“ใฎๅ‚พๅ‘ใŒๅผทใ„ใฎใงใ€1ไบบใฎๅ ดๅˆใฏsoloใจๆŒ‡ๅฎšๆŽจๅฅจใ€‚<br> NFSWใฏใใ“ใใ“ใพใงใฏๆ™ฎ้€šใซๅ‡บใ›ใพใ™ใ€‚ </details> </div> </div> <div class="px-2"> <div class="border p-2"> <details> <summary><h4>LittleStepMix_v1</h4></summary> CLIP่จญๅฎš/clip skip:2<br> ๆŽจๅฅจVAE/mse840000_klf8anime_klf8anime2.vae<br> ใ‚‚ใ—ใใฏใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซใ‚ใ‚‹sr_SDv2vae_kl-f8anime2.safetensorsใŒๅฅฝใฟใงใ™ใ€‚<br> sr_SDv2vae_kl-f8anime2.safetensorsใฏSD2VAEใจkl-f8anime2ใ‚’็งใŒใƒžใƒผใ‚ธใ—ใŸVAEใงใ™ใ€‚<br> <br> 
ใชใŠLittleStepMix_v1ใฏSD1.xใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆVAEใŒๆจ™ๆบ–็„ผใ่พผใฟๆธˆใฟใงใ™ใ€‚<br> <br> ่‡ช็„ถ่จ€่ชž(ๆ–‡็ซ )ใƒ—ใƒญใƒณใƒ—ใƒˆใ ใจใ€ใ‚ˆใ‚Š้ก”ใฎใƒชใ‚ขใƒซๅŒ–ใŒๅผทใใชใ‚‹ๅ‚พๅ‘ใŒใฟใ‚‰ใ‚Œใพใ™ใ€‚<br> ๅ˜ๆ–‡ใƒ—ใƒญใƒณใƒ—ใƒˆใงใฎๅˆฉ็”จใ‚’ๆŽจๅฅจใ—ใพใ™ใ€‚<br> ใชใŠใ‚คใƒฉใ‚นใƒˆใƒขใƒ‡ใƒซใƒปใƒ•ใ‚ฉใƒˆใƒชใ‚ขใƒซใƒขใƒ‡ใƒซใ‚‚ๅซใ‚ใŸไปŠๅพŒใฎ่‡ชๅทฑใƒ™ใƒผใ‚น็ด ๆใƒขใƒ‡ใƒซใจใ—ใฆใฎๅ…ฌ้–‹ใฎๅด้ขใ‚‚ๅผทใใ€็พ็Šถ่ฉณใ—ใ„่ƒฝๅŠ›ใฏๆคœ่จผไธญใงใ™ใ€‚ใ”ไบ†ๆ‰ฟใใ ใ•ใ„ใ€‚<br> ใชใŠใƒ™ใƒผใ‚นใŒใƒ•ใ‚ฉใƒˆใƒชใ‚ขใƒซใƒขใƒ‡ใƒซใ‚„ใ‚ปใƒŸใƒชใ‚ขใƒซใƒขใƒ‡ใƒซใฎ็‚บใ€ใ‚จใƒ•ใ‚งใ‚ฏใƒˆ็ณปใฏใ‹ใชใ‚Šๅผฑใ„ๅฐ่ฑกใงใ™ใ€‚<br> </details> </div> </div> <h3>FAQ</h3> <h4>Q1:LittleStepMixใจใฏไฝ•ใ‹</h4> A1:<br> ็พๅœจใ‚คใƒฉใ‚นใƒˆใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใฏleakใƒขใƒ‡ใƒซใฎๆททๅ…ฅใฎๅ•้กŒใŒๆ‡ธๅฟตใ•ใ‚Œใ€ๆฌกใ€…ใซๅ…ฌ้–‹ๅœๆญขใŒ็›ธๆฌกใใชใฉ่Ž็ธฎใƒ ใƒผใƒ‰ใซๅ…ฅใฃใฆใ„ใพใ™ใ€‚<br> ๅฝ“ใƒขใƒ‡ใƒซใฏๆฏ”่ผƒ็š„ๅฎ‰็‰Œใจๆ€ใ‚ใ‚Œใ‚‹่จ“็ทดใƒขใƒ‡ใƒซใ‚’ไธป่ปธใจใ—ใ€ไปŠๅพŒ่ชฟๆ•ดไบˆๅฎšใฎใƒขใƒ‡ใƒซใฎๅŸบ็คŽใจใ—ใฆไฝœใฃใฆใ„ใพใ™ใ€‚<br> ๅฎŒๅ…จใซๆททๅ…ฅใŒใชใ„ใจใฏๆ–ญ่จ€ใงใใชใ„ใ‚‚ใฎใฎใ€ใƒžใƒผใ‚ธ็ด ๆใฏ่กจ่จ˜ใฎใ‚‚ใฎไปฅๅค–ไธ€ๅˆ‡ไฝฟ็”จใ—ใฆใ„ใชใ„็‚นใ€‚(add็”จใฎSD1.4ใ€SD1.5ใฏ้™คใ)<br> ่จ“็ทดใƒขใƒ‡ใƒซใฎใฟใงใฎใƒžใƒผใ‚ธใงใ‚ใ‚‹็‚นใ‹ใ‚‰ใ€ๆฏ”่ผƒ็š„ไฝŽใƒชใ‚นใ‚ฏใฎใƒฉใ‚คใƒณใ‚’็›ฎๆŒ‡ใ—ใฆใ„ใพใ™ใ€‚<br> ๅŸบๆœฌ็š„ใซใฏไผๆฅญใƒขใƒ‡ใƒซ็ญ‰ใŒๆŠ•ๅ…ฅใ•ใ‚Œใ‚‹ใชใฉใฎๆ™‚ไปฃใพใงใฎ็น‹ใŽใจใ—ใฆใฎๅฝนๅ‰ฒใงใ™ใ€‚<br> ๆใ‚Œๅ…ฅใ‚Šใพใ™ใŒใ€ๅ…จใฆใฎใƒžใƒผใ‚ธ็ด ๆใ‚’็ขบ่ชใฎไธŠใ€ใ”ๅˆฉ็”จใฏ่‡ชๅทฑใงใ”ๅˆคๆ–ญใใ ใ•ใ„ใ€‚<br> <br> *7/1่ฟฝ่จ˜*ใ€€LittleStepMix_AใƒปBใƒปCใฏLittleStepMixใ‚’ๅœŸๅฐใจใ—ใฆใ‚คใƒฉใ‚นใƒˆใƒขใƒ‡ใƒซๅŒ–ใ—ใพใ—ใŸใ€‚<br> ใƒžใƒผใ‚ธ็ด ๆใจใ—ใฆ่‡ช็”ฑใซใ”ๅˆฉ็”จใ„ใŸใ ใ„ใฆๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚<br> <h4>Q2:ๅ„ๅญฆ็ฟ’ใƒขใƒ‡ใƒซ้ธๅฎšๅŸบๆบ–ใซใคใ„ใฆ</h4> A2:<br> *7/1่ฟฝ่จ˜* sampleใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซใ€Anything-V3.0ใ‚’ๅŸบๆบ–ใจใ—ใฆใ€<br> Baka-DiffusionV1(fp16)ใ€sd-v1-4ใ€LittleStepMixใ‚ทใƒชใƒผใ‚บ4็จฎๅŠใณไธป่ปธใƒขใƒ‡ใƒซใงใ‚ใ‚‹dreamshaperใง็พ็Šถๆœ€ๅคใฎๅ…ฌ้–‹ใƒขใƒ‡ใƒซ<br> dreamshaper_252ใ‚’ใƒฉใƒณใƒ€ใƒ Seedใง10ๅ›žใ€<br> IN01-02,04-05,07-08/OUT03-11ใฎcosineไธ€่‡ด็އใ‚’ๅ‡บๅŠ›ใ—ใŸใƒ•ใ‚กใ‚คใƒซใ‚’ๅ…ฌ้–‹ใ„ใŸใ—ใพใ™ใ€‚<br> Anything-V3.0ใซๅฏพใ—ใ€SD1.4ใฏๆฆ‚ใญ84๏ผ…ใปใฉไธ€่‡ดใ€‚<br> dreamshaper_252.safetensorsใง88๏ผ…ใ€‚LittleStepMixใ‚ทใƒชใƒผใ‚บใฏๆฆ‚ใญ89๏ผ…็จ‹ๅบฆใฎไธ€่‡ด็އใงใ™ใ€‚<br> Baka-DiffusionV1ใ‚’ๆŽก็”จใ—ใชใ‹ใฃใŸ็†็”ฑใ‚‚ใ“ใฎๆ•ฐๅ€คใซใ‚ใ‚Šใพใ™ใ€‚<br> ไธ‹่จ˜ใฎASimilarityCalculatiorใ‚’ใƒ™ใƒผใ‚นใซใ€ใƒฉใƒณใƒ€ใƒ Seedใงใ€ๅˆ่จˆใงใฏใชใๅ„ๆ•ฐๅ€คใ‚’ๅ‡บใ›ใ‚‹ใ‚ˆใ†ๆ”น่‰ฏใ—ใŸใ‚‚ใฎใ‚’็”จใ„ใฆใ„ใพใ™ใ€‚<br> ใ”ๅ‚่€ƒใพใงใซใ€‚ <br> <br> <br> โ‘ dreamshaper_6BakedVae<br> ๆœฌใƒขใƒ‡ใƒซใฎ<strong>ไธป่ปธ</strong>ใจใชใฃใฆใ„ใ‚‹่จ“็ทดใƒขใƒ‡ใƒซใงใ™ใ€‚<br> ่จ“็ทดใƒขใƒ‡ใƒซใฎ่กจ่จ˜ใŒใ‚ใ‚Šใ€่ค‡ๆ•ฐใฎๅ•†็”จ็”ปๅƒ็”Ÿๆˆใ‚ตใƒผใƒ“ใ‚นใงใ‚‚ๅˆฉ็”จใ•ใ‚Œใฆใ„ใ‚‹็‚บใ€ไธ€ๅฎšใฎไฟก้ ผๆ€งใŒๆ‹…ไฟใ•ใ‚Œใฆใ„ใ‚‹ใจๅˆคๆ–ญใ—ใฆใ„ใพใ™ใ€‚<br> <strong>ใ‚ขใ‚นใ‚ซใƒ†ใ‚นใƒˆใชใฉใงใฎ้กžไผผๆ€งใฏๅŸบๆœฌ็š„ใซdreamshaper_6BakedVae็”ฑๆฅ</strong>ใงใ™ใ€‚<br> <br> โ‘กsxd_v10<br> v0.8ใจ้•ใ„ใ€v1.0ๅ…ฌ้–‹ๆ—ฅใฏใƒชใƒผใ‚ฏๅพŒใชใŒใ‚‰SD1.5ใƒ™ใƒผใ‚นใฎ่จ“็ทดใƒขใƒ‡ใƒซใงใ€ใƒชใ‚ขใƒชใƒ†ใ‚ฃ้‡่ฆ–ใฎ็‚บใ€็ทๅˆ็š„ใซๅˆคๆ–ญใ€‚<br> ไบบไฝ“ๆง‹้€ ๅผทๅŒ–ใจๅฐ†ๆฅNFSWใƒขใƒ‡ใƒซๅŒ–ใ‚’่กŒใ†้š›ใฎ่ฃœๅผทใจใ—ใฆๆŽก็”จใ€‚<br> <br> โ‘ขepicrealism_newAge<br> ็พ่กŒใฎ่จ“็ทดใƒขใƒ‡ใƒซใงๆœ€ๅผทใฎใ‚นใƒšใƒƒใ‚ฏใ‚’่ช‡ใ‚‹ใจๆ€ใ‚ใ‚Œใ‚‹ใƒขใƒ‡ใƒซใ€‚<br> ่ƒŒๆ™ฏ่ฃœๅผทใจ่ƒฝๅŠ›ใฎ้ซ˜ใ•ใ‹ใ‚‰ๆŽก็”จใ€‚<br> 
ๆœ€ๆ–ฐใงใฏใชใ„ใฎใฏใ€ไป–ใฎ็งใฎใƒ•ใ‚ฉใƒˆใƒชใ‚ขใƒซใƒขใƒ‡ใƒซใจใฎๅ…ผใญๅˆใ„ใจใ€ใ‚ณใƒณใƒˆใƒฉใ‚นใƒˆใชใฉใฎๅ…ผใญๅˆใ„ใ‹ใ‚‰epicrealism_newAgeใ‚’้ธๆŠžใ€‚<br> <br> โ‘ฃEvt_V4_e04_ema<br> ACertaintyใจใ„ใ†leakใƒ‡ใƒผใ‚ฟใ‚’ๅซใพใชใ„ใจๅ…ฌ่จ€ใ—ใฆใ„ใ‚‹ใ‚คใƒฉใ‚นใƒˆๅญฆ็ฟ’ใƒขใƒ‡ใƒซใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใ„็”Ÿใฟๅ‡บใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ€‚<br> ็ตตๆŸ„ใฎไธป่ปธใƒขใƒ‡ใƒซใ€‚e04ๆŽก็”จใฏใ‚ใพใ‚Šๆ นๆ‹ ใŒใชใ„ใ€‚็ตตๆŸ„็š„ใซไธ€็•ชๅฅฝใฟใชใฎใงๆŽก็”จใ—ใ€ใƒ•ใ‚ฉใƒˆใƒชใ‚ขใƒซใƒขใƒ‡ใƒซใงใ“ใกใ‚‰ใ‚’ๅˆฉ็”จใ—ใฆใ„ใ‚‹ใฎใงๅค‰ๆ›ดใ™ใ‚‹ใจ็ฎก็†ใŒ้ขๅ€’ใซใชใ‚‹ใจใ„ใ†็†็”ฑใ ใ‘ใงใ™ใ€‚<br> โ€ปACertaintyใฏNOVEL AIใฎใƒ‡ใƒผใ‚ฟใ‚’่’ธ็•™ใ—ใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใฏใ‚ใ‚Šใพใ™ใŒใ€ใ“ใกใ‚‰ใฏ็‰น่จฑๆณ•ใซๆŠต่งฆใ—ใชใ„็‚บใ€ๅ•้กŒใชใ„ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚<br> ACertainty<br> <a href="https://huggingface.co/JosephusCheung/ACertainty">https://huggingface.co/JosephusCheung/ACertainty</a><br> <a href="https://huggingface.co/JosephusCheung/ASimilarityCalculatior">https://huggingface.co/JosephusCheung/ASimilarityCalculatior</a><br> <br> โ‘คbp_mk5<br> ACertaintyใƒ™ใƒผใ‚นใฎ่จ“็ทดใƒขใƒ‡ใƒซใ€‚ไธŠ่จ˜ๅŒๆง˜ใ€‚<br> <br> โ‘ฅtrinart_characters_it4_v1<br> AIใฎในใ‚Šใ™ใจใงๆœ‰ๅใชไผš็คพใŒๅ…ฌ้–‹ใ—ใฆใใ ใ•ใฃใŸใƒขใƒ‡ใƒซใชใฎใงไธ€็•ชไฟก้ ผๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚<br> ใ‚คใƒฉใ‚นใƒˆ่ฆ็ด ่ฃœๅผทใจใ—ใฆไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚<br> <br> โ‘ฆFlexWaifuRainbow<br> ใƒขใƒ‡ใƒซใฎ้€†ใƒžใƒผใ‚ธ่งฃๆžใ‚ณใƒผใƒ‰ใ‚’ๅ…ฌ้–‹ใ™ใ‚‹ใชใฉใ‚‚่กŒใฃใฆใ„ใ‚‹ๅคฉ็ด—ๆ„›ๆฐใŒWD1.3ใซ่ฟฝๅŠ ๅญฆ็ฟ’ใ‚’ๆ–ฝใ—ใŸใƒขใƒ‡ใƒซใ€‚<br> ACertaintyใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซๆŽก็”จใซๅฝ“ใŸใ‚Šใ€ACertaintyใฎ่งฃๆž็ตๆžœใชใฉใ‚‚ๅ‚่€ƒใซใ•ใ›ใฆใ„ใŸใ ใ„ใฆใ„ใพใ™ใ€‚<br> ้€ฃ็ถšๅ‡บๅŠ›ๆ™‚ใฎ็ตตๆŸ„ใฎๅฎ‰ๅฎšๆ€งใจไฟก้ ผๆ€งใ‹ใ‚‰้ธใฐใ›ใฆใ„ใŸใ ใใพใ—ใŸใ€‚ <br> <h3>Q3:ไปŠๅ›žใฎๅˆถ้™ใซๅ•้กŒใ‚„็Ÿ›็›พใฏใชใ„ใฎใ‹</h3> <h4>A3:</h4> <strong>dreamshaper_6BakedVae</strong> ใฏcivitaiใฎใƒ‘ใƒผใƒŸใ‚ทใƒงใƒณใŒใ€ <strong>OK:Have different permissions when sharing merges</strong>ใจใชใฃใฆใŠใ‚Š่งฃ้™คๅฏ่ƒฝใ€‚<br> ไป–ใฏๅˆถ้™ใชใ—ใฎ็‚บใ€ไปŠๅ›žๅ…จใฆๅˆถ้™ใชใ—ใจใ—ๅ…ฌ้–‹ใ—ใฆใŠใ‚Šใพใ™ใ€‚<br> <br> ใชใŠใƒžใƒผใ‚ธๅˆฉ็”จใƒขใƒ‡ใƒซๅดใซLicenseๅค‰ๆ›ดใƒปๅˆถ้™ๅค‰ๆ›ด็ญ‰ใŒ็”Ÿใ˜ใŸ้š›ใ‚‚<br> ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ๆ™‚็‚นใฎLicenseใ‚„ๅˆถ้™ใ‚’ๅ‰ๆใจใ—ใฆๅ…ฌ้–‹ใ—ใฆใ„ใ‚‹็‚บใ€creativeml-openrail-mใซๆบ–ใ˜ใพใ™ใ€‚<br> ใ“ใกใ‚‰ใฏLittleStepMIXMerge_LicenseSS_v1ใซ่ฉฒๅฝ“ใƒขใƒ‡ใƒซใฎSSใ‚’ไฟ็ฎกใ—ใฆใŠใ‚Šใพใ™ใ€‚<br> ใŸใ ใ—huggingfaceๅ…ฌ้–‹ใฎใƒขใƒ‡ใƒซใฏSSใ‚ˆใ‚Šใƒชใƒใ‚ธใƒˆใƒชใฎใปใ†ใŒไฟก้ ผๆ€งใŒ้ซ˜ใ„ใฎใงใ€ไฟ็ฎกใ—ใฆใŠใ‚Šใพใ›ใ‚“ใ€‚<br> <br> ใชใŠใƒžใƒผใ‚ธๅˆฉ็”จใƒขใƒ‡ใƒซๅดใซ้‡ๅคงใชๅ•้กŒใŒ็™บ็”Ÿใ—ใŸๅ ดๅˆใฏใ€ใƒขใƒ‡ใƒซใฎๅ…ฌ้–‹ๅœๆญขใ‚’่กŒใ„ใ€<br> ๅˆฉ็”จๅœๆญขใ‚’ๅ‘ผใณใ‹ใ‘ใ‚‹ๅฏ่ƒฝๆ€งใฏใ‚ใ‚Šใพใ™ใŒใ€<strong>ๅฝ“ๆ–นๅดใ‚’็†็”ฑใจใ—ใŸ่ฟฝๅŠ ๅˆถ้™ใ‚’่จญใ‘ใ‚‹ใ“ใจใฏ่‡ดใ—ใพใ›ใ‚“ใ€‚</strong> </div>
Anwaarma/autotrain-enhancedauto-72049138835
Anwaarma
2023-07-04T10:47:14Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain", "unk", "dataset:Anwaarma/autotrain-data-enhancedauto", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T10:42:11Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain" datasets: - Anwaarma/autotrain-data-enhancedauto co2_eq_emissions: emissions: 3.3106524610859784 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 72049138835 - CO2 Emissions (in grams): 3.3107 ## Validation Metrics - Loss: 0.042 - Accuracy: 0.990 - Precision: 0.994 - Recall: 0.935 - AUC: 0.997 - F1: 0.964 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Anwaarma/autotrain-enhancedauto-72049138835 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138835", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138835", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
vivekraina/falcon-7b-4bit
vivekraina
2023-07-04T10:47:09Z
4
0
peft
[ "peft", "region:us" ]
null
2023-07-04T10:46:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
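### Example loading (sketch)

A hedged sketch that mirrors the 4-bit NF4 config listed above. The base model id (`tiiuae/falcon-7b`) is inferred from the repository name and is an assumption, as is the presence of a standard PEFT adapter in this repo.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Reproduce the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model id is an assumption based on the repo name.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "vivekraina/falcon-7b-4bit")
```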
revmag/Taxi-v3
revmag
2023-07-04T10:43:12Z
0
1
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T10:43:11Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="revmag/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Bugsys0302/niplbarpcg
Bugsys0302
2023-07-04T10:38:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-04T10:33:15Z
--- license: creativeml-openrail-m ---
ericNguyen0132/roberta-large-Dep-pretrain
ericNguyen0132
2023-07-04T10:33:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T06:57:43Z
--- tags: - generated_from_trainer model-index: - name: roberta-large-Dep-pretrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-Dep-pretrain This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
revmag/q-FrozenLake-v1-4x4-noSlippery
revmag
2023-07-04T10:30:26Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T10:30:24Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="revmag/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
chenxingphh/distilbert-base-uncased-finetuned-imdb
chenxingphh
2023-07-04T10:28:47Z
126
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-04T10:21:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
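## Example usage

A minimal masked-language-modelling sketch with the `pipeline` API; the sentence is made up for illustration.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="chenxingphh/distilbert-base-uncased-finetuned-imdb")

# Predict the most likely tokens for the masked position.
for prediction in fill_mask("This movie was a complete [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```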
erkam/sd-clevr-sg2im-objects_cap-e2e
erkam
2023-07-04T10:26:20Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-2", "base_model:adapter:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-08T12:35:18Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - erkam/sd-clevr-sg2im-objects_cap-e2e

These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v4 dataset. You can find some example images below.
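A hedged inference sketch with 🤗 Diffusers follows. The prompt is made up (the training set contains CLEVR-style scenes), and `load_attn_procs` is assumed to match how these LoRA weights were saved (the standard `train_text_to_image_lora` format).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights from this repository (format is an assumption).
pipe.unet.load_attn_procs("erkam/sd-clevr-sg2im-objects_cap-e2e")

# Illustrative prompt; adjust to your use case.
image = pipe("a gray rubber sphere next to a large blue metal cube", num_inference_steps=30).images[0]
image.save("sample.png")
```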
AnnaAp/Equip
AnnaAp
2023-07-04T10:23:54Z
0
0
null
[ "region:us" ]
null
2023-07-04T10:16:55Z
---
language:
- ru
---

Logo for construction machinery and equipment.
msladic/ppo-ML-Agents-Pyramids
msladic
2023-07-04T10:19:42Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-04T10:19:39Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: msladic/ppo-ML-Agents-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
msladic/ppo-SnowballTarget
msladic
2023-07-04T10:18:36Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-04T10:02:46Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: msladic/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
NasimB/gpt2-cl-concat-log-rarity-9-210k-mod-datasets
NasimB
2023-07-04T10:10:08Z
121
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-04T08:51:19Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-cl-concat-log-rarity-9-210k-mod-datasets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-cl-concat-log-rarity-9-210k-mod-datasets This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 5.0793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.2877 | 0.07 | 500 | 5.9527 | | 5.0107 | 0.14 | 1000 | 5.5940 | | 4.7383 | 0.21 | 1500 | 5.4130 | | 4.5602 | 0.28 | 2000 | 5.2903 | | 4.423 | 0.35 | 2500 | 5.2322 | | 4.3129 | 0.41 | 3000 | 5.1696 | | 4.2078 | 0.48 | 3500 | 5.1278 | | 4.1161 | 0.55 | 4000 | 5.1007 | | 4.023 | 0.62 | 4500 | 5.0613 | | 3.933 | 0.69 | 5000 | 5.0483 | | 3.8578 | 0.76 | 5500 | 5.0290 | | 3.7859 | 0.83 | 6000 | 5.0156 | | 3.746 | 0.9 | 6500 | 5.0064 | | 3.7228 | 0.97 | 7000 | 5.0027 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
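## Example usage

A minimal generation sketch with the `pipeline` API; the prompt is made up for illustration.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-cl-concat-log-rarity-9-210k-mod-datasets")

# Sample a short continuation from the fine-tuned checkpoint.
output = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```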
nageen/roberta-finetuned-subjqa-event_model
nageen
2023-07-04T10:05:57Z
122
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-29T22:46:41Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: roberta-finetuned-subjqa-event_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-event_model This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
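## Example usage

Since the intended-uses section above is empty, here is a minimal question-answering sketch; the question and context are made up for illustration.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nageen/roberta-finetuned-subjqa-event_model")

result = qa(
    question="How was the food at the event?",
    context="Attendees said the food at the event was delicious and served on time.",
)
print(result["answer"], result["score"])
```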
heka-ai/cross-mpnet-70k
heka-ai
2023-07-04T10:01:14Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-04T10:01:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # heka-ai/cross-mpnet-70k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('heka-ai/cross-mpnet-70k') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('heka-ai/cross-mpnet-70k') model = AutoModel.from_pretrained('heka-ai/cross-mpnet-70k') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/cross-mpnet-70k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 400000 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 100000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
macavaney/deepct
macavaney
2023-07-04T09:57:26Z
114
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "retrieval", "document-rewriting", "en", "arxiv:1910.10687", "arxiv:2007.14271", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T13:17:13Z
--- language: - en tags: - retrieval - document-rewriting datasets: - irds:msmarco-passage library_name: transformers --- A DeepCT model based on `bert-base-uncased` and trained on MS MARCO. This is a version of [the checkpoint released by the original authors](http://boston.lti.cs.cmu.edu/appendices/arXiv2019-DeepCT-Zhuyun-Dai/outputs/marco.zip), converted to pytorch format and ready for use in PyTerrier. ## References - [Dai19]: Zhuyun Dai, Jamie Callan. Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval. https://arxiv.org/abs/1910.10687 - [Macdonald20]: Craig Macdonald, Nicola Tonellotto. Declarative Experimentation in Information Retrieval using PyTerrier. Craig Macdonald and Nicola Tonellotto. In Proceedings of ICTIR 2020. https://arxiv.org/abs/2007.14271
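The card above does not show how to call the checkpoint outside PyTerrier. As a rough, unofficial sketch — assuming the checkpoint behaves as a standard `transformers` token-classification model that emits one importance score per subword, which is how DeepCT is described in [Dai19] — per-term weights could be inspected like this:

```python
# Unofficial sketch: read one DeepCT-style importance score per subword token.
# Assumes the checkpoint loads as a plain token-classification model; the official
# usage is inside PyTerrier (see [Macdonald20]).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("macavaney/deepct")
model = AutoModelForTokenClassification.from_pretrained("macavaney/deepct")

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model(**enc)

# Take the first logit per token as the term-importance score (assumption).
scores = out.logits[0, :, 0]
for tok, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores):
    print(f"{tok}\t{score.item():.3f}")
```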
Ejru5/ml_model
Ejru5
2023-07-04T09:55:32Z
0
0
null
[ "region:us" ]
null
2023-07-04T09:41:17Z
# Random_Forest A project we made during an ML value-added course.
ymkgr/shikimiya_mana_from_Re_Stage
ymkgr
2023-07-04T09:27:21Z
0
1
null
[ "anime", "game", "license:creativeml-openrail-m", "region:us" ]
null
2023-07-04T08:29:48Z
--- license: creativeml-openrail-m metrics: - character tags: - anime - game --- ๆจกๅž‹็ฑปๅž‹/Model type: LoRA --- v2.3็‰ˆๆœฌๆจกๅž‹่ฏฆ็ป†ไฟกๆฏ/v2.3 Version Model Details(I used a translator in English): - ๆฅ่‡ช ๆ—ฅๆœฌๅคšๅช’ไฝ“ไผๅˆ’๏ผšRe:Stage! - ็ป„ๅˆ๏ผšKiRaRe - ่ง’่‰ฒๅ๏ผšๅผๅฎซ่ˆž่œใ€‚/from Japanese multimedia project: Re:Stage! - Unit: KiRaRe - character name: shikimiya mana. - LoRAๆƒ้‡/weight๏ผš0.6~1ใ€‚ - ่งฆๅ‘่ฏ/Trigger Words * ่ฏท่‡ช่กŒๅœจ"("ๅ’Œ")"็š„ๅ‰้ขๆทปๅŠ \็ฌฆๅท๏ผŒ่ฟ™ไธช้กต้ขไผผไนŽไธ่ƒฝๅฐ†\็ฌฆๅทไธŽๅ…ถๅฎƒ็ฌฆๅท่ฟžๅœจไธ€่ตทๆ˜พ็คบ/Please add the \ symbol before "(" and ")" yourself. It seems that the Model card cannot display the \ symbol together with other symbols๏ผš - ่ง’่‰ฒ/character๏ผš shikimiya mana\(re:stage!\), ahoge, short hair, orange hair, blue eyes, clover hairclip\(shikimiya mana\), ็คบไพ‹/Example:![122690-3778830886-masterpiece, best quality, 1girl, shikimiya mana_(re_stage!_), ahoge, short hair, orange hair, blue eyes, yukata,.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/9pTAdVbkkNAI1V2a_or3n.png) - ่ˆžๅฐๆœ/stage dress๏ผš dress\(smsa\), star hair ornament\(smsa\), hat\(smsa\), one wrist cuffs\(smsa\), one wrist scrunchie\(smsa\), asymmetrical thighhighs\(smsa\), shoes\(smsa\), ![122650-431890354-masterpiece, best quality, 1girl, shikimiya mana_(re_stage!_), ahoge, short hair, orange hair, blue eyes, clover hairclip_(shiki.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/rEcNVPwLK_MUAx2MKRI07.png) - ๆ กๆœ/school uniform๏ผš sailor collar, blue pleated skirt, bowtie,![122672-3658421627-masterpiece, best quality, 1girl, shikimiya mana_(re_stage!_), ahoge, short hair, orange hair, blue eyes, clover hairclip_(shiki.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/p-rpxD5jkb67qAGbPWukc.png) --- v2.3็‰ˆๆœฌ่ฏดๆ˜Ž/v2.3 Version description: - ๅฎƒๅœจไธๆทปๅŠ ไปปไฝ•ๅ‘้ฅฐ็ฑป็š„ๆ็คบ่ฏๆ—ถ๏ผŒไนŸๅฏ่ƒฝไผš็”Ÿๆˆ็ฑปไผผๅ‘้ฅฐ็š„ๆ‚็‰ฉ๏ผŒ่งฃๅ†ณๆ–นๆณ•/It may also generate something similar to hair accessories without adding any hint words for hair accessories. Solution:๏ผš ยท ๅœจ Negative prompt ไธญๆทปๅŠ  hairclipใ€hair ornament ็ญ‰ๅ‘้ฅฐ็ฑปๆ็คบ่ฏ/Add hairclip, hair oment, and other hair accessory prompts to Negative prompt ยท ้™ไฝŽLoRAๆƒ้‡/Reduce LoRA weight ็›ธๆฏ”v1็‰ˆๆœฌ๏ผŒๆœ้ฅฐๆ–น้ขๆ›ดๅƒใ€‚/Compared to the v1 Version, the clothing aspect is more similar. --- I don't know English and I'm not very good at using the Hugging Face website. I also use a translation for the description Please comply with regulations.
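For readers applying the LoRA in code rather than in a WebUI, a minimal `diffusers` sketch follows. The base checkpoint and the weight file name are placeholders (the card does not specify them); the recommended LoRA weight of 0.6~1 is passed via the attention scale.

```python
# Hypothetical sketch: apply the LoRA to an assumed anime-capable SD 1.5 base model.
# The base model id and weight_name below are placeholders, not taken from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "ymkgr/shikimiya_mana_from_Re_Stage",
    weight_name="shikimiya_mana_v2.3.safetensors",  # placeholder file name
)

prompt = (
    "masterpiece, best quality, 1girl, shikimiya mana (re:stage!), ahoge, short hair, "
    "orange hair, blue eyes, clover hairclip (shikimiya mana)"
)
# The attention scale stands in for the recommended LoRA weight (0.6~1).
image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("shikimiya_mana.png")
```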
ak2704/q-FrozenLake-v1-4x4-noSlippery
ak2704
2023-07-04T09:24:35Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T09:24:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ak2704/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
a2ran/kor_chatGLM
a2ran
2023-07-04T09:21:16Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-04T09:15:50Z
--- library_name: peft --- - **WIP** Data used: https://raw.githubusercontent.com/Beomi/KoAlpaca/main/alpaca_data.json ```python from transformers import TrainingArguments training_args = TrainingArguments( "output", fp16=True, gradient_accumulation_steps=1, per_device_train_batch_size=1, learning_rate=1e-4, max_steps=3000, logging_steps=100, remove_unused_columns=False, seed=0, data_seed=0, group_by_length=False, ) ```
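Since the card only shows training arguments, here is a hedged sketch of loading the adapter for inference. The base checkpoint is a guess (the card does not state which ChatGLM model the adapter was trained against).

```python
# Sketch only: attach the PEFT adapter to an assumed ChatGLM base model.
# "THUDM/chatglm-6b" is a placeholder guess for the base checkpoint.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True).half().cuda()  # assumes a GPU

# Wrap the base model with the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, "a2ran/kor_chatGLM")
model.eval()
```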
Word2vec/nlpl_5
Word2vec
2023-07-04T09:20:25Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_February_2017", "license:cc-by-4.0", "region:us" ]
null
2023-06-01T15:35:34Z
--- language: eng tags: - word2vec datasets: English_Wikipedia_Dump_of_February_2017 license: cc-by-4.0 --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 302866 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_5", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/5.zip
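A short usage note (it applies to the other `Word2vec/nlpl_*` checkpoints in this dump as well): because the corpus was lemmatized and POS-tagged, vocabulary keys typically carry a POS suffix, so nearest-neighbour queries look roughly like the sketch below. The `house_NOUN` key format and the gensim 4.x API are assumptions; check `model.key_to_index` for the real format.

```python
# Self-contained sketch, assuming gensim 4.x and POS-suffixed keys like "house_NOUN".
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

model = KeyedVectors.load_word2vec_format(
    hf_hub_download(repo_id="Word2vec/nlpl_5", filename="model.bin"),
    binary=True,
    unicode_errors="ignore",
)

query = "house_NOUN"
if query in model.key_to_index:
    for word, similarity in model.most_similar(query, topn=5):
        print(word, round(similarity, 3))
else:
    # Inspect a few vocabulary entries to confirm the actual key format.
    print(list(model.key_to_index)[:10])
```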
NasimB/gpt2-dp-cl-rarity-9-210k-mod-datasets
NasimB
2023-07-04T09:20:18Z
125
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-04T07:52:17Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-cl-rarity-9-210k-mod-datasets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-cl-rarity-9-210k-mod-datasets This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 5.0528 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.3046 | 0.06 | 500 | 5.9519 | | 5.0135 | 0.13 | 1000 | 5.5816 | | 4.7368 | 0.19 | 1500 | 5.3952 | | 4.5486 | 0.26 | 2000 | 5.2773 | | 4.412 | 0.32 | 2500 | 5.2062 | | 4.3027 | 0.39 | 3000 | 5.1514 | | 4.1991 | 0.45 | 3500 | 5.1160 | | 4.1058 | 0.52 | 4000 | 5.0827 | | 4.0144 | 0.58 | 4500 | 5.0443 | | 3.9241 | 0.65 | 5000 | 5.0280 | | 3.8441 | 0.71 | 5500 | 5.0056 | | 3.7614 | 0.78 | 6000 | 4.9986 | | 3.7094 | 0.84 | 6500 | 4.9807 | | 3.6717 | 0.91 | 7000 | 4.9782 | | 3.6519 | 0.97 | 7500 | 4.9763 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Copax/Graceful
Copax
2023-07-04T09:19:30Z
0
0
null
[ "region:us" ]
null
2023-07-04T08:31:12Z
version: Spotlight https://civitai.com/models/102749?modelVersionId=109965 The model brings vibrant and vivid colors to the images, with excellent contrast. The hair details create flowing and intricate hairstyles, while the overall appearance of the characters follows a tall and slender catwalk style. The outfits are enhanced with additional ornate patterns along the edges. It's important to note that this model primarily focuses on female character designs, so drawing male characters or other genres may not yield the desired results. Recommended settings: Steps: 30~60 Denoising strength: 0.3 CFG Scale: 7~14 Upscaler: 4x-UltraSharp, R-ESRGAN 4x+ for realistic images, R-ESRGAN 4x+Anime6B for anime Negative prompt: illustration, 3d, 2d, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, vaginas in breasts, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyebrows, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, bad image, bad photo
DEplain/trimmed_longmbart_docs_apa
DEplain
2023-07-04T09:18:27Z
85
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "text simplification", "plain language", "easy-to-read language", "document simplification", "de", "dataset:DEplain/DEplain-APA-doc", "arxiv:2305.18939", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text2text-generation
2023-03-02T16:39:31Z
--- inference: false license: apache-2.0 language: - de datasets: - DEplain/DEplain-APA-doc metrics: - sari - bleu - bertscore library_name: transformers pipeline_tag: text2text-generation tags: - text simplification - plain language - easy-to-read language - document simplification --- # DEplain German Text Simplification This model belongs to the experiments done at the work of Stodden, Momen, Kallmeyer (2023). ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics. Detailed documentation can be found on this GitHub repository [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain) We reused the codes from [https://github.com/a-rios/ats-models](https://github.com/a-rios/ats-models) to do our experiments. ### Model Description The model is a finetuned checkpoint of the pre-trained LongmBART model based on `mbart-large-cc25`. With a trimmed vocabulary to the most frequent 30k words in the German language. The model was finetuned towards the task of German text simplification of documents. The finetuning dataset included manually aligned sentences from the datasets `DEplain-APA-doc` only. ### Model Usage This model can't be used in the HuggingFace interface or via the .from_pretrained method currently. As it's a finetuning of a custom model (LongMBart), which hasn't been registered on HF yet. You can find this custom model codes at: [https://github.com/a-rios/ats-models](https://github.com/a-rios/ats-models) To test this model checkpoint, you need to clone the checkpoint repository as follows: ``` # Make sure you have git-lfs installed (https://git-lfs.com) git lfs install git clone https://huggingface.co/DEplain/trimmed_longmbart_docs_apa # if you want to clone without large files โ€“ just their pointers # prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1 ``` Then set up the conda environment via: ``` conda env create -f environment.yaml ``` Then follow the procedure in the notebook `generation.ipynb`.
Word2vec/nlpl_3
Word2vec
2023-07-04T09:08:44Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_February_2017", "license:cc-by-4.0", "region:us" ]
null
2023-06-01T15:13:39Z
--- language: eng tags: - word2vec datasets: English_Wikipedia_Dump_of_February_2017 license: cc-by-4.0 --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 296630 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_3", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/3.zip
Word2vec/nlpl_2
Word2vec
2023-07-04T09:06:54Z
0
1
null
[ "word2vec", "nor", "dataset:Norsk_Aviskorpus/NoWaC", "license:cc-by-4.0", "region:us" ]
null
2023-06-01T15:11:33Z
--- language: nor tags: - word2vec datasets: Norsk_Aviskorpus/NoWaC license: cc-by-4.0 --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 306943 corresponding to 1941761506 tokens from the dataset `Norsk_Aviskorpus/NoWaC`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_2", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/2.zip
natykov/swin-tiny-patch4-window7-224-finetuned-eurosat
natykov
2023-07-04T09:01:46Z
209
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-04T08:52:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5564 - Accuracy: 0.2861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5752 | 0.99 | 115 | 1.5699 | 0.2685 | | 1.5519 | 2.0 | 231 | 1.5570 | 0.2866 | | 1.5324 | 2.98 | 345 | 1.5564 | 0.2861 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
KPF/KPF-bert-cls3
KPF
2023-07-04T08:54:34Z
161
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T07:48:21Z
# KPF-BERT-CLS2 - A region-classification prediction model used for the regional news in the Insight menu of [BigKinds Lab](https://lab.bigkinds.or.kr/); it outputs the fine-grained (subcategory) result for the region. - Usage instructions and code are available at [KPF-bigkinds github](https://github.com/KPF-bigkinds/BIGKINDS-LAB/tree/main/KPF-BERT-CLS). ## Model introduction ### KPF-BERT-CLS We designed and developed the kpf-BERT-cls model, which performs the CLS (classification) task on top of the kpf-BERT model developed by the Korea Press Foundation. - The kpf-BERT used in this example is published at [kpfBERT](https://github.com/KPFBERT/kpfbert). - In this example, the data are split into major categories (excluding region), subcategories of the major categories other than region, and region subcategories. The training data were built by pairing article text with a category label. The labels follow the taxonomy below, and training was run on three datasets: article text + major category (excluding region), article text + subcategory (excluding region), and article text + region subcategory. ![img](https://user-images.githubusercontent.com/87846939/221474119-7701e4e4-fe73-4b74-8f55-58d0853e5639.png) The kpf-BERT-cls model is built by adding a classification layer on top of the kpf-BERT developed by the Korea Press Foundation. Given an article as input, kpf-BERT-cls tokenizes it with the kpf-BERT tokenizer and predicts which class the article belongs to. The structure and tokenizer of the base BERT model are shown in the figures below. ![img_2](https://user-images.githubusercontent.com/87846939/221474169-552bba7c-0a05-4f3d-a90e-2ad8f9f69cba.png) ![img_3](https://user-images.githubusercontent.com/87846939/221474197-2b588cea-4d73-4caf-b451-b52a10ef966d.png) Because of its input-length limit, BERT can only accept inputs of up to 512 subwords. Given the nature of news articles, texts such as interviews are mostly longer than 512 subwords. To address this, this project applies a stride and processes the document chunks independently. ![img_1](https://user-images.githubusercontent.com/87846939/221474214-4e760c55-ba53-4e08-9154-65c73afabca6.png) kpf-BERT-cls consists of a major-category prediction model, a subcategory prediction model, and a region-subcategory prediction model. The major-category/subcategory prediction models output the top-3 results. ![img_4](https://user-images.githubusercontent.com/87846939/221474226-fb68c3aa-b45a-4bdf-9c10-a6c98b6451e8.png)
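The striding approach described above can be sketched with the standard `transformers` overflow mechanism. This is an illustrative reconstruction rather than the project's actual code (see the KPF-bigkinds GitHub for that); the 128-token stride and the mean-pooling of chunk logits are assumptions.

```python
# Illustrative sketch: classify an article longer than 512 subwords by splitting it
# into overlapping chunks and averaging the per-chunk logits, then report top-3 classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("KPF/KPF-bert-cls3")
model = AutoModelForSequenceClassification.from_pretrained("KPF/KPF-bert-cls3")

article = "..."  # full article text goes here
enc = tokenizer(
    article,
    max_length=512,
    stride=128,                      # overlap between consecutive chunks (assumed value)
    truncation=True,
    return_overflowing_tokens=True,  # emit every chunk, not just the first 512 subwords
    padding="max_length",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits

# Average over chunks, then take the top-3 categories as described in the card.
probs = logits.softmax(-1).mean(dim=0)
top3 = torch.topk(probs, k=3)
print(top3.indices.tolist(), top3.values.tolist())
```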
Fuyuxiang123/ppo-Huggy
Fuyuxiang123
2023-07-04T08:51:14Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-04T08:51:10Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Fuyuxiang123/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
nolanaatama/vgtfrmdbzrvcncgm
nolanaatama
2023-07-04T08:51:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-04T08:47:40Z
--- license: creativeml-openrail-m ---
greenw0lf/wav2vec2-large-xls-r-1b-frisian-cv-8-large-train
greenw0lf
2023-07-04T08:49:21Z
114
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-25T08:03:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_8_0 metrics: - wer model-index: - name: wav2vec2-large-xls-r-1b-frisian-cv-8-large-train results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_8_0 type: common_voice_8_0 config: fy-NL split: validation args: fy-NL metrics: - name: Wer type: wer value: 0.04206541922582488 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_8_0 type: common_voice_8_0 config: fy-NL split: test args: fy-NL metrics: - name: Wer type: wer value: 0.04108252637664402 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-frisian-cv-8-large-train This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_8_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.0444 - Wer: 0.0421 And on the test set: - Wer: 0.0411 ## Model description This model has been developed for my Master's thesis in "Voice Technology" at Rijksuniversiteit Groningen - Campus Fryslรขn. It corresponds to experiment 2 where I use as training set all validated data (~ 50 hours) except the test and evaluation sets (~ 4.5 hours each). The number of training hours adds up to 41 hours of Frisian speech. ## Intended uses & limitations The intended use is for recognizing Frisian speech. Limitations include no LM rescoring and using version 8.0 of Common Voice instead of 13.0. ## Training and evaluation data The evaluation split used is the one available in the Common Voice 8.0 Frisian subset. The train split corresponds to all of the validated data except for the recordings found in the evaluation and test splits. ## Training procedure The script used for training this model can be found in this GitHub repository: [link](https://github.com/greenw0lf/MSc-VT-Thesis/). 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 36 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 7.2522 | 0.48 | 400 | 3.1028 | 1.0 | | 3.0052 | 0.97 | 800 | 2.9334 | 1.0 | | 2.0865 | 1.45 | 1200 | 0.7288 | 0.6646 | | 1.1654 | 1.93 | 1600 | 0.4298 | 0.4196 | | 0.9665 | 2.41 | 2000 | 0.3134 | 0.3162 | | 0.7891 | 2.9 | 2400 | 0.2378 | 0.2587 | | 0.8366 | 3.38 | 2800 | 0.1896 | 0.2016 | | 0.8606 | 3.86 | 3200 | 0.1647 | 0.1903 | | 0.7536 | 4.34 | 3600 | 0.1486 | 0.1573 | | 0.632 | 4.83 | 4000 | 0.1341 | 0.1450 | | 0.5198 | 5.31 | 4400 | 0.1223 | 0.1415 | | 0.4998 | 5.79 | 4800 | 0.1155 | 0.1388 | | 0.4273 | 6.27 | 5200 | 0.1132 | 0.1302 | | 0.3982 | 6.76 | 5600 | 0.1036 | 0.1102 | | 0.3964 | 7.24 | 6000 | 0.0988 | 0.1209 | | 0.3848 | 7.72 | 6400 | 0.0995 | 0.0985 | | 0.3702 | 8.2 | 6800 | 0.0969 | 0.0945 | | 0.3612 | 8.69 | 7200 | 0.0899 | 0.0967 | | 0.3518 | 9.17 | 7600 | 0.0856 | 0.1061 | | 0.3371 | 9.65 | 8000 | 0.0902 | 0.0875 | | 0.3295 | 10.13 | 8400 | 0.0819 | 0.0914 | | 0.3157 | 10.62 | 8800 | 0.0785 | 0.0937 | | 0.3025 | 11.1 | 9200 | 0.0782 | 0.0804 | | 0.3092 | 11.58 | 9600 | 0.0758 | 0.0845 | | 0.301 | 12.06 | 10000 | 0.0775 | 0.0847 | | 0.3016 | 12.55 | 10400 | 0.0730 | 0.0776 | | 0.2892 | 13.03 | 10800 | 0.0719 | 0.0735 | | 0.283 | 13.51 | 11200 | 0.0728 | 0.0727 | | 0.2806 | 13.99 | 11600 | 0.0694 | 0.0710 | | 0.2639 | 14.48 | 12000 | 0.0705 | 0.0703 | | 0.2606 | 14.96 | 12400 | 0.0652 | 0.0668 | | 0.2595 | 15.44 | 12800 | 0.0638 | 0.0691 | | 0.2611 | 15.92 | 13200 | 0.0636 | 0.0713 | | 0.246 | 16.41 | 13600 | 0.0632 | 0.0653 | | 0.2544 | 16.89 | 14000 | 0.0605 | 0.0638 | | 0.2509 | 17.37 | 14400 | 0.0640 | 0.0646 | | 0.2381 | 17.85 | 14800 | 0.0604 | 0.0663 | | 0.2336 | 18.34 | 15200 | 0.0590 | 0.0628 | | 0.2285 | 18.82 | 15600 | 0.0580 | 0.0612 | | 0.2362 | 19.3 | 16000 | 0.0655 | 0.0638 | | 0.2279 | 19.78 | 16400 | 0.0611 | 0.0669 | | 0.2228 | 20.27 | 16800 | 0.0606 | 0.0621 | | 0.2242 | 20.75 | 17200 | 0.0560 | 0.0575 | | 0.2053 | 21.23 | 17600 | 0.0571 | 0.0572 | | 0.2097 | 21.71 | 18000 | 0.0557 | 0.0555 | | 0.2072 | 22.2 | 18400 | 0.0563 | 0.0576 | | 0.2076 | 22.68 | 18800 | 0.0532 | 0.0562 | | 0.2026 | 23.16 | 19200 | 0.0531 | 0.0540 | | 0.1941 | 23.64 | 19600 | 0.0535 | 0.0534 | | 0.1983 | 24.13 | 20000 | 0.0528 | 0.0541 | | 0.2075 | 24.61 | 20400 | 0.0536 | 0.0538 | | 0.1937 | 25.09 | 20800 | 0.0532 | 0.0569 | | 0.1943 | 25.57 | 21200 | 0.0511 | 0.0507 | | 0.1844 | 26.06 | 21600 | 0.0521 | 0.0521 | | 0.181 | 26.54 | 22000 | 0.0506 | 0.0507 | | 0.1877 | 27.02 | 22400 | 0.0529 | 0.0510 | | 0.1825 | 27.5 | 22800 | 0.0527 | 0.0498 | | 0.1872 | 27.99 | 23200 | 0.0506 | 0.0485 | | 0.1857 | 28.47 | 23600 | 0.0497 | 0.0492 | | 0.1766 | 28.95 | 24000 | 0.0504 | 0.0488 | | 0.1756 | 29.43 | 24400 | 0.0496 | 0.0482 | | 0.1701 | 29.92 | 24800 | 0.0479 | 0.0479 | | 0.1717 | 30.4 | 25200 | 0.0499 | 0.0468 | | 0.1624 | 30.88 | 25600 | 0.0492 | 0.0466 | | 0.1671 | 31.36 | 26000 | 0.0490 | 0.0461 | | 0.1704 | 31.85 | 26400 | 0.0482 | 0.0452 | | 0.1653 | 32.33 | 26800 | 0.0467 | 0.0446 | | 0.158 | 32.81 | 27200 | 0.0465 | 0.0449 | | 0.1599 | 33.29 | 27600 | 0.0473 | 0.0445 | | 0.1558 | 33.78 | 28000 | 0.0475 | 
0.0453 | | 0.1556 | 34.26 | 28400 | 0.0462 | 0.0445 | | 0.1591 | 34.74 | 28800 | 0.0464 | 0.0431 | | 0.1544 | 35.22 | 29200 | 0.0476 | 0.0433 | | 0.1576 | 35.71 | 29600 | 0.0466 | 0.0434 | | 0.1507 | 36.19 | 30000 | 0.0451 | 0.0435 | | 0.1501 | 36.67 | 30400 | 0.0453 | 0.0429 | | 0.1482 | 37.15 | 30800 | 0.0439 | 0.0432 | | 0.1518 | 37.64 | 31200 | 0.0446 | 0.0424 | | 0.1454 | 38.12 | 31600 | 0.0449 | 0.0417 | | 0.145 | 38.6 | 32000 | 0.0440 | 0.0421 | | 0.147 | 39.08 | 32400 | 0.0441 | 0.0424 | | 0.141 | 39.57 | 32800 | 0.0444 | 0.0421 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
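As a usage note not covered in the card, transcription can be run with the generic `transformers` ASR pipeline. This is a sketch of standard usage rather than the thesis evaluation setup; `audio.wav` stands in for any 16 kHz mono Frisian recording.

```python
# Generic inference sketch; not the evaluation script from the thesis repository.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="greenw0lf/wav2vec2-large-xls-r-1b-frisian-cv-8-large-train",
)
# "audio.wav" is a placeholder path; the model expects 16 kHz mono audio.
print(asr("audio.wav")["text"])
```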
NourEldin-Osama/mT5-finetuned-xlsum
NourEldin-Osama
2023-07-04T08:47:02Z
104
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "dataset:xlsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-04T04:03:22Z
--- tags: - generated_from_trainer datasets: - xlsum metrics: - rouge model-index: - name: mT5-finetuned-xlsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xlsum type: xlsum config: arabic split: validation args: arabic metrics: - name: Rouge1 type: rouge value: 0.1179 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5-finetuned-xlsum This model is a fine-tuned version of [csebuetnlp/mT5_m2o_arabic_crossSum](https://huggingface.co/csebuetnlp/mT5_m2o_arabic_crossSum) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 0.6752 - Rouge1: 0.1179 - Rouge2: 0.0231 - Rougel: 0.118 - Rougelsum: 0.1178 - Gen Len: 47.6818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.8728 | 1.0 | 9380 | 0.6752 | 0.1179 | 0.0231 | 0.118 | 0.1178 | 47.6818 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Hawk91/whisper-small-hi
Hawk91
2023-07-04T08:43:37Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-01T12:21:50Z
--- language: - hi license: apache-2.0 tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Hi - Hawk results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: hi split: test args: hi metrics: - name: Wer type: wer value: 35.53475278714895 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Hawk This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.8294 - Wer Ortho: 58.7561 - Wer: 35.5348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.9225 | 0.03 | 50 | 0.8294 | 58.7561 | 35.5348 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
matejvadovic/unit1-lunar-lander-v2
matejvadovic
2023-07-04T08:40:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T08:39:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.35 +/- 19.45 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dcarpintero/Reinforce-CartPole-v2
dcarpintero
2023-07-04T08:39:40Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T08:39:02Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AIYIYA/my_awesome_model
AIYIYA
2023-07-04T08:30:35Z
65
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-02T10:47:12Z
--- tags: - generated_from_keras_callback model-index: - name: AIYIYA/my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AIYIYA/my_awesome_model This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1422 - Validation Loss: 0.2983 - Train Accuracy: 0.8940 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2014 | 0.3058 | 0.8742 | 0 | | 0.1413 | 0.2983 | 0.8940 | 1 | | 0.1422 | 0.2983 | 0.8940 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
heka-ai/cross-mpnet-20k
heka-ai
2023-07-04T08:30:13Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-04T08:30:09Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # heka-ai/cross-mpnet-20k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('heka-ai/cross-mpnet-20k') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('heka-ai/cross-mpnet-20k') model = AutoModel.from_pretrained('heka-ai/cross-mpnet-20k') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/cross-mpnet-20k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 400000 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 100000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
userusernamename/trinity_epoch1
userusernamename
2023-07-04T08:28:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-04T08:28:17Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
ycros/airoboros-33b-gpt4-1.4.1-PI-8192-GGML
ycros
2023-07-04T08:06:19Z
0
4
null
[ "region:us" ]
null
2023-07-04T07:14:40Z
GGML quants of https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
Andrewk2/kiedis99
Andrewk2
2023-07-04T08:02:57Z
0
1
null
[ "region:us" ]
null
2023-07-04T07:55:01Z
Anthony Kiedis 1999, Californication full album + some B-sides
ccattomio/PPO-LunarLander-v2
ccattomio
2023-07-04T07:56:56Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T07:38:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.54 +/- 18.84 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub repo_id = "ccattomio/PPO-LunarLander-v2" filename = "PPO-LunarLander-v2.zip" checkpoint = load_from_hub(repo_id, filename) model = PPO.load(checkpoint) ```
skgg/output
skgg
2023-07-04T07:56:31Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-27T09:22:12Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - skgg/output This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
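A minimal inference sketch, assuming the repository holds a complete diffusers pipeline (the `inference: true` flag and the DreamBooth training script's usual output suggest it does) and using the instance prompt from the metadata:

```python
# Minimal sketch: generate with the fine-tuned DreamBooth weights using the
# instance prompt "a photo of sks dog" listed in the card metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("skgg/output", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```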
CICLAB-Comillas/AlpaCalls
CICLAB-Comillas
2023-07-04T07:50:42Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-27T11:48:49Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
heka-ai/tasb-bert-50k
heka-ai
2023-07-04T07:47:46Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-04T07:47:42Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # heka-ai/tasb-bert-50k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('heka-ai/tasb-bert-50k') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('heka-ai/tasb-bert-50k') model = AutoModel.from_pretrained('heka-ai/tasb-bert-50k') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/tasb-bert-50k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 50000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 50000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
revmag/ppo-LunarLander-v2
revmag
2023-07-04T07:41:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T07:41:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -837.23 +/- 436.43 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
vineetsharma/whisper-tiny-finetuned-minds14-en
vineetsharma
2023-07-04T07:40:30Z
86
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-03T15:25:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-finetuned-minds14-en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.33943329397874855 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-minds14-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6329 - Wer Ortho: 0.3430 - Wer: 0.3394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0009 | 17.86 | 500 | 0.6329 | 0.3430 | 0.3394 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
smart-assistant/falcon-7b-multi
smart-assistant
2023-07-04T07:34:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-04T07:34:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
sd-concepts-library/ahx-beta-4a3bf61
sd-concepts-library
2023-07-04T07:25:36Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-07-04T07:25:35Z
--- license: mit --- ### ahx-beta-4a3bf61 on Stable Diffusion This is the `<ahx-beta-4a3bf61>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ahx-beta-4a3bf61> 0](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/4.jpeg) ![<ahx-beta-4a3bf61> 1](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/8.jpeg) ![<ahx-beta-4a3bf61> 2](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/2.jpeg) ![<ahx-beta-4a3bf61> 3](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/3.jpeg) ![<ahx-beta-4a3bf61> 4](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/7.jpeg) ![<ahx-beta-4a3bf61> 5](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/0.jpeg) ![<ahx-beta-4a3bf61> 6](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/6.jpeg) ![<ahx-beta-4a3bf61> 7](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/5.jpeg) ![<ahx-beta-4a3bf61> 8](https://huggingface.co/sd-concepts-library/ahx-beta-4a3bf61/resolve/main/concept_images/1.jpeg)
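Besides the Stable Conceptualizer notebook, the learned embedding can also be loaded directly in diffusers. This is a hedged sketch: the choice of base model is an assumption, and the prompt only illustrates using the concept token as a style.

```python
# Sketch: load the learned <ahx-beta-4a3bf61> token into a Stable Diffusion pipeline.
# Using runwayml/stable-diffusion-v1-5 as the base checkpoint is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/ahx-beta-4a3bf61")

image = pipe("a landscape painting in the style of <ahx-beta-4a3bf61>").images[0]
image.save("ahx_style.png")
```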
weifeng-chen/controlavideo-hed
weifeng-chen
2023-07-04T07:21:48Z
42
1
diffusers
[ "diffusers", "arxiv:2305.13840", "license:gpl-3.0", "diffusers:Controlnet3DStableDiffusionPipeline", "region:us" ]
null
2023-06-13T14:25:02Z
--- license: gpl-3.0 --- - Hed Control Pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840) - Project page: https://controlavideo.github.io/ - Code: https://github.com/Weifeng-Chen/control-a-video # Citation ``` @misc{chen2023controlavideo, title={Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models}, author={Weifeng Chen and Jie Wu and Pan Xie and Hefeng Wu and Jiashi Li and Xin Xia and Xuefeng Xiao and Liang Lin}, year={2023}, eprint={2305.13840}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
weifeng-chen/controlavideo-canny
weifeng-chen
2023-07-04T07:21:12Z
228
1
diffusers
[ "diffusers", "arxiv:2305.13840", "license:gpl-3.0", "diffusers:Controlnet3DStableDiffusionPipeline", "region:us" ]
null
2023-06-13T12:26:52Z
--- license: gpl-3.0 --- - Canny Control Pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840) - Project page: https://controlavideo.github.io/ - Code: https://github.com/Weifeng-Chen/control-a-video # Citation ``` @misc{chen2023controlavideo, title={Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models}, author={Weifeng Chen and Jie Wu and Pan Xie and Hefeng Wu and Jiashi Li and Xin Xia and Xuefeng Xiao and Liang Lin}, year={2023}, eprint={2305.13840}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
jeff1jeffo/mystarcoder
jeff1jeffo
2023-07-04T07:10:51Z
0
0
null
[ "text-generation", "region:us" ]
text-generation
2023-07-04T06:41:37Z
--- pipeline_tag: text-generation inference: true ---
megagonlabs/t5-base-japanese-web-8k
megagonlabs
2023-07-04T07:05:38Z
115
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "seq2seq", "ja", "dataset:mc4", "dataset:wiki40b", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: ja tags: - t5 - text2text-generation - seq2seq license: apache-2.0 datasets: - mc4 - wiki40b --- # t5-base-japanese-web-8k (with Byte-fallback, 8K) ## Description [megagonlabs/t5-base-japanese-web-8k](https://huggingface.co/megagonlabs/t5-base-japanese-web-8k) is a T5 (Text-to-Text Transfer Transformer) model pre-trained on Japanese web texts. Training codes are [available on GitHub](https://github.com/megagonlabs/t5-japanese). The vocabulary size of this model is 8K. [A 32K version is also available](https://huggingface.co/megagonlabs/t5-base-japanese-web). ### Corpora We used the following corpora for pre-training. - Japanese in [mC4/3.0.1](https://huggingface.co/datasets/mc4) (We used the [Tensorflow native format](https://github.com/allenai/allennlp/discussions/5056)) - 87,425,304 pages - 782 GB in TFRecord format - [Japanese](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) in [wiki40b/1.3.0](https://www.tensorflow.org/datasets/catalog/wiki40b) - 828,236 articles (2,073,584 examples) - 2 GB in TFRecord format ### Tokenizer We used Japanese Wikipedia to train [SentencePiece](https://github.com/google/sentencepiece). - Vocabulary size: 8,000 - [Byte-fallback](https://github.com/google/sentencepiece/releases/tag/v0.1.9): Enabled ### Parameters - T5 model: [models/t5.1.1.base.gin](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/gin/models/t5.1.1.base.gin) - Training steps: 1,000,000 Pre-training took about 126 hours with a TPU v3-8. ## Related models - [Japanese T5 pre-trained model (sonoisa/t5-base-japanese)](https://huggingface.co/sonoisa/t5-base-japanese) - [Japanese T5 pre-trained model (sonoisa/t5-base-japanese-mC4-Wikipedia)](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia) ## License Apache License 2.0 ## Citations - mC4 Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/). ```bibtex @article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, } ``` - wiki40b ```bibtex @inproceedings{49029, title = {Wiki-40B: Multilingual Language Model Dataset}, author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou}, year = {2020}, booktitle = {LREC 2020} } ```
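## How to load

A minimal loading sketch with `transformers`; note that this is a plain pre-trained checkpoint, so it should be fine-tuned on a downstream task before use (the example input below is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "megagonlabs/t5-base-japanese-web-8k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize a Japanese sentence with the 8K byte-fallback SentencePiece vocabulary.
inputs = tokenizer("ใ“ใ‚“ใซใกใฏใ€ไธ–็•Œใ€‚", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs.input_ids[0]))
```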
sang-kyung/bottle
sang-kyung
2023-07-04T06:54:36Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-02T08:05:05Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base instance_prompt: a photo of sks bottle tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - sang-kyung/bottle This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks bottle using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: True.
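Since this checkpoint is a standard `diffusers` Stable Diffusion pipeline, a minimal inference sketch might look like the following (the prompt reuses the instance token from training; scheduler and step count are left at their defaults):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "sang-kyung/bottle", torch_dtype=torch.float16
).to("cuda")

# "sks" is the rare-token identifier the model was trained with.
image = pipe("a photo of sks bottle on a wooden table").images[0]
image.save("sks_bottle.png")
```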
adya/ppo-Huggy
adya
2023-07-04T06:54:22Z
15
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-04T06:54:03Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog ๐Ÿถ to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: adya/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play ๐Ÿ‘€
NasimB/gpt2-concat-gutenberg-fixed
NasimB
2023-07-04T06:31:50Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-04T04:12:40Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-gutenberg-fixed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-gutenberg-fixed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.0040 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7298 | 0.29 | 500 | 5.6360 | | 5.3656 | 0.58 | 1000 | 5.2026 | | 5.0212 | 0.87 | 1500 | 4.9523 | | 4.7476 | 1.16 | 2000 | 4.7988 | | 4.586 | 1.45 | 2500 | 4.6801 | | 4.4835 | 1.74 | 3000 | 4.5786 | | 4.3674 | 2.03 | 3500 | 4.4991 | | 4.1624 | 2.32 | 4000 | 4.4532 | | 4.137 | 2.61 | 4500 | 4.3960 | | 4.106 | 2.91 | 5000 | 4.3422 | | 3.9133 | 3.2 | 5500 | 4.3427 | | 3.8519 | 3.49 | 6000 | 4.3083 | | 3.8433 | 3.78 | 6500 | 4.2794 | | 3.758 | 4.07 | 7000 | 4.2761 | | 3.5652 | 4.36 | 7500 | 4.2719 | | 3.5749 | 4.65 | 8000 | 4.2517 | | 3.5632 | 4.94 | 8500 | 4.2355 | | 3.3622 | 5.23 | 9000 | 4.2584 | | 3.3265 | 5.52 | 9500 | 4.2559 | | 3.3112 | 5.81 | 10000 | 4.2500 | | 3.264 | 6.1 | 10500 | 4.2572 | | 3.1673 | 6.39 | 11000 | 4.2606 | | 3.1623 | 6.68 | 11500 | 4.2607 | | 3.1614 | 6.97 | 12000 | 4.2607 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
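## Example usage

A minimal generation sketch, assuming the standard `transformers` text-generation pipeline (the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-gutenberg-fixed")

# Sample a short continuation; generation settings here are arbitrary defaults.
output = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```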
Softechlb/Sent_analysis_CVs
Softechlb
2023-07-04T06:23:50Z
240
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "sentiment-analysis", "zero-shot-distillation", "distillation", "zero-shot-classification", "debarta-v3", "en", "ar", "de", "es", "fr", "ja", "zh", "id", "hi", "it", "ms", "pt", "dataset:tyqiangz/multilingual-sentiments", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T07:09:51Z
--- license: apache-2.0 tags: - sentiment-analysis - text-classification - zero-shot-distillation - distillation - zero-shot-classification - debarta-v3 model-index: - name: Softechlb/Sent_analysis_CVs results: [] datasets: - tyqiangz/multilingual-sentiments language: - en - ar - de - es - fr - ja - zh - id - hi - it - ms - pt --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Softechlb/Sent_analysis_CVs This model is distilled from the zero-shot classification pipeline on the Multilingual Sentiment dataset using this [script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation). In reality the multilingual-sentiment dataset is annotated of course, but we'll pretend and ignore the annotations for the sake of example. Teacher model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli Teacher hypothesis template: "The sentiment of this text is {}." Student model: distilbert-base-multilingual-cased ## Inference example ```python from transformers import pipeline distilled_student_sentiment_classifier = pipeline( model="Softechlb/Sent_analysis_CVs", return_all_scores=True ) # english distilled_student_sentiment_classifier ("I love this movie and i would watch it again and again!") >> [[{'label': 'positive', 'score': 0.9731044769287109}, {'label': 'neutral', 'score': 0.016910076141357422}, {'label': 'negative', 'score': 0.009985478594899178}]] # malay distilled_student_sentiment_classifier("Saya suka filem ini dan saya akan menontonnya lagi dan lagi!") [[{'label': 'positive', 'score': 0.9760093688964844}, {'label': 'neutral', 'score': 0.01804516464471817}, {'label': 'negative', 'score': 0.005945465061813593}]] # japanese distilled_student_sentiment_classifier("็งใฏใ“ใฎๆ˜ ็”ปใŒๅคงๅฅฝใใงใ€ไฝ•ๅบฆใ‚‚่ฆ‹ใพใ™๏ผ") >> [[{'label': 'positive', 'score': 0.9342429041862488}, {'label': 'neutral', 'score': 0.040193185210227966}, {'label': 'negative', 'score': 0.025563929229974747}]] ``` ``` ### Training log ```bash Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 2009.8864, 'train_samples_per_second': 73.0, 'train_steps_per_second': 4.563, 'train_loss': 0.6473459283913797, 'epoch': 1.0} 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 9171/9171 [33:29<00:00, 4.56it/s] [INFO|trainer.py:762] 2023-05-06 10:56:18,555 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message. 
[INFO|trainer.py:3129] 2023-05-06 10:56:18,557 >> ***** Running Evaluation ***** [INFO|trainer.py:3131] 2023-05-06 10:56:18,557 >> Num examples = 146721 [INFO|trainer.py:3134] 2023-05-06 10:56:18,557 >> Batch size = 128 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1147/1147 [08:59<00:00, 2.13it/s] 05/06/2023 11:05:18 - INFO - __main__ - Agreement of student and teacher predictions: 88.29% [INFO|trainer.py:2868] 2023-05-06 11:05:18,251 >> Saving model checkpoint to ./distilbert-base-multilingual-cased-sentiments-student [INFO|configuration_utils.py:457] 2023-05-06 11:05:18,251 >> Configuration saved in ./distilbert-base-multilingual-cased-sentiments-student/config.json [INFO|modeling_utils.py:1847] 2023-05-06 11:05:18,905 >> Model weights saved in ./distilbert-base-multilingual-cased-sentiments-student/pytorch_model.bin [INFO|tokenization_utils_base.py:2171] 2023-05-06 11:05:18,905 >> tokenizer config file saved in ./distilbert-base-multilingual-cased-sentiments-student/tokenizer_config.json [INFO|tokenization_utils_base.py:2178] 2023-05-06 11:05:18,905 >> Special tokens file saved in ./distilbert-base-multilingual-cased-sentiments-student/special_tokens_map.json ``` ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
Pranjal-666/Reinforce-pixelcopter
Pranjal-666
2023-07-04T06:10:20Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T08:35:18Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.80 +/- 13.98 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
bobobert4/qlearning_Taxi-v3
bobobert4
2023-07-04T05:31:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T04:54:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: qlearning_Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="bobobert4/qlearning_Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
chet4/my_awesome_qa_model
chet4
2023-07-04T05:26:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-03T09:31:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.3733 | | 2.7971 | 2.0 | 500 | 1.7135 | | 2.7971 | 3.0 | 750 | 1.6204 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
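## Example usage

A minimal extractive question-answering sketch with the `transformers` pipeline; the question and context below are illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="chet4/my_awesome_qa_model")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```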
sourinkarmakar/kyc_v1-donut-demo
sourinkarmakar
2023-07-04T05:25:49Z
11
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "donut", "kyc", "en", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-03T19:04:52Z
---
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- donut
- kyc
---

# Model description

Donut is an end-to-end (i.e., self-contained) VDU model for the general understanding of document images. The architecture of Donut is quite simple: it consists of a Transformer-based visual encoder and a textual decoder. Donut does not rely on any OCR-related modules; instead, the visual encoder extracts features from a given document image, and the textual decoder maps the derived features into a sequence of subword tokens to construct the desired structured format (e.g., JSON). Each model component is Transformer-based, and thus the model is trained easily in an end-to-end manner.

![image.png](https://cdn-uploads.huggingface.co/production/uploads/637eccd46df7e8f7df76a3ae/OSQp25332524epV2PimZb.png)

# Intended uses and limitations

This model is trained to read the contents of Indian KYC documents. It can classify and read the contents of Aadhar, PAN and Voter documents. It can also detect the orientation of the document and whether it is coloured or black and white. The input document can be oriented in any direction. The model should be provided with a fair-quality image (so that the contents are readable). It has been trained on limited data, so the performance might not be very good. In future versions, more training images will be used and more types of KYC documents can be added.

# Training data

For v1, a custom dataset of around 283 images was used: 199 for training, 42 for validation and 42 for testing. The 199 training images comprise 57 Aadhar samples, 57 PAN samples and 85 Voter samples.

# Performance

The current performance is as follows:

- Overall accuracy = 74%
- Aadhar = 49% (the reason for the lower accuracy still needs to be investigated)
- PAN = 94%
- Voter = 76%

# Inference

```python
import glob
import os
import re

import cv2
import torch
from tqdm.auto import tqdm
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Optional dependency providing evaluation utilities such as JSONParseEvaluator:
# !pip install -q donut-python

processor = DonutProcessor.from_pretrained("sourinkarmakar/kyc_v1-donut-demo")
model = VisionEncoderDecoderModel.from_pretrained("sourinkarmakar/kyc_v1-donut-demo")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Images stored inside a folder 'unseen_samples'
basepath = "."  # adjust to the directory that contains 'unseen_samples'
dataset = glob.glob(os.path.join(basepath, "unseen_samples/*"))

output_list = []
for idx, sample in tqdm(enumerate(dataset), total=len(dataset)):
    # prepare encoder inputs
    img = cv2.imread(sample)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    pixel_values = processor(img, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)

    # prepare decoder inputs
    task_prompt = "<s_cord-v2>"
    decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
    decoder_input_ids = decoder_input_ids.to(device)

    # autoregressively generate sequence
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        early_stopping=True,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
        num_beams=1,
        bad_words_ids=[[processor.tokenizer.unk_token_id]],
        return_dict_in_generate=True,
    )

    # turn the generated sequence into JSON
    seq = processor.batch_decode(outputs.sequences)[0]
    seq = seq.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
    seq = re.sub(r"<.*?>", "", seq, count=1).strip()  # remove first task start token
    seq = processor.token2json(seq)
    output_list.append(seq)

print(output_list)
```
0x7o/rubert-base-massive-ner
0x7o
2023-07-04T05:18:06Z
236
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "ru", "dataset:massive", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-04T05:12:15Z
--- datasets: - massive model-index: - name: rubert-base-massive-ner results: [] license: apache-2.0 language: - ru pipeline_tag: token-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rubert-base-massive-ner This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.0367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1228 | 0.77 | 500 | 0.0565 | | 0.0517 | 1.54 | 1000 | 0.0367 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
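## Example usage

A minimal token-classification sketch with the `transformers` pipeline; the exact slot labels come from the MASSIVE annotations, so inspect `model.config.id2label` for the concrete tag set (the example utterance is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="0x7o/rubert-base-massive-ner",
    aggregation_strategy="simple",
)

print(ner("ั€ะฐะทะฑัƒะดะธ ะผะตะฝั ะทะฐะฒั‚ั€ะฐ ะฒ ัะตะผัŒ ัƒั‚ั€ะฐ"))
```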