modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
kosamit/whisper-tiny-ja
kosamit
2024-01-27T08:46:49Z
63
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-25T11:50:12Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-ja results: [] datasets: - mozilla-foundation/common_voice_11_0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-ja This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.6251 - Wer: 103.7719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6556 | 0.73 | 1000 | 0.6251 | 103.7719 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.1.1+cu118 - Datasets 2.16.1 - Tokenizers 0.13.3
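For quick inference with this checkpoint, a minimal sketch using the 🤗 Transformers ASR pipeline (the audio path is a placeholder; this usage is not part of the original card):

```python
from transformers import pipeline

# Load the fine-tuned Japanese Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="kosamit/whisper-tiny-ja")

# "sample.wav" is a placeholder path to a local audio file (16 kHz mono works best)
result = asr("sample.wav")
print(result["text"])
```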
MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T08:34:22Z
40
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "OpenBuddy/openbuddy-mistral-7b-v13.1", "pytorch", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T08:25:19Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - OpenBuddy/openbuddy-mistral-7b-v13.1 - pytorch - zh - en - fr - de - ja - ko - it - ru - license:apache-2.0 - autotrain_compatible - region:us - endpoints_compatible model_name: openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
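The card links to the LangChain guides without showing code; here is a minimal hedged sketch for the llama-cpp-python route (the `langchain_community` import path is assumed to match a current LangChain install, and the local GGUF filename is the Q4_K_M file referenced above):

```python
from langchain_community.llms import LlamaCpp

# Point to a GGUF file downloaded from this repo (filename assumed; see the download section above)
llm = LlamaCpp(
    model_path="./openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # context window, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

print(llm.invoke("Explain in one sentence what the GGUF format is."))
```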
tada20001/prompt-tuning-patent-noise-classification
tada20001
2024-01-27T08:34:11Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "region:us" ]
null
2024-01-27T08:33:54Z
--- library_name: peft base_model: bigscience/bloomz-560m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/Etheria-55b-v0.1-4.65bpw-h6-exl2
LoneStriker
2024-01-27T08:24:19Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T08:10:13Z
--- base_model: [] tags: - mergekit - merge --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models. Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model and performed a dare_ties merge, equalizing the model into its current state. ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ```
grimulkan/Goliath-longLORA-120b-rope8-32k-fp16
grimulkan
2024-01-27T08:15:44Z
9
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T22:34:27Z
--- license: llama2 --- This is an interleaved merge of [Xwin-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Xwin-longLORA-70b-rope8-32k-fp16) and [Euryale-1.3-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16), using the same merge formula as alpindale's [goliath-120b](https://huggingface.co/alpindale/goliath-120b). There is no additional fine-tuning. The resulting model does not appear to be broken; you can test whether it truly retains the original model's quality plus 32K capability (use linear RoPE scaling 8). [ChuckMcSneed](https://huggingface.co/ChuckMcSneed) did a benchmark [here](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16/discussions/1), indicating 30% degradation with 8x the context length. A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-2k-6bpw_h8_exl2). More EXL2 quants [here](https://huggingface.co/aikitoria/Goliath-longLORA-120b-rope8-32k-exl2), thanks to aikitoria. See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how the original 70B merges were created with longLORA.
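As a sketch of the "use linear RoPE scaling 8" note above, this is roughly how the model could be loaded with 🤗 Transformers; whether the override is needed depends on the repo's config.json, and a 120B fp16 model requires multiple large GPUs or offloading:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimulkan/Goliath-longLORA-120b-rope8-32k-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Linear RoPE scaling, factor 8 (for 32K context); skip the override if the
# repo's config already sets rope_scaling.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 8.0},
)
```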
pszemraj/gpt2-medium-halved
pszemraj
2024-01-27T08:13:59Z
135
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T13:46:27Z
--- library_name: transformers license: mit language: - en inference: parameters: do_sample: True epsilon_cutoff: 0.0001 repetition_penalty: 1.1 no_repeat_ngram_size: 5 --- # gpt2-medium-halved Alright, it's _slightly less_ than half of the original layers from https://hf.co/openai-community/gpt2-medium. Refer to the original model card for all details.
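A short generation sketch that reuses the inference parameters declared in the card's front matter (the prompt is illustrative only):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt2-medium-halved")

out = generator(
    "The main advantage of a smaller language model is",
    do_sample=True,
    epsilon_cutoff=1e-4,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```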
fatemehsaveh/depression_tweet
fatemehsaveh
2024-01-27T08:11:30Z
166
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:rafalposwiata/deproberta-large-v1", "base_model:finetune:rafalposwiata/deproberta-large-v1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-22T12:37:49Z
--- base_model: rafalposwiata/deproberta-large-v1 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: depression_tweet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # depression_tweet This model is a fine-tuned version of [rafalposwiata/deproberta-large-v1](https://huggingface.co/rafalposwiata/deproberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0646 - Accuracy: 0.9836 - Precision: 0.9656 - Recall: 0.9977 - F1: 0.9814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.2 | 50 | 0.1556 | 0.9684 | 0.9508 | 0.9777 | 0.9641 | | No log | 0.4 | 100 | 0.1399 | 0.9646 | 0.9354 | 0.9865 | 0.9603 | | No log | 0.61 | 150 | 0.1118 | 0.9631 | 0.9279 | 0.9920 | 0.9589 | | No log | 0.81 | 200 | 0.1090 | 0.9659 | 0.9333 | 0.9922 | 0.9619 | | No log | 1.01 | 250 | 0.0819 | 0.9759 | 0.9556 | 0.9905 | 0.9727 | | No log | 1.21 | 300 | 0.0548 | 0.9831 | 0.9831 | 0.9777 | 0.9804 | | No log | 1.42 | 350 | 0.1162 | 0.9587 | 0.9435 | 0.9624 | 0.9529 | | No log | 1.62 | 400 | 0.1167 | 0.9657 | 0.9303 | 0.9955 | 0.9618 | | No log | 1.82 | 450 | 0.0859 | 0.9776 | 0.9549 | 0.9955 | 0.9747 | | 0.0575 | 2.02 | 500 | 0.0564 | 0.9848 | 0.9707 | 0.9950 | 0.9827 | | 0.0575 | 2.23 | 550 | 0.0591 | 0.9839 | 0.9693 | 0.9945 | 0.9817 | | 0.0575 | 2.43 | 600 | 0.0913 | 0.9814 | 0.9623 | 0.9962 | 0.9790 | | 0.0575 | 2.63 | 650 | 0.0633 | 0.9847 | 0.9686 | 0.9970 | 0.9826 | | 0.0575 | 2.83 | 700 | 0.1171 | 0.9762 | 0.9493 | 0.9985 | 0.9733 | | 0.0575 | 3.04 | 750 | 0.0646 | 0.9836 | 0.9656 | 0.9977 | 0.9814 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.1
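A minimal classification sketch for the model above (the example tweet is illustrative; label names come from the repo's config and are not documented in the card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fatemehsaveh/depression_tweet")

# Returns the predicted label and score for the input text
print(classifier("I haven't felt like getting out of bed for weeks."))
```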
LoneStriker/Etheria-55b-v0.1-3.5bpw-h6-exl2
LoneStriker
2024-01-27T08:10:11Z
6
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T07:59:25Z
--- base_model: [] tags: - mergekit - merge --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models. Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model and performed a dare_ties merge, equalizing the model into its current state. ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ```
jeevana/group8qna_gpt2__27janV001
jeevana
2024-01-27T07:59:15Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T07:53:26Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: group8qna_gpt2__27janV001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # group8qna_gpt2__27janV001 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9878 | 0.47 | 100 | 2.1829 | | 1.9811 | 0.93 | 200 | 2.0764 | | 1.4933 | 1.4 | 300 | 2.0009 | | 1.3546 | 1.87 | 400 | 1.9729 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
GGital/vit-Covid
GGital
2024-01-27T07:44:59Z
178
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-27T07:02:05Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-Covid results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9847036328871893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-Covid This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0805 - Accuracy: 0.9847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1283 | 0.38 | 100 | 0.1878 | 0.9484 | | 0.0312 | 0.76 | 200 | 0.1484 | 0.9560 | | 0.0655 | 1.15 | 300 | 0.0976 | 0.9713 | | 0.0587 | 1.53 | 400 | 0.0887 | 0.9713 | | 0.0106 | 1.91 | 500 | 0.0980 | 0.9732 | | 0.0137 | 2.29 | 600 | 0.1479 | 0.9618 | | 0.07 | 2.67 | 700 | 0.0882 | 0.9751 | | 0.0068 | 3.05 | 800 | 0.1160 | 0.9675 | | 0.0321 | 3.44 | 900 | 0.0872 | 0.9694 | | 0.0027 | 3.82 | 1000 | 0.0790 | 0.9809 | | 0.0041 | 4.2 | 1100 | 0.1029 | 0.9713 | | 0.0014 | 4.58 | 1200 | 0.0947 | 0.9809 | | 0.0018 | 4.96 | 1300 | 0.1399 | 0.9713 | | 0.001 | 5.34 | 1400 | 0.0689 | 0.9847 | | 0.001 | 5.73 | 1500 | 0.0852 | 0.9790 | | 0.0008 | 6.11 | 1600 | 0.1111 | 0.9790 | | 0.0013 | 6.49 | 1700 | 0.0695 | 0.9866 | | 0.0049 | 6.87 | 1800 | 0.0728 | 0.9885 | | 0.0007 | 7.25 | 1900 | 0.0963 | 0.9790 | | 0.0012 | 7.63 | 2000 | 0.0886 | 0.9847 | | 0.0006 | 8.02 | 2100 | 0.0811 | 0.9847 | | 0.0015 | 8.4 | 2200 | 0.0796 | 0.9847 | | 0.0143 | 8.78 | 2300 | 0.0804 | 0.9847 | | 0.0005 | 9.16 | 2400 | 0.0816 | 0.9847 | | 0.0006 | 9.54 | 2500 | 0.0811 | 0.9847 | | 0.0005 | 9.92 | 2600 | 0.0805 | 0.9847 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
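A minimal image-classification sketch for the card above (the image path is a placeholder; class names come from the repo's config, which the card does not list):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="GGital/vit-Covid")

# "chest_xray.png" is a placeholder path to an input image
for pred in classifier("chest_xray.png"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```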
TinyPixel/l2-chatml
TinyPixel
2024-01-27T07:42:41Z
6
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T07:35:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thefcraft/JARVIS-ckpt
thefcraft
2024-01-27T07:30:32Z
0
1
null
[ "region:us" ]
null
2024-01-27T07:27:37Z
Model checkpoints (OpenVoice) for [JARVIS](https://github.com/thefcraft/JARVIS).
boruyang/Reinforce-CartPole-v1
boruyang
2024-01-27T07:26:29Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:26:21Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 1000.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AustinMcMike/mistral-7b-ft-test
AustinMcMike
2024-01-27T07:23:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-27T07:23:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
octnn/q-FrozenLake-v1-4x4-noSlippery
octnn
2024-01-27T07:20:57Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:20:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="octnn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
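The usage snippet above relies on a `load_from_hub` helper and a `gym` import that the card does not define; a self-contained sketch along the lines of the Deep RL Course helper (pickle plus `hf_hub_download`, with `gymnasium` assumed as the Gym implementation) could look like this:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    # Download a pickled Q-learning model dict from the Hub and load it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="octnn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# is_slippery=False matches the "noSlippery" variant this agent was trained on
env = gym.make(model["env_id"], is_slippery=False)
```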
dhiva100/PPO-LunarLander-v2
dhiva100
2024-01-27T07:12:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:12:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.44 +/- 14.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
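The usage block above is still a TODO; a minimal hedged sketch with `huggingface_sb3` (the checkpoint filename is an assumption, not taken from the card — check the repo's file listing):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed; replace with the actual .zip name in the repository
checkpoint = load_from_hub(repo_id="dhiva100/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```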
Benzaminnie/distilbert-base-uncased-finetuned-emotion
Benzaminnie
2024-01-27T07:10:58Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-27T06:57:01Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9215027409425609 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2220 - Accuracy: 0.9215 - F1: 0.9215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.83 | 1.0 | 250 | 0.3231 | 0.904 | 0.9029 | | 0.2532 | 2.0 | 500 | 0.2220 | 0.9215 | 0.9215 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
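A short inference sketch for the emotion classifier above; `top_k=None` asks the pipeline for scores over all emotion labels rather than only the best one (label names come from the emotion dataset config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Benzaminnie/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every emotion label
)

print(classifier("I can't believe how well this worked, I'm thrilled!"))
```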
ChemPlusX/llama2-7b-ner-type2
ChemPlusX
2024-01-27T07:10:23Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:10:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune
gaurav-mac
2024-01-27T07:09:06Z
21
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-28T17:49:21Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') model = AutoModel.from_pretrained('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 80 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-06 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1200, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
sunzx0810/llama2-7b-science
sunzx0810
2024-01-27T07:02:45Z
7
1
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "generated_from_trainer", "dataset:customized", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-26T08:22:55Z
--- base_model: llama2/Llama-2-7b-hf tags: - generated_from_trainer datasets: - customized model-index: - name: llama2-science-7b-824 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-science-7b-824 This model is a fine-tuned version of llama2/Llama-2-7b-hf on the customized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 6 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 24 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.3
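The card does not include an inference snippet, so here is a minimal, hedged sketch of loading the checkpoint with 🤗 Transformers for text generation. The repository id comes from this listing (sunzx0810/llama2-7b-science); the prompt, dtype, and device settings are assumptions, not part of the original card.

```python
# Minimal usage sketch for the fine-tuned Llama-2-7B checkpoint.
# Assumes the weights are hosted at sunzx0810/llama2-7b-science and that
# `accelerate` is installed so device_map="auto" can place the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sunzx0810/llama2-7b-science"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain, in two sentences, why the sky appears blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```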
jeiku/NewJeans_3B_GGUF
jeiku
2024-01-27T07:02:14Z
10
0
null
[ "gguf", "mergekit", "merge", "arxiv:2203.05482", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T06:20:56Z
--- base_model: - jeiku/Gnosis_StableLM tags: - mergekit - merge --- # mumufinal This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * mumu2 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: linear models: - model: mumu2+jeiku/Gnosis_StableLM parameters: weight: 1 dtype: float16 ```
TMOU715/phi-2-qlora
TMOU715
2024-01-27T06:54:35Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:54:30Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
frankc350/phi-2-qlora
frankc350
2024-01-27T06:54:05Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:54:01Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Sailor01/phi-2-qlora
Sailor01
2024-01-27T06:53:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:29Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Askahoward/phi-2-qlora
Askahoward
2024-01-27T06:53:15Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:12Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
noczero/mistral-7b-text-to-sql
noczero
2024-01-27T06:43:07Z
4
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-27T06:19:31Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-text-to-sql This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
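Since this is a PEFT (LoRA) adapter rather than a full checkpoint, inference requires loading the base model together with the adapter. The sketch below is an assumption-laden illustration: it uses `AutoPeftModelForCausalLM` (available in the PEFT version listed above), the adapter id from this listing, and an invented prompt; the exact prompt format used during SFT is not documented in the card.

```python
# Illustrative inference sketch for the text-to-SQL LoRA adapter.
# Assumes the adapter lives at noczero/mistral-7b-text-to-sql; if the adapter repo
# does not ship tokenizer files, load the tokenizer from mistralai/Mistral-7B-v0.1 instead.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "noczero/mistral-7b-text-to-sql"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Hypothetical prompt; adjust to whatever schema/question format was used for training.
prompt = "Translate to SQL: list the names of all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```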
gotchu/34b-3
gotchu
2024-01-27T06:39:26Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:gotchu/merge-34b-2", "base_model:merge:gotchu/merge-34b-2", "base_model:gotchu/roleplaymodel", "base_model:merge:gotchu/roleplaymodel", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:19:44Z
--- base_model: - gotchu/roleplaymodel - gotchu/merge-34b-2 tags: - mergekit - merge --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [gotchu/roleplaymodel](https://huggingface.co/gotchu/roleplaymodel) * [gotchu/merge-34b-2](https://huggingface.co/gotchu/merge-34b-2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: gotchu/merge-34b-2 dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 60] model: model: path: gotchu/merge-34b-2 - layer_range: [0, 60] model: model: path: gotchu/roleplaymodel ```
tobefan/sd-class-butterflies-32
tobefan
2024-01-27T06:35:46Z
47
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-01-27T06:17:57Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('tobefan/sd-class-butterflies-32') image = pipeline().images[0] image ```
Dhanraj1503/ppo-SnowballTarget
Dhanraj1503
2024-01-27T06:35:41Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-01-27T06:35:32Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Dhanraj1503/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
psugam/hello
psugam
2024-01-27T06:31:02Z
120
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-27T06:28:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
omusico/opt-125m-sft
omusico
2024-01-27T06:20:44Z
178
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:17:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vadhri/dqn-SpaceInvadersNoFrameskip-v4
vadhri
2024-01-27T06:19:32Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T06:18:58Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 530.50 +/- 106.76 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vadhri -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vadhri -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vadhri ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
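For use outside the RL Zoo scripts, the checkpoint can also be loaded directly with `huggingface_sb3` and Stable Baselines3. This is a sketch under assumptions: the zip filename follows the usual RL Zoo naming convention and is not stated in the card, and evaluation would still need an Atari environment wrapped as in the hyperparameters above (AtariWrapper, 4-frame stack).

```python
# Sketch: download the checkpoint from the Hub and load it as an SB3 DQN model.
# The filename is an assumption based on the usual RL Zoo naming convention.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint_path = load_from_hub(
    repo_id="vadhri/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint_path)
print(model.policy)
```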
frankc350/opt-125m-sft
frankc350
2024-01-27T06:19:24Z
178
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:12:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Heng666/opt-125m-sft
Heng666
2024-01-27T06:18:35Z
181
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:13:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oblivious/Vikhr-7B-instruct-GGUF
oblivious
2024-01-27T06:09:43Z
188
3
null
[ "gguf", "text-generation", "ru", "en", "dataset:zjkarina/Vikhr_instruct", "license:apache-2.0", "region:us" ]
text-generation
2024-01-27T04:07:53Z
--- model_creator: Vikhrmodels base_model: Vikhr-7B-instruct model_name: Vikhr-7B-instruct-GGUF pipeline_tag: text-generation license: apache-2.0 model_type: mistral inference: false prompt_template: '<s>{role}\n{content}</s>\n' pretrain-datasets: - IlyaGusev/habr datasets: - zjkarina/Vikhr_instruct language: - ru - en --- # Vikhr-7B-instruct-GGUF - Model creator: [Vikhrmodels](https://huggingface.co/Vikhrmodels) - Original model: [Vikhr-7B-instruct](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct) ## Description This repository contains quantized GGUF format model files for [Vikhr-7B-instruct_merged](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_merged). ## Default message template: `<s>{role}\n{content}</s>\n` ## Default system prompt: `Ты — Вихрь, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.` ## Example `llama.cpp` command ```shell ./main -m Vikhr-7B-instruct-Q4_K_M.gguf -c 32768 -n -1 -p $'<s>system\nТы — Вихрь, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.</s>\n<s>user\nКак тебя зовут?</s>\n<s>bot\n' ``` ``` system Ты — Вихрь, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им. user Как тебя зовут? bot Я называю себя Вихрем! [end of text] ``` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
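For Python users, roughly the same call can be made through `llama-cpp-python` instead of the `llama.cpp` CLI. The sketch below is an approximation and not part of the original card: it assumes the Q4_K_M file from the command above has been downloaded locally and that `llama-cpp-python` is installed.

```python
# Approximate llama-cpp-python equivalent of the ./main command above.
# Assumes Vikhr-7B-instruct-Q4_K_M.gguf is in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="Vikhr-7B-instruct-Q4_K_M.gguf", n_ctx=32768)

prompt = (
    "<s>system\n"
    "Ты — Вихрь, русскоязычный автоматический ассистент. "
    "Ты разговариваешь с людьми и помогаешь им.</s>\n"
    "<s>user\nКак тебя зовут?</s>\n"
    "<s>bot\n"
)
output = llm(prompt, max_tokens=128, stop=["</s>"])
print(output["choices"][0]["text"])
```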
asun17904/imdb-gpt2-a2b1-l0.001
asun17904
2024-01-27T05:41:43Z
1
0
pytorch
[ "pytorch", "gpt2", "en", "license:mit", "region:us" ]
null
2024-01-26T17:24:11Z
--- language: en license: mit library_name: pytorch --- # Knowledge Continuity Regularized Network Trainer Hyperparameters: - `lr` = 5e-05 - `per_device_batch_size` = 8 - `gradient_accumulation_steps` = 2 - `weight_decay` = 1e-09 - `seed` = 42 Regularization Hyperparameters - `numerical stability denominator constant` = 0.01 - `lambda` = 0.001 - `alpha` = 2.0 - `beta` = 1.0 Extended Logs: |eval_loss|eval_accuracy|epoch| |--|--|--| |12.460|0.921|1.0| |12.355|0.924|2.0| |12.275|0.928|3.0| |12.118|0.933|4.0| |12.028|0.936|5.0| |11.984|0.938|6.0| |12.000|0.937|7.0| |11.973|0.938|8.0| |11.883|0.941|9.0| |12.051|0.935|10.0| |11.958|0.939|11.0| |12.281|0.928|12.0| |12.284|0.929|13.0| |11.990|0.938|14.0| |12.207|0.931|15.0| |11.940|0.940|16.0| |12.162|0.932|17.0| |11.981|0.938|18.0| |11.941|0.940|19.0| |11.961|0.939|20.0| |11.979|0.938|21.0| |11.854|0.943|22.0| |11.867|0.942|23.0| |11.889|0.941|24.0| |11.922|0.940|25.0| |11.985|0.939|26.0| |11.880|0.941|27.0| |11.893|0.941|28.0| |11.974|0.939|29.0| |11.792|0.944|30.0| |12.016|0.937|31.0| |11.867|0.942|32.0| |11.879|0.942|33.0| |11.830|0.943|34.0| |11.905|0.940|35.0| |11.799|0.944|36.0| |11.894|0.941|37.0| |11.853|0.942|38.0| |11.800|0.944|39.0| |11.784|0.944|40.0| |11.774|0.945|41.0| |11.746|0.946|42.0| |11.748|0.946|43.0| |11.770|0.945|44.0| |11.788|0.944|45.0| |11.777|0.945|46.0| |11.724|0.947|47.0| |11.744|0.946|48.0| |11.743|0.946|49.0|
alirzb/SeizureClassifier_AST_U_43275873
alirzb
2024-01-27T05:37:10Z
145
0
transformers
[ "transformers", "pytorch", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "base_model:MIT/ast-finetuned-audioset-10-10-0.4593", "base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-26T20:02:37Z
--- license: bsd-3-clause base_model: MIT/ast-finetuned-audioset-10-10-0.4593 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SeizureClassifier_AST_U_43275873 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SeizureClassifier_AST_U_43275873 This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0052 - Accuracy: 0.9990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0222 | 1.0 | 339 | 0.0859 | 0.9761 | | 0.0242 | 2.0 | 678 | 0.0091 | 0.9977 | | 0.0033 | 3.0 | 1017 | 0.0198 | 0.9932 | | 0.0171 | 4.0 | 1357 | 0.0242 | 0.9945 | | 0.0212 | 5.0 | 1696 | 0.0079 | 0.9990 | | 0.0 | 6.0 | 2035 | 0.0095 | 0.9984 | | 0.0 | 7.0 | 2374 | 0.0047 | 0.9990 | | 0.0 | 8.0 | 2714 | 0.0048 | 0.9990 | | 0.0 | 9.0 | 3053 | 0.0048 | 0.9990 | | 0.0 | 10.0 | 3392 | 0.0050 | 0.9990 | | 0.0 | 11.0 | 3731 | 0.0050 | 0.9990 | | 0.0 | 12.0 | 4071 | 0.0051 | 0.9990 | | 0.0 | 13.0 | 4410 | 0.0051 | 0.9990 | | 0.0 | 14.0 | 4749 | 0.0052 | 0.9990 | | 0.0 | 14.99 | 5085 | 0.0052 | 0.9990 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
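The card lists no inference example; a minimal sketch with the 🤗 Transformers audio-classification pipeline is shown below. The audio filename is a placeholder, and the pipeline is assumed to handle decoding and resampling of the input file.

```python
# Minimal inference sketch for the fine-tuned AST seizure classifier.
# "recording.wav" is a placeholder path to a local audio file.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alirzb/SeizureClassifier_AST_U_43275873",
)
print(classifier("recording.wav"))
```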
callum-canavan/whisper-small-ga-4000
callum-canavan
2024-01-27T05:26:14Z
74
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-27T00:58:11Z
--- language: - ga license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Ga 4000 - Callum Canavan results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: ga-IE split: test args: ga-IE metrics: - name: Wer type: wer value: 64.32276657060518 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ga 4000 - Callum Canavan This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 1.3939 - Wer Ortho: 65.7126 - Wer: 64.3228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0174 | 7.58 | 500 | 1.1656 | 63.4346 | 60.4899 | | 0.0009 | 15.15 | 1000 | 1.2670 | 62.3248 | 60.8069 | | 0.0004 | 22.73 | 1500 | 1.3114 | 63.6974 | 62.2478 | | 0.0002 | 30.3 | 2000 | 1.3408 | 64.0187 | 62.6225 | | 0.0002 | 37.88 | 2500 | 1.3621 | 64.3692 | 63.0836 | | 0.0001 | 45.45 | 3000 | 1.3791 | 64.3984 | 62.9971 | | 0.0001 | 53.03 | 3500 | 1.3900 | 65.8294 | 64.4092 | | 0.0001 | 60.61 | 4000 | 1.3939 | 65.7126 | 64.3228 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
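As a sketch of how this checkpoint could be used for transcription (the recording path is a placeholder, and chunking settings are illustrative rather than taken from the card):

```python
from transformers import pipeline

# Load the Whisper checkpoint fine-tuned on Irish Common Voice data
asr = pipeline(
    "automatic-speech-recognition",
    model="callum-canavan/whisper-small-ga-4000",
)

# Transcribe a local recording; chunking helps with audio longer than 30 seconds
result = asr("sample_irish_audio.wav", chunk_length_s=30)
print(result["text"])
```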
ashishbaraiya/my-tweets-finetuned
ashishbaraiya
2024-01-27T05:25:49Z
1
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T18:38:44Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: ashishbaraiya/my-tweets-finetuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ashishbaraiya/my-tweets-finetuned This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0656 - Validation Loss: 3.2945 - Epoch: 98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.3483 | 8.3624 | 0 | | 7.2778 | 6.9685 | 1 | | 5.9195 | 6.2234 | 2 | | 5.0730 | 5.6830 | 3 | | 4.4703 | 5.3916 | 4 | | 3.8427 | 4.8847 | 5 | | 3.3641 | 4.5318 | 6 | | 2.8373 | 4.3084 | 7 | | 2.4261 | 4.0802 | 8 | | 2.0691 | 3.8920 | 9 | | 1.8213 | 3.8208 | 10 | | 1.5922 | 3.6103 | 11 | | 1.3694 | 3.5038 | 12 | | 1.1764 | 3.3149 | 13 | | 1.0135 | 3.2981 | 14 | | 0.8874 | 3.2975 | 15 | | 0.7716 | 3.2103 | 16 | | 0.6679 | 3.3297 | 17 | | 0.5770 | 3.2517 | 18 | | 0.5098 | 3.0959 | 19 | | 0.4403 | 3.1526 | 20 | | 0.3791 | 2.9750 | 21 | | 0.3367 | 3.0588 | 22 | | 0.3027 | 3.0408 | 23 | | 0.2617 | 3.1930 | 24 | | 0.2387 | 3.1227 | 25 | | 0.2175 | 3.0582 | 26 | | 0.2062 | 3.1239 | 27 | | 0.1868 | 3.0407 | 28 | | 0.1746 | 3.2357 | 29 | | 0.1657 | 3.1285 | 30 | | 0.1536 | 3.2110 | 31 | | 0.1512 | 3.1890 | 32 | | 0.1447 | 3.1713 | 33 | | 0.1426 | 3.1498 | 34 | | 0.1369 | 3.1877 | 35 | | 0.1327 | 3.2019 | 36 | | 0.1303 | 3.0486 | 37 | | 0.1213 | 3.1264 | 38 | | 0.1204 | 3.1468 | 39 | | 0.1206 | 3.1846 | 40 | | 0.1125 | 3.1880 | 41 | | 0.1113 | 3.1980 | 42 | | 0.1098 | 3.1759 | 43 | | 0.1071 | 3.1385 | 44 | | 0.1055 | 3.1730 | 45 | | 0.1024 | 3.1820 | 46 | | 0.0995 | 3.1252 | 47 | | 0.0995 | 3.1279 | 48 | | 0.1004 | 3.2428 | 49 | | 0.0982 | 3.1116 | 50 | | 0.0957 | 3.2210 | 51 | | 0.0936 | 3.1351 | 52 | | 0.0917 | 3.1618 | 53 | | 0.0930 | 3.1924 | 54 | | 0.0929 | 3.2831 | 55 | | 0.0889 | 3.2458 | 56 | | 0.0913 | 3.2061 | 57 | | 0.0899 | 3.4128 | 58 | | 0.0880 | 3.2114 | 59 | | 0.0869 | 3.2738 | 60 | | 0.0878 | 3.1723 | 61 | | 0.0844 | 3.1465 | 62 | | 0.0846 | 3.1106 | 63 | | 0.0841 | 3.2216 | 64 | | 0.0824 | 3.2971 | 65 | | 0.0823 | 3.2267 | 66 | | 0.0811 | 3.2503 | 67 | | 0.0823 | 3.1981 | 68 | | 0.0808 | 3.2618 | 69 | | 0.0803 | 3.1607 | 70 | | 0.0786 | 3.3295 | 71 | | 0.0801 | 3.2952 | 72 | | 0.0777 | 3.2545 | 73 | | 0.0764 | 3.1248 | 74 | | 0.0772 | 3.2185 | 75 | | 0.0758 | 3.3147 | 76 | | 0.0764 | 3.1842 | 77 | | 0.0758 | 3.2346 | 78 | | 0.0739 | 3.2914 | 79 | | 0.0738 | 3.2163 | 80 | | 0.0738 | 
3.3555 | 81 | | 0.0731 | 3.0948 | 82 | | 0.0726 | 3.2040 | 83 | | 0.0729 | 3.2187 | 84 | | 0.0709 | 3.2877 | 85 | | 0.0703 | 3.3668 | 86 | | 0.0709 | 3.2290 | 87 | | 0.0712 | 3.3148 | 88 | | 0.0697 | 3.2762 | 89 | | 0.0694 | 3.2083 | 90 | | 0.0688 | 3.2673 | 91 | | 0.0694 | 3.2816 | 92 | | 0.0683 | 3.3135 | 93 | | 0.0680 | 3.2971 | 94 | | 0.0681 | 3.2272 | 95 | | 0.0670 | 3.2317 | 96 | | 0.0662 | 3.2029 | 97 | | 0.0656 | 3.2945 | 98 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
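Since this checkpoint was trained with Keras, a minimal TensorFlow generation sketch could look like the following; the prompt and sampling settings are placeholders, not values from the card:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Load the fine-tuned GPT-2 checkpoint (a TensorFlow model, per the Keras training logs above)
tokenizer = AutoTokenizer.from_pretrained("ashishbaraiya/my-tweets-finetuned")
model = TFAutoModelForCausalLM.from_pretrained("ashishbaraiya/my-tweets-finetuned")

# Generate a continuation for a short prompt (sampling parameters are illustrative)
inputs = tokenizer("Just finished", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```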
lbtutor/Taxi-v3
lbtutor
2024-01-27T05:03:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T05:02:58Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="lbtutor/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
sinhat98/w2v-bert-2.0-japanese-colab-CV16.0
sinhat98
2024-01-27T04:58:22Z
7
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:ylacombe/w2v-bert-2.0", "base_model:finetune:ylacombe/w2v-bert-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-25T11:20:39Z
--- base_model: ylacombe/w2v-bert-2.0 tags: - generated_from_trainer datasets: - common_voice_16_0 model-index: - name: w2v-bert-2.0-japanese-colab-CV16.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-japanese-colab-CV16.0 This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset. It achieves the following results on the evaluation set: - Loss: inf - Cer: 0.3171 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.2694 | 0.96 | 300 | inf | 0.6823 | | 2.0595 | 1.93 | 600 | inf | 0.4528 | | 1.3044 | 2.89 | 900 | inf | 0.3920 | | 1.0889 | 3.85 | 1200 | inf | 0.3579 | | 0.7867 | 4.82 | 1500 | inf | 0.3518 | | 0.4371 | 5.78 | 1800 | inf | 0.3371 | | 0.3414 | 6.74 | 2100 | inf | 0.3246 | | 0.2373 | 7.7 | 2400 | inf | 0.3253 | | 0.1171 | 8.67 | 2700 | inf | 0.3183 | | 0.0524 | 9.63 | 3000 | inf | 0.3171 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
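Assuming the repository ships its processor and tokenizer files, a hedged sketch of transcribing a clip and scoring it with the same CER metric reported in the results table might look like this; the audio path and reference transcript are placeholders:

```python
from transformers import pipeline
import evaluate

# Load the fine-tuned Wav2Vec2-BERT CTC checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="sinhat98/w2v-bert-2.0-japanese-colab-CV16.0",
)

# Transcribe a clip (placeholder path) and compare it against a reference transcript
prediction = asr("japanese_sample.wav")["text"]
reference = "今日はいい天気です"  # placeholder reference text

cer = evaluate.load("cer")
print("CER:", cer.compute(predictions=[prediction], references=[reference]))
```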
lbtutor/q-FrozenLake-v1-4x4-noSlippery
lbtutor
2024-01-27T04:56:22Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T04:56:20Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lbtutor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Slava/tiny-bert-sst2-distilled
Slava
2024-01-27T04:51:14Z
101
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google/bert_uncased_L-2_H-128_A-2", "base_model:finetune:google/bert_uncased_L-2_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-25T08:06:21Z
--- license: apache-2.0 base_model: google/bert_uncased_L-2_H-128_A-2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-bert-sst2-distilled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-sst2-distilled This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9648 - Accuracy: 0.8245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002628217875157273 - train_batch_size: 128 - eval_batch_size: 128 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5717 | 1.0 | 527 | 2.0086 | 0.8073 | | 1.2017 | 2.0 | 1054 | 1.8121 | 0.8222 | | 0.9081 | 3.0 | 1581 | 1.8837 | 0.8177 | | 0.7559 | 4.0 | 2108 | 1.9089 | 0.8234 | | 0.6694 | 5.0 | 2635 | 1.9749 | 0.8177 | | 0.6147 | 6.0 | 3162 | 1.9445 | 0.8257 | | 0.5729 | 7.0 | 3689 | 1.9648 | 0.8245 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.1
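A minimal inference sketch for this distilled classifier (the input sentence is a placeholder, and the label names depend on the checkpoint's `id2label` config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the distilled tiny BERT sentiment classifier
tokenizer = AutoTokenizer.from_pretrained("Slava/tiny-bert-sst2-distilled")
model = AutoModelForSequenceClassification.from_pretrained("Slava/tiny-bert-sst2-distilled")

# Score a sentence and print per-label probabilities
inputs = tokenizer("A surprisingly touching little film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p.item(), 4))
```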
xyfJASON/Diffusion-Models-Implementations
xyfJASON
2024-01-27T04:47:02Z
0
0
null
[ "tensorboard", "dataset:cifar10", "license:mit", "region:us" ]
null
2023-08-10T03:37:36Z
--- license: mit datasets: - cifar10 metrics: - fid --- Checkpoints and training logs for GitHub repository: [xyfJASON/Diffusion-Models-Implementations](https://github.com/xyfJASON/Diffusion-Models-Implementations).
jeiku/NewJeans_3.43B_GGUF
jeiku
2024-01-27T04:41:32Z
7
0
null
[ "gguf", "mergekit", "merge", "arxiv:2203.05482", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T03:54:51Z
--- base_model: - jeiku/Gnosis_StableLM tags: - mergekit - merge --- # longfinal This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * long2 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: linear models: - model: long2+jeiku/Gnosis_StableLM parameters: weight: 1 dtype: float16 ```
Yankz/TR_Model-1st-Mistral-7B
Yankz
2024-01-27T04:29:14Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:LeoLM/leo-mistral-hessianai-7b-chat", "base_model:adapter:LeoLM/leo-mistral-hessianai-7b-chat", "license:apache-2.0", "region:us" ]
null
2024-01-26T16:29:08Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: LeoLM/leo-mistral-hessianai-7b-chat model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [LeoLM/leo-mistral-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b-chat) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6723 | 0.99 | 62 | 2.6241 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
ycros/llmTechChat-GGUF
ycros
2024-01-27T04:19:23Z
9
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T03:31:59Z
GGUF quants of https://huggingface.co/Epiculous/llmTechChat
ramsi-k/Taxi-v3-3
ramsi-k
2024-01-27T04:13:18Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T04:09:42Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ramsi-k/Taxi-v3-3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```

ramsi-k/Taxi-v3-2
ramsi-k
2024-01-27T04:06:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T04:06:53Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ramsi-k/Taxi-v3-2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
intervitens/internlm2-limarp-chat-20b
intervitens
2024-01-27T04:03:56Z
12
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "en", "dataset:lemonilia/LimaRP", "arxiv:2212.04089", "base_model:intervitens/internlm2-base-20b-llama", "base_model:finetune:intervitens/internlm2-base-20b-llama", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T03:41:35Z
--- base_model: [intervitens/internlm2-base-20b-llama] language: - en pipeline_tag: text-generation tags: - mergekit - merge license: other datasets: - lemonilia/LimaRP --- Experimental model, LimaRP LoRA trained on top of internlm2-base-20b with 8192 context length and merged with internlm2-chat-20b. Prompt format is ChatML. *** # internlm2-06limarp-1chat-TASK_ARITHM-20b-v0.03 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using intervitens/internlm2-base-20b-llama as a base. ### Models Merged The following models were included in the merge: * ./internlm2-limarp-20b-v0.03 * ./internlm2-chat-20b-llama ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ./internlm2-chat-20b-llama parameters: weight: 1.0 - model: ./internlm2-limarp-20b-v0.03 parameters: weight: 0.6 merge_method: task_arithmetic base_model: ./internlm2-base-20b-llama parameters: #normalize: false #int8_mask: true dtype: bfloat16 ```
jeiku/NewJeans_3.43B
jeiku
2024-01-27T03:53:33Z
90
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "mergekit", "merge", "conversational", "custom_code", "arxiv:2203.05482", "autotrain_compatible", "region:us" ]
text-generation
2024-01-27T03:30:32Z
--- base_model: - jeiku/Gnosis_StableLM tags: - mergekit - merge --- # longfinal This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * long2 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: linear models: - model: long2+jeiku/Gnosis_StableLM parameters: weight: 1 dtype: float16 ```
e22vvb/EN_mt5-base_15_wikiSQL
e22vvb
2024-01-27T03:43:23Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "dataset:wikisql", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-26T19:12:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikisql model-index: - name: EN_mt5-base_15_wikiSQL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EN_mt5-base_15_wikiSQL This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wikisql dataset. It achieves the following results on the evaluation set: - Loss: 0.0849 - Rouge2 Precision: 0.8692 - Rouge2 Recall: 0.7928 - Rouge2 Fmeasure: 0.8234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.1534 | 1.0 | 4049 | 0.1157 | 0.8319 | 0.756 | 0.7858 | | 0.1204 | 2.0 | 8098 | 0.0980 | 0.8469 | 0.7706 | 0.8011 | | 0.1006 | 3.0 | 12147 | 0.0926 | 0.855 | 0.7775 | 0.8086 | | 0.0892 | 4.0 | 16196 | 0.0881 | 0.8579 | 0.7811 | 0.8119 | | 0.0809 | 5.0 | 20245 | 0.0857 | 0.8605 | 0.7839 | 0.8145 | | 0.0725 | 6.0 | 24294 | 0.0849 | 0.8643 | 0.787 | 0.8181 | | 0.0672 | 7.0 | 28343 | 0.0841 | 0.8662 | 0.7889 | 0.8199 | | 0.0628 | 8.0 | 32392 | 0.0847 | 0.8657 | 0.7895 | 0.82 | | 0.0589 | 9.0 | 36441 | 0.0835 | 0.8676 | 0.7909 | 0.8216 | | 0.0565 | 10.0 | 40490 | 0.0839 | 0.8685 | 0.7914 | 0.8223 | | 0.0532 | 11.0 | 44539 | 0.0837 | 0.8689 | 0.7925 | 0.8231 | | 0.051 | 12.0 | 48588 | 0.0844 | 0.8692 | 0.7927 | 0.8233 | | 0.0504 | 13.0 | 52637 | 0.0848 | 0.869 | 0.7924 | 0.8231 | | 0.0485 | 14.0 | 56686 | 0.0848 | 0.869 | 0.7928 | 0.8233 | | 0.0479 | 15.0 | 60735 | 0.0849 | 0.8692 | 0.7928 | 0.8234 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.7.dev0 - Tokenizers 0.13.3
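A hedged generation sketch for this text-to-SQL checkpoint; the exact input template depends on how the WikiSQL training examples were serialized, so the plain question below is purely illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the mT5 checkpoint fine-tuned on WikiSQL
tokenizer = AutoTokenizer.from_pretrained("e22vvb/EN_mt5-base_15_wikiSQL")
model = AutoModelForSeq2SeqLM.from_pretrained("e22vvb/EN_mt5-base_15_wikiSQL")

# Placeholder question; the real prompt format may also include the table schema
question = "How many heads of the departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```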
ramsi-k/Taxi-v3
ramsi-k
2024-01-27T03:42:08Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T03:42:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ramsi-k/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
benjamin-paine/ccsr
benjamin-paine
2024-01-27T03:28:08Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2024-01-27T02:37:28Z
--- license: apache-2.0 --- This repository contains a copy of `real-world-ccsr.ckpt`. See documentation for CCSR at https://github.com/csslc/CCSR/
chathuranga-jayanath/codet5-small-v4
chathuranga-jayanath
2024-01-27T03:24:52Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-27T03:24:26Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v4 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7318 - Bleu Score: 0.2737 - Gen Len: 13.7838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------:| | No log | 1.0 | 20 | 1.7693 | 0.281 | 13.5946 | | No log | 2.0 | 40 | 1.0720 | 0.2706 | 13.9189 | | No log | 3.0 | 60 | 0.7318 | 0.2737 | 13.7838 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
AnonWilber/ppo-LunarLander-v2
AnonWilber
2024-01-27T03:20:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T03:20:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 150.50 +/- 71.47 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
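One way the card's TODO stub could be filled in, as a sketch only: the checkpoint filename below is a guess based on the usual course naming convention, so check the repository's file list for the actual `.zip` name.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption, not confirmed by the card)
checkpoint = load_from_hub(
    repo_id="AnonWilber/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```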
LN1996/peft-qlora-run3
LN1996
2024-01-27T03:16:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T03:16:12Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
rishikasrinivas/distilbert-base-uncased-finetuned-ner
rishikasrinivas
2024-01-27T02:54:49Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-26T20:00:18Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9243 - Recall: 0.9358 - F1: 0.9300 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2386 | 1.0 | 878 | 0.0710 | 0.9017 | 0.9207 | 0.9111 | 0.9798 | | 0.0498 | 2.0 | 1756 | 0.0619 | 0.9239 | 0.9319 | 0.9279 | 0.9830 | | 0.0308 | 3.0 | 2634 | 0.0609 | 0.9243 | 0.9358 | 0.9300 | 0.9836 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
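A minimal inference sketch for this NER checkpoint; the example sentence is a placeholder, and the entity labels depend on the (unspecified) training dataset:

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint; aggregation_strategy groups word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="rishikasrinivas/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 4))
```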
MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T02:51:01Z
34
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "fblgit/una-cybertron-7b-v3-OMA", "juanako", "UNA", "cybertron", "xaberius", "dataset:fblgit/tree-of-knowledge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T02:42:21Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - fblgit/una-cybertron-7b-v3-OMA - juanako - UNA - cybertron - xaberius - dataset:fblgit/tree-of-knowledge - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./una-cybertron-7b-v3-OMA-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
egiorh/distilbert-base-uncased-finetuned-emotion
egiorh
2024-01-27T02:49:24Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-24T06:39:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9245690662037136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8375 | 1.0 | 250 | 0.3221 | 0.907 | 0.9059 | | 0.255 | 2.0 | 500 | 0.2192 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
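A short usage sketch for this emotion classifier; the input sentence is a placeholder, and `top_k=None` simply requests scores for every emotion label rather than only the top one:

```python
from transformers import pipeline

# Load the emotion classifier fine-tuned on the emotion dataset
classifier = pipeline(
    "text-classification",
    model="egiorh/distilbert-base-uncased-finetuned-emotion",
    top_k=None,
)

print(classifier("I can't believe how well this turned out!"))
```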
AustinMcMike/mosaic-fintune-test
AustinMcMike
2024-01-27T02:45:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-27T02:44:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T02:35:50Z
42
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Dans-DiscountModels/Dans-07YahooAnswers-7b", "pytorch", "question-answering", "en", "dataset:PocketDoc/Retro-YahooAnswers", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T02:26:55Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Dans-DiscountModels/Dans-07YahooAnswers-7b - pytorch - question-answering - en - dataset:PocketDoc/Retro-YahooAnswers - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
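For completeness, here is a minimal LangChain sketch along the lines of the guides linked above. It assumes `llama-cpp-python` and LangChain's community integrations are installed (`pip install langchain-community`), that you have already downloaded the Q4_K_M GGUF file referenced earlier, and the prompt is only illustrative.

```python
from langchain_community.llms import LlamaCpp

# Point LangChain's LlamaCpp wrapper at the downloaded GGUF file (local path is an assumption).
llm = LlamaCpp(
    model_path="./Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Why is the sky blue? Answer briefly."))
```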
Sirnoha/Therenderer
Sirnoha
2024-01-27T02:28:35Z
4
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:artistic-2.0", "region:us" ]
text-to-image
2024-01-27T02:28:28Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/download-5.jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: artistic-2.0 --- # Renderer <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Sirnoha/Therenderer/tree/main) them in the Files & versions tab.
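The card above only points at the Files tab, so here is a minimal, hedged diffusers sketch for applying this LoRA on top of the SDXL base model it declares. The exact weight filename is not stated in the card, so `load_lora_weights` may need an explicit `weight_name=` argument; the prompt is purely illustrative.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the declared SDXL base model, then attach the LoRA from this repo.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Sirnoha/Therenderer")  # add weight_name="..." if the repo holds several files

image = pipe(prompt="a 3D render of a small cabin in the woods").images[0]
image.save("render.png")
```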
mbearss/dummy-model
mbearss
2024-01-27T02:27:15Z
93
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-27T02:25:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
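Since the card above is an unfilled template, here is a minimal sketch of how a CamemBERT-style fill-mask checkpoint is typically queried. It assumes the repo loads as a standard 🤗 Transformers fill-mask model (as its tags suggest) and that the usual `<mask>` token applies.

```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline (assumes a standard CamemBERT-style head).
fill_mask = pipeline("fill-mask", model="mbearss/dummy-model")

# CamemBERT models are French and use the <mask> token.
for prediction in fill_mask("Le camembert est <mask> !"):
    print(prediction["token_str"], round(prediction["score"], 3))
```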
balrogbob/MiniBOB
balrogbob
2024-01-27T02:22:37Z
33
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T00:39:11Z
--- tags: - generated_from_trainer model-index: - name: MiniBOB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniBOB This model is a fine-tuned version of an unspecified base model on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
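No usage snippet is provided in the card above; the sketch below assumes the checkpoint really does expose a causal-LM head, as its `text-generation` tag suggests, and that the default generation settings are sensible for it.

```python
from transformers import pipeline

# Assumes the RoBERTa-based checkpoint was saved with a causal-LM head (per its text-generation tag).
generator = pipeline("text-generation", model="balrogbob/MiniBOB")

print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```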
MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T02:18:15Z
60
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "lcw99/zephykor-ko-beta-7b-chang", "ko", "en", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T02:09:24Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - lcw99/zephykor-ko-beta-7b-chang - ko - en - autotrain_compatible - endpoints_compatible - region:us - license:apache-2.0 model_name: zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T02:00:41Z
57
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "CobraMamba/mamba-gpt-7b-v1", "pytorch", "gpt", "llm", "large language model", "en", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T01:51:51Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - CobraMamba/mamba-gpt-7b-v1 - pytorch - gpt - llm - large language model - en - license:apache-2.0 - autotrain_compatible - region:us - endpoints_compatible model_name: mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
Crystalcareai/NeuralPipe-7B-slerp
Crystalcareai
2024-01-27T01:56:12Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "region:us" ]
null
2024-01-27T01:56:12Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Crystalcareai/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
shamalshaikh/LunarLander
shamalshaikh
2024-01-27T01:54:39Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T01:54:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 247.17 +/- 21.84 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
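The usage section above is still a TODO, so here is a hedged sketch of the usual Stable-Baselines3 loading pattern for a hub-hosted PPO agent. The checkpoint filename is an assumption (check the repo's Files & versions tab for the actual `.zip` name), and gymnasium's Box2D extra must be installed for LunarLander-v2.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repo's Files & versions tab for the real one.
checkpoint = load_from_hub(repo_id="shamalshaikh/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```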
phaeeza/whisper-small-ha
phaeeza
2024-01-27T01:47:22Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_1", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-26T23:34:55Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - common_voice_16_1 metrics: - wer model-index: - name: whisper-small-ha results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_16_1 type: common_voice_16_1 config: ha split: test args: ha metrics: - name: Wer type: wer value: 44.116527037319116 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-ha This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_16_1 dataset. It achieves the following results on the evaluation set: - Loss: 0.7536 - Wer Ortho: 47.2867 - Wer: 44.1165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0713 | 3.18 | 500 | 0.6989 | 49.6836 | 46.2300 | | 0.0145 | 6.37 | 1000 | 0.7536 | 47.2867 | 44.1165 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
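No inference example is given above, so here is a minimal, hedged sketch using the 🤗 Transformers ASR pipeline; the audio path is a placeholder, and for long recordings you may want to add `chunk_length_s=30`.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="phaeeza/whisper-small-ha")

# "audio.wav" is a placeholder path to a Hausa speech recording.
result = asr("audio.wav")
print(result["text"])
```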
not-lain/whisper-small-dv
not-lain
2024-01-27T01:47:16Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-26T22:20:18Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-small-dv results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.27390791027154666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-dv This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.5904 - Wer Ortho: 0.267119 - Wer: 0.273908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0002 | 17.24 | 500 | 0.5904 | 0.267119 | 0.273908 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
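As with the card above, no inference snippet is provided; this sketch transcribes one sample from the same MINDS-14 split the model was trained on. It assumes default pipeline behaviour and is only meant as a starting point.

```python
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="not-lain/whisper-small-dv")

# Grab a single en-US sample from MINDS-14 and resample it to 16 kHz for Whisper.
sample = load_dataset("PolyAI/minds14", "en-US", split="train[:1]").cast_column(
    "audio", Audio(sampling_rate=16_000)
)[0]["audio"]

print(asr({"array": sample["array"], "sampling_rate": sample["sampling_rate"]})["text"])
```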
andrewatef/MyBloggerV0.20-GGUF
andrewatef
2024-01-27T01:47:00Z
2
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "en", "base_model:unsloth/zephyr-sft-bnb-4bit", "base_model:quantized:unsloth/zephyr-sft-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T01:23:05Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/zephyr-sft-bnb-4bit --- # Uploaded model - **Developed by:** andrewatef - **License:** apache-2.0 - **Finetuned from model:** unsloth/zephyr-sft-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
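The card stops at the training note, so here is a hedged sketch for running the GGUF export with llama-cpp-python. Both the filename and the prompt format are assumptions (the base is a Zephyr SFT model, so a chat-style template is likely); check the repo's Files tab and experiment before relying on it.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The GGUF filename is an assumption -- list the repo files to find the real one.
model_path = hf_hub_download("andrewatef/MyBloggerV0.20-GGUF", "unsloth.Q4_K_M.gguf")

llm = Llama(model_path=model_path, n_ctx=4096)

# Zephyr-style prompt format is assumed here.
prompt = "<|user|>\nWrite a two-sentence blog intro about coffee.</s>\n<|assistant|>\n"
print(llm(prompt, max_tokens=128, stop=["</s>"])["choices"][0]["text"])
```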
MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T01:45:00Z
54
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "OpenBuddy/openbuddy-zephyr-7b-v14.1", "pytorch", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T01:36:15Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - OpenBuddy/openbuddy-zephyr-7b-v14.1 - pytorch - zh - en - fr - de - ja - ko - it - ru - license:apache-2.0 - autotrain_compatible - region:us - endpoints_compatible model_name: openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
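Following the guides linked above, here is a minimal, unofficial sketch of pointing LangChain's `LlamaCpp` wrapper at the GGUF file from the earlier examples. Treat the import path, parameter names and the ChatML-style prompt as assumptions to verify against the linked LangChain guide and the versions you have installed.

```python
# Minimal sketch (not an official example): LangChain + llama-cpp-python with the GGUF file above.
# Older LangChain versions expose the same class as `from langchain.llms import LlamaCpp`.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # file downloaded earlier
    n_ctx=32768,      # max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

# Prompt built with the same ChatML-style template shown earlier in this card.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite one sentence about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm.invoke(prompt))  # on older LangChain versions, call `llm(prompt)` instead
```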
sqiangcao/sd-class-butterflies-64
sqiangcao
2024-01-27T01:43:38Z
45
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-01-27T01:42:39Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('sqiangcao/sd-class-butterflies-64') image = pipeline().images[0] image ```
MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T01:29:05Z
47
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "bofenghuang/vigostral-7b-chat", "pytorch", "LLM", "finetuned", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T01:20:22Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - bofenghuang/vigostral-7b-chat - pytorch - LLM - finetuned - fr - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
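## Serving an OpenAI-compatible endpoint

Beyond embedding the model in a Python process, llama-cpp-python also ships an optional OpenAI-compatible HTTP server. The sketch below is a rough, unofficial illustration; the extra `[server]` dependency, the CLI flags and the default host/port are assumptions to check against the llama-cpp-python documentation.

```shell
# Rough sketch: serve the GGUF file behind an OpenAI-compatible endpoint.
pip install 'llama-cpp-python[server]'

python3 -m llama_cpp.server \
  --model ./vigostral-7b-chat-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf \
  --n_gpu_layers 35 \
  --n_ctx 32768

# Query it like any OpenAI-style chat endpoint (localhost:8000 is the assumed default):
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Présente-toi en une phrase."}]}'
```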
ntc-ai/SDXL-LoRA-slider.cinematic-lighting
ntc-ai
2024-01-27T01:28:52Z
76
9
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-27T01:28:49Z
--- language: - en thumbnail: "images/evaluate/cinematic lighting.../cinematic lighting_17_3.0.png" widget: - text: cinematic lighting output: url: images/cinematic lighting_17_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_19_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_20_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_21_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "cinematic lighting" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - cinematic lighting (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/cinematic lighting_17_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting_19_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting_20_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` cinematic lighting ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cinematic-lighting', weight_name='cinematic lighting.safetensors', adapter_name="cinematic lighting") # Activate the LoRA pipe.set_adapters(["cinematic lighting"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, cinematic lighting" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
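## Adjusting the slider strength

The Strength column in the comparison table above corresponds to the adapter weight passed to `set_adapters`. As a small, hedged continuation of the diffusers example above (reusing the `pipe`, `prompt` and other variables defined there), a negative weight pushes generations away from the concept, roughly matching the "Strength: -3" column; exact results will vary.

```python
# Continuation of the diffusers example above (pipe, prompt, negative_prompt, width, height,
# guidance_scale and num_inference_steps are defined there). A negative adapter weight
# corresponds to the negative end of the slider shown in the comparison table.
pipe.set_adapters(["cinematic lighting"], adapter_weights=[-3.0])

image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
             guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result_negative.png')
```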
akjindal53244/Arithmo-Mistral-7B
akjindal53244
2024-01-27T01:15:58Z
1,541
61
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "Mathematical Reasoning", "en", "dataset:akjindal53244/Arithmo-Data", "arxiv:2309.12284", "arxiv:2309.05653", "arxiv:2210.17517", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-14T23:56:46Z
--- license: apache-2.0 language: - en tags: - Mathematical Reasoning datasets: - akjindal53244/Arithmo-Data --- ## [January 2024] New Model Release: Arithmo2-Mistral-7B **Arithmo2-Mistral-7B** model improves initially released Arithmo-Mistral-7B model on both GSM8K and MATH benchmarks. Specifically, there is **absolute** improvement of: - +1.7% on GSM8K - +3.0% on GSM8K PoT - +1.9% on MATH <b>Note</b>: <span style="color:red"><b>It is recommended to use Arithmo2-Mistral-7B model</b></span>. Here is the [merged model](https://huggingface.co/upaya07/Arithmo2-Mistral-7B) and corresponding [LoRA Adapter](https://huggingface.co/upaya07/Arithmo2-Mistral-7B-adapter). # Model Card for Model ID [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](CODE_LICENSE) [![Model Weight License](https://img.shields.io/badge/Model%20Weights%20License-Apache_2.0-green.svg)](LICENSE) [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/) **P.S.:** Please reach out to [Ashvini Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/) if you would be interested in supporting compute need. We are looking for small-scale support so we'd appreciate any kind of help! :) ## Model Details Arithmo-Mistral-7B is trained to reason and answer mathematical problems and is also capable of writing a Python program that upon execution prints answer to the question. We used [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model and used **QLoRA to fine-tune it on a single RTX 4090 GPU**. ### Model Description - **Project GitHub Page:** https://github.com/akjindal53244/Arithmo-Mistral-7B - **Developed by:** [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/) - **Funded by:** self-work - **Model type:** fine-tuned - **Language(s) (NLP):** English - **Finetuned from model:** mistralai/Mistral-7B-v0.1 ## Results Arithmo-Mistral-7B outperforms existing 7B and 13B state-of-the-art Mathematical Reasoning models. Refer to [Comparing Arithmo-Mistral-7B with other LLM models](https://github.com/akjindal53244/Arithmo-Mistral-7B/tree/master#comparing-arithmo-mistral-7b-with-other-llm-models) section for more details. <table> <thead> <tr> <th>Prompt Approach</th> <th>GSM8k</th> <th>MATH</th> </tr> </thead> <tbody> <tr> <td>Zero-Shot CoT</td> <td><b>74.7</b></td> <td><b>25.3</b></td> </tr> <tr> <td>Zero-Shot PoT</td> <td><b>71.2</b></td> <td>-</td> </tr> </tbody> </table> - **Zero-Shot CoT**: On providing a question as prompt, model generates reasoning steps to solve the question along with answer. We check if answer matches with ground-truth. - **Zero-Shot PoT**: We prompt the model to generate a Python program for the given question. During inference, we execute the Python program generated by the model and check if the program output matches with ground-truth answer. ## Installation ``` pip install transformers >=4.34.0 pip install accelerate pip install sentencepiece pip install protobuf # If you are GPU poor like me pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu # If you have a GPU. pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118 pip install scipy pip install bitsandbytes ``` ## How to query the model ``` # Set `run_model_on_gpu` to `False` if you are running on CPU. Model will generate reasoning steps with answer for your question. 
If you want to generate Python program, uncomment line-69 that adds a Python prompt. # This script automatically does formatting for you, so you just need to type question (eg: `What is 2+2?`) without any prefix like `Question:`, etc.** $ python query_model.py ``` **Note:** Above script automatically does formatting for you, so you just need to type question (eg: `What is 2+2?`) without any prefix like `Question:`, etc. Checkout [query_model.py](https://github.com/akjindal53244/Arithmo-Mistral-7B/blob/master/query_model.py) for more details. <br><br> ##### Sample Input: ``` Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need? ``` ##### Model Output: ``` Answer: The total number of apples needed is the sum of the first 10 positive integers. This can be calculated using the formula for the sum of an arithmetic series: \[S = \frac{n}{2}(a_1 + a_n),\] where $S$ is the sum, $n$ is the number of terms, $a_1$ is the first term, and $a_n$ is the last term. In this case, $n = 10$, $a_1 = 1$, and $a_n = 10$. Plugging these values into the formula, we get: \[S = \frac{10}{2}(1 + 10) = 5(11) = \boxed{55}.\] The answer is: 55 ``` Arithmo-Mistral-7B is trained with the following format: #### CoT Format (generate reasoning steps with answer): ``` Question: <question> Answer: ``` #### PoT Format (generate a python program): ``` Question: <question> <python_prompt> Answer: ``` It will perform best if queried in this way with your own script. ## Comparing Arithmo-Mistral-7B with other LLM models. Results for all models except `Arithmo-Mistral-7B` are taken from [MetaMath](https://github.com/meta-math/MetaMath/blob/main/README.MD) repository. | Model | GSM8k Pass@1 | MATH Pass@1 | |---------------------|--------------|-------------| | MPT-7B | 6.8 | 3.0 | | Falcon-7B | 6.8 | 2.3 | | LLaMA-1-7B | 11.0 | 2.9 | | LLaMA-2-7B | 14.6 | 2.5 | | MPT-30B | 15.2 | 3.1 | | LLaMA-1-13B | 17.8 | 3.9 | | GPT-Neo-2.7B | 19.5 | -- | | Falcon-40B | 19.6 | 2.5 | | Baichuan-chat-13B | 23.9 | -- | | Vicuna-v1.3-13B | 27.6 | -- | | LLaMA-2-13B | 28.7 | 3.9 | | InternLM-7B | 31.2 | -- | | ChatGLM-2-6B | 32.4 | -- | | GPT-J-6B | 34.9 | -- | | LLaMA-1-33B | 35.6 | 3.9 | | LLaMA-2-34B | 42.2 | 6.24 | | RFT-7B | 50.3 | -- | | LLaMA-1-65B | 50.9 | 10.6 | | Qwen-7B | 51.6 | -- | | WizardMath-7B | 54.9 | 10.7 | | LLaMA-2-70B | 56.8 | 13.5 | | WizardMath-13B | 63.9 | 14.0 | | MetaMath-7B | 66.5 | 19.8 | | MetaMath-13B | 72.3 | 22.4 | | 🔥 **Arithmo-Mistral-7B Zero-Shot PoT** | **71.2** | -- | | 🔥 **Arithmo-Mistral-7B Zero-Shot CoT** | **74.7** | **25.3** | | WizardMath-70B | **81.6** | 22.7 | | MetaMath-70B | **82.3** | **26.6** | If you are interested in reproducing the resullts, visit https://github.com/akjindal53244/Arithmo-Mistral-7B#reproducing-results section. ### Support My Work Building LLMs takes time and resources; if you find my work interesting, your support would be epic! 
<a href="https://www.buymeacoffee.com/a_little_learner" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> ### Citation To cite Arithmo models: ``` @misc{jindal_2023_arithmo, author = {Jindal, Ashvini}, title = {Arithmo-Mistral-7B: Mathematical Reasoning Model}, howpublished = {Hugging Face}, month = {October}, year = {2023}, url = {https://huggingface.co/akjindal53244/Arithmo-Mistral-7B} } ``` <h2 id="References">References</h2> ``` @article{yu2023metamath, title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models}, author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang}, journal={arXiv preprint arXiv:2309.12284}, year={2023} } @article{Yue2023mammoth, title={MAmmoTH: Building math generalist models through hybrid instruction tuning}, author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen}, journal={arXiv preprint arXiv:2309.05653}, year={2023} } @article{mishra2022lila, title={Lila: A unified benchmark for mathematical reasoning}, author={Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan}, journal={arXiv preprint arXiv:2210.17517}, year={2022} } ```
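## Querying with transformers directly

`query_model.py` remains the reference way to query the model; the snippet below is only a rough sketch of the CoT prompt format described above using the standard transformers API. The generation settings are illustrative assumptions, not the authors' exact values.

```python
# Rough sketch of the "Question: ... Answer:" CoT format from this card, via plain transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akjindal53244/Arithmo-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Question: What is 2+2?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```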
afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF
afrideva
2024-01-27T01:10:17Z
5
0
null
[ "gguf", "ggml", "quantized", "q2_k", "q3_k_xs", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "base_model:NickyNicky/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3", "base_model:quantized:NickyNicky/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3", "region:us" ]
text-generation
2024-01-27T01:06:52Z
--- base_model: NickyNicky/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3 inference: false model_creator: NickyNicky model_name: LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3 pipeline_tag: text-generation quantized_by: afrideva tags: - gguf - ggml - quantized - q2_k - q3_k_xs - q3_k_m - q4_k_m - q5_k_m - q6_k - q8_0 --- # NickyNicky/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF Quantized GGUF model files for [LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3](https://huggingface.co/NickyNicky/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3) from [NickyNicky](https://huggingface.co/NickyNicky) | Name | Quant method | Size | | ---- | ---- | ---- | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.fp16.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.fp16.gguf) | fp16 | 496.82 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q2_k.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q2_k.gguf) | q2_k | 105.46 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q3_k_xs.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q3_k_xs.gguf) | q3_k_xs | 115.32 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q3_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q3_k_m.gguf) | q3_k_m | 129.03 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q4_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q4_k_m.gguf) | q4_k_m | 155.67 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q5_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q5_k_m.gguf) | q5_k_m | 179.23 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q6_k.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q6_k.gguf) | q6_k | 204.26 MB | | [locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q8_0.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct_oasst2_chatML_V1_DPO_V3-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct_oasst2_chatml_v1_dpo_v3.q8_0.gguf) | q8_0 | 264.33 MB | ## Original Model Card:
ambrosfitz/neural-history-chat-v1.5_q8
ambrosfitz
2024-01-27T01:05:11Z
2
0
null
[ "gguf", "text-generation", "en", "dataset:ambrosfitz/mighty-history-merge", "dataset:ambrosfitz/textbook-openstax-yawp-merge", "license:cc", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T00:47:19Z
--- license: cc datasets: - ambrosfitz/mighty-history-merge - ambrosfitz/textbook-openstax-yawp-merge language: - en pipeline_tag: text-generation ---
asun17904/imdb-t5-base-a1b2
asun17904
2024-01-27T01:01:24Z
1
0
pytorch
[ "pytorch", "t5", "en", "license:mit", "region:us" ]
null
2024-01-25T19:10:33Z
--- language: en license: mit library_name: pytorch --- # Knowledge Continuity Regularized Network Trainer Hyperparameters: - `lr` = 5e-05 - `per_device_batch_size` = 8 - `gradient_accumulation_steps` = 2 - `weight_decay` = 1e-09 - `seed` = 42 Regularization Hyperparameters - `numerical stability denominator constant` = 0.01 - `lambda` = 0.02 - `alpha` = 1.0 - `beta` = 2.0 Extended Logs: |eval_loss|eval_accuracy|epoch| |--|--|--| |11.994|0.936|1.0| |11.877|0.941|2.0| |11.963|0.939|3.0| |11.879|0.941|4.0| |11.783|0.945|5.0| |11.723|0.946|6.0| |11.880|0.941|7.0| |11.739|0.946|8.0| |11.663|0.948|9.0| |11.671|0.948|10.0| |11.658|0.948|11.0| |11.637|0.949|12.0| |11.687|0.948|13.0| |11.685|0.948|14.0| |11.804|0.944|15.0| |11.721|0.947|16.0| |11.607|0.950|17.0| |11.713|0.947|18.0| |11.683|0.948|19.0| |11.658|0.949|20.0| |11.636|0.949|21.0| |11.700|0.947|22.0| |11.601|0.950|23.0| |11.821|0.944|24.0| |11.587|0.951|25.0| |11.623|0.950|26.0| |11.640|0.949|27.0| |11.627|0.950|28.0| |11.621|0.950|29.0| |11.651|0.949|30.0| |11.597|0.951|31.0| |11.650|0.949|32.0| |11.651|0.949|33.0| |11.642|0.949|34.0| |11.637|0.949|35.0| |11.593|0.951|36.0| |11.615|0.950|37.0| |11.594|0.951|38.0| |11.646|0.949|39.0| |11.631|0.950|40.0| |11.635|0.949|41.0| |11.605|0.950|42.0| |11.629|0.949|43.0| |11.638|0.949|44.0| |11.586|0.951|45.0| |11.591|0.951|46.0| |11.609|0.950|47.0| |11.607|0.950|48.0| |11.604|0.950|49.0|
andrewatef/MyBloggerV0.20
andrewatef
2024-01-27T01:00:40Z
4
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/zephyr-sft-bnb-4bit", "base_model:quantized:unsloth/zephyr-sft-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-27T00:33:39Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/zephyr-sft-bnb-4bit --- # Uploaded model - **Developed by:** andrewatef - **License:** apache-2.0 - **Finetuned from model :** unsloth/zephyr-sft-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
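## Usage

The card above only records provenance, so here is a rough, unverified loading sketch using the standard transformers API. The repository stores a bitsandbytes 4-bit checkpoint, so `bitsandbytes`, `accelerate` and a CUDA GPU are assumed.

```python
# Rough sketch (assumes bitsandbytes + accelerate installed and a CUDA GPU); not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrewatef/MyBloggerV0.20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short blog introduction about coffee.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```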
Hafeez7000/Talk-ai-small
Hafeez7000
2024-01-27T00:56:14Z
91
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-27T21:16:10Z
--- license: apache-2.0 pipeline_tag: text-generation ---
asun17904/imdb-gpt2-a2b1l1.0
asun17904
2024-01-27T00:56:06Z
1
0
pytorch
[ "pytorch", "gpt2", "en", "license:mit", "region:us" ]
null
2024-01-26T06:21:36Z
--- language: en license: mit library_name: pytorch --- # Knowledge Continuity Regularized Network Trainer Hyperparameters: - `lr` = 5e-05 - `per_device_batch_size` = 8 - `gradient_accumulation_steps` = 2 - `weight_decay` = 1e-09 - `seed` = 42 Regularization Hyperparameters - `numerical stability denominator constant` = 0.01 - `lambda` = 1.0 - `alpha` = 2.0 - `beta` = 1.0 Extended Logs: |eval_loss|eval_accuracy|epoch| |--|--|--| |7.389|0.892|1.0| |6.753|0.925|2.0| |6.567|0.927|3.0| |6.641|0.926|4.0| |6.818|0.928|5.0| |6.492|0.934|6.0| |6.386|0.932|7.0| |6.869|0.934|8.0| |6.730|0.928|9.0| |6.381|0.927|10.0| |6.308|0.935|11.0| |6.305|0.930|12.0| |6.592|0.931|13.0| |6.415|0.934|14.0| |6.477|0.921|15.0| |6.270|0.933|16.0| |6.311|0.932|17.0| |6.255|0.936|18.0| |6.412|0.935|19.0| |6.298|0.934|20.0| |6.390|0.938|21.0| |6.478|0.933|22.0| |6.534|0.936|23.0| |6.919|0.936|24.0| |6.374|0.933|25.0| |6.357|0.937|26.0| |6.519|0.933|27.0| |6.533|0.926|28.0| |6.357|0.934|29.0| |6.197|0.934|30.0| |6.516|0.934|31.0| |6.393|0.933|32.0| |6.099|0.935|33.0| |6.154|0.936|34.0| |6.296|0.935|35.0| |6.204|0.934|36.0| |6.188|0.938|37.0| |6.428|0.939|38.0| |6.397|0.939|39.0| |6.414|0.940|40.0| |6.283|0.939|41.0| |6.351|0.938|42.0| |6.249|0.940|43.0| |6.205|0.940|44.0| |6.125|0.939|45.0| |6.194|0.941|46.0| |6.147|0.941|47.0| |6.219|0.940|48.0| |6.179|0.941|49.0|
mikolaj-mialkowski/q-Taxi-v3
mikolaj-mialkowski
2024-01-27T00:55:40Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T00:44:34Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 382.85 +/- 221.57
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper used in the Hugging Face Deep RL course notebooks
# (it downloads and unpickles q-learning.pkl from the Hub).
model = load_from_hub(repo_id="mikolaj-mialkowski/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T00:54:39Z
62
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "CobraMamba/mamba-gpt-7b-v2", "pytorch", "gpt", "llm", "large language model", "en", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T00:45:58Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - CobraMamba/mamba-gpt-7b-v2 - pytorch - gpt - llm - large language model - en - license:apache-2.0 - autotrain_compatible - region:us - endpoints_compatible model_name: mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
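## Downloading from Python

The download section above uses the `huggingface-cli`; the same can be done programmatically with `huggingface_hub`. This is a minimal sketch: the filename follows the card's own examples, so double-check it against the repository's file list before relying on it.

```python
# Programmatic equivalent of the huggingface-cli download shown above
# (filename taken from this card's examples).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF",
    filename="mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)  # pass this path to llama.cpp / llama-cpp-python as in the examples above
```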
callum-canavan/whisper-small-ga-1
callum-canavan
2024-01-27T00:52:44Z
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-27T00:52:16Z
--- language: - ga license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 model-index: - name: Whisper Small Ga 1 - Callum Canavan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ga 1 - Callum Canavan This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
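## Example inference

As a minimal, hedged sketch (the audio path is a placeholder and long-form/chunking options are left at their defaults), the fine-tuned checkpoint can be used through the transformers ASR pipeline:

```python
# Minimal sketch: transcribe an Irish-language audio file with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="callum-canavan/whisper-small-ga-1")
result = asr("audio.wav")  # placeholder path to a local audio file
print(result["text"])
```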
Hafeez7000/Talk_ai_
Hafeez7000
2024-01-27T00:52:31Z
92
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T22:08:46Z
--- license: apache-2.0 pipeline_tag: text-generation ---
MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T00:23:44Z
64
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Azure99/blossom-v3-mistral-7b", "pytorch", "zh", "en", "dataset:Azure99/blossom-chat-v1", "dataset:Azure99/blossom-math-v2", "dataset:Azure99/blossom-wizard-v1", "dataset:Azure99/blossom-orca-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T00:11:43Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Azure99/blossom-v3-mistral-7b - pytorch - zh - en - dataset:Azure99/blossom-chat-v1 - dataset:Azure99/blossom-math-v2 - dataset:Azure99/blossom-wizard-v1 - dataset:Azure99/blossom-orca-v1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
Hafeez7000/talkAI
Hafeez7000
2024-01-27T00:22:53Z
90
1
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T23:19:14Z
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: talkAI
  results: []
pipeline_tag: conversational
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# talkAI

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1
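Since the sections above are placeholders, a minimal usage sketch may help. It assumes the checkpoint loads as a standard GPT-2 causal language model through `transformers`; the prompt and sampling settings are illustrative choices, not values documented in this card.

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub.
chat = pipeline("text-generation", model="Hafeez7000/talkAI")

# Illustrative prompt and sampling settings (assumptions, not documented defaults).
result = chat("Hello! How are you today?", max_new_tokens=50, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```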
mimicheng/zephyr-7b-sft-qlora-1ep-25jan
mimicheng
2024-01-27T00:10:40Z
20
0
peft
[ "peft", "safetensors", "mixtral", "dpo-experiment", "generated_from_trainer", "trl", "sft", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-01-26T02:19:25Z
---
license: apache-2.0
library_name: peft
tags:
- dpo-experiment
- generated_from_trainer
- trl
- sft
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: zephyr-7b-sft-qlora-1ep-25jan
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-sft-qlora-1ep-25jan

This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0           | 1.0   | 17428 | nan             |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
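Because the card lists `peft` as the library and `mistralai/Mixtral-8x7B-v0.1` as the base, the repository most likely holds a LoRA adapter rather than full weights. A hedged loading sketch, assuming a QLoRA-style 4-bit setup (generation settings and exact memory requirements are not documented here):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "mimicheng/zephyr-7b-sft-qlora-1ep-25jan"

# Load the base model in 4-bit to mirror the QLoRA training setup (still needs substantial GPU memory).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the fine-tuned adapter weights on top of the frozen base.
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```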
MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T00:05:18Z
54
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "ignos/Mistral-T5-7B-v1", "pytorch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T23:56:28Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - ignos/Mistral-T5-7B-v1 - pytorch - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Mistral-T5-7B-v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
Bytes512/Queen
Bytes512
2024-01-26T23:57:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:01-ai/Yi-34B-200K", "base_model:merge:01-ai/Yi-34B-200K", "base_model:NousResearch/Nous-Hermes-2-Yi-34B", "base_model:merge:NousResearch/Nous-Hermes-2-Yi-34B", "base_model:abacusai/Smaug-34B-v0.1", "base_model:merge:abacusai/Smaug-34B-v0.1", "base_model:jondurbin/bagel-34b-v0.2", "base_model:merge:jondurbin/bagel-34b-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T23:47:18Z
---
base_model:
- abacusai/Smaugv0.1
- NousResearch/Nous-Hermes-2-Yi-34B
- jondurbin/bagel-34b-v0.2
- 01-ai/Yi-34B-200K
tags:
- mergekit
- merge
---

# queen

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) as a base.

### Models Merged

The following models were included in the merge:
* [abacusai/Smaugv0.1](https://huggingface.co/abacusai/Smaugv0.1)
* [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
* [jondurbin/bagel-34b-v0.2](https://huggingface.co/jondurbin/bagel-34b-v0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: 01-ai/Yi-34B-200K
    # No parameters necessary for base model
  - model: abacusai/Smaugv0.1
    parameters:
      density: 0.53
      weight: 0.3
  - model: jondurbin/bagel-34b-v0.2
    parameters:
      density: 0.53
      weight: 0.3
  - model: NousResearch/Nous-Hermes-2-Yi-34B
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: 01-ai/Yi-34B-200K
parameters:
  int8_mask: true
dtype: bfloat16
```
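A configuration like this is normally executed with mergekit's `mergekit-yaml` command, which writes the merged weights to an output directory. The sketch below instead loads the published result of the merge with `transformers`; the dtype, device, and prompt are illustrative assumptions, not settings documented in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Bytes512/Queen"  # the published result of the merge described above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# A 34B model in bfloat16 needs roughly 70 GB of accelerator memory; device_map spreads it across GPUs.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```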
MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T23:50:36Z
57
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "WizardLM/WizardMath-7B-V1.1", "pytorch", "en", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "region:us", "license:apache-2.0", "endpoints_compatible", "base_model:MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T23:39:09Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - WizardLM/WizardMath-7B-V1.1 - pytorch - en - arxiv:2304.12244 - arxiv:2306.08568 - arxiv:2308.09583 - autotrain_compatible - region:us - license:apache-2.0 - endpoints_compatible model_name: WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
varun-v-rao/t5-large-bn-adapter-6.34M-snli
varun-v-rao
2024-01-26T23:49:09Z
0
0
adapter-transformers
[ "adapter-transformers", "tensorboard", "t5", "dataset:snli", "region:us" ]
null
2024-01-26T07:06:06Z
---
tags:
- adapter-transformers
- t5
datasets:
- snli
---

# Adapter `varun-v-rao/t5-large-bn-adapter-6.34M-snli` for t5-large

An [adapter](https://adapterhub.ml) for the `t5-large` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset.

This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.

## Usage

First, install `adapters`:

```
pip install -U adapters
```

Now, the adapter can be loaded and activated like this:

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("t5-large")
adapter_name = model.load_adapter("varun-v-rao/t5-large-bn-adapter-6.34M-snli", source="hf", set_active=True)
```

## Architecture & Training

<!-- Add some description here -->

## Evaluation results

<!-- Add some description here -->

## Citation

<!-- Add some description here -->
izh97/zephyr-climate-change-assistant
izh97
2024-01-26T23:35:16Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
2024-01-24T16:49:10Z
---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: zephyr-climate-change-assistant
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# zephyr-climate-change-assistant

This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 17190
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
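The base model here is a GPTQ-quantized Zephyr checkpoint, so inference presumably means loading that quantized base and attaching this repository as a PEFT adapter. A heavily hedged sketch: it assumes the repo stores adapter weights and that `optimum` and `auto-gptq` are installed, none of which is documented in the card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/zephyr-7B-alpha-GPTQ"
adapter_id = "izh97/zephyr-climate-change-assistant"

# Loading a GPTQ repo through transformers requires the optimum and auto-gptq packages.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # assumes this repo stores PEFT adapter weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
```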
yihang7/phi-2-dpo-full-hydrox-safe
yihang7
2024-01-26T23:28:09Z
37
0
transformers
[ "transformers", "tensorboard", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T18:44:24Z
--- license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-2-dpo-full-hydrox-safe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-dpo-full-hydrox-safe This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0050 - Rewards/chosen: -0.0996 - Rewards/rejected: -19.3989 - Rewards/accuracies: 0.9992 - Rewards/margins: 19.2993 - Logps/rejected: -644.0859 - Logps/chosen: -275.4780 - Logits/rejected: 0.2021 - Logits/chosen: -0.3196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6535 | 0.03 | 100 | 0.6404 | 0.0912 | -0.0433 | 0.6936 | 0.1345 | -450.5297 | -273.5692 | 0.6266 | 0.1056 | | 0.4293 | 0.07 | 200 | 0.4021 | 0.4957 | -0.4032 | 0.8998 | 0.8989 | -454.1287 | -269.5248 | 0.6425 | 0.1325 | | 0.2685 | 0.1 | 300 | 0.2464 | 0.8729 | -1.0624 | 0.9386 | 1.9354 | -460.7211 | -265.7526 | 0.6700 | 0.1533 | | 0.1918 | 0.14 | 400 | 0.1792 | 1.1981 | -1.7756 | 0.9453 | 2.9737 | -467.8528 | -262.5003 | 0.6918 | 0.1719 | | 0.1409 | 0.17 | 500 | 0.1423 | 1.4079 | -2.4222 | 0.9503 | 3.8301 | -474.3184 | -260.4028 | 0.7139 | 0.1772 | | 0.1437 | 0.2 | 600 | 0.1232 | 1.6054 | -3.1009 | 0.9537 | 4.7063 | -481.1060 | -258.4279 | 0.7179 | 0.1571 | | 0.0747 | 0.24 | 700 | 0.1057 | 1.7190 | -3.9721 | 0.9613 | 5.6911 | -489.8179 | -257.2914 | 0.7179 | 0.1325 | | 0.0848 | 0.27 | 800 | 0.0863 | 1.8540 | -4.3289 | 0.9689 | 6.1829 | -493.3858 | -255.9415 | 0.7047 | 0.1024 | | 0.0853 | 0.31 | 900 | 0.0744 | 1.8328 | -5.0270 | 0.9722 | 6.8598 | -500.3666 | -256.1535 | 0.6746 | 0.0529 | | 0.0561 | 0.34 | 1000 | 0.0639 | 1.7736 | -5.4555 | 0.9747 | 7.2291 | -504.6516 | -256.7455 | 0.6517 | 0.0185 | | 0.0657 | 0.37 | 1100 | 0.0546 | 1.7966 | -5.6453 | 0.9840 | 7.4419 | -506.5500 | -256.5157 | 0.6738 | 0.0168 | | 0.0445 | 0.41 | 1200 | 0.0499 | 1.7245 | -6.0645 | 0.9823 | 7.7889 | -510.7414 | -257.2369 | 0.6757 | 0.0176 | | 0.0498 | 0.44 | 1300 | 0.0459 | 1.6804 | -6.6799 | 0.9848 | 8.3603 | -516.8956 | -257.6772 | 0.6719 | 0.0228 | | 0.0554 | 0.48 | 1400 | 0.0549 | 1.5994 | -6.8731 | 0.9848 | 8.4726 | -518.8279 | -258.4872 | 0.6684 | 0.0151 | | 0.0368 | 0.51 | 1500 | 0.0365 | 1.3551 | -7.8016 | 0.9865 | 9.1567 | -528.1130 | -260.9311 | 0.6464 | -0.0223 | | 0.0514 | 0.54 | 1600 | 0.0333 | 1.3431 | -7.5785 | 0.9891 | 8.9215 | -525.8817 | -261.0511 | 0.6617 | -0.0159 | | 
0.0365 | 0.58 | 1700 | 0.0310 | 1.0743 | -8.8130 | 0.9933 | 9.8873 | -538.2264 | -263.7386 | 0.6354 | -0.0234 | | 0.0214 | 0.61 | 1800 | 0.0300 | 1.1310 | -9.1633 | 0.9924 | 10.2942 | -541.7292 | -263.1721 | 0.6278 | -0.0199 | | 0.0321 | 0.65 | 1900 | 0.0267 | 0.9514 | -9.9884 | 0.9924 | 10.9399 | -549.9811 | -264.9675 | 0.6044 | -0.0268 | | 0.0239 | 0.68 | 2000 | 0.0249 | 0.9624 | -10.0608 | 0.9933 | 11.0232 | -550.7047 | -264.8574 | 0.5955 | -0.0628 | | 0.0484 | 0.71 | 2100 | 0.0222 | 0.9176 | -11.0294 | 0.9941 | 11.9470 | -560.3909 | -265.3055 | 0.6043 | -0.0338 | | 0.0239 | 0.75 | 2200 | 0.0209 | 0.8247 | -10.5070 | 0.9941 | 11.3317 | -555.1663 | -266.2345 | 0.5913 | -0.0648 | | 0.0219 | 0.78 | 2300 | 0.0221 | 0.7334 | -10.5131 | 0.9924 | 11.2465 | -555.2272 | -267.1474 | 0.5942 | -0.0808 | | 0.0168 | 0.82 | 2400 | 0.0200 | 0.8909 | -11.0818 | 0.9966 | 11.9727 | -560.9146 | -265.5724 | 0.6006 | -0.0770 | | 0.0135 | 0.85 | 2500 | 0.0187 | 0.7273 | -11.3130 | 0.9958 | 12.0403 | -563.2269 | -267.2084 | 0.5948 | -0.0711 | | 0.0211 | 0.88 | 2600 | 0.0190 | 0.5390 | -11.8492 | 0.9949 | 12.3883 | -568.5890 | -269.0913 | 0.5818 | -0.0719 | | 0.0209 | 0.92 | 2700 | 0.0170 | 0.4957 | -12.3176 | 0.9949 | 12.8133 | -573.2725 | -269.5249 | 0.5537 | -0.0912 | | 0.0168 | 0.95 | 2800 | 0.0175 | 0.5850 | -12.2737 | 0.9958 | 12.8587 | -572.8340 | -268.6321 | 0.5471 | -0.0870 | | 0.0129 | 0.99 | 2900 | 0.0171 | 0.5691 | -12.7785 | 0.9949 | 13.3476 | -577.8817 | -268.7910 | 0.5441 | -0.0810 | | 0.0134 | 1.02 | 3000 | 0.0188 | 0.3713 | -13.0254 | 0.9966 | 13.3966 | -580.3504 | -270.7691 | 0.5261 | -0.1163 | | 0.0127 | 1.05 | 3100 | 0.0155 | 0.1769 | -14.3013 | 0.9958 | 14.4782 | -593.1094 | -272.7126 | 0.4928 | -0.1129 | | 0.0072 | 1.09 | 3200 | 0.0147 | 0.2474 | -13.6499 | 0.9975 | 13.8973 | -586.5952 | -272.0074 | 0.4753 | -0.1479 | | 0.0068 | 1.12 | 3300 | 0.0179 | 0.2168 | -14.6122 | 0.9958 | 14.8291 | -596.2192 | -272.3131 | 0.4722 | -0.1430 | | 0.0323 | 1.16 | 3400 | 0.0218 | 0.2536 | -15.9479 | 0.9958 | 16.2015 | -609.5762 | -271.9459 | 0.4159 | -0.1430 | | 0.0232 | 1.19 | 3500 | 0.0171 | 0.4273 | -14.2304 | 0.9958 | 14.6577 | -592.4003 | -270.2084 | 0.4598 | -0.1410 | | 0.0112 | 1.22 | 3600 | 0.0165 | 0.3634 | -15.1069 | 0.9966 | 15.4704 | -601.1661 | -270.8472 | 0.4366 | -0.1521 | | 0.0136 | 1.26 | 3700 | 0.0176 | 0.2922 | -14.4784 | 0.9966 | 14.7706 | -594.8809 | -271.5597 | 0.4553 | -0.1679 | | 0.0089 | 1.29 | 3800 | 0.0155 | 0.2349 | -14.8970 | 0.9966 | 15.1319 | -599.0669 | -272.1331 | 0.4447 | -0.1721 | | 0.01 | 1.33 | 3900 | 0.0123 | 0.1309 | -15.3620 | 0.9975 | 15.4930 | -603.7170 | -273.1724 | 0.4180 | -0.1834 | | 0.0055 | 1.36 | 4000 | 0.0132 | 0.0227 | -16.5037 | 0.9975 | 16.5265 | -615.1339 | -274.2542 | 0.3836 | -0.1899 | | 0.004 | 1.39 | 4100 | 0.0122 | 0.2637 | -15.2111 | 0.9975 | 15.4748 | -602.2075 | -271.8445 | 0.4087 | -0.1870 | | 0.0057 | 1.43 | 4200 | 0.0117 | 0.1593 | -16.5411 | 0.9975 | 16.7003 | -615.5074 | -272.8891 | 0.4086 | -0.1518 | | 0.0041 | 1.46 | 4300 | 0.0113 | -0.0815 | -17.4813 | 0.9975 | 17.3998 | -624.9092 | -275.2963 | 0.3722 | -0.1782 | | 0.0039 | 1.5 | 4400 | 0.0115 | 0.2051 | -17.4309 | 0.9983 | 17.6359 | -624.4056 | -272.4310 | 0.3445 | -0.1975 | | 0.0075 | 1.53 | 4500 | 0.0098 | 0.2810 | -15.5168 | 0.9992 | 15.7978 | -605.2650 | -271.6719 | 0.4021 | -0.1980 | | 0.0034 | 1.56 | 4600 | 0.0123 | 0.2926 | -15.4764 | 0.9992 | 15.7690 | -604.8610 | -271.5560 | 0.4225 | -0.1766 | | 0.0066 | 1.6 | 4700 | 0.0125 | 0.3192 | -14.8325 | 0.9992 | 15.1517 | -598.4222 | 
-271.2900 | 0.4068 | -0.2026 | | 0.0079 | 1.63 | 4800 | 0.0124 | 0.1405 | -15.1594 | 0.9992 | 15.2999 | -601.6904 | -273.0765 | 0.3942 | -0.2089 | | 0.0063 | 1.67 | 4900 | 0.0108 | -0.1407 | -17.4715 | 0.9975 | 17.3308 | -624.8116 | -275.8886 | 0.3294 | -0.2140 | | 0.0026 | 1.7 | 5000 | 0.0116 | -0.0520 | -17.9554 | 0.9992 | 17.9034 | -629.6504 | -275.0013 | 0.3361 | -0.2062 | | 0.0151 | 1.73 | 5100 | 0.0112 | 0.5727 | -14.3495 | 0.9975 | 14.9221 | -593.5914 | -268.7550 | 0.4234 | -0.1897 | | 0.0049 | 1.77 | 5200 | 0.0083 | 0.2709 | -15.7593 | 0.9975 | 16.0301 | -607.6893 | -271.7730 | 0.3788 | -0.2054 | | 0.0036 | 1.8 | 5300 | 0.0093 | 0.2336 | -16.3789 | 0.9992 | 16.6125 | -613.8856 | -272.1456 | 0.3798 | -0.1961 | | 0.0076 | 1.84 | 5400 | 0.0086 | 0.2618 | -16.1654 | 0.9992 | 16.4272 | -611.7511 | -271.8639 | 0.3844 | -0.2090 | | 0.0028 | 1.87 | 5500 | 0.0079 | 0.3444 | -15.7817 | 0.9983 | 16.1261 | -607.9135 | -271.0378 | 0.3874 | -0.2135 | | 0.0102 | 1.9 | 5600 | 0.0081 | 0.3276 | -15.7265 | 0.9983 | 16.0541 | -607.3622 | -271.2061 | 0.3713 | -0.2243 | | 0.0051 | 1.94 | 5700 | 0.0072 | 0.3979 | -15.8670 | 0.9983 | 16.2649 | -608.7670 | -270.5030 | 0.3562 | -0.2429 | | 0.0068 | 1.97 | 5800 | 0.0069 | 0.2817 | -16.7894 | 0.9992 | 17.0711 | -617.9911 | -271.6647 | 0.3329 | -0.2424 | | 0.0021 | 2.01 | 5900 | 0.0061 | 0.0725 | -17.3567 | 0.9992 | 17.4292 | -623.6634 | -273.7564 | 0.3142 | -0.2479 | | 0.0031 | 2.04 | 6000 | 0.0066 | -0.0179 | -18.1905 | 0.9983 | 18.1726 | -632.0013 | -274.6606 | 0.2948 | -0.2445 | | 0.0032 | 2.07 | 6100 | 0.0064 | 0.0140 | -17.8441 | 0.9992 | 17.8581 | -628.5374 | -274.3414 | 0.3043 | -0.2424 | | 0.0043 | 2.11 | 6200 | 0.0066 | 0.0216 | -18.5824 | 0.9983 | 18.6041 | -635.9210 | -274.2653 | 0.2863 | -0.2416 | | 0.0033 | 2.14 | 6300 | 0.0070 | 0.0355 | -18.3245 | 0.9983 | 18.3600 | -633.3419 | -274.1270 | 0.2783 | -0.2691 | | 0.0036 | 2.18 | 6400 | 0.0065 | 0.1473 | -18.2292 | 0.9983 | 18.3765 | -632.3882 | -273.0083 | 0.2772 | -0.2744 | | 0.0023 | 2.21 | 6500 | 0.0064 | 0.0450 | -17.9052 | 0.9992 | 17.9502 | -629.1483 | -274.0311 | 0.2750 | -0.2756 | | 0.0025 | 2.24 | 6600 | 0.0061 | -0.0642 | -17.4265 | 0.9983 | 17.3623 | -624.3619 | -275.1237 | 0.2773 | -0.2881 | | 0.0008 | 2.28 | 6700 | 0.0058 | 0.0951 | -17.3710 | 0.9992 | 17.4661 | -623.8071 | -273.5309 | 0.2703 | -0.2891 | | 0.002 | 2.31 | 6800 | 0.0057 | -0.0533 | -18.2744 | 0.9992 | 18.2212 | -632.8410 | -275.0142 | 0.2544 | -0.2823 | | 0.0052 | 2.35 | 6900 | 0.0073 | 0.0152 | -18.6276 | 0.9983 | 18.6427 | -636.3723 | -274.3301 | 0.2365 | -0.2928 | | 0.0029 | 2.38 | 7000 | 0.0056 | 0.0321 | -18.4600 | 0.9992 | 18.4921 | -634.6968 | -274.1606 | 0.2366 | -0.2889 | | 0.0013 | 2.41 | 7100 | 0.0058 | -0.1304 | -19.1740 | 0.9983 | 19.0436 | -641.8367 | -275.7859 | 0.2306 | -0.2898 | | 0.0011 | 2.45 | 7200 | 0.0059 | -0.0977 | -19.2699 | 0.9992 | 19.1722 | -642.7958 | -275.4590 | 0.2227 | -0.2916 | | 0.0058 | 2.48 | 7300 | 0.0059 | -0.0938 | -18.8625 | 0.9992 | 18.7687 | -638.7216 | -275.4195 | 0.2314 | -0.2945 | | 0.004 | 2.52 | 7400 | 0.0067 | -0.1457 | -19.2264 | 0.9992 | 19.0807 | -642.3610 | -275.9389 | 0.2194 | -0.2959 | | 0.0042 | 2.55 | 7500 | 0.0056 | -0.0100 | -18.6009 | 0.9992 | 18.5909 | -636.1058 | -274.5812 | 0.2398 | -0.3003 | | 0.0033 | 2.58 | 7600 | 0.0057 | -0.1484 | -19.3114 | 0.9992 | 19.1630 | -643.2104 | -275.9655 | 0.2215 | -0.2965 | | 0.0012 | 2.62 | 7700 | 0.0051 | -0.1574 | -19.2398 | 0.9992 | 19.0824 | -642.4946 | -276.0552 | 0.2210 | -0.3052 | | 0.002 | 2.65 | 7800 | 0.0052 | 
-0.1220 | -19.5223 | 0.9992 | 19.4004 | -645.3201 | -275.7011 | 0.2102 | -0.3165 | | 0.0053 | 2.69 | 7900 | 0.0051 | -0.1063 | -19.6889 | 0.9992 | 19.5826 | -646.9856 | -275.5444 | 0.2106 | -0.3074 | | 0.0013 | 2.72 | 8000 | 0.0056 | -0.1357 | -19.8339 | 0.9992 | 19.6982 | -648.4356 | -275.8386 | 0.2029 | -0.3129 | | 0.002 | 2.75 | 8100 | 0.0050 | -0.1024 | -19.6476 | 0.9992 | 19.5452 | -646.5726 | -275.5056 | 0.1993 | -0.3168 | | 0.0016 | 2.79 | 8200 | 0.0052 | -0.0378 | -19.4058 | 0.9992 | 19.3680 | -644.1544 | -274.8596 | 0.2152 | -0.3120 | | 0.0015 | 2.82 | 8300 | 0.0049 | -0.0549 | -19.2544 | 0.9992 | 19.1994 | -642.6404 | -275.0310 | 0.2093 | -0.3168 | | 0.003 | 2.86 | 8400 | 0.0051 | -0.1218 | -19.2614 | 0.9983 | 19.1396 | -642.7106 | -275.7000 | 0.2057 | -0.3182 | | 0.0019 | 2.89 | 8500 | 0.0050 | -0.1057 | -19.3041 | 0.9992 | 19.1983 | -643.1373 | -275.5387 | 0.2077 | -0.3151 | | 0.0039 | 2.92 | 8600 | 0.0052 | -0.0863 | -19.2826 | 0.9992 | 19.1963 | -642.9229 | -275.3445 | 0.2057 | -0.3208 | | 0.0042 | 2.96 | 8700 | 0.0048 | -0.1111 | -19.3837 | 0.9992 | 19.2726 | -643.9339 | -275.5926 | 0.2008 | -0.3215 | | 0.0023 | 2.99 | 8800 | 0.0050 | -0.1085 | -19.4076 | 0.9992 | 19.2990 | -644.1723 | -275.5668 | 0.1985 | -0.3228 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
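For completeness, a minimal inference sketch. It assumes the repository stores full fine-tuned weights in the phi-2 format, which still required `trust_remote_code=True` with the Transformers version listed above; the prompt and dtype are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "yihang7/phi-2-dpo-full-hydrox-safe"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("How can I stay safe during a thunderstorm?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```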
Professor/CGIAR
Professor
2024-01-26T23:27:59Z
27
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease", "base_model:finetune:gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-12T14:13:08Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease
model-index:
- name: CGIAR
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# CGIAR

This model is a fine-tuned version of [gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease](https://huggingface.co/gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7888
- Accuracy: 0.6571

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0123        | 1.0   | 652  | 0.8818          | 0.6178   |
| 0.8619        | 2.0   | 1304 | 0.8398          | 0.6346   |
| 0.8324        | 3.0   | 1956 | 0.8233          | 0.6366   |
| 0.7872        | 4.0   | 2608 | 0.7888          | 0.6571   |

### Framework versions

- Transformers 4.37.1
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
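Since the intended-use sections are empty, a minimal inference sketch may help. It assumes the checkpoint loads as a standard `transformers` image-classification model, as the tags indicate; the image path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub.
classifier = pipeline("image-classification", model="Professor/CGIAR")

# Placeholder path - replace with a real crop/leaf image.
predictions = classifier("example_leaf.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```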
Santhoshi29/Checkpoints_prisma
Santhoshi29
2024-01-26T23:15:23Z
0
0
null
[ "region:us" ]
null
2024-01-20T17:41:42Z
## CIFAR-10 Results

In our CIFAR-10 experiment, we assessed the performance of Mice ViTs on a more complex and diverse dataset, CIFAR-10, training them to classify its 10 classes.

## Training Details

For each attention layer setting, we explored two model variants: an attention-only model and a model combining attention with the MLP module. Dropout and layer normalization were not applied for simplicity.

| Size   | NumLayers | Attention Only | Attention+MLP |
|--------|-----------|----------------|---------------|
| tiny   | 1         | 0.43           | 0.479         |
| base   | 2         | 0.47           | 0.54          |
| small  | 3         | 0.47           | 0.51          |
| medium | 4         | 0.47           | 0.52          |
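To make the two variants concrete, here is a minimal PyTorch sketch of a single block in the attention-only versus attention+MLP configurations described above. The block name, embedding size, head count, and residual wiring are illustrative assumptions; the actual Mice ViT implementation is not shown in this card.

```python
import torch
import torch.nn as nn

class MiceViTBlock(nn.Module):
    """One transformer block; use_mlp=False gives the attention-only variant."""
    def __init__(self, dim: int, n_heads: int, use_mlp: bool = True):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = (
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            if use_mlp
            else None
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No dropout or layer norm, matching the simplification described above.
        attn_out, _ = self.attn(x, x, x)
        x = x + attn_out
        if self.mlp is not None:
            x = x + self.mlp(x)
        return x

# Illustrative shapes: batch of 2, 65 tokens (8x8 patches + [CLS]), embedding dim 128.
x = torch.randn(2, 65, 128)
print(MiceViTBlock(128, 4, use_mlp=False)(x).shape)  # torch.Size([2, 65, 128])
```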
hbpkillerX/phi-1_5-finetuned-gsm8k
hbpkillerX
2024-01-26T22:37:39Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:Open-Orca/oo-phi-1_5", "base_model:finetune:Open-Orca/oo-phi-1_5", "region:us" ]
null
2024-01-26T22:24:01Z
---
base_model: Open-Orca/oo-phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# phi-1_5-finetuned-gsm8k

This model is a fine-tuned version of [Open-Orca/oo-phi-1_5](https://huggingface.co/Open-Orca/oo-phi-1_5) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
Seher99/emotion-analysis-with-distilbert
Seher99
2024-01-26T22:29:20Z
47
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T22:19:43Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Seher99/emotion-analysis-with-distilbert
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Seher99/emotion-analysis-with-distilbert

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3838
- Validation Loss: 0.1765
- Train Accuracy: 0.933
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3838     | 0.1765          | 0.933          | 0     |

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.10.1
- Datasets 2.16.1
- Tokenizers 0.13.3
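A hedged usage sketch follows. It assumes the repository stores TensorFlow weights for a standard DistilBERT sequence-classification head; the tokenizer is taken from the base checkpoint in case the repo does not ship its own, and the label names depend on the emotion dataset used for fine-tuning, which this card does not identify.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo_id = "Seher99/emotion-analysis-with-distilbert"

# Fall back to the base tokenizer in case tokenizer files were not pushed with the model.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("I am so happy with these results!", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)

# The index-to-emotion mapping lives in the model config (id2label) if the author set it.
predicted_id = int(tf.argmax(probs, axis=-1)[0])
print(predicted_id, model.config.id2label.get(predicted_id, "unknown label"))
```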