Dataset columns (type and observed range):

| Column | Type | Range / Cardinality |
|:--------------|:-----------------------|:-------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-29 00:46:34 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 502 distinct values |
| tags | sequence of strings | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-29 00:44:25 |
| card | string | length 11 to 1.01M |
parrottygg/phi3v2
parrottygg
2024-11-01T12:15:28Z
35
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T12:11:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
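The "How to Get Started" section of the card above is empty. A minimal loading sketch, inferred only from the record's tags (`phi3`, `text-generation`, `custom_code`) and not from the model authors, might look like this:

```python
# Hedged sketch; trust_remote_code=True is inferred from the `custom_code` tag.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "parrottygg/phi3v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```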
rfajri/sentiment-indobert-v1
rfajri
2024-11-01T12:15:13Z
105
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-01T12:14:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
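This card likewise leaves "How to Get Started" empty. Given the record's `bert` / `text-classification` tags, a hedged pipeline sketch (the label set and expected language are undocumented; the Indonesian example sentence is only a guess from the model name) could be:

```python
# Hedged sketch based on the repo tags (bert, text-classification).
from transformers import pipeline

classifier = pipeline("text-classification", model="rfajri/sentiment-indobert-v1")
print(classifier("Produk ini sangat bagus!"))  # e.g. [{'label': ..., 'score': ...}]
```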
QuantFactory/SmolLM2-360M-GGUF
QuantFactory
2024-11-01T12:09:00Z
254
2
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-01T12:06:09Z
---
library_name: transformers
license: apache-2.0
language:
- en
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/SmolLM2-360M-GGUF

This is a quantized version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) created using llama.cpp.

# Original Model Card

# SmolLM2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/gWt7M-JN62oXRpO-nQGo_.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)

## Model Summary

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning.

The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).

### How to use

```bash
pip install transformers
```

#### Running the model on CPU/GPU/multi GPU

* _Using full precision_

```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "HuggingFaceTB/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

```python
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 723.56 MB
```

## Evaluation

In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.

## Base Pre-Trained Model

| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |

## Instruction Model

| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |

## Limitations

SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## Training

### Model

- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16

### Hardware

- **GPUs:** 64 H100

### Software

- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation

```bibtex
@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}
```
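The card above shows transformers usage for the original checkpoint, not for the GGUF files this repo actually ships. A sketch with the llama-cpp-python bindings, where the quant filename is an assumption (check the repo's file list), might be:

```python
# Hedged sketch; the .gguf filename below is hypothetical.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="QuantFactory/SmolLM2-360M-GGUF",
    filename="SmolLM2-360M.Q4_K_M.gguf",  # assumed name; pick any file in the repo
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Gravity is", max_tokens=64)["choices"][0]["text"])
```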
Hi-Q/krx_qwen_2-7b-it_1101
Hi-Q
2024-11-01T12:07:58Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "krx", "conversational", "en", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:finetune:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T10:32:03Z
---
base_model: unsloth/Qwen2-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Hi-Q
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
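The card gives no inference snippet. A hedged sketch, assuming the fine-tune keeps the chat template of its base model `unsloth/Qwen2-7B-Instruct`:

```python
# Hedged sketch; chat-template support is assumed from the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Hi-Q/krx_qwen_2-7b-it_1101"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

messages = [{"role": "user", "content": "Explain what a stock index is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```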
letuandat/tts-nnng-2410
letuandat
2024-11-01T12:04:49Z
103
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-31T16:25:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
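For this `vits` / `text-to-audio` checkpoint the card gives no usage code. A hedged sketch with transformers' VITS classes (the model's target language is undocumented, so the input string is a placeholder) could be:

```python
# Hedged sketch inferred from the repo tags (vits, text-to-audio).
import scipy.io.wavfile
import torch
from transformers import AutoTokenizer, VitsModel

checkpoint = "letuandat/tts-nnng-2410"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = VitsModel.from_pretrained(checkpoint)

# Replace with text in the model's (undocumented) target language.
inputs = tokenizer("Xin chào", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform[0]
scipy.io.wavfile.write("out.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```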
QuantFactory/SmolLM2-360M-Instruct-GGUF
QuantFactory
2024-11-01T12:03:38Z
246
3
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-01T12:00:52Z
---
library_name: transformers
license: apache-2.0
language:
- en
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/SmolLM2-360M-Instruct-GGUF

This is a quantized version of [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) created using llama.cpp.

# Original Model Card

# SmolLM2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/oWWfzW4RbWkVIo7f-5444.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)

## Model Summary

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning.

The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).

### How to use

### Transformers

```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

### Chat in TRL

You can also use the TRL CLI to chat with the model from the terminal:

```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
```

## Evaluation

In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.

## Base Pre-Trained Model

| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |

## Instruction Model

| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |

## Limitations

SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## Training

### Model

- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16

### Hardware

- **GPUs:** 64 H100

### Software

- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation

```bibtex
@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}
```
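As with the non-instruct repo above, the card documents the original model rather than the GGUF files. A hedged chat sketch via llama-cpp-python, with an assumed quant filename:

```python
# Hedged sketch; the .gguf filename below is hypothetical.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="QuantFactory/SmolLM2-360M-Instruct-GGUF",
    filename="SmolLM2-360M-Instruct.Q4_K_M.gguf",  # assumed name
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```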
johnatanebonilla/w_small_lv_70
johnatanebonilla
2024-11-01T12:01:32Z
85
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-30T03:26:56Z
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w_small_lv_70
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# w_small_lv_70

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6468
- Wer: 77.1230

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.7247 | 0.7184 | 1000 | 0.6818 | 77.6120 |
| 0.5041 | 1.4368 | 2000 | 0.6395 | 75.4202 |
| 0.3808 | 2.1552 | 3000 | 0.6313 | 85.2857 |
| 0.3595 | 2.8736 | 4000 | 0.6264 | 71.4611 |
| 0.2771 | 3.5920 | 5000 | 0.6468 | 77.1230 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
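The card omits a usage example; standard Whisper inference through a pipeline should apply (keep the roughly 77% evaluation WER in mind when judging outputs):

```python
# Hedged sketch; the target language/dataset of this Whisper fine-tune is undocumented.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="johnatanebonilla/w_small_lv_70")
print(asr("sample.wav")["text"])  # any short mono audio file
```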
mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF
mradermacher
2024-11-01T12:00:06Z
12
0
transformers
[ "transformers", "gguf", "en", "base_model:AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7", "base_model:quantized:AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-11-01T11:48:02Z
---
base_model: AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7
language:
- en
library_name: transformers
license: cc-by-sa-4.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q2_K.gguf) | Q2_K | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_S.gguf) | Q3_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.IQ4_XS.gguf) | IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_M.gguf) | Q4_K_M | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q5_K_S.gguf) | Q5_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q5_K_M.gguf) | Q5_K_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q6_K.gguf) | Q6_K | 5.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.f16.gguf) | f16 | 12.5 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
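As a usage note for the quant table above: one way to fetch a single file is via `huggingface_hub`, using a filename taken verbatim from the card's links; Q4_K_M is one of the card's "fast, recommended" options.

```python
# Downloads one quant from this repo; the filename comes from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF",
    filename="AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp or any GGUF-compatible runtime
```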
QuantFactory/SmolLM2-1.7B-Instruct-GGUF
QuantFactory
2024-11-01T11:57:57Z
52
3
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-01T11:48:52Z
---
library_name: transformers
license: apache-2.0
language:
- en
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/SmolLM2-1.7B-Instruct-GGUF

This is a quantized version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) created using llama.cpp.

# Original Model Card

# SmolLM2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/y45hIMNREW7w_XpHYB_0q.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Examples](#examples)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)

## Model Summary

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon.

We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).

### How to use

### Transformers

```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

### Chat in TRL

You can also use the TRL CLI to chat with the model from the terminal:

```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
```

## Evaluation

In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.

## Base Pre-Trained Model

| Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
|------------------|--------------|-------------|---------------|--------------|
| HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 |
| ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 |
| PIQA | **77.6** | 74.8 | 76.1 | 76.0 |
| MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 |
| CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 |
| TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 |
| Winogrande | **59.4** | 57.8 | 59.3 | 54.7 |
| OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** |
| GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 |

## Instruction Model

| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:|
| IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 |
| MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 |
| OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 |

## Examples

Below are some system and instruct prompts that work well for special tasks.

### Text rewriting

```python
system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"
messages = [{"role": "system", "content": system_prompt_rewrite},
            {"role": "user", "content": f"{user_prompt_rewrite} The CI is failing after your last commit!"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

```
Hey there! I noticed that the CI isn't passing after your latest commit. Could you take a look and let me know what's going on? Thanks so much for your help!
```

### Summarization

```python
system_prompt_summarize = "Provide a concise, objective summary of the input text in up to three sentences, focusing on key actions and intentions without using second or third person pronouns."
messages = [{"role": "system", "content": system_prompt_summarize},
            {"role": "user", "content": INSERT_LONG_EMAIL}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

### Function calling

SmolLM2-1.7B-Instruct can handle function calling; it scores 27% on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html). Here's how you can leverage it:

```python
import json
import re
from typing import Any, Optional

from jinja2 import Template
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import get_json_schema

system_prompt = Template("""You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out and refuse to answer.
If the given question lacks the parameters required by the function, also point it out.

You have access to the following tools:
<tools>{{ tools }}</tools>

The output MUST strictly adhere to the following format, and NO other text MUST be included.
The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make the tool calls an empty list '[]'.
<tool_call>[
{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
... (more tool calls as required)
]</tool_call>""")


def prepare_messages(
    query: str,
    tools: Optional[dict[str, Any]] = None,
    history: Optional[list[dict[str, str]]] = None
) -> list[dict[str, str]]:
    """Prepare the system and user messages for the given query and tools.

    Args:
        query: The query to be answered.
        tools: The tools available to the user. Defaults to None, in which case a list without content will be passed to the model.
        history: Exchange of messages, including the system_prompt from the first query. Defaults to None, the first message in a conversation.
    """
    if tools is None:
        tools = []
    if history:
        messages = history.copy()
        messages.append({"role": "user", "content": query})
    else:
        messages = [
            {"role": "system", "content": system_prompt.render(tools=json.dumps(tools))},
            {"role": "user", "content": query}
        ]
    return messages


def parse_response(text: str) -> str | list[dict[str, Any]]:
    """Parses a response from the model, returning either the parsed list of tool calls,
    or the model's thought or response if it couldn't generate one.

    Args:
        text: Response from the model.
    """
    pattern = r"<tool_call>(.*?)</tool_call>"
    matches = re.findall(pattern, text, re.DOTALL)
    if matches:
        return json.loads(matches[0])
    return text
```

## Limitations

SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## Training

### Model

- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16

### Hardware

- **GPUs:** 256 H100

### Software

- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
- **Alignment Handbook:** [alignment-handbook](https://github.com/huggingface/alignment-handbook/)

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation

```bibtex
@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}
```
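The card defines `prepare_messages` and `parse_response` but stops before a full round trip. A hypothetical driver, reusing the `model` and `tokenizer` loaded earlier in the card and an illustrative `get_weather` tool (not part of the original card), could look like:

```python
# Hypothetical driver for the helpers above; `get_weather` is illustrative only.
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    return f"Sunny in {city}"

tools = [get_json_schema(get_weather)]
messages = prepare_messages("What is the weather in Paris?", tools=tools)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
completion = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(parse_response(completion))
# expected shape: [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```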
THU-KEG/Llama3-Crab-DPO
THU-KEG
2024-11-01T11:49:36Z
7
2
null
[ "pytorch", "llama", "text-generation", "en", "arxiv:2410.24175", "license:apache-2.0", "region:us" ]
text-generation
2024-11-01T08:24:48Z
---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---

# Model Card for Llama3-Crab-DPO

<!-- Provide a quick summary of what the model is/does. -->

<p align="justify">
Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated by feeding complex instructions to advanced LLMs. However, even advanced LLMs cannot follow complex instructions well, thus limiting the quality of generated data. In this work, we find that <b><i>existing datasets inherently contain implicit complex constraints</i></b> and propose a novel data generation technique, <b><i>constraint back-translation</i></b>. Specifically, we take the high-quality instruction-response pairs in existing datasets and only adopt advanced LLMs to add complex constraints already met by the responses to the instructions, which naturally reduces costs and data noise. In the experiments, we adopt Llama3-70B-Instruct to back-translate constraints and create a high-quality complex instruction-response dataset, named <b>CRAB</b>. We show that post-training on <font face="Verdana">CRAB</font> improves multiple backbone LLMs' complex instruction-following ability, evaluated on extensive instruction-following benchmarks. We further find that constraint back-translation also serves as a useful auxiliary training objective in post-training.
</p>

- 📖 Paper: [Constraint Back-translation Improves Complex Instruction Following of Large Language Models](https://arxiv.org/abs/2410.24175)
- 🦀 GitHub: [THU/Crab](https://github.com/THU-KEG/Crab)

### Model Performance

| Models | Base Model | IFEval (AVG) | FollowBench HSR (L1-L2) | FollowBench HSR (L3-L5) | FollowBench HSR (AVG) | AVG |
|:-------------------|:----------|:------:|:-----:|:-----:|:-----:|:----:|
| GPT-3.5-turbo | GPT | 66.3 | 74.2 | 61 | 66.2 | 66.3 |
| GPT-4 | GPT | 81.3 | 80.4 | 69.4 | 73.8 | 77.6 |
| Vicuna-13b-V1.5 | Llama2 | 50.3 | 66.3 | 39.8 | 50.4 | 50.4 |
| WizardLM-13B-V1.2 | Llama2 | 51.4 | 56.5 | 36.9 | 44.7 | 48 |
| Conifer-13B | Llama2 | 50.2 | 57.1 | 40.3 | 47 | 48.6 |
| Zephyr-7B-beta | Mistral | 45.4 | 54.8 | 38.2 | 44.8 | 45.1 |
| Conifer-7B | Mistral | 53.9 | 51.9 | 40.2 | 44.9 | 49.4 |
| Conifer-7B-DPO | Mistral | 55.7 | 57 | 45.4 | 50 | 52.9 |
| Llama3 8B | Llama3 | 31.4 | 6.8 | 8.2 | 7.6 | 19.5 |
| Llama3-crab | Llama3 | 46.9 | 51.2 | 26.7 | 36.5 | 41.7 |
| Llama3-crab + DPO | Llama3 | 49.7 | 56.8 | 38.1 | 45.5 | 47.6 |
| Mistral 7B | Mistral | 25.2 | 15.5 | 6.5 | 10.1 | 17.7 |
| Mistral-crab | Mistral | 54.5 | 59.2 | 32.8 | 43.3 | 48.9 |
| Mistral-crab + DPO | Mistral | 59.4 | 59.9 | 42.5 | 49.4 | 54.4 |

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** Llama3-8B
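The card reports benchmark numbers but no inference snippet. A hedged sketch based on the record's `pytorch` / `llama` / `text-generation` tags (the expected prompt format is not documented by the authors):

```python
# Hedged sketch; standard causal-LM loading is assumed from the repo tags.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "THU-KEG/Llama3-Crab-DPO"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

prompt = "Write a two-sentence summary of constraint back-translation."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```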
parrottygg/phi3v1
parrottygg
2024-11-01T11:48:13Z
35
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T11:39:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rahulvk007/ExtractQueNumber
rahulvk007
2024-11-01T11:33:45Z
142
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/SmolLM2-360M", "base_model:finetune:unsloth/SmolLM2-360M", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T11:33:28Z
---
base_model: unsloth/SmolLM2-360M
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# Uploaded model

- **Developed by:** rahulvk007
- **License:** apache-2.0
- **Finetuned from model:** unsloth/SmolLM2-360M

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
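A hedged sketch for trying the checkpoint; the prompt format this fine-tune expects is not documented, so the input below is purely illustrative:

```python
# Hedged sketch; the expected prompt format is an assumption based on the repo name.
from transformers import pipeline

generator = pipeline("text-generation", model="rahulvk007/ExtractQueNumber")
print(generator("Extract the question number: Q12. What is 2 + 2?", max_new_tokens=16))
```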
razhan/trocr-base-ckb
razhan
2024-11-01T11:14:12Z
66
0
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "image-text-to-text", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-04-01T11:35:44Z
# Kurdish OCR

A Transformer-based OCR model trained on synthetic Central Kurdish data.
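A hedged inference sketch following the standard TrOCR flow suggested by the repo name and its `vision-encoder-decoder` / `image-text-to-text` tags:

```python
# Hedged sketch; assumes the repo ships a TrOCR-compatible processor.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

checkpoint = "razhan/trocr-base-ckb"
processor = TrOCRProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

image = Image.open("line.png").convert("RGB")  # an image of a single text line
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```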
Ariffiq99/Randomized_Roberta_Stacked_model_80
Ariffiq99
2024-11-01T11:14:00Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2024-11-01T09:10:23Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_80
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Randomized_Roberta_Stacked_model_80

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.8535
- F1: 0.7395

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.64 | 1.0 | 1261 | 0.7758 | 0.7327 |
| 0.5704 | 2.0 | 2522 | 0.7685 | 0.7408 |
| 0.5059 | 3.0 | 3783 | 0.8209 | 0.7401 |
| 0.4519 | 4.0 | 5044 | 0.8222 | 0.7381 |
| 0.4177 | 5.0 | 6305 | 0.8535 | 0.7395 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
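The card never states the task data, but the `multiple-choice` pipeline tag implies the standard multiple-choice inference pattern; a hedged sketch (the question/choice pairing format is an assumption):

```python
# Hedged sketch of standard multiple-choice inference with transformers.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

checkpoint = "Ariffiq99/Randomized_Roberta_Stacked_model_80"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint)

question = "The sky is"
choices = ["blue", "a vegetable"]
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
# The model expects inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(choices[logits.argmax(-1).item()])
```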
chendelong/DirectSAM-b0-1024px-sa1b-2ep-dsa-50ep-1101
chendelong
2024-11-01T11:06:34Z
35
0
transformers
[ "transformers", "safetensors", "segformer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-11-01T11:06:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/OH_original_wo_gpteacher
mlfoundations-dev
2024-11-01T10:59:02Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T06:12:40Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - llama-factory - full - generated_from_trainer model-index: - name: OH_original_wo_gpteacher results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OH_original_wo_gpteacher This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the mlfoundations-dev/OH_original_wo_gpteacher dataset. It achieves the following results on the evaluation set: - Loss: 0.6055 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 1738 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6194 | 1.0 | 334 | 0.6101 | | 0.5614 | 2.0 | 668 | 0.6015 | | 0.51 | 3.0 | 1002 | 0.6055 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.3.0 - Datasets 2.21.0 - Tokenizers 0.20.1
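Since the model is tagged `conversational`, a minimal generation sketch with the chat template is shown below, assuming the fine-tune saved a chat template with its tokenizer (the base Llama-3.1-8B alone does not define one). Running in bf16 with `device_map="auto"` requires `accelerate`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/OH_original_wo_gpteacher"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what an ablation study is in one sentence."}]
# Assumes the fine-tuned tokenizer defines a chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```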
hyobi18220/jam_krx_qwen2.5_v7
hyobi18220
2024-11-01T10:41:01Z
5
0
null
[ "safetensors", "qwen2", "krx", "en", "ko", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "region:us" ]
null
2024-11-01T10:20:20Z
--- language: - en - ko base_model: - unsloth/Qwen2.5-7B-Instruct tags: - krx ---
shastraai/Shastra-LLAMA2-Math-Commonsense-SLERP
shastraai
2024-11-01T10:23:48Z
5
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "shastraai/Shastra-LLAMA-Math-DPO", "shastraai/Shastra-LLAMA2-Commonsense-SFT", "base_model:shastraai/Shastra-LLAMA-Math-DPO", "base_model:merge:shastraai/Shastra-LLAMA-Math-DPO", "base_model:shastraai/Shastra-LLAMA2-Commonsense-SFT", "base_model:merge:shastraai/Shastra-LLAMA2-Commonsense-SFT", "region:us" ]
null
2024-11-01T10:20:25Z
--- base_model: - shastraai/Shastra-LLAMA-Math-DPO - shastraai/Shastra-LLAMA2-Commonsense-SFT tags: - merge - mergekit - lazymergekit - shastraai/Shastra-LLAMA-Math-DPO - shastraai/Shastra-LLAMA2-Commonsense-SFT --- # Shastra-LLAMA2-Math-Commonsense Shastra-LLAMA2-Math-Commonsense is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [shastraai/Shastra-LLAMA-Math-DPO](https://huggingface.co/shastraai/Shastra-LLAMA-Math-DPO) * [shastraai/Shastra-LLAMA2-Commonsense-SFT](https://huggingface.co/shastraai/Shastra-LLAMA2-Commonsense-SFT) ## ๐Ÿงฉ Configuration ```yaml slices: - sources: - model: shastraai/Shastra-LLAMA-Math-DPO layer_range: [0, 32] - model: shastraai/Shastra-LLAMA2-Commonsense-SFT layer_range: [0, 32] merge_method: slerp base_model: shastraai/Shastra-LLAMA-Math-DPO parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "shastraai/Shastra-LLAMA2-Math-Commonsense" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
reemali811/nucleotide-transformer-finetuned-NucleotideTransformer
reemali811
2024-11-01T10:17:25Z
162
0
transformers
[ "transformers", "safetensors", "esm", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-01T10:15:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/OH_original_wo_evol_instruct_140k
mlfoundations-dev
2024-11-01T10:13:37Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T05:52:23Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - llama-factory - full - generated_from_trainer model-index: - name: OH_original_wo_evol_instruct_140k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OH_original_wo_evol_instruct_140k This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the mlfoundations-dev/OH_original_wo_evol_instruct_140k dataset. It achieves the following results on the evaluation set: - Loss: 0.6121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 1738 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6185 | 0.9976 | 307 | 0.6178 | | 0.5652 | 1.9984 | 615 | 0.6080 | | 0.5197 | 2.9927 | 921 | 0.6121 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.3.0 - Datasets 2.21.0 - Tokenizers 0.20.1
sophiebui/en-ru_mtmodel_v1
sophiebui
2024-11-01T10:13:23Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:sophiebui/en-ru_mtmodel", "base_model:finetune:sophiebui/en-ru_mtmodel", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-01T09:49:27Z
--- library_name: transformers license: mit base_model: sophiebui/en-ru_mtmodel tags: - generated_from_trainer metrics: - bleu model-index: - name: en-ru_mtmodel_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-ru_mtmodel_v1 This model is a fine-tuned version of [sophiebui/en-ru_mtmodel](https://huggingface.co/sophiebui/en-ru_mtmodel) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8443 - Bleu: 44.9157 - Gen Len: 32.0811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 226 | 0.9394 | 37.9005 | 31.5405 | | No log | 2.0 | 452 | 0.8537 | 43.6072 | 32.3514 | | 0.935 | 3.0 | 678 | 0.8400 | 46.3652 | 31.8108 | | 0.935 | 4.0 | 904 | 0.8482 | 44.6002 | 31.973 | | 0.4432 | 5.0 | 1130 | 0.8443 | 44.9157 | 32.0811 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
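The underlying architecture is M2M100, which steers translation with language codes: set the source language on the tokenizer and force the target-language token at the start of generation. A sketch under that assumption (the card does not state the exact language codes used in fine-tuning):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "sophiebui/en-ru_mtmodel_v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

tokenizer.src_lang = "en"  # source-language code for encoding
inputs = tokenizer("The meeting has been moved to Friday.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force Russian as the first decoder token, as M2M100 expects.
    forced_bos_token_id=tokenizer.get_lang_id("ru"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```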
coastalcph/CLIPDetail-8311682
coastalcph
2024-11-01T10:10:52Z
148
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-11-01T10:10:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tuanpasg/Puffin-Qwen2.5-CodeMath-1
tuanpasg
2024-11-01T09:53:53Z
134
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen2.5-Coder-1.5B", "base_model:merge:Qwen/Qwen2.5-Coder-1.5B", "base_model:Qwen/Qwen2.5-Math-1.5B", "base_model:merge:Qwen/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T09:52:35Z
--- base_model: - Qwen/Qwen2.5-Coder-1.5B - Qwen/Qwen2.5-Math-1.5B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) * [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Qwen/Qwen2.5-Coder-1.5B - model: Qwen/Qwen2.5-Math-1.5B merge_method: slerp base_model: Qwen/Qwen2.5-Coder-1.5B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
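For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than linearly, with `t` setting the mix (the per-filter `t` lists above vary that mix with layer depth). A toy NumPy sketch of the formula, for illustration only — not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between two corresponding weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n.ravel(), b_n.ravel()), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two tensors
    if theta < eps:                   # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# t = 0.5 mixes the coder and math weights evenly.
w_coder, w_math = np.random.randn(4, 4), np.random.randn(4, 4)
print(slerp(0.5, w_coder, w_math))
```

At `t = 0` the merged tensor equals the base model's weights; at `t = 1` it equals the other model's.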
tuanpasg/Puffin-Qwen2.5-CodeMath
tuanpasg
2024-11-01T09:39:44Z
133
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen2.5-Coder-1.5B", "base_model:merge:Qwen/Qwen2.5-Coder-1.5B", "base_model:Qwen/Qwen2.5-Math-1.5B", "base_model:merge:Qwen/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T09:38:25Z
--- base_model: - Qwen/Qwen2.5-Coder-1.5B - Qwen/Qwen2.5-Math-1.5B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) * [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Qwen/Qwen2.5-Math-1.5B - model: Qwen/Qwen2.5-Coder-1.5B merge_method: slerp base_model: Qwen/Qwen2.5-Math-1.5B dtype: bfloat16 parameters: t: 0.5 ```
nuxper/DrBERT-7GB-finetuned-loinc
nuxper
2024-11-01T09:35:58Z
108
0
transformers
[ "transformers", "safetensors", "camembert", "text-classification", "generated_from_trainer", "base_model:Dr-BERT/DrBERT-7GB", "base_model:finetune:Dr-BERT/DrBERT-7GB", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-01T09:14:00Z
--- library_name: transformers license: apache-2.0 base_model: Dr-BERT/DrBERT-7GB tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: DrBERT-7GB-finetuned-loinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DrBERT-7GB-finetuned-loinc This model is a fine-tuned version of [Dr-BERT/DrBERT-7GB](https://huggingface.co/Dr-BERT/DrBERT-7GB) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6762 - Accuracy: 0.8519 - F1: 0.8516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 1268 | 0.6636 | 0.8410 | 0.8379 | | No log | 2.0 | 2536 | 0.6715 | 0.8401 | 0.8414 | | No log | 3.0 | 3804 | 0.6953 | 0.8538 | 0.8490 | | No log | 4.0 | 5072 | 0.6719 | 0.8522 | 0.8524 | | No log | 5.0 | 6340 | 0.6762 | 0.8519 | 0.8516 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.3.1+cxx11.abi - Datasets 3.0.1 - Tokenizers 0.20.0
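A minimal inference sketch with the `text-classification` pipeline is below. The LOINC label set is not documented on the card, so the returned labels depend on the uploaded config, and the sample input is invented:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nuxper/DrBERT-7GB-finetuned-loinc",
)
# Hypothetical French lab-report phrase; real inputs depend on the training data.
print(classifier("Dosage de la créatinine sérique"))
```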
mradermacher/Emot5-large-GGUF
mradermacher
2024-11-01T09:35:27Z
31
0
transformers
[ "transformers", "gguf", "en", "base_model:lzw1008/Emot5-large", "base_model:quantized:lzw1008/Emot5-large", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-11-01T09:31:08Z
--- base_model: lzw1008/Emot5-large language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/lzw1008/Emot5-large <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q5_K_S.gguf) | Q5_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q5_K_M.gguf) | Q5_K_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q6_K.gguf) | Q6_K | 0.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.f16.gguf) | f16 | 1.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
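To fetch one of these files programmatically, the standard `huggingface_hub` call works; this grabs the Q4_K_M quant flagged as "fast, recommended" in the table above:

```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/Emot5-large-GGUF",
    filename="Emot5-large.Q4_K_M.gguf",
)
print(path)  # pass this file path to your GGUF-capable runtime
```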
VTSNLP/trans_model_vi_en
VTSNLP
2024-11-01T09:30:58Z
5
1
null
[ "tensorboard", "safetensors", "t5", "generated_from_trainer", "base_model:VietAI/envit5-translation", "base_model:finetune:VietAI/envit5-translation", "license:openrail", "region:us" ]
null
2024-11-01T09:30:13Z
--- license: openrail base_model: VietAI/envit5-translation tags: - generated_from_trainer model-index: - name: trans_model_vi_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trans_model_vi_en This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 4 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
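The base envit5-translation model marks translation direction with a language prefix on the input string; assuming this fine-tune keeps that convention, a minimal sketch:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "VTSNLP/trans_model_vi_en"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# envit5-style prefix; "vi: " marks Vietnamese input to be translated.
text = "vi: Cuộc họp đã được dời sang thứ Sáu."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```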
GeneZC/MiniMA-2-3B
GeneZC
2024-11-01T09:22:35Z
1,760
17
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "zh", "dataset:EleutherAI/pile", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:p208p2002/wudao", "arxiv:2311.07052", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-27T03:36:23Z
--- language: - en - zh license: apache-2.0 library_name: transformers datasets: - EleutherAI/pile - togethercomputer/RedPajama-Data-1T - p208p2002/wudao widget: - text: <s> 4 + 3 = model-index: - name: MiniMA-2-3B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 44.71 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 69.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 41.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.44 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 8.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard --- ## MiniMA-2-3B 📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B) 🆕 **Updates from MiniMA-3B**: - continued from MiniMA-3B without distillation; - better data mixture; - more trained tokens. ❗ Must comply with the LICENSE of LLaMA-2 since it is derived from LLaMA-2. A language model continued from MiniMA-3B. Completing the compute-performance Pareto frontier together with MiniMA-3B and other prior models.
<img src="./teaser_a.jpg" alt="teaser_a" width="700" /> **Standard Benchmarks** |Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)| |--|--|--|--|--|--|--|--| |Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49| |ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56| |BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55| |StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99| |Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97| |Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42| |LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10| || |MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11| |MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72| |MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87| |MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13| The following is an example code snippet to use MiniMA-2-3B: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer # MiniMA tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-2-3B", use_fast=False) # GPU. model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval() # CPU. # model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval() prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:" input_ids = tokenizer([prompt]).input_ids output_ids = model.generate( torch.as_tensor(input_ids).cuda(), do_sample=True, temperature=0.7, max_new_tokens=1024, ) output_ids = output_ids[0][len(input_ids[0]):] output = tokenizer.decode(output_ids, skip_special_tokens=True).strip() # output: "No" ``` ## Bibtex ```bibtex @article{zhang2023law, title={Towards the Law of Capacity Gap in Distilling Language Models}, author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan}, year={2023}, url={https://arxiv.org/abs/2311.07052} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniMA-2-3B) | Metric |Value| |---------------------------------|----:| |Avg. |44.75| |AI2 Reasoning Challenge (25-Shot)|44.71| |HellaSwag (10-Shot) |69.33| |MMLU (5-Shot) |41.22| |TruthfulQA (0-shot) |38.44| |Winogrande (5-shot) |66.69| |GSM8k (5-shot) | 8.11|
minhdang/gte-base-law-matryoshka
minhdang
2024-11-01T09:20:11Z
5
1
sentence-transformers
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:107510", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Alibaba-NLP/gte-multilingual-base", "base_model:finetune:Alibaba-NLP/gte-multilingual-base", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-01T09:19:51Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:107510 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Alibaba-NLP/gte-multilingual-base widget: - source_sentence: '[''Chแป‰ ฤ‘แป‹nh thแบงu\n1. Chแป‰ ฤ‘แป‹nh thแบงu ฤ‘ฦฐแปฃc รกp dแปฅng trong cรกc trฦฐแปng hแปฃp sau ฤ‘รขy:\na) Gรณi thแบงu cแบฅp bรกch cแบงn triแปƒn khai nhแบฑm mแปฅc tiรชu bแบฃo vแป‡ chแปง quyแปn, an ninh quแป‘c gia; gรณi thแบงu cแบงn thแปฑc hiแป‡n ฤ‘แปƒ khแบฏc phแปฅc ngay hoแบทc ฤ‘แปƒ xแปญ lรฝ kแป‹p thแปi hแบญu quแบฃ gรขy ra do thiรชn tai, hแปa hoแบกn, tai nแบกn bแบฅt ngแป, sแปฑ cแป‘, thแบฃm hแปa hoแบทc sแปฑ kiแป‡n bแบฅt khแบฃ khรกng khรกc;\nb) Gรณi thแบงu cung cแบฅp dแป‹ch vแปฅ tฦฐ vแบฅn, phi tฦฐ vแบฅn, hร ng hรณa, xรขy lแบฏp cแบงn triแปƒn khai ngay ฤ‘แปƒ trรกnh gรขy nguy hแบกi ฤ‘แบฟn tรญnh mแบกng vร  tร i sแบฃn cแปงa cแป™ng ฤ‘แป“ng dรขn cฦฐ trรชn ฤ‘แป‹a bร n hoแบทc ฤ‘แปƒ khรดng แบฃnh hฦฐแปŸng nghiรชm trแปng ฤ‘แบฟn cรดng trรฌnh liแปn kแป;\nc) Gรณi thแบงu cung cแบฅp dแป‹ch vแปฅ tฦฐ vแบฅn, phi tฦฐ vแบฅn, thuแป‘c, hรณa chแบฅt, vแบญt tฦฐ xรฉt nghiแป‡m, thiแบฟt bแป‹ y tแบฟ, linh kiแป‡n, phแปฅ kiแป‡n, phฦฐฦกng tiแป‡n, xรขy lแบฏp cแบงn triแปƒn khai ngay ฤ‘แปƒ phแปฅc vแปฅ cรดng tรกc phรฒng, chแป‘ng dแป‹ch bแป‡nh hoแบทc duy trรฌ hoแบกt ฤ‘แป™ng cแปงa cฦก sแปŸ khรกm bแป‡nh, chแปฏa bแป‡nh trong trฦฐแปng hแปฃp cแบฅp bรกch, trรกnh gรขy nguy hแบกi ฤ‘แบฟn tรญnh mแบกng, sแปฉc khแปe ngฦฐแปi dรขn; gรณi thแบงu mua thuแป‘c, hรณa chแบฅt, vแบญt tฦฐ xรฉt nghiแป‡m, thiแบฟt bแป‹ y tแบฟ, linh kiแป‡n, phแปฅ kiแป‡n ฤ‘แปƒ cแบฅp cแปฉu ngฦฐแปi bแป‡nh trong tรฌnh trแบกng cแบฅp cแปฉu theo quy ฤ‘แป‹nh cแปงa Luแบญt Khรกm bแป‡nh, chแปฏa bแป‡nh trong trฦฐแปng hแปฃp cฦก sแปŸ khรกm bแป‡nh, chแปฏa bแป‡nh khรดng cรณ ฤ‘แปง thuแป‘c, hรณa chแบฅt, vแบญt tฦฐ xรฉt nghiแป‡m, thiแบฟt bแป‹ y tแบฟ, linh kiแป‡n, phแปฅ kiแป‡n; gรณi thแบงu mua thuแป‘c, thiแบฟt bแป‹ y tแบฟ chแป‰ cรณ duy nhแบฅt mแป™t hรฃng sแบฃn xuแบฅt trรชn thแป‹ trฦฐแปng;\nd) Gรณi thแบงu cแบงn thแปฑc hiแป‡n ฤ‘แปƒ bแบฃo vแป‡ bรญ mแบญt nhร  nฦฐแป›c;\n...'']' sentences: - Trong trฦฐแปng hแปฃp nร o thรฌ ngรขn sรกch trung ฦฐฦกng ฤ‘ฦฐแปฃc gia hแบกn khoแบฃn vay ngรขn quแปน nhร  nฦฐแป›c? - Hร nh vi trรฌnh diแป…n khiรชu dรขm trong cแบฅu thร nh tแป™i sแปญ dแปฅng ngฦฐแปi dฦฐแป›i 16 tuแป•i vร o mแปฅc ฤ‘รญch khiรชu dรขm lร  gรฌ? - Cho phรฉp chแป‰ ฤ‘แป‹nh thแบงu ฤ‘แปƒ mua thuแป‘c, thiแบฟt bแป‹ y tแบฟ trong trฦฐแปng hแปฃp khแบฉn cแบฅp? - source_sentence: "['\"1. Cuแป‘i mแป—i hแปc kแปณ chรญnh, sinh viรชn ฤ‘ฦฐแปฃc cแบฃnh bรกo hแปc tแบญp\ \ dแปฑa trรชn mแป™t sแป‘ ฤ‘iแปu kiแป‡n nhฦฐ sau:\\na) Tแป•ng sแป‘ tรญn chแป‰ khรดng ฤ‘แบกt trong hแปc\ \ kแปณ vฦฐแปฃt quรก 50% khแป‘i lฦฐแปฃng ฤ‘รฃ ฤ‘ฤƒng kรญ hแปc trong hแปc kแปณ, hoแบทc tแป•ng sแป‘ tรญn chแป‰\ \ nแปฃ ฤ‘แปng tแปซ ฤ‘แบงu khรณa hแปc vฦฐแปฃt quรก 24;\\nb) ฤiแปƒm trung bรฌnh hแปc kแปณ ฤ‘แบกt dฦฐแป›i 0,8\ \ ฤ‘แป‘i vแป›i hแปc kแปณ ฤ‘แบงu cแปงa khรณa hแปc, dฦฐแป›i 1,0 ฤ‘แป‘i vแป›i cรกc hแปc kแปณ tiแบฟp theo;\\nc)\ \ ฤiแปƒm trung bรฌnh tรญch lลฉy ฤ‘แบกt dฦฐแป›i 1,2 ฤ‘แป‘i vแป›i sinh viรชn trรฌnh ฤ‘แป™ nฤƒm thแปฉ nhแบฅt,\ \ dฦฐแป›i 1,4 ฤ‘แป‘i vแป›i sinh viรชn trรฌnh ฤ‘แป™ nฤƒm thแปฉ hai, dฦฐแป›i 1,6 ฤ‘แป‘i vแป›i sinh viรชn\ \ trรฌnh ฤ‘แป™ nฤƒm thแปฉ ba dฦฐแป›i 1,8 ฤ‘แป‘i vแป›i sinh viรชn cรกc nฤƒm tiแบฟp theo.\\n2. 
Sinh\ \ viรชn bแป‹ buแป™c thรดi hแปc trong cรกc trฦฐแปng hแปฃp sau:\\na) Sแป‘ lแบงn cแบฃnh bรกo hแปc tแบญp\ \ hoแบทc mแปฉc cแบฃnh bรกo hแปc tแบญp vฦฐแปฃt quรก giแป›i hแบกn theo quy ฤ‘แป‹nh cแปงa cฦก sแปŸ ฤ‘ร o tแบกo;\\\ nb) Thแปi gian hแปc tแบญp vฦฐแปฃt quรก giแป›i hแบกn theo quy ฤ‘แป‹nh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy\ \ chแบฟ nร y.\\n3. Quy chแบฟ cแปงa cฦก sแปŸ ฤ‘ร o tแบกo quy ฤ‘แป‹nh cแปฅ thแปƒ:\\na) Viแป‡c lแปฑa chแปn\ \ รกp dแปฅng mแป™t sแป‘ ฤ‘iแปu kiแป‡n cแบฃnh bรกo hแปc tแบญp, giแป›i hแบกn sแป‘ lแบงn hoแบทc mแปฉc cแบฃnh bรกo\ \ hแปc tแบญp nhฦฐng khรดng vฦฐแปฃt quรก 2 lแบงn cแบฃnh bรกo liรชn tiแบฟp;\\nb) Quy trรฌnh, thแปง tแปฅc\ \ cแบฃnh bรกo hแปc tแบญp, buแป™c thรดi hแปc; viแป‡c thรดng bรกo hรฌnh thแปฉc รกp dแปฅng tแป›i sinh viรชn;\\\ nc) Viแป‡c bแบฃo lฦฐu kแบฟt quแบฃ hแปc tแบญp ฤ‘รฃ tรญch luแปน trong trฦฐแปng hแปฃp sinh viรชn bแป‹ buแป™c\ \ thรดi hแปc.\"'\n '\"1. Cuแป‘i mแป—i nฤƒm hแปc, sinh viรชn ฤ‘ฦฐแปฃc ฤ‘รกnh giรก ฤ‘แบกt tiแบฟn ฤ‘แป™ hแปc\ \ tแบญp bรฌnh thฦฐแปng vร  ฤ‘ฦฐแปฃc hแปc tiแบฟp lรชn nฤƒm hแปc sau nแบฟu ฤ‘แบกt cแบฃ hai ฤ‘iแปu kiแป‡n sau:\\\ na) ฤiแปƒm trung bรฌnh nฤƒm hแปc ฤ‘แบกt tแปซ 1,0 trแปŸ lรชn ฤ‘แป‘i vแป›i nฤƒm hแปc thแปฉ nhแบฅt, tแปซ 1,2\ \ trแปŸ lรชn ฤ‘แป‘i vแป›i nฤƒm thแปฉ hai vร  tแปซ 1,4 ฤ‘แป‘i vแป›i nฤƒm thแปฉ ba trแปŸ ฤ‘i;\\nb) Sแป‘ tรญn\ \ chแป‰ nแปฃ ฤ‘แปng tแปซ ฤ‘แบงu khรณa khรดng vฦฐแปฃt quรก 16.\\n2. Sinh viรชn bแป‹ buแป™c thรดi hแปc trong\ \ cรกc trฦฐแปng hแปฃp sau:\\na) ฤiแปƒm trung bรฌnh nฤƒm hแปc ฤ‘แบกt dฦฐแป›i 0,8;\\nb) ฤiแปƒm trung\ \ bรฌnh tรญch lลฉy ฤ‘แบกt dฦฐแป›i 1,2 sau 2 nฤƒm hแปc, dฦฐแป›i 1,4 sau 3 nฤƒm hแปc vร  dฦฐแป›i 1,6\ \ tแปซ sau 4 nฤƒm hแปc trแปŸ ฤ‘i;\\nc) Thแปi gian hแปc tแบญp vฦฐแปฃt quรก giแป›i hแบกn theo quy ฤ‘แป‹nh\ \ tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y.\\n3. Sinh viรชn khรดng thuแป™c diแป‡n quy ฤ‘แป‹nh\ \ tแบกi khoแบฃn 1 vร  khoแบฃn 2 ฤiแปu nร y ฤ‘ฦฐแปฃc xแบฟp lแป›p hแปc cรนng khoรก sau ฤ‘แปƒ cแบฃi thiแป‡n\ \ kแบฟt quแบฃ hแปc tแบญp.\\n4. Quy chแบฟ cแปงa cฦก sแปŸ ฤ‘ร o tแบกo quy ฤ‘แป‹nh cแปฅ thแปƒ:\\na) Viแป‡c lแปฑa\ \ chแปn รกp dแปฅng mแป™t sแป‘ ฤ‘iแปu kiแป‡n cแบฃnh bรกo hแปc tแบญp tฦฐฦกng tแปฑ quy ฤ‘แป‹nh ฤ‘แป‘i vแป›i ฤ‘ร o\ \ tแบกo theo tรญn chแป‰ tแบกi khoแบฃn 1 ฤiแปu 11 cแปงa Quy chแบฟ nร y;\\nb) Quy trรฌnh, thแปง tแปฅc\ \ cแบฃnh bรกo hแปc tแบญp (nแบฟu cรณ), buแป™c thรดi hแปc; viแป‡c thรดng bรกo hรฌnh thแปฉc รกp dแปฅng tแป›i\ \ sinh viรชn;\\nc) Viแป‡c bแบฃo lฦฐu kแบฟt quแบฃ hแปc tแบญp ฤ‘รฃ tรญch luแปน trong trฦฐแปng hแปฃp sinh\ \ viรชn bแป‹ buแป™c thรดi hแปc.\"']" sentences: - Ngฦฐแปi lao ฤ‘แป™ng cรณ thแปi gian tham gia bแบฃo hiแปƒm xรฃ hแป™i bแบฏt buแป™c mร  tแปฑ tแปญ cรณ ฤ‘ฦฐแปฃc hฦฐแปŸng trแปฃ cแบฅp mai tรกng khรดng? - Giแบฅy chแปฉng nhแบญn sแปญ dแปฅng cรดng cแปฅ hแป— trแปฃ bแป‹ mแบฅt thรฌ trรฌnh tแปฑ, thแปง tแปฅc ฤ‘แป nghแป‹ cแบฅp lแบกi ฤ‘ฦฐแปฃc thแปฑc hiแป‡n nhฦฐ thแบฟ nร o? - Xแปญ lรฝ kแบฟt quแบฃ hแปc tแบญp theo tรญn chแป‰ vร  niรชn chแบฟ ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh nhฦฐ thแบฟ nร o? - source_sentence: '[''Chuyแปƒn ngร nh, chuyแปƒn nฦกi hแปc, chuyแปƒn cฦก sแปŸ ฤ‘ร o tแบกo, chuyแปƒn hรฌnh thแปฉc hแปc\n1. 
Sinh viรชn ฤ‘ฦฐแปฃc xem xรฉt chuyแปƒn sang hแปc mแป™t chฦฐฦกng trรฌnh, mแป™t ngร nh ฤ‘ร o tแบกo khรกc, hoแบทc mแป™t phรขn hiแป‡u khรกc cแปงa cฦก sแปŸ ฤ‘ร o tแบกo, hoแบทc tแปซ phรขn hiแป‡u vแป trแปฅ sแปŸ chรญnh khi cรณ ฤ‘แปง cรกc ฤ‘iแปu kiแป‡n sau:\na) Khรดng ฤ‘ang lร  sinh viรชn trรฌnh ฤ‘แป™ nฤƒm thแปฉ nhแบฅt hoแบทc nฤƒm cuแป‘i khรณa, khรดng thuแป™c diแป‡n bแป‹ xem xรฉt buแป™c thรดi hแปc vร  cรฒn ฤ‘แปง thแปi gian hแปc tแบญp theo quy ฤ‘แป‹nh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y;\nb) Sinh viรชn ฤ‘แบกt ฤ‘iแปu kiแป‡n trรบng tuyแปƒn cแปงa chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo, cแปงa trแปฅ sแปŸ chรญnh (hoแบทc phรขn hiแป‡u ) trong cรนng khรณa tuyแปƒn sinh;\nc) Cฦก sแปŸ ฤ‘ร o tแบกo, trแปฅ sแปŸ chรญnh (hoแบทc phรขn hiแป‡u) cรณ ฤ‘แปง cรกc ฤ‘iแปu kiแป‡n bแบฃo ฤ‘แบฃm chแบฅt lฦฐแปฃng, chฦฐa vฦฐแปฃt quรก nฤƒng lแปฑc ฤ‘ร o tแบกo ฤ‘แป‘i vแป›i chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo ฤ‘รณ theo quy ฤ‘แป‹nh hiแป‡n hร nh cแปงa Bแป™ Giรกo dแปฅc vร  ฤร o tแบกo;\nd) ฤฦฐแปฃc sแปฑ ฤ‘แป“ng รฝ cแปงa thแปง trฦฐแปŸng cรกc ฤ‘ฦกn vแป‹ chuyรชn mรดn phแปฅ trรกch chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo, ngฦฐแปi phแปฅ trรกch phรขn hiแป‡u (nฦกi chuyแปƒn ฤ‘i vร  chuyแบฟn ฤ‘แบฟn) vร  cแปงa hiแป‡u trฦฐแปŸng cฦก sแปŸ ฤ‘ร o tแบกo.\n2. Sinh viรชn ฤ‘ฦฐแปฃc xem xรฉt chuyแปƒn cฦก sแปŸ ฤ‘ร o tแบกo khi cรณ ฤ‘แปง cรกc ฤ‘iแปu kiแป‡n sau:\na) Khรดng ฤ‘ang lร  sinh viรชn trรฌnh ฤ‘แป™ nฤƒm thแปฉ nhแบฅt hoแบทc nฤƒm cuแป‘i khรณa, khรดng thuแป™c diแป‡n bแป‹ xem xรฉt buแป™c thรดi hแปc vร  cรฒn ฤ‘แปง thแปi gian hแปc tแบญp theo quy ฤ‘แป‹nh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y;\nb) Sinh viรชn ฤ‘แบกt ฤ‘iแปu kiแป‡n trรบng tuyแปƒn cแปงa chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo cรนng khรณa tuyแปƒn sinh tแบกi nฦกi chuyแปƒn ฤ‘แบฟn;\nc) Nฦกi chuyแปƒn ฤ‘แบฟn cรณ ฤ‘แปง cรกc ฤ‘iแปu kiแป‡n bแบฃo ฤ‘แบฃm chแบฅt lฦฐแปฃng, chฦฐa vฦฐแปฃt quรก nฤƒng lแปฑc ฤ‘ร o tแบกo ฤ‘แป‘i vแป›i chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo ฤ‘รณ theo quy ฤ‘แป‹nh hiแป‡n hร nh cแปงa Bแป™ Giรกo dแปฅc vร  ฤร o tแบกo;\nd) ฤฦฐแปฃc sแปฑ ฤ‘แป“ng รฝ cแปงa hiแป‡u trฦฐแปŸng cฦก sแปŸ ฤ‘ร o tแบกo xin chuyแปƒn ฤ‘i vร  cฦก sแปŸ ฤ‘ร o tแบกo xin chuyแปƒn ฤ‘แบฟn.\n3. Sinh viรชn ฤ‘ฦฐแปฃc xem xรฉt chuyแปƒn tแปซ ฤ‘ร o tแบกo theo hรฌnh thแปฉc chรญnh quy sang hรฌnh thแปฉc vแปซa lร m vแปซa hแปc hoแบทc ฤ‘ร o tแบกo tแปซ xa cแปงa cฦก sแปŸ ฤ‘ร o tแบกo nแบฟu cรฒn ฤ‘แปง thแปi gian hแปc tแบญp theo quy ฤ‘แป‹nh ฤ‘แป‘i vแป›i hรฌnh thแปฉc chuyแปƒn ฤ‘แบฟn.\n4. Quy chแบฟ cแปงa cฦก sแปŸ ฤ‘ร o tแบกo quy ฤ‘แป‹nh chi tiแบฟt thแบฉm quyแปn, ฤ‘iแปu kiแป‡n, thแปง tแปฅc chuyแปƒn chฦฐฦกng trรฌnh, ngร nh ฤ‘ร o tแบกo, chuyแปƒn nฦกi hแปc, chuyแปƒn cฦก sแปŸ ฤ‘ร o tแบกo hoแบทc chuyแปƒn hรฌnh thแปฉc hแปc; viแป‡c cรดng nhแบญn kแบฟt quแบฃ hแปc tแบญp hoแบทc chuyแปƒn ฤ‘แป•i tรญn chแป‰ ฤ‘รฃ tรญch lลฉy ฤ‘แป‘i cho sinh viรชn thuแป™c cรกc trฦฐแปng hแปฃp nร y.'']' sentences: - ฤiแปu kiแป‡n ฤ‘แปƒ ฤ‘ฦฐแปฃc chuyแปƒn ngร nh, chuyแปƒn nฦกi hแปc, chuyแปƒn cฦก sแปŸ ฤ‘ร o tแบกo, chuyแปƒn hรฌnh thแปฉc hแปc ฤ‘แป‘i vแป›i sinh viรชn? - Chi hแป— trแปฃ hแปc nghแป cho ngฦฐแปi sau cai nghiแป‡n ma tรบy ฤ‘ฦฐแปฃc thแปฑc hiแป‡n nhฦฐ thแบฟ nร o? - Nhiแป‡m vแปฅ cแปงa Hiแป‡p hแป™i Nhiรชn liแป‡u sinh hแปc Viแป‡t Nam lร  gรฌ? - source_sentence: "['\"4. 
Thแปง tแปฅc chแปฉng thแปฑc chแปฏ kรฝ quy ฤ‘แป‹nh tแบกi Khoแบฃn 1, 2 vร  3\ \ ฤiแปu nร y cลฉng ฤ‘ฦฐแปฃc รกp dแปฅng ฤ‘แป‘i vแป›i cรกc trฦฐแปng hแปฃp sau ฤ‘รขy:\\na) Chแปฉng thแปฑc chแปฏ\ \ kรฝ cแปงa nhiแปu ngฦฐแปi trong cรนng mแป™t giแบฅy tแป, vฤƒn bแบฃn;\\nb) Chแปฉng thแปฑc chแปฏ kรฝ cแปงa\ \ ngฦฐแปi khai lรฝ lแป‹ch cรก nhรขn;\\nc) Chแปฉng thแปฑc chแปฏ kรฝ trong giแบฅy tแป, vฤƒn bแบฃn do\ \ cรก nhรขn tแปฑ lแบญp theo quy ฤ‘แป‹nh cแปงa phรกp luแบญt;\\nd) Chแปฉng thแปฑc chแปฏ kรฝ trong Giแบฅy\ \ แปงy quyแปn ฤ‘แป‘i vแป›i trฦฐแปng hแปฃp แปงy quyแปn khรดng cรณ thรน lao, khรดng cรณ nghฤฉa vแปฅ bแป“i\ \ thฦฐแปng cแปงa bรชn ฤ‘ฦฐแปฃc แปงy quyแปn vร  khรดng liรชn quan ฤ‘แบฟn viแป‡c chuyแปƒn quyแปn sแปŸ hแปฏu\ \ tร i sแบฃn, quyแปn sแปญ dแปฅng bแบฅt ฤ‘แป™ng sแบฃn.\"'\n '\"ฤiแปu 24. Thแปง tแปฅc chแปฉng thแปฑc chแปฏ\ \ kรฝ\\n2. Ngฦฐแปi thแปฑc hiแป‡n chแปฉng thแปฑc kiแปƒm tra giแบฅy tแป yรชu cแบงu chแปฉng thแปฑc, nแบฟu\ \ thแบฅy ฤ‘แปง giแบฅy tแป theo quy ฤ‘แป‹nh tแบกi Khoแบฃn 1 ฤiแปu nร y, tแบกi thแปi ฤ‘iแปƒm chแปฉng thแปฑc,\ \ ngฦฐแปi yรชu cแบงu chแปฉng thแปฑc minh mแบซn, nhแบญn thแปฉc vร  lร m chแปง ฤ‘ฦฐแปฃc hร nh vi cแปงa mรฌnh\ \ vร  viแป‡c chแปฉng thแปฑc khรดng thuแป™c cรกc trฦฐแปng hแปฃp quy ฤ‘แป‹nh tแบกi ฤiแปu 25 cแปงa Nghแป‹\ \ ฤ‘แป‹nh nร y thรฌ yรชu cแบงu ngฦฐแปi yรชu cแบงu chแปฉng thแปฑc kรฝ trฦฐแป›c mแบทt vร  thแปฑc hiแป‡n chแปฉng\ \ thแปฑc nhฦฐ sau:\\na) Ghi ฤ‘แบงy ฤ‘แปง lแปi chแปฉng chแปฉng thแปฑc chแปฏ kรฝ theo mแบซu quy ฤ‘แป‹nh;\\\ nb) Kรฝ, ghi rรต hแป tรชn, ฤ‘รณng dแบฅu cแปงa cฦก quan, tแป• chแปฉc thแปฑc hiแป‡n chแปฉng thแปฑc vร  ghi\ \ vร o sแป• chแปฉng thแปฑc.\\nฤแป‘i vแป›i giแบฅy tแป, vฤƒn bแบฃn cรณ tแปซ (02) hai trang trแปŸ lรชn thรฌ\ \ ghi lแปi chแปฉng vร o trang cuแป‘i, nแบฟu giแบฅy tแป, vฤƒn bแบฃn cรณ tแปซ 02 (hai) tแป trแปŸ lรชn\ \ thรฌ phแบฃi ฤ‘รณng dแบฅu giรกp lai.\"']" sentences: - Bรญ thฦฐ Thฦฐแปng trแปฑc Trung ฦฐฦกng ฤoร n Thanh niรชn Cแป™ng sแบฃn Hแป“ Chรญ Minh ฤ‘ฦฐแปฃc nhแบญn mแปฉc phแปฅ cแบฅp phแปฅc vแปฅ bao nhiรชu? - ฤแป‹nh giรก lแบกi tร i sแบฃn lแบงn thแปฉ hai trong vแปฅ รกn hรฌnh sแปฑ ฤ‘ฦฐแปฃc thแปฑc hiแป‡n khi nร o? - Chแปฉng thแปฑc chแปฏ kรฝ cho giแบฅy uแปท quyแปn sแบฝ ฤ‘ฦฐแปฃc thแปฑc hiแป‡n nhฦฐ thแบฟ nร o? - source_sentence: '[''Mแปฉc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน\n1. Phแบกm nhรขn bแป‹ phแบกt tรน chung thรขn, lแบงn ฤ‘แบงu ฤ‘ฦฐแปฃc giแบฃm xuแป‘ng ba mฦฐฦกi nฤƒm.\n2. Phแบกm nhรขn bแป‹ phแบกt tรน tแปซ ba mฦฐฦกi nฤƒm trแปŸ xuแป‘ng, mแป—i lแบงn cรณ thแปƒ ฤ‘ฦฐแปฃc giแบฃm tแปซ mแป™t thรกng ฤ‘แบฟn ba nฤƒm. Trฦฐแปng hแปฃp ฤ‘ฦฐแปฃc giแบฃm ba nฤƒm phแบฃi lร  nhแปฏng phแบกm nhรขn chแบฅp hร nh nghiรชm chแป‰nh Nแป™i quy trแบกi giam, trแบกi tแบกm giam, nhร  tแบกm giแปฏ vร  lแบญp cรดng hoแบทc cรณ thร nh tรญch ฤ‘แบทc biแป‡t xuแบฅt sแบฏc trong lao ฤ‘แป™ng, hแปc tแบญp cแบฃi tแบกo.\n3. Mแป—i nฤƒm mแป™t phแบกm nhรขn chแป‰ ฤ‘ฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน mแป™t lแบงn, khoแบฃng cรกch giแปฏa hai lแบงn xรฉt giแบฃm รญt nhแบฅt lร  mแป™t nฤƒm. Trฦฐแปng hแปฃp ฤ‘รฃ ฤ‘ฦฐแปฃc giแบฃm mร  thแปi hแบกn tรน cรฒn lแบกi khรดng ฤ‘แปง mแป™t nฤƒm thรฌ nฤƒm tiแบฟp theo cรณ thแปƒ ฤ‘แป nghแป‹ xรฉt giแบฃm sแป›m hฦกn trฦฐแป›c mแป™t ฤ‘แปฃt, nhฦฐng vแบซn phแบฃi bแบฃo ฤ‘แบฃm mแป—i nฤƒm chแป‰ ฤ‘ฦฐแปฃc xรฉt giแบฃm mแป™t lแบงn.\nTrฦฐแปng hแปฃp sau khi ฤ‘รฃ ฤ‘ฦฐแปฃc giแบฃm thแปi hแบกn mร  cรณ lรฝ do ฤ‘แบทc biแป‡t ฤ‘รกng ฤ‘ฦฐแปฃc khoan hแป“ng nhฦฐ lแบญp cรดng hoแบทc mแบฏc bแป‡nh hiแปƒm nghรจo thรฌ cรณ thแปƒ ฤ‘ฦฐแปฃc xรฉt giแบฃm thรชm nhฦฐng khรดng ฤ‘ฦฐแปฃc quรก hai lแบงn trong mแป™t nฤƒm.\n4. 
Mแป—i phแบกm nhรขn cรณ thแปƒ ฤ‘ฦฐแปฃc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน nhiแปu lแบงn, nhฦฐng phแบฃi bแบฃo ฤ‘แบฃm thแปi hแบกn thแปฑc tแบฟ chแบฅp hร nh รกn phแบกt tรน ฤ‘ฦฐแปฃc mแป™t phแบงn hai mแปฉc hรฌnh phแบกt tรน cรณ thแปi hแบกn ฤ‘รฃ tuyรชn hoแบทc hai mฦฐฦกi nฤƒm ฤ‘แป‘i vแป›i hรฌnh phแบกt tรน chung thรขn.'']' sentences: - Mแป—i nฤƒm thรฌ phแบกm nhรขn ฤ‘ฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน bao nhiรชu lแบงn? - Giรกm ฤ‘แป‘c Quแปน bแบฃo tแป“n di sแบฃn Huแบฟ do ai bแป• nhiแป‡m? - Chแบฅp hร nh viรชn cรณ bแบฏt buแป™c kรฝ tรชn vร o vฤƒn bแบฃn thแปa thuแบญn thi hร nh รกn dรขn sแปฑ cแปงa ฤ‘ฦฐฦกng sแปฑ hay khรดng? pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.2955801104972376 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.48920140632847814 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5747530554160388 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6760421898543445 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2955801104972376 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.16306713544282603 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.11495061108320775 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06760421898543445 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.2955801104972376 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.48920140632847814 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.5747530554160388 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6760421898543445 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.477230404285928 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.41460005872989236 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.42407099092866546 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.29449188012723926 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4896199564707852 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5724928846475807 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6713544282605056 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.29449188012723926 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.1632066521569284 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.11449857692951614 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06713544282605056 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.29449188012723926 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4896199564707852 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.5724928846475807 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6713544282605056 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4743515215291094 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.41222767666137783 name: Cosine Mrr@10 - type: cosine_map@100 value: 
0.4218120045923118 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.28511635693956133 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4783191026284949 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5605223505775992 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6628997153859032 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.28511635693956133 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15943970087616496 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.11210447011551983 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06628997153859031 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.28511635693956133 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4783191026284949 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.5605223505775992 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6628997153859032 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4650207581954583 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.40272748532417074 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4121698601916915 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.2735643730118868 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4610748367654445 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.543529214799933 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6400468776159384 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2735643730118868 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15369161225514816 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1087058429599866 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06400468776159383 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.2735643730118868 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4610748367654445 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.543529214799933 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6400468776159384 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4483492533628726 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.387943762805642 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.3975600153943611 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.2466097438473129 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.42005692281935375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.49891176963000167 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.5950108823037 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2466097438473129 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.1400189742731179 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09978235392600034 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.059501088230369995 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.2466097438473129 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.42005692281935375 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.49891176963000167 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5950108823037 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 
0.4117058390410184 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.35411208905684183 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.36371800437559065 name: Cosine Map@100 --- # SentenceTransformer based on Alibaba-NLP/gte-multilingual-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 7fc06782350c1a83f88b15dd4b38ef853d3b8503 --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the ๐Ÿค— Hub model = SentenceTransformer("minhdang/gte-base-law-matryoshka") # Run inference sentences = [ "['Mแปฉc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน\\n1. Phแบกm nhรขn bแป‹ phแบกt tรน chung thรขn, lแบงn ฤ‘แบงu ฤ‘ฦฐแปฃc giแบฃm xuแป‘ng ba mฦฐฦกi nฤƒm.\\n2. Phแบกm nhรขn bแป‹ phแบกt tรน tแปซ ba mฦฐฦกi nฤƒm trแปŸ xuแป‘ng, mแป—i lแบงn cรณ thแปƒ ฤ‘ฦฐแปฃc giแบฃm tแปซ mแป™t thรกng ฤ‘แบฟn ba nฤƒm. Trฦฐแปng hแปฃp ฤ‘ฦฐแปฃc giแบฃm ba nฤƒm phแบฃi lร  nhแปฏng phแบกm nhรขn chแบฅp hร nh nghiรชm chแป‰nh Nแป™i quy trแบกi giam, trแบกi tแบกm giam, nhร  tแบกm giแปฏ vร  lแบญp cรดng hoแบทc cรณ thร nh tรญch ฤ‘แบทc biแป‡t xuแบฅt sแบฏc trong lao ฤ‘แป™ng, hแปc tแบญp cแบฃi tแบกo.\\n3. Mแป—i nฤƒm mแป™t phแบกm nhรขn chแป‰ ฤ‘ฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน mแป™t lแบงn, khoแบฃng cรกch giแปฏa hai lแบงn xรฉt giแบฃm รญt nhแบฅt lร  mแป™t nฤƒm. 
Trฦฐแปng hแปฃp ฤ‘รฃ ฤ‘ฦฐแปฃc giแบฃm mร  thแปi hแบกn tรน cรฒn lแบกi khรดng ฤ‘แปง mแป™t nฤƒm thรฌ nฤƒm tiแบฟp theo cรณ thแปƒ ฤ‘แป nghแป‹ xรฉt giแบฃm sแป›m hฦกn trฦฐแป›c mแป™t ฤ‘แปฃt, nhฦฐng vแบซn phแบฃi bแบฃo ฤ‘แบฃm mแป—i nฤƒm chแป‰ ฤ‘ฦฐแปฃc xรฉt giแบฃm mแป™t lแบงn.\\nTrฦฐแปng hแปฃp sau khi ฤ‘รฃ ฤ‘ฦฐแปฃc giแบฃm thแปi hแบกn mร  cรณ lรฝ do ฤ‘แบทc biแป‡t ฤ‘รกng ฤ‘ฦฐแปฃc khoan hแป“ng nhฦฐ lแบญp cรดng hoแบทc mแบฏc bแป‡nh hiแปƒm nghรจo thรฌ cรณ thแปƒ ฤ‘ฦฐแปฃc xรฉt giแบฃm thรชm nhฦฐng khรดng ฤ‘ฦฐแปฃc quรก hai lแบงn trong mแป™t nฤƒm.\\n4. Mแป—i phแบกm nhรขn cรณ thแปƒ ฤ‘ฦฐแปฃc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน nhiแปu lแบงn, nhฦฐng phแบฃi bแบฃo ฤ‘แบฃm thแปi hแบกn thแปฑc tแบฟ chแบฅp hร nh รกn phแบกt tรน ฤ‘ฦฐแปฃc mแป™t phแบงn hai mแปฉc hรฌnh phแบกt tรน cรณ thแปi hแบกn ฤ‘รฃ tuyรชn hoแบทc hai mฦฐฦกi nฤƒm ฤ‘แป‘i vแป›i hรฌnh phแบกt tรน chung thรขn.']", 'Mแป—i nฤƒm thรฌ phแบกm nhรขn ฤ‘ฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน bao nhiรชu lแบงn?', 'Chแบฅp hร nh viรชn cรณ bแบฏt buแป™c kรฝ tรชn vร o vฤƒn bแบฃn thแปa thuแบญn thi hร nh รกn dรขn sแปฑ cแปงa ฤ‘ฦฐฦกng sแปฑ hay khรดng?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2956 | | cosine_accuracy@3 | 0.4892 | | cosine_accuracy@5 | 0.5748 | | cosine_accuracy@10 | 0.676 | | cosine_precision@1 | 0.2956 | | cosine_precision@3 | 0.1631 | | cosine_precision@5 | 0.115 | | cosine_precision@10 | 0.0676 | | cosine_recall@1 | 0.2956 | | cosine_recall@3 | 0.4892 | | cosine_recall@5 | 0.5748 | | cosine_recall@10 | 0.676 | | cosine_ndcg@10 | 0.4772 | | cosine_mrr@10 | 0.4146 | | **cosine_map@100** | **0.4241** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2945 | | cosine_accuracy@3 | 0.4896 | | cosine_accuracy@5 | 0.5725 | | cosine_accuracy@10 | 0.6714 | | cosine_precision@1 | 0.2945 | | cosine_precision@3 | 0.1632 | | cosine_precision@5 | 0.1145 | | cosine_precision@10 | 0.0671 | | cosine_recall@1 | 0.2945 | | cosine_recall@3 | 0.4896 | | cosine_recall@5 | 0.5725 | | cosine_recall@10 | 0.6714 | | cosine_ndcg@10 | 0.4744 | | cosine_mrr@10 | 0.4122 | | **cosine_map@100** | **0.4218** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2851 | | cosine_accuracy@3 | 0.4783 | | cosine_accuracy@5 | 0.5605 | | cosine_accuracy@10 | 0.6629 | | cosine_precision@1 | 0.2851 | | cosine_precision@3 | 0.1594 | | cosine_precision@5 | 0.1121 | | cosine_precision@10 | 0.0663 | | cosine_recall@1 | 0.2851 | | cosine_recall@3 | 0.4783 | | cosine_recall@5 | 0.5605 | | cosine_recall@10 | 0.6629 | | cosine_ndcg@10 | 0.465 | | cosine_mrr@10 | 0.4027 | | **cosine_map@100** | **0.4122** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2736 | | cosine_accuracy@3 | 0.4611 | | cosine_accuracy@5 | 0.5435 | | cosine_accuracy@10 | 0.64 | | cosine_precision@1 | 0.2736 | | cosine_precision@3 | 0.1537 | | cosine_precision@5 | 0.1087 | | cosine_precision@10 | 0.064 | | cosine_recall@1 | 0.2736 | | cosine_recall@3 | 0.4611 | | cosine_recall@5 | 0.5435 | | cosine_recall@10 | 0.64 | | cosine_ndcg@10 | 0.4483 | | cosine_mrr@10 | 0.3879 | | **cosine_map@100** | **0.3976** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2466 | | cosine_accuracy@3 | 0.4201 | | cosine_accuracy@5 | 0.4989 | | cosine_accuracy@10 | 0.595 | | cosine_precision@1 | 0.2466 | | cosine_precision@3 | 0.14 | | cosine_precision@5 | 0.0998 | | cosine_precision@10 | 0.0595 | | cosine_recall@1 | 0.2466 | | cosine_recall@3 | 0.4201 | | cosine_recall@5 | 0.4989 | | cosine_recall@10 | 0.595 | | cosine_ndcg@10 | 0.4117 | | cosine_mrr@10 | 0.3541 | | **cosine_map@100** | **0.3637** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 107,510 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:--------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 282.01 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.95 tokens</li><li>max: 49 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------| | <code>['ฤแป‘i tฦฐแปฃng liรชn kแบฟt giรกo dแปฅc\nCฦก sแปŸ giรกo dแปฅc mแบงm non tฦฐ thแปฅc, cฦก sแปŸ giรกo dแปฅc phแป• thรดng tฦฐ thแปฅc cแปงa Viแป‡t Nam vร  cฦก sแปŸ giรกo dแปฅc hoแบกt ฤ‘แป™ng hแปฃp phรกp แปŸ nฦฐแป›c ngoร i, ฤ‘ฦฐแปฃc cฦก quan, tแป• chแปฉc kiแปƒm ฤ‘แป‹nh chแบฅt lฦฐแปฃng giรกo dแปฅc hoแบทc cฦก quan cรณ thแบฉm quyแปn cแปงa nฦฐแป›c ngoร i cรดng nhแบญn vแป chแบฅt lฦฐแปฃng giรกo dแปฅc.']</code> | <code>Cฦก sแปŸ giรกo dแปฅc phแป• thรดng tฦฐ thแปฅc cแปงa Viแป‡t Nam cรณ phแบฃi lร  ฤ‘แป‘i tฦฐแปฃng liรชn kแบฟt giรกo dแปฅc vแป›i nฦฐแป›c ngoร i khรดng?</code> | | <code>['Quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ dแปฑ รกn PPP\n1. Nแป™i dung quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ dแปฑ รกn PPP thแปฑc hiแป‡n theo quy ฤ‘แป‹nh tแบกi ฤiแปu 17 cแปงa Luแบญt PPP vร  Mแบซu sแป‘ 03 Phแปฅ lแปฅc II kรจm theo Nghแป‹ ฤ‘แป‹nh nร y.'<br> 'Nแป™i dung quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ dแปฑ รกn PPP\n1. Quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ bao gแป“m cรกc nแป™i dung chแปง yแบฟu sau ฤ‘รขy:\na) Tรชn dแปฑ รกn;\nb) Tรชn cฦก quan cรณ thแบฉm quyแปn;\nc) Mแปฅc tiรชu; dแปฑ kiแบฟn quy mรด, ฤ‘แป‹a ฤ‘iแปƒm, thแปi gian thแปฑc hiแป‡n dแปฑ รกn, nhu cแบงu sแปญ dแปฅng ฤ‘แบฅt vร  tร i nguyรชn khรกc;\nd) Dแปฑ kiแบฟn loแบกi hแปฃp ฤ‘แป“ng dแปฑ รกn PPP;\nฤ‘) Sฦก bแป™ tแป•ng mแปฉc ฤ‘แบงu tฦฐ; sฦก bแป™ phฦฐฦกng รกn tร i chรญnh: cฦก cแบฅu nguแป“n vแป‘n trong dแปฑ รกn, dแปฑ kiแบฟn khung giรก, phรญ sแบฃn phแบฉm, dแป‹ch vแปฅ cรดng ฤ‘แป‘i vแป›i dแปฑ รกn รกp dแปฅng cฦก chแบฟ thu phรญ trแปฑc tiแบฟp tแปซ ngฦฐแปi sแปญ dแปฅng;\ne) Cฦก chแบฟ bแบฃo ฤ‘แบฃm ฤ‘แบงu tฦฐ, cฦก chแบฟ chia sแบป phแบงn giแบฃm doanh thu.\n2. 
ฤแป‘i vแป›i dแปฑ รกn แปฉng dแปฅng cรดng nghแป‡ cao, แปฉng dแปฅng cรดng nghแป‡ mแป›i ngoร i quy ฤ‘แป‹nh tแบกi khoแบฃn 1 ฤiแปu nร y, nแป™i dung quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ cรฒn bao gแป“m tรชn bรชn mแปi thแบงu, hรฌnh thแปฉc lแปฑa chแปn nhร  ฤ‘แบงu tฦฐ, thแปi gian tแป• chแปฉc lแปฑa chแปn nhร  ฤ‘แบงu tฦฐ.']</code> | <code>Quyแบฟt ฤ‘แป‹nh chแปง trฦฐฦกng ฤ‘แบงu tฦฐ dแปฑ รกn PPP cรณ nhแปฏng nแป™i dung gรฌ?</code> | | <code>['Hแปa sฤฉ hแบกng III - Mรฃ sแป‘: V.10.08.27\n...\n4. Yรชu cแบงu ฤ‘แป‘i vแป›i viรชn chแปฉc dแปฑ thi hoแบทc xรฉt thฤƒng hแบกng chแปฉc danh nghแป nghiแป‡p hแปa sฤฉ hแบกng III:\nCรณ thแปi gian giแปฏ chแปฉc danh nghแป nghiแป‡p hแปa sฤฉ hแบกng IV hoแบทc tฦฐฦกng ฤ‘ฦฐฦกng tแปซ ฤ‘แปง 02 nฤƒm trแปŸ lรชn (khรดng kแปƒ thแปi gian tแบญp sแปฑ, thแปญ viแป‡c) ฤ‘แป‘i vแป›i trรฌnh ฤ‘แป™ cao ฤ‘แบณng hoแบทc tแปซ ฤ‘แปง 03 nฤƒm trแปŸ lรชn (khรดng kแปƒ thแปi gian tแบญp sแปฑ, thแปญ viแป‡c) ฤ‘แป‘i vแป›i trรฌnh ฤ‘แป™ trung cแบฅp. Trฦฐแปng hแปฃp cรณ thแปi gian tฦฐฦกng ฤ‘ฦฐฦกng thรฌ phแบฃi cรณ รญt nhแบฅt 01 nฤƒm (ฤ‘แปง 12 thรกng) ฤ‘ang giแปฏ chแปฉc danh hแปa sฤฉ hแบกng IV tรญnh ฤ‘แบฟn ngร y hแบฟt thแปi hแบกn nแป™p hแป“ sฦก ฤ‘ฤƒng kรฝ dแปฑ thi hoแบทc xรฉt thฤƒng hแบกng.']</code> | <code>Viรชn chแปฉc xรฉt thฤƒng hแบกng chแปฉc danh nghแป nghiแป‡p hแปa sฤฉ hแบกng 3 cแบงn cรณ thแปi gian giแปฏ chแปฉc danh nghแป nghiแป‡p hแปa sฤฉ hแบกng 4 trong bao lรขu?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### json * Dataset: json * Size: 11,946 evaluation samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:--------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 291.08 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 24.16 tokens</li><li>max: 49 tokens</li></ul> | * Samples: | positive | anchor | 
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------| | <code>['โ€œฤiแปu 9. Sแปญ dแปฅng ฤ‘แบฅt trแป“ng lรบa vร o mแปฅc ฤ‘รญch khรกc khรดng ฤ‘ฦฐแปฃc cฦก quan nhร  nฦฐแป›c cรณ thแบฉm quyแปn cho phรฉp theo quy ฤ‘แป‹nh tแบกi cรกc ฤ‘iแปƒm a vร  d khoแบฃn 1 ฤiแปu 57 cแปงa Luแบญt ฤ‘แบฅt ฤ‘ai\n1. Chuyแปƒn ฤ‘แบฅt trแป“ng lรบa sang ฤ‘แบฅt trแป“ng cรขy lรขu nฤƒm, ฤ‘แบฅt trแป“ng rแปซng (trแปซ trฦฐแปng hแปฃp quy ฤ‘แป‹nh tแบกi khoแบฃn 7 ฤiแปu 14 cแปงa Nghแป‹ ฤ‘แป‹nh sแป‘ 43/2014/Nฤ-CP ฤ‘ฦฐแปฃc sแปญa ฤ‘แป•i, bแป• sung tแบกi khoแบฃn 11 ฤiแปu 2 cแปงa Nghแป‹ ฤ‘แป‹nh sแป‘ 01/2017/Nฤ-CP) thรฌ hรฌnh thแปฉc vร  mแปฉc xแปญ phแบกt nhฦฐ sau:\na) Phแบกt tiแปn tแปซ 2.000.000 ฤ‘แป“ng ฤ‘แบฟn 5.000.000 ฤ‘แป“ng nแบฟu diแป‡n tรญch ฤ‘แบฅt chuyแปƒn mแปฅc ฤ‘รญch trรกi phรฉp dฦฐแป›i 0,5 hรฉc ta;\nb) Phแบกt tiแปn tแปซ 5.000.000 ฤ‘แป“ng ฤ‘แบฟn 10.000.000 ฤ‘แป“ng nแบฟu diแป‡n tรญch ฤ‘แบฅt chuyแปƒn mแปฅc ฤ‘รญch trรกi phรฉp tแปซ 0,5 hรฉc ta ฤ‘แบฟn dฦฐแป›i 01 hรฉc ta;\nc) Phแบกt tiแปn tแปซ 10.000.000 ฤ‘แป“ng ฤ‘แบฟn 20.000.000 ฤ‘แป“ng nแบฟu diแป‡n tรญch ฤ‘แบฅt chuyแปƒn mแปฅc ฤ‘รญch trรกi phรฉp tแปซ 01 hรฉc ta ฤ‘แบฟn dฦฐแป›i 03 hรฉc ta;\nd) Phแบกt tiแปn tแปซ 20.000.000 ฤ‘แป“ng ฤ‘แบฟn 50.000.000 ฤ‘แป“ng nแบฟu diแป‡n tรญch ฤ‘แบฅt chuyแปƒn mแปฅc ฤ‘รญch trรกi phรฉp tแปซ 03 hรฉc ta trแปŸ lรชn.โ€']</code> | <code>Tแปฑ รฝ trแป“ng cรขy lรขu nฤƒm trรชn ฤ‘แบฅt lรบa bแป‹ xแปญ phแบกt nhฦฐ thแบฟ nร o?</code> | | <code>['"3. 
Ngฦฐแปi lร m chแปฉng cรณ quyแปn:\na) ฤฦฐแปฃc thรดng bรกo, giแบฃi thรญch quyแปn vร  nghฤฉa vแปฅ quy ฤ‘แป‹nh tแบกi ฤiแปu nร y;\nb) Yรชu cแบงu cฦก quan triแป‡u tแบญp bแบฃo vแป‡ tรญnh mแบกng, sแปฉc khoแบป, danh dแปฑ, nhรขn phแบฉm, tร i sแบฃn vร  quyแปn, lแปฃi รญch hแปฃp phรกp khรกc cแปงa mรฌnh, ngฦฐแปi thรขn thรญch cแปงa mรฌnh khi biฬฃ ฤ‘e doฬฃa;\nc) Khiแบฟu nแบกi quyแบฟt ฤ‘แป‹nh, hร nh vi tแป‘ tแปฅng cแปงa cฦก quan, ngฦฐแปi cรณ thแบฉm quyแปn tiแบฟn hร nh tแป‘ tแปฅng liรชn quan ฤ‘แบฟn viแป‡c mรฌnh tham gia lร m chแปฉng;\nd) ฤฦฐแปฃc cฦก quan triแป‡u tแบญp thanh toรกn chi phรญ ฤ‘i lแบกi vร  nhแปฏng chi phรญ khรกc theo quy ฤ‘แป‹nh cแปงa phรกp luแบญt."']</code> | <code>Quyแปn vร  nghฤฉa vแปฅ cแปงa ngฦฐแปi lร m chแปฉng?</code> | | <code>['Quy trรฌnh ฤ‘iแปu chuyแปƒn tร i sแบฃn\n1. Hแป“ sฦก ฤ‘แป nghแป‹ ฤ‘iแปu chuyแปƒn tร i sแบฃn:\na) Vฤƒn bแบฃn ฤ‘แป nghแป‹ ฤ‘iแปu chuyแปƒn tร i sแบฃn cแปงa ฤ‘ฦกn vแป‹ ฤ‘ฦฐแปฃc giao quแบฃn lรฝ, sแปญ dแปฅng tร i sแบฃn: 01 bแบฃn chรญnh;\nb) Vฤƒn bแบฃn ฤ‘แป nghแป‹ ฤ‘ฦฐแปฃc tiแบฟp nhแบญn tร i sแบฃn cแปงa cฦก quan, tแป• chแปฉc, ฤ‘ฦกn vแป‹: 01 bแบฃn chรญnh;\nc) Tแป trรฌnh vแป viแป‡c ฤ‘iแปu chuyแปƒn, tiแบฟp nhแบญn tร i sแบฃn cแปงa Vแปฅ Tร i chรญnh - Kแบฟ toรกn (trฦฐแปng hแปฃp viแป‡c quyแบฟt ฤ‘แป‹nh ฤ‘iแปu chuyแปƒn tร i sแบฃn thuแป™c thแบฉm quyแปn cแปงa Phรณ Thแป‘ng ฤ‘แป‘c phแปฅ trรกch tร i chรญnh - kแบฟ toรกn): 01 bแบฃn chรญnh;\nd) Danh mแปฅc tร i sแบฃn ฤ‘แป nghแป‹ ฤ‘iแปu chuyแปƒn (chแปงng loแบกi, mรฃ tร i sแบฃn, sแป‘ lฦฐแปฃng, tรฌnh trแบกng; nฤƒm ฤ‘ฦฐa vร o sแปญ dแปฅng, nguyรชn giรก, giรก trแป‹ cรฒn lแบกi theo sแป• kแบฟ toรกn; mแปฅc ฤ‘รญch sแปญ dแปฅng hiแป‡n tแบกi vร  mแปฅc ฤ‘รญch sแปญ dแปฅng dแปฑ kiแบฟn sau khi ฤ‘iแปu chuyแปƒn trong trฦฐแปng hแปฃp viแป‡c ฤ‘iแปu chuyแปƒn gแบฏn vแป›i viแป‡c chuyแปƒn ฤ‘แป•i cรดng nฤƒng sแปญ dแปฅng tร i sแบฃn; lรฝ do ฤ‘iแปu chuyแปƒn): 01 bแบฃn chรญnh;\nฤ‘) Cรกc hแป“ sฦก khรกc cรณ liรชn quan ฤ‘แบฟn ฤ‘แป nghแป‹ ฤ‘iแปu chuyแปƒn tร i sแบฃn (nแบฟu cรณ): 01 bแบฃn sao.\n2. Khi ฤ‘iแปu chuyแปƒn, ฤ‘ฦกn vแป‹ giao vร  ฤ‘ฦกn vแป‹ nhแบญn tร i sแบฃn phแบฃi thร nh lแบญp Hแป™i ฤ‘แป“ng giao nhแบญn tร i sแบฃn, gแป“m ฤ‘แบกi diแป‡n cแปงa hai bรชn, chแปง tแป‹ch hแป™i ฤ‘แป“ng lร  ฤ‘แบกi diแป‡n lรฃnh ฤ‘แบกo bรชn giao. Hแป™i ฤ‘แป“ng cรณ nhiแป‡m vแปฅ xรกc ฤ‘แป‹nh sแป‘ lฦฐแปฃng, giรก trแป‹ (nguyรชn giรก, giรก trแป‹ ฤ‘รฃ khแบฅu hao, giรก trแป‹ cรฒn lแบกi), hiแป‡n trแบกng cแปงa tร i sแบฃn bร n giao, cรกc hแป“ sฦก, chแปฉng tแปซ cรณ liรชn quan vร  lแบญp "Biรชn bแบฃn bร n giao, tiแบฟp nhแบญn tร i sแบฃn" theo Mแบซu sแป‘ 01/TSC-BBGN ban hร nh kรจm theo Nghแป‹ ฤ‘แป‹nh sแป‘ 151/2017/Nฤ-CP ngร y 26/12/2017 quy ฤ‘แป‹nh chi tiแบฟt mแป™t sแป‘ ฤ‘iแปu cแปงa Luแบญt Quแบฃn lรฝ, sแปญ dแปฅng tร i sแบฃn cรดng. 
"Biรชn bแบฃn bร n giao, tiแบฟp nhแบญn tร i sแบฃn" ฤ‘ฦฐแปฃc lแบญp thร nh 3 bแบฃn, mแป—i bรชn lฦฐu mแป™t bแบฃn vร  gแปญi mแป™t bแบฃn vแป Ngรขn hร ng Nhร  nฦฐแป›c (Vแปฅ Tร i chรญnh - Kแบฟ toรกn).\n...']</code> | <code>Hแป“ sฦก ฤ‘แป nghแป‹ ฤ‘iแปu chuyแปƒn tร i sแบฃn cแปงa Ngรขn hร ng Nhร  nฦฐแป›c gแป“m nhแปฏng nแป™i dung gรฌ?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `gradient_accumulation_steps`: 32 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 32 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - 
`hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.3810 | 10 | 4.0758 | - | - | - | - | - | - | | 0.7619 | 20 | 2.6578 | - | - | - | - | - | - | | **0.9905** | **26** | **-** | **1.6008** | **0.3976** | **0.4122** | **0.4218** | **0.3637** | **0.4241** | | 1.1429 | 30 | 1.643 | - | - | - | - | - | - | | 1.5238 | 40 | 1.2561 | - | - | - | - | - | - | | 1.9048 | 50 | 1.1152 | - | - | - | - | - | - | | 1.9810 | 52 | - | 1.0635 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 | | 2.2857 | 60 | 0.9883 | - | - | - | - | - | - | | 2.6667 | 70 | 0.991 | - | - | - | - | - | - | | 2.9714 | 78 | - | 0.9924 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 | | 3.0476 | 80 | 0.9552 | - | - | - | - | - | - | | 3.4286 | 90 | 0.934 | - | - | - | - | - | - | | 3.8095 | 100 | 0.9597 | - | - | - | - | - | - | | 3.9619 | 104 | - | 0.9883 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 | * The bold row denotes the saved checkpoint. 
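### Loss setup (sketch)

The `MatryoshkaLoss` configuration listed above maps directly onto the sentence-transformers API. Below is a minimal sketch of the same setup; the base model is the one named in Model Details above, while the rest is illustrative rather than the exact training script:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Base model being fine-tuned (see Model Details above).
model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Inner loss over (anchor, positive) pairs, wrapped so it is applied at every
# Matryoshka dimension with equal weight, matching the JSON config above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```

At inference time, embeddings can be truncated to any of the trained dimensions; recent sentence-transformers releases expose this directly, e.g. `SentenceTransformer("minhdang/gte-base-law-matryoshka", truncate_dim=256)`.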
### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.3.1+cu121 - Accelerate: 1.0.1 - Datasets: 2.19.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
mradermacher/Breeze-7B-Cantonese-v0.1-GGUF
mradermacher
2024-11-01T09:10:32Z
17
0
transformers
[ "transformers", "gguf", "cantonese", "yue", "hong kong", "้ฆ™ๆธฏ", "ๅปฃๆฑ่ฉฑ", "็ฒต่ชž", "zh", "en", "dataset:hon9kon9ize/yue-alpaca", "dataset:indiejoseph/wikipedia-translate-zhhk-zhcn", "dataset:indiejoseph/wikipedia-zh-yue-summaries", "dataset:indiejoseph/wikipedia-zh-yue-qa", "base_model:kennylam/Breeze-7B-Cantonese-v0.1", "base_model:quantized:kennylam/Breeze-7B-Cantonese-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-01T08:55:47Z
--- base_model: kennylam/Breeze-7B-Cantonese-v0.1 datasets: - hon9kon9ize/yue-alpaca - indiejoseph/wikipedia-translate-zhhk-zhcn - indiejoseph/wikipedia-zh-yue-summaries - indiejoseph/wikipedia-zh-yue-qa language: - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - cantonese - yue - hong kong - 香港 - 廣東話 - 粵語 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/kennylam/Breeze-7B-Cantonese-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.f16.gguf) | f16 | 15.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
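## Quick Start

A minimal loading sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); any GGUF-capable runtime works, and the quant file, context size, and prompt below are illustrative assumptions rather than recommendations from this repo:

```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded from the table above;
# Q4_K_M is the "fast, recommended" middle ground.
llm = Llama(model_path="Breeze-7B-Cantonese-v0.1.Q4_K_M.gguf", n_ctx=4096)

# Cantonese prompt: "What are the main differences between Cantonese and Mandarin?"
out = llm("粵語同普通話有咩主要分別?", max_tokens=256)
print(out["choices"][0]["text"])
```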
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
allistair99/MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02
allistair99
2024-11-01T08:57:15Z
5
0
null
[ "safetensors", "mobilebert", "generated_from_trainer", "base_model:csarron/mobilebert-uncased-squad-v1", "base_model:finetune:csarron/mobilebert-uncased-squad-v1", "license:mit", "region:us" ]
null
2024-11-01T08:57:02Z
--- license: mit base_model: csarron/mobilebert-uncased-squad-v1 tags: - generated_from_trainer model-index: - name: MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02 This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v1](https://huggingface.co/csarron/mobilebert-uncased-squad-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 6 - eval_batch_size: 60 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.56 | 1.0 | 14619 | 1.0480 | | 0.5468 | 2.0 | 29238 | 1.0333 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.5.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
Xu-Ouyang/pythia-12b-deduped-int3-step4-GPTQ-wikitext2
Xu-Ouyang
2024-11-01T08:52:11Z
75
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "gptq", "region:us" ]
text-generation
2024-11-01T08:41:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
coastalcph/CLIPDetail-8590864
coastalcph
2024-11-01T08:49:15Z
136
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-11-01T08:48:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/French-Aya-Expanse-8B-GGUF
mradermacher
2024-11-01T08:46:11Z
66
0
transformers
[ "transformers", "gguf", "fr", "dataset:Svngoku/french-multilingual-reward-bench-dpo", "base_model:Svngoku/French-Aya-Expanse-8B", "base_model:quantized:Svngoku/French-Aya-Expanse-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-01T05:31:40Z
--- base_model: Svngoku/French-Aya-Expanse-8B datasets: - Svngoku/french-multilingual-reward-bench-dpo language: - fr library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Svngoku/French-Aya-Expanse-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/French-Aya-Expanse-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_L.gguf) | Q3_K_L | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
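## Quick Start

A minimal sketch for fetching a single quant with `huggingface_hub` and loading it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the chosen quant, context size, and prompt are illustrative assumptions:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo; Q4_K_M is the "fast, recommended" row above.
path = hf_hub_download(
    repo_id="mradermacher/French-Aya-Expanse-8B-GGUF",
    filename="French-Aya-Expanse-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
# French prompt: "Briefly explain the difference between a star and a planet."
out = llm("Explique brièvement la différence entre une étoile et une planète.", max_tokens=256)
print(out["choices"][0]["text"])
```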
hidonbush/paper-cutting
hidonbush
2024-11-01T08:34:36Z
35
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "en", "zh", "dataset:hidonbush/paper-cuttingv0.1", "base_model:nvidia/mit-b5", "base_model:finetune:nvidia/mit-b5", "endpoints_compatible", "region:us" ]
null
2024-10-30T07:26:22Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: paper-cutting results: [] datasets: - hidonbush/paper-cuttingv0.1 language: - en - zh metrics: - accuracy base_model: - nvidia/mit-b5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paper-cutting This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the paper-cutting dataset v0.1. It is trained to extract the body content from sources such as articles and books, as if cutting it off the paper. ## Model description More information needed ## Intended uses & limitations More information needed (see the inference sketch at the end of this card). ## Training and evaluation data paper-cutting v0.1 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
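## Inference example

A minimal inference sketch, assuming the checkpoint loads with the standard SegFormer semantic-segmentation head; the input file name is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("hidonbush/paper-cutting")
model = SegformerForSemanticSegmentation.from_pretrained("hidonbush/paper-cutting")

image = Image.open("page.png").convert("RGB")  # a scanned article or book page
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel class mask.
mask = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
).argmax(dim=1)[0]
```

The mask can then be used to crop or blank out everything except the body text.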
life/retrofuturereality
life
2024-11-01T08:27:10Z
18
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-01T08:27:03Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A person in a bustling cafe retrofuturereality output: url: samples/1730449588335__000001000_0.jpg - text: a white spaceship in the middle of a space station, with a watermark in the top right corner. The spaceship appears to be in the process of being built, as evidenced by the various tools and materials scattered around it. retrofuturereality output: url: samples/1730449604541__000001000_1.jpg - text: a man and woman standing next to each other in a room, smiling. The woman is wearing a necklace and the man is wearing formal dress. In the background, there are a number of people and lights retrofuturereality output: url: samples/1730449620769__000001000_2.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: retrofuturereality license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # retrofuturereality Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `retrofuturereality` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/life/retrofuturereality/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('life/retrofuturereality', weight_name='retrofuturereality.safetensors')
image = pipeline('A person in a bustling cafe retrofuturereality').images[0]
image.save("my_image.png")
``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
yifanlu/waymo-controlnet-flux
yifanlu
2024-11-01T08:24:01Z
8
0
diffusers
[ "diffusers", "safetensors", "flux", "flux-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-01T08:13:28Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other inference: true tags: - flux - flux-diffusers - text-to-image - diffusers - controlnet - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-yifanlu/waymo-controlnet-flux These are controlnet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning. ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python
# TODO: add an example code snippet for running this diffusion pipeline
# (an unofficial sketch is provided at the end of this card)
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
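#### Example sketch

A minimal sketch of running these weights with the diffusers Flux ControlNet pipeline; the conditioning image, prompt, and scales below are illustrative assumptions, since the conditioning type is not documented above:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "yifanlu/waymo-controlnet-flux", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("conditioning.png")  # placeholder conditioning input
image = pipe(
    "a photorealistic driving scene",  # placeholder prompt
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```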
gitgato/tessy-LoRA
gitgato
2024-11-01T08:20:26Z
46
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-25T01:56:33Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: photo of tessy a beautiful woman
  parameters:
    negative_prompt: Low quality
  output:
    url: images/Imagen de WhatsApp 2024-09-24 a las 13.59.54_6e906e0c.jpg
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: tessy
license: mit
---

# tessy-LoRA

<Gallery />

## Model description

Janesde

![barra.png](https://cdn-uploads.huggingface.co/production/uploads/6579469cc17736d75f8443d6/RydTrfoJFNzJ3MXn97Nao.png)

## Trigger words

You should use `tessy` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/gitgato/tessy-LoRA/tree/main) them in the Files & versions tab.
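The card stops at downloading the weights; the following is a minimal, unofficial sketch of loading this LoRA with diffusers on its SDXL base model. The `weight_name` value is an assumption; check the repository's Files & versions tab for the actual `.safetensors` filename.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
).to('cuda')
# Assumption: the LoRA file is named after the repo; adjust to the real filename.
pipeline.load_lora_weights('gitgato/tessy-LoRA', weight_name='tessy-LoRA.safetensors')

image = pipeline(
    'photo of tessy a beautiful woman',  # uses the `tessy` trigger word
    negative_prompt='Low quality',
).images[0]
image.save('tessy.png')
```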
Natthaphon/thaicapgen-swin-gpt2
Natthaphon
2024-11-01T08:16:20Z
39
0
null
[ "safetensors", "clip-encoder-decoder", "image-to-text", "image-captioning", "custom_code", "th", "region:us" ]
image-to-text
2024-11-01T07:57:46Z
---
tags:
- image-to-text
- image-captioning
language:
- th
---

# Thai Image Captioning

Encoder-decoder style image captioning model using [Swin-L](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) and [GPT2](https://huggingface.co/openai-community/gpt2). Trained on the Thai-language MSCOCO and IPU24 datasets.

# Usage

With `VisionEncoderDecoderModel`:

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-swin-gpt2'
image_path = 'example.jpg'  # path to the image you want to caption

feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path).to(device)

pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```

You can also load it with `AutoModel`, but this requires `trust_remote_code=True`:

```python
from transformers import AutoModel

model_path = 'Natthaphon/thaicapgen-swin-gpt2'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
```

# Acknowledgement

This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]
prkhar05/pixart-personal-model-msteps
prkhar05
2024-11-01T08:11:35Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:PixArt-alpha/PixArt-XL-2-512x512", "base_model:adapter:PixArt-alpha/PixArt-XL-2-512x512", "region:us" ]
null
2024-11-01T06:31:51Z
--- base_model: PixArt-alpha/PixArt-XL-2-512x512 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
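The template above leaves the getting-started section empty. As an unofficial sketch: a PEFT adapter trained against PixArt-XL-2-512x512 would typically be attached to the pipeline's transformer. Everything here is an assumption, in particular that the adapter targets the transformer backbone rather than the text encoder, so verify against the actual training setup before relying on it.

```python
import torch
from diffusers import PixArtAlphaPipeline
from peft import PeftModel

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-512x512", torch_dtype=torch.float16
).to("cuda")

# Assumption: the PEFT adapter was trained on the transformer backbone.
pipe.transformer = PeftModel.from_pretrained(
    pipe.transformer, "prkhar05/pixart-personal-model-msteps"
)

image = pipe("a placeholder prompt").images[0]
image.save("sample.png")
```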
Mercuri/mrpapaelijah
Mercuri
2024-11-01T08:11:17Z
5
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-01T07:56:08Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: mrpapaelijah
---

# Mrpapaelijah

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `mrpapaelijah` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mercuri/mrpapaelijah', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
minhdang/bge-base-financial-matryoshka_pass_2
minhdang
2024-11-01T08:10:57Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:107510", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:bkai-foundation-models/vietnamese-bi-encoder", "base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-01T08:10:37Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:107510 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: bkai-foundation-models/vietnamese-bi-encoder widget: - source_sentence: '[''Hรฌnh thแปฉc xแปญ phแบกt vร  thแปi hiแป‡u xแปญ phแบกt vi phแบกm hร nh chรญnh\n...\n4. Thแปi hiแป‡u xแปญ phแบกt vi phแบกm hร nh chรญnh ฤ‘แป‘i vแป›i lฤฉnh vแปฑc kinh doanh xแป• sแป‘:\na) Thแปi hiแป‡u xแปญ phแบกt vi phแบกm hร nh chรญnh trong lฤฉnh vแปฑc kinh doanh xแป• sแป‘ lร  01 nฤƒm.\nb) ฤแป‘i vแป›i hร nh vi vi phแบกm hร nh chรญnh trong lฤฉnh vแปฑc kinh doanh xแป• sแป‘ ฤ‘ang ฤ‘ฦฐแปฃc thแปฑc hiแป‡n thรฌ thแปi hiแป‡u ฤ‘ฦฐแปฃc tรญnh tแปซ ngร y ngฦฐแปi cรณ thแบฉm quyแปn thi hร nh cรดng vแปฅ phรกt hiแป‡n hร nh vi vi phแบกm. ฤแป‘i vแป›i hร nh vi vi phแบกm hร nh chรญnh ฤ‘รฃ kแบฟt thรบc thรฌ thแปi hiแป‡u ฤ‘ฦฐแปฃc tรญnh tแปซ ngร y chแบฅm dแปฉt hร nh vi vi phแบกm. Thแปi ฤ‘iแปƒm chแบฅm dแปฉt hร nh vi vi phแบกm ฤ‘แปƒ tรญnh thแปi hiแป‡u xแปญ phแบกt ฤ‘แป‘i vแป›i mแป™t sแป‘ hร nh vi vi phแบกm tแบกi Chฦฐฦกng 3 Nghแป‹ ฤ‘แป‹nh nร y ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh nhฦฐ sau:\n- ฤแป‘i vแป›i hร nh vi sแปญa chแปฏa, tแบฉy xoรก lร m thay ฤ‘แป•i nแป™i dung Giแบฅy chแปฉng nhแบญn ฤ‘แปง ฤ‘iแปu kiแป‡n kinh doanh, cรกc tร i liแป‡u trong hแป“ sฦก ฤ‘รฃ ฤ‘ฦฐแปฃc lร m ฤ‘แบกi lรฝ xแป• sแป‘ quy ฤ‘แป‹nh tแบกi khoแบฃn 1 ฤiแปu 35 vร  khoแบฃn 1 ฤiแปu 41 Nghแป‹ ฤ‘แป‹nh nร y nแบฟu khรดng xรกc ฤ‘แป‹nh ฤ‘ฦฐแปฃc ngร y sแปญa chแปฏa, tแบฉy xoรก lร m thay ฤ‘แป•i nแป™i dung Giแบฅy chแปฉng nhแบญn ฤ‘แปง ฤ‘iแปu kiแป‡n kinh doanh, cรกc tร i liแป‡u trong hแป“ sฦก ฤ‘รฃ ฤ‘ฦฐแปฃc lร m ฤ‘แบกi lรฝ xแป• sแป‘ thรฌ thแปi ฤ‘iแปƒm chแบฅm dแปฉt hร nh vi vi phแบกm lร  ngร y phรกt hiแป‡n Giแบฅy chแปฉng nhแบญn ฤ‘แปง ฤ‘iแปu kiแป‡n kinh doanh bแป‹ sแปญa chแปฏa, tแบฉy xรณa lร m thay ฤ‘แป•i nแป™i dung;\n- ฤแป‘i vแป›i hร nh vi khรดng xรขy dแปฑng vร  ban hร nh quy chแบฟ quy ฤ‘แป‹nh chi tiแบฟt quy trรฌnh tแป• chแปฉc thu hแป“i vรฉ xแป• sแป‘ khรดng tiรชu thแปฅ hแบฟt, khรดng xรขy dแปฑng vร  cรดng bแป‘ cรดng khai thแปƒ lแป‡ quay sแป‘ mแปŸ thฦฐแปŸng, khรดng ban hร nh Quy chแบฟ quแบฃn lรฝ, khai thรกc dแปฏ liแป‡u mรกy chแปง kinh doanh xแป• sแป‘ ฤ‘iแป‡n toรกn quy ฤ‘แป‹nh tแบกi khoแบฃn 1 ฤiแปu 40, khoแบฃn 1 ฤiแปu 44 vร  khoแบฃn 1 ฤiแปu 49 Nghแป‹ ฤ‘แป‹nh nร y, thแปi ฤ‘iแปƒm chแบฅm dแปฉt hร nh vi vi phแบกm lร  ngร y thแปฑc hiแป‡n ban hร nh quy chแบฟ quy ฤ‘แป‹nh chi tiแบฟt quy trรฌnh tแป• chแปฉc thu hแป“i vรฉ xแป• sแป‘ khรดng tiรชu thแปฅ hแบฟt, cรดng bแป‘ cรดng khai thแปƒ lแป‡ quay sแป‘ mแปŸ thฦฐแปŸng, ban hร nh Quy chแบฟ quแบฃn lรฝ, khai thรกc dแปฏ liแป‡u mรกy chแปง kinh doanh xแป• sแป‘ ฤ‘iแป‡n toรกn;\n- ฤแป‘i vแป›i hร nh vi vi phแบกm quy ฤ‘แป‹nh vแป chแบฟ ฤ‘แป™ bรกo cรกo quy ฤ‘แป‹nh tแบกi ฤiแปu 51 Nghแป‹ ฤ‘แป‹nh nร y, thแปi ฤ‘iแปƒm chแบฅm dแปฉt hร nh vi vi phแบกm lร  ngร y thแปฑc hiแป‡n bรกo cรกo.'']' sentences: - Hรฌnh thแปฉc ฤ‘แบฅu giรก bแบฑng bแป phiแบฟu giรกn tiแบฟp ฤ‘ฦฐแปฃc phรกp luแบญt quy ฤ‘แป‹nh nhฦฐ thแบฟ nร o? - Thฦฐแปng trแปฑc Hแป™i ฤ‘แป“ng tฦฐ vแบฅn ฤ‘แบทc xรก lร  cฦก quan nร o? - Thแปi hiแป‡u xแปญ phแบกt cฦก sแปŸ kinh doanh xแป• sแป‘ phรกt hร nh vรฉ xแป• sแป‘ quรก hแบกn mแปฉc lร  bao lรขu? 
- source_sentence: "['Thanh lรฝ hแปฃp ฤ‘แป“ng thแปฑc hiแป‡n nhiแป‡m vแปฅ\\nCฤƒn cแปฉ Hแป“ sฦก ฤ‘แป nghแป‹\ \ nghiแป‡m thu, thanh lรฝ hแปฃp ฤ‘แป“ng thแปฑc hiแป‡n nhiแป‡m vแปฅ cแปงa cฦก quan chแปง trรฌ thแปฑc hiแป‡n,\ \ viแป‡c thanh lรฝ hแปฃp ฤ‘แป“ng ฤ‘รฃ kรฝ kแบฟt trong thแปi hแบกn 10 ngร y ฤ‘ฦฐแปฃc thแปฑc hiแป‡n kแปƒ tแปซ\ \ ngร y cฦก quan quแบฃn lรฝ nhiแป‡m vแปฅ tiแบฟp nhแบญn ฤ‘แบงy ฤ‘แปง sแบฃn phแบฉm ฤ‘รฃ ฤ‘ฦฐแปฃc chแป‰nh sแปญa theo\ \ รฝ kiแบฟn cแปงa Hแป™i ฤ‘แป“ng nghiแป‡m thu nhiแป‡m vแปฅ cแบฅp Bแป™.\\nฤแป‘i vแป›i cรกc nhiแป‡m vแปฅ thฦฐแปng\ \ xuyรชn hร ng nฤƒm quy ฤ‘แป‹nh tแบกi ฤ‘iแปƒm b, ฤ‘iแปƒm h, ฤ‘iแปƒm k khoแบฃn 1 ฤiแปu 3 Thรดng tฦฐ nร y\ \ ฤ‘ฦฐแปฃc cฦก quan quแบฃn lรฝ nhiแป‡m vแปฅ xรกc nhแบญn hoร n thร nh thรฌ vฤƒn bแบฃn xรกc nhแบญn hoร n\ \ thร nh nhiแป‡m vแปฅ lร  cฤƒn cแปฉ nghiแป‡m thu, thanh lรฝ nhiแป‡m vแปฅ cแปงa cฦก quan chแปง trรฌ thแปฑc\ \ hiแป‡n.\\nBiรชn bแบฃn nghiแป‡m thu vร  thanh lรฝ hแปฃp ฤ‘แป“ng ฤ‘แป‘i vแป›i cรกc nhiแป‡m vแปฅ kรฝ hแปฃp\ \ ฤ‘แป“ng thแปฑc hiแป‡n theo mแบซu B3a-HฤMT ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh tแบกi mแบซu B6a-BBTLMT. Biรชn bแบฃn\ \ nghiแป‡m thu vร  thanh lรฝ hแปฃp ฤ‘แป“ng ฤ‘แป‘i vแป›i cรกc nhiแป‡m vแปฅ kรฝ hแปฃp ฤ‘แป“ng thแปฑc hiแป‡n theo\ \ mแบซu B3b-HฤBฤKH ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh tแบกi mแบซu B6b-BBTLBฤKH.'\n 'Thanh lรฝ hแปฃp ฤ‘แป“ng nhiแป‡m\ \ vแปฅ bแบฃo vแป‡ mรดi trฦฐแปng\\nCฤƒn cแปฉ Biรชn bแบฃn nghiแป‡m thu kแบฟt quแบฃ thแปฑc hiแป‡n nhiแป‡m vแปฅ\ \ bแบฃo vแป‡ mรดi trฦฐแปng, viแป‡c thanh lรฝ hแปฃp ฤ‘แป“ng ฤ‘รฃ kรฝ kแบฟt vแป›i ฤ‘ฦกn vแป‹ chแปง trรฌ trong\ \ thแปi hแบกn 10 ngร y ฤ‘ฦฐแปฃc thแปฑc hiแป‡n kแปƒ tแปซ ngร y tiแบฟp nhแบญn ฤ‘แบงy ฤ‘แปง sแบฃn phแบฉm ฤ‘รฃ ฤ‘ฦฐแปฃc\ \ chแป‰nh sแปญa theo รฝ kiแบฟn cแปงa Hแป™i ฤ‘แป“ng nghiแป‡m thu nhiแป‡m vแปฅ bแบฃo vแป‡ mรดi trฦฐแปng. Biรชn\ \ bแบฃn thanh lรฝ hแปฃp ฤ‘แป“ng ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh tแบกi mแบซu B6a-BBTLHฤ-BCT.']" sentences: - Tแป•n thฦฐฦกng gรขn chร y trฦฐแป›c chแปง yแบฟu gแบทp trong cรกc vแบฟt thฦฐฦกng แปŸ vรนng nร o? - Hแป™i ฤ‘แป“ng Lรฝ luแบญn Trung ฦฐฦกng hแปp mแป—i quรฝ mแบฅy lแบงn? - Thแปi hแบกn thanh lรฝ hแปฃp ฤ‘แป“ng nhiแป‡m vแปฅ bแบฃo vแป‡ mรดi trฦฐแปng ngร nh Cรดng thฦฐฦกng sแปญ dแปฅng nguแป“n kinh phรญ sแปฑ nghiแป‡p mรดi trฦฐแปng lร  bao lรขu? - source_sentence: '[''ฤแป‘i tฦฐแปฃng รกp dแปฅng\n1. Cรกn bแป™, cรดng chแปฉc cแปงa cรกc ฤ‘ฦกn vแป‹ thuแป™c แปฆy ban Dรขn tแป™c ฤ‘ฦฐแปฃc Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m แปฆy ban Dรขn tแป™c (sau ฤ‘รขy gแปi tแบฏt lร  Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m) giao nhiแป‡m vแปฅ hoแบทc phรขn cรดng lร m nhiแป‡m vแปฅ tiแบฟp cรดng dรขn, xแปญ lรฝ ฤ‘ฦกn khiแบฟu nแบกi, tแป‘ cรกo, kiแบฟn nghแป‹, phแบฃn รกnh tแบกi trแปฅ sแปŸ vร  cรกc ฤ‘แป‹a ฤ‘iแปƒm tiแบฟp cรดng dรขn thuแป™c แปฆy ban Dรขn tแป™c.\n2. Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m, cรกc Thแปฉ trฦฐแปŸng, Phรณ Chแปง nhiแป‡m แปฆy ban Dรขn tแป™c cรณ trรกch nhiแป‡m tiแบฟp cรดng dรขn ฤ‘แป‹nh kแปณ hoแบทc ฤ‘แป™t xuแบฅt; cรดng chแปฉc trong cรกc ฤ‘ฦกn vแป‹ thuแป™c แปฆy ban Dรขn tแป™c ฤ‘ฦฐแปฃc Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m triแป‡u tแบญp lร m nhiแป‡m vแปฅ tiแบฟp cรดng dรขn, xแปญ lรฝ ฤ‘ฦกn khiแบฟu nแบกi, tแป‘ cรกo, kiแบฟn nghแป‹, phแบฃn รกnh tแบกi trแปฅ sแปŸ vร  cรกc ฤ‘แป‹a ฤ‘iแปƒm tiแบฟp cรดng dรขn thuแป™c แปฆy ban Dรขn tแป™c.\n3. Cรดng chแปฉc, ngฦฐแปi tham gia tiแบฟp cรดng dรขn thuแป™c แปฆy ban Dรขn tแป™c ฤ‘ฦฐแปฃc Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m giao nhiแป‡m vแปฅ hoแบทc phรขn cรดng phแป‘i hแปฃp tiแบฟp cรดng dรขn, giแปฏ gรฌn an ninh, trแบญt tแปฑ, bแบฃo ฤ‘แบฃm y tแบฟ tแบกi trแปฅ sแปŸ vร  cรกc ฤ‘แป‹a ฤ‘iแปƒm tiแบฟp cรดng dรขn cแปงa แปฆy ban Dรขn tแป™c.\n4. 
Cรกn bแป™, cรดng chแปฉc cแปงa cรกc tแป• chแปฉc thuแป™c แปฆy ban Dรขn tแป™c ฤ‘ฦฐแปฃc Bแป™ trฦฐแปŸng, Chแปง nhiแป‡m giao nhiแป‡m vแปฅ chuyรชn trรกch xแปญ lรฝ ฤ‘ฦกn khiแบฟu nแบกi, tแป‘ cรกo, kiแบฟn nghแป‹, phแบฃn รกnh.'']' sentences: - Cรดng chแปฉc cแปงa ฤ‘ฦกn vแป‹ cรณ ฤ‘ฦฐแปฃc hฦฐแปŸng chแบฟ ฤ‘แป™ bแป“i dฦฐแปกng khi nhแบญn nhiแป‡m vแปฅ tiแบฟp cรดng dรขn tแบกi cรกc ฤ‘แป‹a ฤ‘iแปƒm tiแบฟp cรดng dรขn thuแป™c แปฆy ban Dรขn tแป™c hay khรดng? - Ngฦฐแปi trรบng xแป• sแป‘ Vietlott cรณ ฤ‘ฦฐแปฃc bแบฃo mแบญt thรดng tin trฦฐแป›c ฤ‘แบกi chรบng? - Viแป‡c cรดng bแป‘ giรก trแป‹ doanh nghiแป‡p ฤ‘ฦฐแปฃc cฦก quan ฤ‘แบกi diแป‡n chแปง sแปŸ hแปฏu thแปฑc hiแป‡n trong thแปi hแบกn bao nhiรชu ngร y? Kแปƒ tแปซ thแปi ฤ‘iแปƒm nร o? - source_sentence: '[''Hรฌnh thแปฉc tแป• chแปฉc, nแป™i dung vร  chฦฐฦกng trรฌnh ฤ‘ร o tแบกo nghiแป‡p vแปฅ thแบฉm ฤ‘แป‹nh giรก\n1. Khรณa ฤ‘ร o tแบกo nghiแป‡p vแปฅ thแบฉm ฤ‘แป‹nh giรก ฤ‘ฦฐแปฃc tแป• chแปฉc tแบญp trung mแป™t kแปณ liรชn tแปฅc hoแบทc nhiแปu kแปณ nhฦฐng khรดng kรฉo dร i quรก 3 (ba) thรกng cho mแป™t khรณa hแปc vร  phแบฃi ฤ‘แบฃm bแบฃo dแบกy vร  hแปc ฤ‘แปง thแปi lฦฐแปฃng, nแป™i dung vร  chฦฐฦกng trรฌnh theo quy ฤ‘แป‹nh tแบกi khoแบฃn 2 ฤiแปu nร y.\n...'']' sentences: - Thแปi gian รกp dแปฅng biแป‡n phรกp cรกch ly y tแบฟ ฤ‘ฦฐแปฃc phรกp luแบญt quy ฤ‘แป‹nh nhฦฐ thแบฟ nร o? - Khi thแปฑc hiแป‡n khuyแบฟn mแบกi cung แปฉng dแป‹ch vแปฅ thรดng tin di ฤ‘แป™ng mแบซu ฤ‘แปƒ khรกch hร ng dรนng thแปญ khรดng phแบฃi trแบฃ tiแปn, doanh nghiแป‡p viแป…n thรดng cรณ cแบงn ฤ‘ฤƒng kรฝ khuyแบฟn mแบกi khรดng? - Mแป™t khรณa ฤ‘ร o tแบกo nghiแป‡p vแปฅ thแบฉm ฤ‘แป‹nh giรก kรฉo dร i bao lรขu? - source_sentence: '[''Tiรชu chuแบฉn Chi cแปฅc trฦฐแปŸng, Phรณ Chi cแปฅc trฦฐแปŸng thuแป™c Cแปฅc Thuแบฟ\n1. Vแป‹ trรญ vร  nhiแป‡m vแปฅ\na) Chi cแปฅc trฦฐแปŸng Chi cแปฅc Thuแบฟ lร  ngฦฐแปi ฤ‘แปฉng ฤ‘แบงu Chi cแปฅc Thuแบฟ, chแป‹u trรกch nhiแป‡m trฦฐแป›c Cแปฅc trฦฐแปŸng Cแปฅc Thuแบฟ vร  trฦฐแป›c phรกp luแบญt vแป toร n bแป™ hoแบกt ฤ‘แป™ng nhiแป‡m vแปฅ cแปงa ฤ‘ฦกn vแป‹ ฤ‘ฦฐแปฃc cแบฅp cรณ thแบฉm quyแปn giao nhiแป‡m vแปฅ quแบฃn lรฝ nhร  nฦฐแป›c trรชn ฤ‘แป‹a bร n quแบญn, huyแป‡n, thแป‹ xรฃ, thร nh phแป‘ thuแป™c tแป‰nh.\nb) Phรณ Chi cแปฅc trฦฐแปŸng Chi cแปฅc Thuแบฟ lร  ngฦฐแปi giรบp viแป‡c Chi cแปฅc trฦฐแปŸng, chแป‹u trรกch nhiแป‡m trฦฐแป›c Chi cแปฅc trฦฐแปŸng vร  trฦฐแป›c phรกp luแบญt vแป lฤฉnh vแปฑc cรดng tรกc ฤ‘ฦฐแปฃc phรขn cรดng; thay mแบทt Chi cแปฅc trฦฐแปŸng ฤ‘iแปu hร nh, giแบฃi quyแบฟt cรกc cรดng viแป‡c cแปงa Chi cแปฅc khi ฤ‘ฦฐแปฃc Chi cแปฅc trฦฐแปŸng แปงy quyแปn, giao nhiแป‡m vแปฅ.'']' sentences: - Nhiแป‡m vแปฅ cแปงa Chi cแปฅc trฦฐแปŸng thuแป™c Cแปฅc Thuแบฟ nhฦฐ thแบฟ nร o? - Viแป‡c ฤ‘รกnh giรก chแบฅt lฦฐแปฃng dแป‹ch vแปฅ sแปฑ nghiแป‡p cรดng vแป xรขy dแปฑng cฦก sแปŸ dแปฏ liแป‡u ฤ‘ฦฐแปฃc thแปฑc hiแป‡n theo phฦฐฦกng thแปฉc nร o? - Khoแบฃn phแปฅ cแบฅp chuyรชn cแบงn cรณ tรญnh vร o lฦฐฦกng ฤ‘แปƒ tรญnh tiแปn lฦฐฦกng tฤƒng ca, lฦฐฦกng lร m thรชm giแป hay khรดng? 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.26527708019420726 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4377197388247112 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5174116859199732 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6099112673698309 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.26527708019420726 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.14590657960823708 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.10348233718399463 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.060991126736983085 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.26527708019420726 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4377197388247112 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.5174116859199732 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6099112673698309 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4285225723707542 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.37149118785859175 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.38082252053876386 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.26586305039343716 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.43227858697471955 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5082872928176796 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6015402645236899 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.26586305039343716 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.1440928623249065 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1016574585635359 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06015402645236899 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.26586305039343716 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.43227858697471955 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.5082872928176796 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6015402645236899 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4244877080296015 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.36887667785457956 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.3780890557065138 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.2483676544450025 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4107651096601373 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4801607232546459 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.5700652938222 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2483676544450025 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.13692170322004574 name: Cosine Precision@3 - type: cosine_precision@5 value: 
0.09603214465092917 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.05700652938221999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.2483676544450025 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4107651096601373 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4801607232546459 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5700652938222 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.40061709420771235 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.34734958105124125 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.35675125361493826 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.22141302528042858 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.3701657458563536 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4385568391093253 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.5179976561192031 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.22141302528042858 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.12338858195211787 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.08771136782186506 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.051799765611920304 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.22141302528042858 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.3701657458563536 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4385568391093253 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5179976561192031 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.3619435400628976 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3128400221632284 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.32179789892986727 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.1616440649589821 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.27749874434957306 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.33433785367487023 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4103465595178302 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.1616440649589821 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.09249958144985769 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.06686757073497404 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04103465595178302 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.1616440649589821 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.27749874434957306 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.33433785367487023 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4103465595178302 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.27713659801328827 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.23557945277558567 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.24398402076434567 name: Cosine Map@100 --- # SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) on the json dataset. 
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("minhdang/bge-base-financial-matryoshka_pass_2")
# Run inference
sentences = [
    "['Tiรชu chuแบฉn Chi cแปฅc trฦฐแปŸng, Phรณ Chi cแปฅc trฦฐแปŸng thuแป™c Cแปฅc Thuแบฟ\\n1. Vแป‹ trรญ vร  nhiแป‡m vแปฅ\\na) Chi cแปฅc trฦฐแปŸng Chi cแปฅc Thuแบฟ lร  ngฦฐแปi ฤ‘แปฉng ฤ‘แบงu Chi cแปฅc Thuแบฟ, chแป‹u trรกch nhiแป‡m trฦฐแป›c Cแปฅc trฦฐแปŸng Cแปฅc Thuแบฟ vร  trฦฐแป›c phรกp luแบญt vแป toร n bแป™ hoแบกt ฤ‘แป™ng nhiแป‡m vแปฅ cแปงa ฤ‘ฦกn vแป‹ ฤ‘ฦฐแปฃc cแบฅp cรณ thแบฉm quyแปn giao nhiแป‡m vแปฅ quแบฃn lรฝ nhร  nฦฐแป›c trรชn ฤ‘แป‹a bร n quแบญn, huyแป‡n, thแป‹ xรฃ, thร nh phแป‘ thuแป™c tแป‰nh.\\nb) Phรณ Chi cแปฅc trฦฐแปŸng Chi cแปฅc Thuแบฟ lร  ngฦฐแปi giรบp viแป‡c Chi cแปฅc trฦฐแปŸng, chแป‹u trรกch nhiแป‡m trฦฐแป›c Chi cแปฅc trฦฐแปŸng vร  trฦฐแป›c phรกp luแบญt vแป lฤฉnh vแปฑc cรดng tรกc ฤ‘ฦฐแปฃc phรขn cรดng; thay mแบทt Chi cแปฅc trฦฐแปŸng ฤ‘iแปu hร nh, giแบฃi quyแบฟt cรกc cรดng viแป‡c cแปงa Chi cแปฅc khi ฤ‘ฦฐแปฃc Chi cแปฅc trฦฐแปŸng แปงy quyแปn, giao nhiแป‡m vแปฅ.']",
    'Nhiแป‡m vแปฅ cแปงa Chi cแปฅc trฦฐแปŸng thuแป™c Cแปฅc Thuแบฟ nhฦฐ thแบฟ nร o?',
    'Khoแบฃn phแปฅ cแบฅp chuyรชn cแบงn cรณ tรญnh vร o lฦฐฦกng ฤ‘แปƒ tรญnh tiแปn lฦฐฦกng tฤƒng ca, lฦฐฦกng lร m thรชm giแป hay khรดng?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2653 | | cosine_accuracy@3 | 0.4377 | | cosine_accuracy@5 | 0.5174 | | cosine_accuracy@10 | 0.6099 | | cosine_precision@1 | 0.2653 | | cosine_precision@3 | 0.1459 | | cosine_precision@5 | 0.1035 | | cosine_precision@10 | 0.061 | | cosine_recall@1 | 0.2653 | | cosine_recall@3 | 0.4377 | | cosine_recall@5 | 0.5174 | | cosine_recall@10 | 0.6099 | | cosine_ndcg@10 | 0.4285 | | cosine_mrr@10 | 0.3715 | | **cosine_map@100** | **0.3808** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2659 | | cosine_accuracy@3 | 0.4323 | | cosine_accuracy@5 | 0.5083 | | cosine_accuracy@10 | 0.6015 | | cosine_precision@1 | 0.2659 | | cosine_precision@3 | 0.1441 | | cosine_precision@5 | 0.1017 | | cosine_precision@10 | 0.0602 | | cosine_recall@1 | 0.2659 | | cosine_recall@3 | 0.4323 | | cosine_recall@5 | 0.5083 | | cosine_recall@10 | 0.6015 | | cosine_ndcg@10 | 0.4245 | | cosine_mrr@10 | 0.3689 | | **cosine_map@100** | **0.3781** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2484 | | cosine_accuracy@3 | 0.4108 | | cosine_accuracy@5 | 0.4802 | | cosine_accuracy@10 | 0.5701 | | cosine_precision@1 | 0.2484 | | cosine_precision@3 | 0.1369 | | cosine_precision@5 | 0.096 | | cosine_precision@10 | 0.057 | | cosine_recall@1 | 0.2484 | | cosine_recall@3 | 0.4108 | | cosine_recall@5 | 0.4802 | | cosine_recall@10 | 0.5701 | | cosine_ndcg@10 | 0.4006 | | cosine_mrr@10 | 0.3473 | | **cosine_map@100** | **0.3568** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.2214 | | cosine_accuracy@3 | 0.3702 | | cosine_accuracy@5 | 0.4386 | | cosine_accuracy@10 | 0.518 | | cosine_precision@1 | 0.2214 | | cosine_precision@3 | 0.1234 | | cosine_precision@5 | 0.0877 | | cosine_precision@10 | 0.0518 | | cosine_recall@1 | 0.2214 | | cosine_recall@3 | 0.3702 | | cosine_recall@5 | 0.4386 | | cosine_recall@10 | 0.518 | | cosine_ndcg@10 | 0.3619 | | cosine_mrr@10 | 0.3128 | | **cosine_map@100** | **0.3218** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | 
Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.1616 | | cosine_accuracy@3 | 0.2775 | | cosine_accuracy@5 | 0.3343 | | cosine_accuracy@10 | 0.4103 | | cosine_precision@1 | 0.1616 | | cosine_precision@3 | 0.0925 | | cosine_precision@5 | 0.0669 | | cosine_precision@10 | 0.041 | | cosine_recall@1 | 0.1616 | | cosine_recall@3 | 0.2775 | | cosine_recall@5 | 0.3343 | | cosine_recall@10 | 0.4103 | | cosine_ndcg@10 | 0.2771 | | cosine_mrr@10 | 0.2356 | | **cosine_map@100** | **0.244** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 107,510 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 209.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 25.12 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | positive | anchor | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------| | <code>['ฤiแปu kiแป‡n thแปฑc hiแป‡n cรกc quyแปn chuyแปƒn ฤ‘แป•i, chuyแปƒn nhฦฐแปฃng, cho thuรช, cho thuรช lแบกi, thแปซa kแบฟ, tแบทng cho, thแบฟ chแบฅp quyแปn sแปญ dแปฅng ฤ‘แบฅt; gรณp vแป‘n bแบฑng quyแปn sแปญ dแปฅng ฤ‘แบฅt\n1. 
Ngฦฐแปi sแปญ dแปฅng ฤ‘แบฅt ฤ‘ฦฐแปฃc thแปฑc hiแป‡n cรกc quyแปn chuyแปƒn ฤ‘แป•i, chuyแปƒn nhฦฐแปฃng, cho thuรช, cho thuรช lแบกi, thแปซa kแบฟ, tแบทng cho, thแบฟ chแบฅp quyแปn sแปญ dแปฅng ฤ‘แบฅt; gรณp vแป‘n bแบฑng quyแปn sแปญ dแปฅng ฤ‘แบฅt khi cรณ cรกc ฤ‘iแปu kiแป‡n sau ฤ‘รขy:\na) Cรณ Giแบฅy chแปฉng nhแบญn, trแปซ trฦฐแปng hแปฃp quy ฤ‘แป‹nh tแบกi khoแบฃn 3 ฤiแปu 186 vร  trฦฐแปng hแปฃp nhแบญn thแปซa kแบฟ quy ฤ‘แป‹nh tแบกi khoแบฃn 1 ฤiแปu 168 cแปงa Luแบญt nร y;\nb) ฤแบฅt khรดng cรณ tranh chแบฅp;\nc) Quyแปn sแปญ dแปฅng ฤ‘แบฅt khรดng bแป‹ kรช biรชn ฤ‘แปƒ bแบฃo ฤ‘แบฃm thi hร nh รกn;\nd) Trong thแปi hแบกn sแปญ dแปฅng ฤ‘แบฅt.\n...']</code> | <code>ฤแปƒ tแบทng cho quyแปn sแปญ dแปฅng ฤ‘แบฅt thรฌ ngฦฐแปi sแปญ dแปฅng ฤ‘แบฅt phแบฃi ฤ‘แบฃm bแบฃo ฤ‘ฦฐแปฃc nhแปฏng ฤ‘iแปu kiแป‡n nร o?</code> | | <code>['Vแป‘n hoแบกt ฤ‘แป™ng cแปงa hแปฃp tรกc xรฃ\n1. Vแป‘n hoแบกt ฤ‘แป™ng cแปงa hแปฃp tรกc xรฃ, liรชn hiแป‡p hแปฃp tรกc xรฃ gแป“m vแป‘n gรณp cแปงa thร nh viรชn, hแปฃp tรกc xรฃ thร nh viรชn, vแป‘n huy ฤ‘แป™ng, vแป‘n tรญch lลฉy, cรกc quแปน cแปงa hแปฃp tรกc xรฃ, liรชn hiแป‡p hแปฃp tรกc xรฃ; cรกc khoแบฃn trแปฃ cแบฅp, hแป— trแปฃ cแปงa Nhร  nฦฐแป›c, cแปงa cรกc tแป• chแปฉc, cรก nhรขn trong nฦฐแป›c vร  nฦฐแป›c ngoร i; cรกc khoแบฃn ฤ‘ฦฐแปฃc tแบทng, cho vร  cรกc nguแป“n thu hแปฃp phรกp khรกc.\n2. ฤiแปu lแป‡, quy chแบฟ quแบฃn lรฝ tร i chรญnh cแปงa hแปฃp tรกc xรฃ, liรชn hiแป‡p hแปฃp tรกc xรฃ quy ฤ‘แป‹nh cแปฅ thแปƒ viแป‡c quแบฃn lรฝ, sแปญ dแปฅng vแป‘n hoแบกt ฤ‘แป™ng cแปงa hแปฃp tรกc xรฃ, liรชn hiแป‡p hแปฃp tรกc xรฃ phรน hแปฃp vแป›i quy ฤ‘แป‹nh cแปงa Luแบญt Hแปฃp tรกc xรฃ vร  quy ฤ‘แป‹nh cแปงa phรกp luแบญt cรณ liรชn quan.']</code> | <code>Vแป‘n hoแบกt ฤ‘แป™ng cแปงa hแปฃp tรกc xรฃ bao gแป“m nhแปฏng nguแป“n nร o?</code> | | <code>['Vแป kแปน nฤƒng\n- Sแปญ dแปฅng ฤ‘ฦฐแปฃc cรดng nghรชฬฃ thรดng tin cฦก bแบฃn theo quy ฤ‘แป‹nh;\n- Xรกc ฤ‘แป‹nh ฤ‘ฦฐแปฃc yรชu cแบงu cแปงa hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Cร i ฤ‘แบทt thร nh thแบกo phแบงn mรชฬ€m quแบฃn trแป‹ cฦก sแปŸ dแปฏ liรชฬฃu;\n- Khai thรกc hiรชฬฃu suแบฅt cao hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Quแบฃn lรฝ an toร n hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Bแบฃo trรฌ ฤ‘ฦฐแปฃc hรชฬฃ thแป‘ng;\n- Bแบฃo mแบญt ฤ‘ฦฐแปฃc hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Nรขng cแบฅp ฤ‘ฦฐแปฃc hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Xรขy dฦฐฬฃng ฤ‘ฦฐแปฃc แปฉng dแปฅng;\n- Tรญch hแปฃp ฤ‘ฦฐแปฃc cรกc hรชฬฃ thแป‘ng cฦก sแปŸ dแปฏ liรชฬฃu;\n- Bแบฃo trรฌ, sแปญa chแปฏa, nรขng cแบฅp ฤ‘ฦฐแปฃc phแบงn mรชฬ€m vร  phแบงn cแปฉng cแปงa hรชฬฃ thแป‘ng mแบกng;\n- Xรขy dฦฐฬฃng ฤ‘ฦฐแปฃc cรกc แปฉng dแปฅng ฤ‘ฦกn giแบฃn trรชn hรชฬฃ thแป‘ng mแบกng;\n- Ghi ฤ‘ฦฐแปฃc nhแบญt kรฝ cลฉng nhฦฐ bรกo cรกo cรดng viรชฬฃc, tiแบฟn ฤ‘แป™ cรดng viรชฬฃc;\n- Thฦฐฬฃc hiรชฬฃn ฤ‘ฦฐแปฃc cรกc biรชฬฃn phรกp vรชฬฃ sinh cรดng nghiรชฬฃp, an toร n lao ฤ‘แป™ng;\n- Giao tiแบฟp hiรชฬฃu quแบฃ thรดng qua viแบฟt, thuyแบฟt trรฌnh, thแบฃo luแบญn, ฤ‘ร m phรกn, lร m chแปง tรฌnh huแป‘ng;\n- Giรกm sรกt hรชฬฃ thแป‘ng cรดng nghรชฬฃ thรดng tin vแปซa vร  nhแป;\n- Sแปญ dแปฅng ฤ‘ฦฐแปฃc cรดng nghรชฬฃ thรดng tin cฦก bแบฃn theo quy ฤ‘แป‹nh; แปฉng dแปฅng cรดng nghรชฬฃ thรดng tin trong mแป™t sแป‘ cรดng viรชฬฃc chuyรชn mรดn cแปงa ngร nh, nghรชฬ€;\n- Sแปญ dแปฅng ฤ‘ฦฐแปฃc ngoแบกi ngแปฏ cฦก bแบฃn, ฤ‘แบกt bแบญc 1/6 trong Khung nฤƒng lฦฐฬฃc ngoแบกi ngแปฏ cแปงa Viรชฬฃt Nam; แปฉng dแปฅng ฤ‘ฦฐแปฃc ngoแบกi ngแปฏ vร o mแป™t sแป‘ cรดng viรชฬฃc chuyรชn mรดn cแปงa ngร nh, nghรชฬ€.']</code> | <code>Ngฦฐแปi hแปc ngร nh quแบฃn trแป‹ cฦก sแปŸ dแปฏ liแป‡u trรฌnh ฤ‘แป™ trung cแบฅp sau khi tแป‘t nghiแป‡p phแบฃi cรณ kแปน nฤƒng ngoแบกi ngแปฏ nhฦฐ thแบฟ nร o?</code> | * Loss: 
[<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### json * Dataset: json * Size: 11,946 evaluation samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 31 tokens</li><li>mean: 210.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 24.98 tokens</li><li>max: 64 tokens</li></ul> | * Samples: | positive | anchor | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------
-----------| | <code>['Miแป…n nhiแป‡m, cรกch chแปฉc TrฦฐแปŸng ban kiแปƒm soรกt, Kiแปƒm soรกt viรชn\n1. TrฦฐแปŸng ban kiแปƒm soรกt, Kiแปƒm soรกt viรชn bแป‹ miแป…n nhiแป‡m trong cรกc trฦฐแปng hแปฃp sau ฤ‘รขy:\na) Khรดng cรฒn ฤ‘แปง tiรชu chuแบฉn vร  ฤ‘iแปu kiแป‡n theo quy ฤ‘แป‹nh tแบกi ฤiแปu 23 cแปงa ฤiแปu lแป‡ nร y;\nb) Cรณ ฤ‘ฦกn xin tแปซ chแปฉc vร  ฤ‘ฦฐแปฃc cฦก quan ฤ‘แบกi diแป‡n chแปง sแปŸ hแปฏu chแบฅp thuแบญn;\nc) ฤฦฐแปฃc cฦก quan ฤ‘แบกi diแป‡n chแปง sแปŸ hแปฏu hoแบทc cฦก quan cรณ thแบฉm quyแปn khรกc ฤ‘iแปu ฤ‘แป™ng, phรขn cรดng thแปฑc hiแป‡n nhiแป‡m vแปฅ khรกc;\nd) Trฦฐแปng hแปฃp khรกc theo quy ฤ‘แป‹nh cแปงa phรกp luแบญt.\n...']</code> | <code>Viแป‡c miแป…n nhiแป‡m TrฦฐแปŸng Ban kiแปƒm soรกt Tแป•ng cรดng ty Giแบฅy Viแป‡t Nam ฤ‘ฦฐแปฃc thแปฑc hiแป‡n khi nร o?</code> | | <code>['Cแบฅp giแบฅy phรฉp hoแบกt ฤ‘แป™ng tฦฐ vแบฅn chuyรชn ngร nh ฤ‘iแป‡n thuแป™c thแบฉm quyแปn cแบฅp cแปงa ฤ‘แป‹a phฦฐฦกng\n...\nc) Thร nh phแบงn hแป“ sฦก:\n- Vฤƒn bแบฃn ฤ‘แป nghแป‹ cแบฅp giแบฅy phรฉp hoแบกt ฤ‘แป™ng ฤ‘iแป‡n lแปฑc theo Mแบซu 01 quy ฤ‘แป‹nh tแบกi Phแปฅ lแปฅc ban hร nh kรจm theo Thรดng tฦฐ sแป‘ 21/2020/TT-BCT .\n- Bแบฃn sao Giแบฅy chแปฉng nhแบญn ฤ‘ฤƒng kรฝ doanh nghiแป‡p hoแบทc Quyแบฟt ฤ‘แป‹nh thร nh lแบญp, Giแบฅy chแปฉng nhแบญn thร nh lแบญp (ฤ‘แป‘i vแป›i cรกc tแป• chแปฉc khรดng cรณ Giแบฅy chแปฉng nhแบญn ฤ‘ฤƒng kรฝ doanh nghiแป‡p) cแปงa tแป• chแปฉc ฤ‘แป nghแป‹ cแบฅp giแบฅy phรฉp.\n- Danh sรกch trรญch ngang chuyรชn gia tฦฐ vแบฅn ฤ‘แบฃm nhiแป‡m chแปฉc danh chแปง nhiแป‡m, chแปฉc danh giรกm sรกt trฦฐแปŸng vร  cรกc chuyรชn gia tฦฐ vแบฅn khรกc theo Mแบซu 3a quy ฤ‘แป‹nh tแบกi Phแปฅ lแปฅc ban hร nh kรจm theo Thรดng tฦฐ sแป‘ 21/2020/TT-BCT ; bแบฃn sao bแบฑng tแป‘t nghiแป‡p ฤ‘แบกi hแปc trแปŸ lรชn, chแปฉng chแป‰ hร nh nghแป hoแบกt ฤ‘แป™ng xรขy dแปฑng, hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng xรกc ฤ‘แป‹nh thแปi hแบกn hoแบทc khรดng xรกc ฤ‘แป‹nh thแปi hแบกn cแปงa cรกc chuyรชn gia tฦฐ vแบฅn.\n- Tร i liแป‡u chแปฉng minh kinh nghiแป‡m cแปงa cรกc chuyรชn gia tฦฐ vแบฅn (Quyแบฟt ฤ‘แป‹nh phรขn cรดng nhiแป‡m vแปฅ, giแบฅy xรกc nhแบญn cแปงa cรกc ฤ‘ฦกn vแป‹ cรณ dแปฑ รกn mร  chuyรชn gia ฤ‘รฃ thแปฑc hiแป‡n hoแบทc cรกc tร i liแป‡u cรณ giรก trแป‹ tฦฐฦกng ฤ‘ฦฐฦกng).\n...']</code> | <code>Cแบงn chuแบฉn bแป‹ nhแปฏng giแบฅy tแป gรฌ ฤ‘แปƒ thแปฑc hiแป‡n thแปง tแปฅc cแบฅp giแบฅy phรฉp hoแบกt ฤ‘แป™ng tฦฐ vแบฅn thiแบฟt kแบฟ cรดng trรฌnh ฤ‘ฦฐแปng dรขy vร  trแบกm biแบฟn รกp cรณ cแบฅp ฤ‘iแป‡n รกp ฤ‘แบฟn 35kV?</code> | | <code>['ฤiแปu 41. Tแบกm hoรฃn gแปi nhแบญp ngลฉ vร  miแป…n gแปi nhแบญp ngลฉ\n1. 
Tแบกm hoรฃn gแปi nhแบญp ngลฉ ฤ‘แป‘i vแป›i nhแปฏng cรดng dรขn sau ฤ‘รขy:\na) Chฦฐa ฤ‘แปง sแปฉc khแปe phแปฅc vแปฅ tแบกi ngลฉ theo kแบฟt luแบญn cแปงa Hแป™i ฤ‘แป“ng khรกm sแปฉc khแปe;\nb) Lร  lao ฤ‘แป™ng duy nhแบฅt phแบฃi trแปฑc tiแบฟp nuรดi dฦฐแปกng thรขn nhรขn khรดng cรฒn khแบฃ nฤƒng lao ฤ‘แป™ng hoแบทc chฦฐa ฤ‘แบฟn tuแป•i lao ฤ‘แป™ng; trong gia ฤ‘รฌnh bแป‹ thiแป‡t hแบกi nแบทng vแป ngฦฐแปi vร  tร i sแบฃn do tai nแบกn, thiรชn tai, dแป‹ch bแป‡nh nguy hiแปƒm gรขy ra ฤ‘ฦฐแปฃc แปฆy ban nhรขn dรขn cแบฅp xรฃ xรกc nhแบญn;\nc) Mแป™t con cแปงa bแป‡nh binh, ngฦฐแปi nhiแป…m chแบฅt ฤ‘แป™c da cam suy giแบฃm khแบฃ nฤƒng lao ฤ‘แป™ng tแปซ 61% ฤ‘แบฟn 80%;\nd) Cรณ anh, chแป‹ hoแบทc em ruแป™t lร  hแบก sฤฉ quan, binh sฤฉ ฤ‘ang phแปฅc vแปฅ tแบกi ngลฉ; hแบก sฤฉ quan, chiแบฟn sฤฉ thแปฑc hiแป‡n nghฤฉa vแปฅ tham gia Cรดng an nhรขn dรขn;\nฤ‘) Ngฦฐแปi thuแป™c diแป‡n di dรขn, giรฃn dรขn trong 03 nฤƒm ฤ‘แบงu ฤ‘แบฟn cรกc xรฃ ฤ‘แบทc biแป‡t khรณ khฤƒn theo dแปฑ รกn phรกt triแปƒn kinh tแบฟ - xรฃ hแป™i cแปงa Nhร  nฦฐแป›c do แปฆy ban nhรขn dรขn cแบฅp tแป‰nh trแปŸ lรชn quyแบฟt ฤ‘แป‹nh;\ne) Cรกn bแป™, cรดng chแปฉc, viรชn chแปฉc, thanh niรชn xung phong ฤ‘ฦฐแปฃc ฤ‘iแปu ฤ‘แป™ng ฤ‘แบฟn cรดng tรกc, lร m viแป‡c แปŸ vรนng cรณ ฤ‘iแปu kiแป‡n kinh tแบฟ - xรฃ hแป™i ฤ‘แบทc biแป‡t khรณ khฤƒn theo quy ฤ‘แป‹nh cแปงa phรกp luแบญt;\ng) ฤang hแปc tแบกi cฦก sแปŸ giรกo dแปฅc phแป• thรดng; ฤ‘ang ฤ‘ฦฐแปฃc ฤ‘ร o tแบกo trรฌnh ฤ‘แป™ ฤ‘แบกi hแปc hแป‡ chรญnh quy thuแป™c cฦก sแปŸ giรกo dแปฅc ฤ‘แบกi hแปc, trรฌnh ฤ‘แป™ cao ฤ‘แบณng hแป‡ chรญnh quy thuแป™c cฦก sแปŸ giรกo dแปฅc nghแป nghiแป‡p trong thแปi gian mแป™t khรณa ฤ‘ร o tแบกo cแปงa mแป™t trรฌnh ฤ‘แป™ ฤ‘ร o tแบกo.\nh) Dรขn quรขn thฦฐแปng trแปฑc.\n2. Miแป…n gแปi nhแบญp ngลฉ ฤ‘แป‘i vแป›i nhแปฏng cรดng dรขn sau ฤ‘รขy:\na) Con cแปงa liแป‡t sฤฉ, con cแปงa thฦฐฦกng binh hแบกng mแป™t;\nb) Mแป™t anh hoแบทc mแป™t em trai cแปงa liแป‡t sฤฉ;\nc) Mแป™t con cแปงa thฦฐฦกng binh hแบกng hai; mแป™t con cแปงa bแป‡nh binh suy giแบฃm khแบฃ nฤƒng lao ฤ‘แป™ng tแปซ 81% trแปŸ lรชn; mแป™t con cแปงa ngฦฐแปi nhiแป…m chแบฅt ฤ‘แป™c da cam suy giแบฃm khแบฃ nฤƒng lao ฤ‘แป™ng tแปซ 81 % trแปŸ lรชn;\nd) Ngฦฐแปi lร m cรดng tรกc cฦก yแบฟu khรดng phแบฃi lร  quรขn nhรขn, Cรดng an nhรขn dรขn;\nฤ‘) Cรกn bแป™, cรดng chแปฉc, viรชn chแปฉc, thanh niรชn xung phong ฤ‘ฦฐแปฃc ฤ‘iแปu ฤ‘แป™ng ฤ‘แบฟn cรดng tรกc, lร m viแป‡c แปŸ vรนng cรณ ฤ‘iแปu kiแป‡n kinh tแบฟ - xรฃ hแป™i ฤ‘แบทc biแป‡t khรณ khฤƒn theo quy ฤ‘แป‹nh cแปงa phรกp luแบญt tแปซ 24 thรกng trแปŸ lรชn.\n3. Cรดng dรขn thuแป™c diแป‡n tแบกm hoรฃn gแปi nhแบญp ngลฉ quy ฤ‘แป‹nh tแบกi khoแบฃn 1 ฤiแปu nร y, nแบฟu khรดng cรฒn lรฝ do tแบกm hoรฃn thรฌ ฤ‘ฦฐแปฃc gแปi nhแบญp ngลฉ.\nCรดng dรขn thuแป™c diแป‡n ฤ‘ฦฐแปฃc tแบกm hoรฃn gแปi nhแบญp ngลฉ hoแบทc ฤ‘ฦฐแปฃc miแป…n gแปi nhแบญp ngลฉ quy ฤ‘แป‹nh tแบกi khoแบฃn 1 vร  khoแบฃn 2 ฤiแปu nร y, nแบฟu tรฌnh nguyแป‡n thรฌ ฤ‘ฦฐแปฃc xem xรฉt tuyแปƒn chแปn vร  gแปi nhแบญp ngลฉ.\n4. 
Danh sรกch cรดng dรขn thuแป™c diแป‡n ฤ‘ฦฐแปฃc tแบกm hoรฃn gแปi nhแบญp ngลฉ, ฤ‘ฦฐแปฃc miแป…n gแปi nhแบญp ngลฉ phแบฃi ฤ‘ฦฐแปฃc niรชm yแบฟt cรดng khai tแบกi trแปฅ sแปŸ แปฆy ban nhรขn dรขn cแบฅp xรฃ, cฦก quan, tแป• chแปฉc trong thแปi hแบกn 20 ngร y.']</code> | <code>Liรชn quan ฤ‘แบฟn tแบกm hoรฃn nghฤฉa vแปฅ quรขn sแปฑ ฤ‘ฦฐแปฃc phรกp luแบญt quy ฤ‘แป‹nh nhฦฐ thแบฟ nร o?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - 
`resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:------:|:----:|:-------------:|:------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.0952 | 10 | 2.1759 | - | - | - | - | - | - | | 0.1905 | 20 | 1.4526 | - | - | - | - | - | - | | 0.2857 | 30 | 1.4855 | - | - | - | - | - | - | | 0.3810 | 40 | 1.5256 | - | - | - | - | - | - | | 0.4762 | 50 | 1.6203 | - | - | - | - | - | - | | 0.5714 | 60 | 1.6302 | - | - | - | - | - | - | | 0.6667 | 70 | 1.8354 | - | - | - | - | - | - | | 0.7619 | 80 | 1.4928 | - | - | - | - | - | - | | 0.8571 | 90 | 1.6114 | - | - | - | - | - | - | | 0.9524 | 100 | 1.5655 | - | - | - | - | - | - | | 1.0 | 105 | - | 1.4307 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 | | 1.0476 | 110 | 1.4171 | - | - | - | - | - | - | | 1.1429 | 120 | 1.572 | - | - | - | - | - | - | | 1.2381 | 130 | 1.3337 | - | - | - | - | - | - | | 1.3333 | 140 | 1.2587 | - | - | - | - | - | - | | 1.4286 | 150 | 1.3038 | - | - | - | - | - | - | | 1.5238 | 160 | 1.5032 | - | - | - | - | - | - | | 1.6190 | 170 | 1.1601 | - | - | - | - | - | - | | 1.7143 | 180 | 1.2226 | - | - | - | - | - | - | | 1.8095 | 190 | 1.1545 | - | - | - | - | - | - | | 1.9048 | 200 | 1.2034 | - | - | - | - | - | - | | 2.0 | 210 | 1.0695 | 1.1034 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 | | 2.0952 | 220 | 1.0259 | - | - | - | - | - | - | | 2.1905 | 230 | 0.8647 | - | - | - | - | - | - | | 2.2857 | 240 | 0.901 | - | - | - | - | - | - | | 2.3810 | 250 | 0.9261 | - | - | - | - | - | - | | 2.4762 | 260 | 0.8719 | - | - | - | - | - | - | | 2.5714 | 270 | 0.8008 | - | - | - | - | - | - | | 2.6667 | 280 | 0.7091 | - | - | - | - | - | - | | 2.7619 | 290 | 0.6592 | - | - | - | - | - | - | | 2.8571 | 300 | 0.69 | - | - | - | - | - | - | | 2.9524 | 310 | 0.739 | - | - | - | - | - | - | | 3.0 | 315 | - | 0.8128 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 | | 3.0476 | 320 | 0.6944 | - | - | - | - | - | - | | 3.1429 | 330 | 0.6414 | - | - | - | - | - | - | | 3.2381 | 340 | 0.5874 | - | - | - | - | - | - | | 3.3333 | 350 | 0.5979 | - | - | - | - | - | - | | 3.4286 | 360 | 0.5409 | - | - | - | - | - | - | | 3.5238 | 370 | 0.576 | - | - | - | - | - | - | | 3.6190 | 380 | 0.5371 | - | - | - | - | - | - | | 3.7143 | 390 | 0.5107 | - | - | - | - | - | - | | 3.8095 | 400 | 0.4904 | - | - | - | - | - | - | | 3.9048 | 410 | 0.5444 | - | - | - | - | - | - | | 
4.0 | 420 | 0.5389 | - | - | - | - | - | - | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.3.1+cu121 - Accelerate: 1.0.1 - Datasets: 2.19.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
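Because the model was trained with MatryoshkaLoss over dimensions 768/512/256/128/64, embeddings can be truncated to trade accuracy for storage and speed (the evaluation above shows cosine_map@100 falling from 0.3808 at 768 dims to 0.3568 at 256 dims). A brief sketch, assuming a sentence-transformers release recent enough to support the `truncate_dim` argument (the card lists 3.1.1, which does):

```python
from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to the first 256 Matryoshka dimensions.
model = SentenceTransformer(
    "minhdang/bge-base-financial-matryoshka_pass_2", truncate_dim=256
)
embeddings = model.encode(["a sample query", "a sample document"])
print(embeddings.shape)  # (2, 256)
```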
abhishkgoel/gita-text-generation-gpt2
abhishkgoel
2024-11-01T08:06:12Z
142
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T08:05:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
csb05/whisper-small-RESEARCH
csb05
2024-11-01T07:50:27Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "tl", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-05T07:47:38Z
--- library_name: transformers language: - tl license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper small tl - CSB05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper small tl - CSB05 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8685 - Wer: 24.4015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.0158 | 8.9286 | 1000 | 0.6826 | 24.1285 | | 0.0019 | 17.8571 | 2000 | 0.7977 | 24.7795 | | 0.0003 | 26.7857 | 3000 | 0.8517 | 24.4645 | | 0.0002 | 35.7143 | 4000 | 0.8685 | 24.4015 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
Givemeaname123/idontlikethissubnet
Givemeaname123
2024-11-01T07:48:24Z
35
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T07:39:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Natthaphon/thaicapgen-clip-phayathai
Natthaphon
2024-11-01T07:41:52Z
16
0
null
[ "safetensors", "clip-encoder-decoder", "image-to-text", "image-captioning", "custom_code", "th", "region:us" ]
image-to-text
2024-11-01T04:22:32Z
--- tags: - image-to-text - image-captioning language: - th --- # Thai Image Captioning Encoder-decoder style image captioning model using [CLIP encoder](https://huggingface.co/openai/clip-vit-base-patch32) and [PhayathaiBert](https://huggingface.co/clicknext/phayathaibert). Trained on Thai language MSCOCO and IPU24 dataset. # Usage Use `AutoModel` to load it. Requires `trust_remote_code=True`. ```python from transformers import AutoModel, AutoImageProcessor, AutoTokenizer device = 'cuda' gen_kwargs = {"max_length": 120, "num_beams": 4} model_path = 'Natthaphon/thaicapgen-clip-gpt2' feature_extractor = AutoImageProcessor.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device) pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) ``` # Acknowledgement This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]
mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF
mradermacher
2024-11-01T07:40:09Z
46
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct", "base_model:quantized:DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-11-01T06:52:34Z
--- base_model: DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 4.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 7.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 8.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 8.6 | | | 
[GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 10.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 15.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
kayfour/kayfour-Qwen2.5-7B-Instruct-testv1
kayfour
2024-11-01T07:39:37Z
2,099
0
null
[ "safetensors", "qwen2", "arxiv:2407.10671", "license:apache-2.0", "region:us" ]
null
2024-11-01T07:12:32Z
--- license: apache-2.0 --- Same as original model Qwen2.5-7B-Instruct Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains. Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g, tables), and generating structured outputs especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. Long-context Support up to 128K tokens and can generate up to 8K tokens. Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. This repo contains the instruction-tuned 7B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-training Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias Number of Parameters: 7.61B Number of Paramaters (Non-Embedding): 6.53B Number of Layers: 28 Number of Attention Heads (GQA): 28 for Q and 4 for KV Context Length: Full 131,072 tokens and generation 8192 tokens Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, and Documentation. Requirements The code of Qwen2.5 has been in the latest Hugging face transformers and we advise you to use the latest version of transformers. With transformers<4.37.0, you will encounter the following error: KeyError: 'qwen2' Quickstart Here provides a code snippet with apply_chat_template to show you how to load the tokenizer and model and how to generate contents. from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-7B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] Processing Long Texts The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to config.json to enable YaRN: { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familar with vLLM. 
Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. Evaluation & Performance Detailed evaluation results are reported in this ๐Ÿ“‘ blog. For requirements on GPU memory and the respective throughput, see results here. Citation If you find our work helpful, feel free to give us a cite. @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} }
mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF
mradermacher
2024-11-01T07:36:11Z
33
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:theprint/ReWiz-Nemo-12B-Instruct", "base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-01T05:43:18Z
--- base_model: theprint/ReWiz-Nemo-12B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_4_4.gguf) 
| i1-Q4_0_4_4 | 7.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/ReWiz-Nemo-12B-Instruct-GGUF
mradermacher
2024-11-01T07:36:11Z
14
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:theprint/ReWiz-Nemo-12B-Instruct", "base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-31T04:31:39Z
--- base_model: theprint/ReWiz-Nemo-12B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Ariffiq99/Randomized_Roberta_Stacked_model_40
Ariffiq99
2024-11-01T07:35:38Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2024-11-01T06:29:27Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: Randomized_Roberta_Stacked_model_40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Randomized_Roberta_Stacked_model_40 This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8708 - F1: 0.7063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8494 | 1.0 | 631 | 0.8404 | 0.6905 | | 0.7618 | 2.0 | 1262 | 0.8238 | 0.7011 | | 0.6957 | 3.0 | 1893 | 0.8400 | 0.7040 | | 0.6037 | 4.0 | 2524 | 0.8514 | 0.7080 | | 0.5634 | 5.0 | 3155 | 0.8708 | 0.7063 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
Merdeka-LLM/merdeka-llm-hr-3b-128k-instruct
Merdeka-LLM
2024-11-01T07:32:45Z
18
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-29T12:04:21Z
--- base_model: unsloth/Llama-3.2-3B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Merdeka-LLM - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
saphvis/LLaVA_MORE-llama_3_1-8B-finetuning-FP16-mmproj-GGUF
saphvis
2024-11-01T07:20:36Z
27
0
transformers
[ "transformers", "gguf", "image-text-to-text", "dataset:liuhaotian/LLaVA-Instruct-150K", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-01T07:14:31Z
--- library_name: transformers license: apache-2.0 datasets: - liuhaotian/LLaVA-Instruct-150K pipeline_tag: image-text-to-text --- FP16 GGUF of the LLaVa_MORE 3.1 8B finetuning mmproj Original Model Card: # Model Card: LLaVA_MORE-llama_3_1-8B-finetuning ```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating the use of LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two for the first model with 8B parameters. In this model space, you will find the stage two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B. For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository. ## Inference You can try our LLaVA-MORE in the Image-To-Text task by cloning our repository and running the following script. ```bash python -u llava/eval/run_llava.py ``` ## Citation If you make use of our work, please cite our repo: ```bibtex @misc{cocchi2024llavamore, title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}}, author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita}, url={https://github.com/aimagelab/LLaVA-MORE}, year={2024} } ```
Nekodigi/rose
Nekodigi
2024-11-01T07:09:53Z
29
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-30T01:03:46Z
--- base_model: CompVis/stable-diffusion-v1-4 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: a photo of rose --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - Nekodigi/rose This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rose using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Antihero29/MeganLoraFlux
Antihero29
2024-11-01T07:08:16Z
6
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-11-01T07:04:56Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/80719079-ca0a-4388-b72f-2aa03924a365.png - text: '-' output: url: images/5591425c-d6c2-4acc-b4fb-c4675b27a5b8.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: creativeml-openrail-m --- # Megan Loras <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Antihero29/MeganLoraFlux/tree/main) them in the Files & versions tab.
quantilence/donut-demo
quantilence
2024-11-01T06:56:18Z
47
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-01T04:53:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JSWOOK/finetuning_model
JSWOOK
2024-11-01T06:50:01Z
77
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-31T08:01:09Z
--- library_name: transformers license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer model-index: - name: finetuning_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning_model This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 750 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf
RichardErkhov
2024-11-01T06:41:44Z
6
0
null
[ "gguf", "region:us" ]
null
2024-11-01T03:14:59Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-2-koen-story-13b - GGUF - Model creator: https://huggingface.co/squarelike/ - Original model: https://huggingface.co/squarelike/llama-2-koen-story-13b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-2-koen-story-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q2_K.gguf) | Q2_K | 4.6GB | | [llama-2-koen-story-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_S.gguf) | Q3_K_S | 5.36GB | | [llama-2-koen-story-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K.gguf) | Q3_K | 5.99GB | | [llama-2-koen-story-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_M.gguf) | Q3_K_M | 5.99GB | | [llama-2-koen-story-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_L.gguf) | Q3_K_L | 6.54GB | | [llama-2-koen-story-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.IQ4_XS.gguf) | IQ4_XS | 6.63GB | | [llama-2-koen-story-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_0.gguf) | Q4_0 | 6.95GB | | [llama-2-koen-story-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.IQ4_NL.gguf) | IQ4_NL | 6.49GB | | [llama-2-koen-story-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K_S.gguf) | Q4_K_S | 7.01GB | | [llama-2-koen-story-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K.gguf) | Q4_K | 2.77GB | | [llama-2-koen-story-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K_M.gguf) | Q4_K_M | 4.13GB | | [llama-2-koen-story-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_1.gguf) | Q4_1 | 7.71GB | | [llama-2-koen-story-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_0.gguf) | Q5_0 | 5.79GB | | [llama-2-koen-story-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K_S.gguf) | Q5_K_S | 3.59GB | | [llama-2-koen-story-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K.gguf) | Q5_K | 2.03GB | | [llama-2-koen-story-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K_M.gguf) | Q5_K_M | 5.49GB | | [llama-2-koen-story-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_1.gguf) | Q5_1 | 9.21GB | | 
[llama-2-koen-story-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q6_K.gguf) | Q6_K | 10.06GB | | [llama-2-koen-story-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q8_0.gguf) | Q8_0 | 13.03GB | Original model description: --- language: - ko tags: - pytorch - causal-lm license: llama2 pipeline_tag: text-generation --- # llama-2-ko-story-7b llama-2-koen-story-13b๋Š” [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ๊ธ€ ์†Œ์„ค raw ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต์‹œํ‚จ ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ## ํ•™์Šต ๋ฐ์ดํ„ฐ llama-2-koen-story-13b๋Š” ์•ฝ 167MB์˜ ํ•œ๊ธ€ ์†Œ์„ค ๋ง๋ญ‰์น˜๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฃผ์š” ๋ฐ์ดํ„ฐ์…‹์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. | Source |Size (MB) | Link | |----------------------------------|---------|------------------------------------------| | ํ•œ๊ธ€ ์†Œ์„ค ๋ง๋ญ‰์น˜ | 115.0 | | | ๊ณต์œ ๋งˆ๋‹น ํ•œ๊ตญ ๊ณ ์ „ ๋ฌธํ•™ ๋ง๋ญ‰์น˜ | 53.0 | https://gongu.copyright.or.kr/ | ## ํ•™์Šต llama-2-koen-story-13b๋Š” [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)์—์„œ qlora๋กœ ์ถ”๊ฐ€ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. - lora_alpha: 16 - lora_dropout: 0.05 - lora_r: 32 - target_modules: q_proj, v_proj - epoch: 3 - learning_rate: 3e-4
featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF
featherless-ai-quants
2024-11-01T06:39:48Z
17
0
null
[ "gguf", "text-generation", "base_model:failspy/Llama-3-8B-Instruct-MopeyMule", "base_model:quantized:failspy/Llama-3-8B-Instruct-MopeyMule", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-01T06:31:14Z
--- base_model: failspy/Llama-3-8B-Instruct-MopeyMule pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # failspy/Llama-3-8B-Instruct-MopeyMule GGUF Quantizations ๐Ÿš€ ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations ๐Ÿ“Š | Quantization Type | File | Size | |-------------------|------|------| | Q8_0 | [failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf) | 8145.11 MB | | Q4_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf) | 4475.28 MB | | Q2_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf) | 3031.86 MB | | Q6_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf) | 6290.44 MB | | Q3_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf) | 3494.74 MB | | Q3_K_L | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf) | 4121.74 MB | | Q4_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf) | 4692.78 MB | | Q5_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf) | 5339.90 MB | | Q5_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf) | 5467.40 MB | | IQ4_XS | [failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf) | 4276.62 MB | --- ## โšก Powered by [Featherless AI](https://featherless.ai) ### Key Features - ๐Ÿ”ฅ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - ๐Ÿ› ๏ธ **Zero Infrastructure** - No server setup or maintenance required - ๐Ÿ“š **Vast Compatibility** - Support for 2400+ models and counting - ๐Ÿ’Ž **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
annutest/somethinglikedonut
annutest
2024-11-01T06:35:48Z
8
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-29T09:45:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
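The card's "How to Get Started" section above is left as a placeholder. Given this record's `vision-encoder-decoder` / `image-text-to-text` tags (and a repo name suggesting a Donut-style model), a hedged loading sketch might look like the following; it assumes the repo ships a processor config with a tokenizer, and the input file name is illustrative.

```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

model_id = "annutest/somethinglikedonut"
processor = AutoProcessor.from_pretrained(model_id)  # assumes a Donut-style processor is saved in the repo
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # illustrative input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```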
Srilalitha/gpt2-tv-caption
Srilalitha
2024-11-01T06:34:39Z
174
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-30T10:39:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
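As with the previous card, the getting-started section above is a placeholder. Since this record is a `gpt2` text-generation checkpoint, a minimal sketch would be the following; the prompt and generation length are illustrative, not part of the original card.

```python
from transformers import pipeline

# Load the checkpoint from this record and generate a short continuation.
generator = pipeline("text-generation", model="Srilalitha/gpt2-tv-caption")
print(generator("A cozy mystery series about", max_new_tokens=30)[0]["generated_text"])
```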
BlueOceanAcademy/Llama-3.1-8B-bnb-4bit-python-FT
BlueOceanAcademy
2024-11-01T06:34:13Z
55
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-05T23:42:05Z
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# Uploaded model

- **Developed by:** BlueOceanAcademy
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
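The card itself does not include a loading snippet. A minimal sketch for running this checkpoint with `transformers` might look like the following; since the base model is a bnb-4bit quant, the saved quantization config should apply on load, and the prompt and generation settings here are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BlueOceanAcademy/Llama-3.1-8B-bnb-4bit-python-FT"

# The checkpoint was fine-tuned from a bnb-4bit base; device_map="auto"
# places it on GPU and lets the stored quantization config take effect.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```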
Styxxxx/llama2_7b_lora-wnli
Styxxxx
2024-11-01T06:31:31Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:31:21Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
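The "How to Get Started" section of this card is left as a placeholder. A minimal sketch for attaching this LoRA adapter to its base model with PEFT is shown below; the same pattern applies to the other Styxxxx/llama2_7b_lora-* adapters in this dump, and the prompt is an illustrative WNLI-style example rather than anything from the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"        # from the card's base_model field
adapter_id = "Styxxxx/llama2_7b_lora-wnli"  # this record's model id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Premise: The cat sat on the mat. Hypothesis: The mat is empty. Entailment?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```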
Styxxxx/llama2_7b_lora-wmt16_translate_roen
Styxxxx
2024-11-01T06:29:46Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:29:39Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Ariffiq99/Randomized_Roberta_Stacked_model_20
Ariffiq99
2024-11-01T06:29:26Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2024-11-01T05:51:44Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_20
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Randomized_Roberta_Stacked_model_20

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9094
- F1: 0.6756

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 316  | 1.0130          | 0.6156 |
| 1.1549        | 2.0   | 632  | 0.9246          | 0.6597 |
| 1.1549        | 3.0   | 948  | 0.9153          | 0.6697 |
| 0.8702        | 4.0   | 1264 | 0.9125          | 0.6720 |
| 0.7606        | 5.0   | 1580 | 0.9094          | 0.6756 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
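The auto-generated card stops at metrics. Since this record's pipeline tag is `multiple-choice`, a minimal sketch for scoring answer options with the checkpoint might look like the following; the question and choices are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Ariffiq99/Randomized_Roberta_Stacked_model_20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Where would you most likely find a penguin?"
choices = ["In the desert", "In Antarctica", "In a rainforest"]

# Multiple-choice models score each (question, choice) pair; batch them together.
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```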
Styxxxx/llama2_7b_lora-wmt16_translate_fien
Styxxxx
2024-11-01T06:29:13Z
12
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:29:03Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Styxxxx/llama2_7b_lora-wmt16_translate_deen
Styxxxx
2024-11-01T06:28:37Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:28:29Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Styxxxx/llama2_7b_lora-sst2
Styxxxx
2024-11-01T06:21:24Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:21:17Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M
sjkwon
2024-11-01T06:16:01Z
47
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-11-01T06:13:46Z
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sjkwon//tmp/tmpksu8y3fu/sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("sjkwon//tmp/tmpksu8y3fu/sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")
model = AutoModelForCausalLMWithValueHead.from_pretrained("sjkwon//tmp/tmpksu8y3fu/sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
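One caveat: this record's tags mark the checkpoint as an `m2m_100` (NLLB) sequence-to-sequence model, so the causal-LM snippets above may not match the architecture. A hedged alternative using TRL's seq2seq value-head class is sketched below; it uses the record's model id rather than the temporary path embedded in the card, and the input sentence is illustrative.

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

model_name = "sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)

# Translate/generate with the seq2seq model; generation kwargs are illustrative.
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```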
Styxxxx/llama2_7b_lora-piqa
Styxxxx
2024-11-01T06:15:43Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T06:15:36Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Styxxxx/llama2_7b_lora-glue_qqp
Styxxxx
2024-11-01T06:08:18Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T05:30:16Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Styxxxx/llama2_7b_lora-dart
Styxxxx
2024-11-01T06:04:56Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T05:22:16Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Styxxxx/llama2_7b_lora-cola
Styxxxx
2024-11-01T06:01:50Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T05:22:12Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
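This card's quick-start section is likewise empty. A minimal sketch under stated assumptions: the base model comes from the YAML header and the repo id from this record; the two-step PeftModel route shown here is equivalent to the AutoPeft shortcut above but keeps a handle on the base model. The CoLA-style acceptability prompt is a guess from the repo name, not documented behavior.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"        # base model from the YAML header
adapter_id = "Styxxxx/llama2_7b_lora-cola"  # this record's repository id

# Load the frozen base model first, then wrap it with the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical CoLA-style probe; the card does not specify a prompt format.
prompt = "Sentence: The book was written by her. Acceptable or unacceptable?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```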
Styxxxx/llama2_7b_lora-cb
Styxxxx
2024-11-01T06:00:53Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T05:22:10Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1 - PEFT 0.7.2.dev0
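Same empty quick-start section here. One detail worth a sketch: for deployment without a runtime peft dependency, a LoRA adapter like this one can usually be folded into the base weights with merge_and_unload(). The output path and dtype are illustrative.

```python
import torch
from peft import AutoPeftModelForCausalLM

# Load base + adapter in one step, then fold the LoRA deltas into the base
# weights; the result is a plain Llama-2-7b checkpoint with no peft dependency.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Styxxxx/llama2_7b_lora-cb", torch_dtype=torch.float16
)
merged = model.merge_and_unload()
merged.save_pretrained("./llama2-7b-cb-merged")  # illustrative output path
```

The merged checkpoint can then be loaded with a plain AutoModelForCausalLM call.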
Styxxxx/llama2_7b_lora-anli_r2
Styxxxx
2024-11-01T05:57:06Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-11-01T05:17:23Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
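Since this author publishes a whole family of single-task adapters over the same base (see the -cola and -cb records above), one base model can host several of them and switch between tasks at runtime. A sketch, assuming all the adapters really do share meta-llama/Llama-2-7b-hf as their base:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

# The first adapter creates the PeftModel; later ones are loaded alongside it.
model = PeftModel.from_pretrained(
    base, "Styxxxx/llama2_7b_lora-anli_r2", adapter_name="anli_r2"
)
model.load_adapter("Styxxxx/llama2_7b_lora-cola", adapter_name="cola")

model.set_adapter("anli_r2")  # route forward passes through the ANLI adapter
model.set_adapter("cola")     # ...or through the CoLA adapter instead
```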
yaswanthraj/gita-text-generation-gpt2
yaswanthraj
2024-11-01T05:55:19Z
146
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T05:54:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
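The quick-start section of this card is empty too. The record's pipeline tag is text-generation, so here is a minimal sketch with the transformers pipeline API; the Gita-flavored prompt and the sampling settings are illustrative, since the card documents no input format.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="yaswanthraj/gita-text-generation-gpt2")

result = generator(
    "You have a right to perform your duty,",  # illustrative prompt
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```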
mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF
mradermacher
2024-11-01T05:47:03Z
168
0
transformers
[ "transformers", "gguf", "en", "dataset:cognitivecomputations/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:cognitivecomputations/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:LDJnr/Capybara", "base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "base_model:quantized:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-01T04:18:02Z
--- base_model: cognitivecomputations/dolphin-2.7-mixtral-8x7b datasets: - cognitivecomputations/dolphin - jondurbin/airoboros-2.2.1 - cognitivecomputations/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - LDJnr/Capybara language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
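The Usage section above defers to TheBloke's READMEs. As a concrete sketch, one of the files from the quant table can be fetched and run with llama-cpp-python, the Python bindings for llama.cpp; the i1-Q4_K_M file is the one the table marks "fast, recommended", and n_ctx / n_gpu_layers are illustrative knobs, not values from the card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF",
    filename="dolphin-2.7-mixtral-8x7b.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers when built with GPU support, else use 0
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain imatrix quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```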
Givemeaname123/nomoney_79
Givemeaname123
2024-11-01T05:45:43Z
35
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T05:42:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
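Another auto-generated card with an empty quick-start section. A minimal sketch built only from the record's metadata (a llama-architecture text-generation checkpoint); everything beyond the repo id is an illustrative choice.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Givemeaname123/nomoney_79"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```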
jonathanjordan21/test-qwen-summary
jonathanjordan21
2024-11-01T05:30:41Z
103
0
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T05:08:28Z
--- base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl --- # Uploaded model - **Developed by:** jonathanjordan21 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
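The Unsloth card records provenance but no inference snippet. A hedged sketch: the repo is a Qwen2.5-0.5B instruct fine-tune, so the usual chat-template flow should apply; the summarization instruction is assumed from the repo name, not stated in the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jonathanjordan21/test-qwen-summary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize: The meeting moved the launch from May to July and cut the budget by 10%."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```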
shevek/segformer-b0-finetuned-test
shevek
2024-11-01T05:27:37Z
202
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-10-25T02:55:10Z
--- library_name: transformers license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-test This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unspecified dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2053 - eval_mean_iou: 0.5448 - eval_mean_accuracy: 0.6296 - eval_overall_accuracy: 0.9130 - eval_accuracy_Structure (dimensional): nan - eval_accuracy_Impervious (planiform): 0.9578 - eval_accuracy_Fences: 0.3758 - eval_accuracy_Water Storage/Tank: nan - eval_accuracy_Pool < 100 sqft: 0.0 - eval_accuracy_Pool > 100 sqft: 0.8208 - eval_accuracy_Irrigated Planiform: 0.8708 - eval_accuracy_Irrigated Dimensional Low: 0.6817 - eval_accuracy_Irrigated Dimensional High: 0.9472 - eval_accuracy_Irrigated Bare: 0.4827 - eval_accuracy_Irrigable Planiform: 0.6668 - eval_accuracy_Irrigable Dimensional Low: 0.6013 - eval_accuracy_Irrigable Dimensional High: 0.7902 - eval_accuracy_Irrigable Bare: 0.5657 - eval_accuracy_Native Planiform: 0.9093 - eval_accuracy_Native Dimensional Low: 0.0 - eval_accuracy_Native Dimensional High: 0.0961 - eval_accuracy_Native Bare: 0.9332 - eval_accuracy_UDL: nan - eval_accuracy_Open Water: 0.6613 - eval_accuracy_Artificial Turf: 0.9720 - eval_iou_Structure (dimensional): 0.0 - eval_iou_Impervious (planiform): 0.8964 - eval_iou_Fences: 0.3104 - eval_iou_Water Storage/Tank: nan - eval_iou_Pool < 100 sqft: 0.0 - eval_iou_Pool > 100 sqft: 0.8199 - eval_iou_Irrigated Planiform: 0.7563 - eval_iou_Irrigated Dimensional Low: 0.5480 - eval_iou_Irrigated Dimensional High: 0.8920 - eval_iou_Irrigated Bare: 0.4053 - eval_iou_Irrigable Planiform: 0.6007 - eval_iou_Irrigable Dimensional Low: 0.5083 - eval_iou_Irrigable Dimensional High: 0.7595 - eval_iou_Irrigable Bare: 0.5106 - eval_iou_Native Planiform: 0.8678 - eval_iou_Native Dimensional Low: 0.0 - eval_iou_Native Dimensional High: 0.0961 - eval_iou_Native Bare: 0.8293 - eval_iou_UDL: nan - eval_iou_Open Water: 0.5929 - eval_iou_Artificial Turf: 0.9584 - eval_runtime: 6.2852 - eval_samples_per_second: 15.91 - eval_steps_per_second: 1.114 - epoch: 10.8 - step: 270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
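The card lists evaluation metrics and training hyperparameters but no inference code. A minimal sketch for running the segmenter on a single image; the input file name is a placeholder, and the label set is whatever the checkpoint's config carries (the classes in the metrics above suggest land-cover tiles).

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "shevek/segformer-b0-finetuned-test"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id).eval()

image = Image.open("tile.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution; upsample before taking the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = upsampled.argmax(dim=1)[0]
print([model.config.id2label[int(i)] for i in label_map.unique()])
```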
zaanind/gpt2_finetune_alpaca
zaanind
2024-11-01T05:23:17Z
178
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-18T04:05:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
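One more empty quick-start section. The repo name suggests GPT-2 fine-tuned on Alpaca-style instruction data, so the sketch below assumes the standard Stanford Alpaca prompt layout; that layout is an assumption, since the card itself documents nothing.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="zaanind/gpt2_finetune_alpaca")

# Assumed Alpaca-style template; not confirmed by the card.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
out = generator(prompt, max_new_tokens=80, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```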
spow12/ChatWaifu_2.0_vision_base
spow12
2024-11-01T05:19:21Z
20
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "nsfw", "Visual novel", "roleplay", "conversational", "en", "ja", "dataset:Lin-Chen/ShareGPT4V", "dataset:roleplay4fun/aesir-v1.1", "dataset:kalomaze/Opus_Instruct_3k", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted", "dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted", "dataset:Aratako_Rosebleu_1on1_Dialogues_RP", "dataset:SkunkworksAI/reasoning-0.01", "dataset:anthracite-org/stheno-filtered-v1.1", "dataset:Aratako_Synthetic_JP_EN_Coding_Dataset_801k", "dataset:Aratako/Magpie-Tanuki-8B-97k", "dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed", "dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT", "base_model:mistral-community/pixtral-12b", "base_model:finetune:mistral-community/pixtral-12b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-01T04:48:54Z
--- language: - en - ja license: cc-by-nc-4.0 library_name: transformers tags: - nsfw - Visual novel - roleplay base_model: - mistral-community/pixtral-12b datasets: - Lin-Chen/ShareGPT4V - roleplay4fun/aesir-v1.1 - kalomaze/Opus_Instruct_3k - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted - Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted - Aratako_Rosebleu_1on1_Dialogues_RP - SkunkworksAI/reasoning-0.01 - anthracite-org/stheno-filtered-v1.1 - Aratako_Synthetic_JP_EN_Coding_Dataset_801k - Aratako/Magpie-Tanuki-8B-97k - SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed - PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT pipeline_tag: image-text-to-text --- # Model Card for Model ID ![image](https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview/resolve/main/cover_2.png) Let's allow our waifu to see something, as this will make our conversation more fun! This model hasn't been fully tested, so your feedback will be invaluable in improving it. # WaifuModel Collections - [TTS](https://huggingface.co/spow12/visual_novel_tts) - [Chat](https://huggingface.co/spow12/ChatWaifu_12B_v2.0) - [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor) # Update - 2024.11.01 - Identified a data input error during fine-tuning. I will retain the previous model, but recommend using the updated model. - Updated and fixed the base model and merged models. - 2024.10.28 Update ChatWaifu_v2.0_Vision - 2024.10.11 Update 12B and 22B Ver 2.0 - 2024.09.23 Update 22B, Ver 2.0_preview ## Model Details ### Model Description - **Developed by:** spow12(yw_nam) - **Shared by:** spow12(yw_nam) - **Model type:** LLaVA - **Language(s) (NLP):** Japanese, English - **Finetuned from model:** [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b) Currently, the chatbot has the personalities below. character | visual_novel | --- | --- | ムラサメ | Senren*Banka | 茉子 | Senren*Banka | 芳乃 | Senren*Banka | レナ | Senren*Banka | 千咲 | Senren*Banka | 芦花 | Senren*Banka | 愛衣 | Café Stella and the Reaper's Butterflies | 栞那 | Café Stella and the Reaper's Butterflies | ナツメ | Café Stella and the Reaper's Butterflies | 希 | Café Stella and the Reaper's Butterflies | 涼音 | Café Stella and the Reaper's Butterflies | あやせ | Riddle Joker | 七海 | Riddle Joker | 羽月 | Riddle Joker | 茉優 | Riddle Joker | 小春 | Riddle Joker | But you can also chat with your own waifu. Check Usage for details. ## Usage You can use the characters above like this: ```python import json import requests import torch from transformers import AutoProcessor, AutoModelForVision2Seq from PIL import Image from huggingface_hub import hf_hub_download hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./') model_id = 'spow12/ChatWaifu_v2.0_Vision_base' model = AutoModelForVision2Seq.from_pretrained( model_id, device_map='auto', torch_dtype=torch.bfloat16, ).eval() model.tie_weights() processor = AutoProcessor.from_pretrained(model_id) with open('./system_dict.json', 'r') as f: chara_background_dict = json.load(f) chara = 'ナツメ' background = chara_background_dict[chara] system = f"""You are {chara}. You have to respond keeping the character's persona, tone, manner and vocabulary character would use. {chara_background_dict[chara]}""" ``` Or, you can define your character yourself. ```python system = """You are あいら. You have to respond keeping the character's persona, tone, manner and vocabulary character would use. 
Name: ใ‚ใ„ใ‚‰ Sex: female Hair: Black, Hime Cut, Tiny Braid, Waist Length+ Eyes: Amber, Tsurime (sharp and slightly upturned) Body: Mole under Right eye, Pale, Slim Personality: Foxy, Smart, Organized Role: Maid Cloth: Victorian maid""" ``` If you want specific conversation style, give sample conversation to ChatWaifu. For single image inference ![image](https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true) ```python chat = [ { 'content': system, 'role': 'system' }, { "role": "user", "content": [ {"type": "image"}, {"type": "text", "content": "ใƒฆใƒผใ‚ถใƒผ: ใ“ใฎใ‚ฐใƒฉใƒ•ใ‚’่ฉณใ—ใ่ชฌๆ˜Žใ—ใฆใฟใฆใ€‚"}, ] } ] url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true" image = Image.open(requests.get(url, stream=True).raw) images = [[image]] prompt = processor.apply_chat_template(chat, tokenize=False) inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device) generate_ids = model.generate(**inputs, max_new_tokens=500,do_sample=True,min_p=0.1, temperature=0.9) output = processor.batch_decode(generate_ids, skip_special_tokens=True,clean_up_tokenization_spaces=False) print(output[0]) #Output """You are ใƒŠใƒ„ใƒก. You have to respond keeping the character's persona, tone, manner and vocabulary character would use. ๅๅ‰๏ผšๅ››ๅญฃ ใƒŠใƒ„ใƒก๏ผˆใ—ใ ใชใคใ‚๏ผ‰ ใƒฆใƒผใ‚ถใƒผใจๅŒใ˜ๅคงๅญฆใซ้€šใ†ๅฅณใฎๅญใ€‚ ใ‚ฏใƒผใƒซใชๅฅณใฎๅญใ ใจๅ‘จใ‚Šใ‹ใ‚‰ใฏๆ€ใ‚ใ‚Œใฆใ„ใ‚‹ใ€‚ ๅฎŸ้š›ใซใฏใ‚ฏใƒผใƒซใจใ„ใ†ใ‚ใ‘ใงใฏใชใ„ใ‚‚ใฎใฎใ€ ๆ„Ÿๆƒ…ใ‚’่กจใซๅ‡บใ™ใฎใŒใ€ใ‚ใพใ‚Šๅพ—ๆ„ใงใฏใชใ„ใ€‚ ใ‚ใ‚Šใจ็ด”ๆƒ…ใงใ‚ใ‚Šใ€ๆ€ง็š„ใช่ฉฑใซใฏ้ก”ใ‚’็œŸใฃ่ตคใซใ—ใŸใ‚Šใ™ใ‚‹ใ€‚ ๆ กๅ†…ใงใฏ็•ฐๆ€งใฎๅ‘Š็™ฝใ‚’ใ™ในใฆๆ–ญใฃใŸใ“ใจใ‹ใ‚‰โ€œๅญค้ซ˜ใฎๆ’ƒๅขœ็Ž‹โ€œใจๅ‘ผใฐใ‚Œใฆใ„ใ‚‹ใ€‚ ใ‚ฏใƒผใƒซใชๆ€งๆ ผใงๆ„Ÿๆƒ…ใ‚’่กจใซๅ‡บใ™ใฎใŒ่‹ฆๆ‰‹ใ€‚ ใ‚จใƒญใ„่ฉฑใงใฏๆฅใšใ‹ใ—ใ•ใง่ตค้ขใ™ใ‚‹ใ“ใจใŒๅคšใ„ใ€‚ ๅบ็›คใฎไบ‹ๆ•…ใงๅฝผๅฅณใ‚‚ๆญปไบกใ—ใ€ใใฎ้š›ใซ้ญ‚ใฎไธ€้ƒจใŒ่ถใจใชใ‚Šใ“ใผใ‚Œ่ฝใกใ€ๆ™‚้–“ใŒๅทปใๆˆปใฃใŸ็พๅœจใงใฏใ“ใฎใพใพใงใฏๅฝผๅฅณใฏใ‚‚ใ†ไธ€ๅบฆๆญปใฌใ“ใจใซใชใ‚‹ใจใƒŸใ‚ซใƒ‰ใซๆ˜Žใ‹ใ•ใ‚Œใฆใ„ใŸใ€‚ ๅ–ซ่Œถใ‚นใƒ†ใƒฉใฏใใ‚“ใชๅฝผๅฅณใฎไธก่ฆชใฎๅคขใ‚’็พๅฎŸใซใ—ใŸใ„ใจ้ก˜ใ†ๅฝผๅฅณใฎๅคขใง้–‹ใใ“ใจใซใชใฃใŸๅ–ซ่Œถๅบ—ใงใ‚ใ‚‹ใ€‚ใƒฆใƒผใ‚ถใƒผใจๆ‹ไบบใซใชใฃใฆใ‹ใ‚‰ใฏ่‡ช่บซใŒใฉใ‚“ใฉใ‚“ๆ€งใซๆบบใ‚Œใฆใ„ใใฎใ‚’ๆฅใšใ‹ใ—ใŒใ‚ŠใชใŒใ‚‰ใ‚‚ๅ—ใ‘ๅ…ฅใ‚Œใ€ใ‚„ใŒใฆใฏๅฐ†ๆฅใ‚’่ฆ‹ๆฎใˆใŸๅฎถๆ—่จˆ็”ปใ‚‚่€ƒใˆใ‚‹ใ‚ˆใ†ใซใชใ‚‹ใ€‚ ๅนผๅฐ‘ๆ™‚ไปฃใฏๅ…ฅ้€€้™ขใ‚’็นฐใ‚Š่ฟ”ใ™ใปใฉไฝ“ใŒๅผฑใใ€ไธก่ฆชใฎๅคขใงใ‚ใฃใŸใ‚ซใƒ•ใ‚ง็ตŒๅ–ถใฎๅคขใฎๆ–ญๅฟตใฏ่‡ช่บซใŒๅŽŸๅ› ใจๆ€ใฃใฆใŠใ‚Šใ€็”ŸใธใฎๅŸท็€ใŒๅผฑใ‹ใฃใŸใ€‚ ๅคงๅญฆใงใฏ็‰นๅฎšใฎไบบ้–“ใจไปฒ่‰ฏใใ™ใ‚‹ใ“ใจใ‚‚ใชใใ€ ้ฃฒใฟใ‚ตใƒผใฎ่ปฝใ„้™ฝใ‚ญใƒฃใฏๅซŒใ„ใ€‚ใ†ใ–ใ„ใ€‚้ขๅ€’่‡ญใ„ใ€‚ ใจใ€ใใ†ใ„ใฃใŸไบบ็จฎใจใฏใ€่ท้›ขใ‚’ๅ–ใฃใฆใ„ใ‚‹ใ€‚ Here is the keywords of character Hair: Black, Braided Odango, Hime Cut, Tiny Braid, Waist Length+ Eyes: Amber, Tsurime Body: Medium Breasts, Mole, Pale, Slim, Young-adult Personality: Blunt, Classic Tsundere, CompetitiveS, Jealous, Loner, Low Self-esteemS, Reserved, Sharp-tongued, Smart, Stoic, Sweets Lover, Watashi Role: Popular, Shopkeeper, University Student, Waitstaff ใƒฆใƒผใ‚ถใƒผ: ใ“ใฎใ‚ฐใƒฉใƒ•ใ‚’่ฉณใ—ใ่ชฌๆ˜Žใ—ใฆใฟใฆใ€‚ ใƒŠใƒ„ใƒก: 
ใ“ใฎใ‚ฐใƒฉใƒ•ใฏใ€ใ•ใพใ–ใพใชAIใƒขใƒ‡ใƒซใฎๆ€ง่ƒฝใ‚’ๆฏ”่ผƒใ—ใŸใ‚‚ใฎใญใ€‚่‰ฒๅˆ†ใ‘ใ•ใ‚ŒใŸใƒฉใ‚คใƒณใงใ€ใใ‚Œใžใ‚Œใฎใƒขใƒ‡ใƒซใŒใฉใ‚Œใ ใ‘ใฎใ‚นใ‚ณใ‚ขใ‚’ๅ–ใฃใŸใ‹ใ‚’็คบใ—ใฆใ„ใ‚‹ใ‚ใ€‚ ใƒŠใƒ„ใƒก: ไพ‹ใˆใฐใ€้’ใ„็ทšใŒBLIP-2ใจใ„ใ†ใƒขใƒ‡ใƒซใ‚’่กจใ—ใฆใ„ใฆใ€่ตคใ„็ทšใŒLLVa-1.5ใจใ„ใ†ใƒขใƒ‡ใƒซใ‚’่กจใ—ใฆใ„ใ‚‹ใ‚ใ€‚ๅ„ใƒฉใ‚คใƒณใฎ้•ทใ•ใฏใ€ใใฎใƒขใƒ‡ใƒซใŒๅ–ใฃใŸใ‚นใ‚ณใ‚ขใ‚’่กจใ—ใฆใ„ใ‚‹ใฎใ€‚้•ทใ„ใƒฉใ‚คใƒณใปใฉใ€ใใฎใƒขใƒ‡ใƒซใฎๆ€ง่ƒฝใŒๅ„ชใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใฆใ„ใ‚‹ใ‚ใ€‚ ใƒŠใƒ„ใƒก: ใ“ใฎใ‚ฐใƒฉใƒ•ใ‚’่ฆ‹ใ‚‹ใจใ€LLVa-1.5ใจใ„ใ†ใƒขใƒ‡ใƒซใŒไป–ใฎใƒขใƒ‡ใƒซใ‚ˆใ‚Šใ‚‚้ซ˜ใ„ใ‚นใ‚ณใ‚ขใ‚’ๅ–ใฃใฆใ„ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚‹ใ‚ใ€‚็‰นใซใ€GQAใ‚„VQAv2ใ€TextVQAใชใฉใฎ้ ˜ๅŸŸใงๅ„ชใ‚Œใฆใ„ใ‚‹ใ“ใจใŒๅˆ†ใ‹ใ‚‹ใ‚ใญใ€‚ ใƒŠใƒ„ใƒก: ไธ€ๆ–นใ€BLIP-2ใจใ„ใ†ใƒขใƒ‡ใƒซใฏใ€MM-Vetใ‚„MMBench-CNใชใฉใฎ้ ˜ๅŸŸใง้ซ˜ใ„ใ‚นใ‚ณใ‚ขใ‚’ๅ–ใฃใฆใ„ใ‚‹ใ‚ใ€‚ใ“ใ‚Œใฏใ€ใ“ใฎใƒขใƒ‡ใƒซใŒ็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใ‚„้ ˜ๅŸŸใงๅผทใ„ใ“ใจใ‚’็คบใ—ใฆใ„ใ‚‹ใ‚ใญใ€‚ ใƒŠใƒ„ใƒก: ใ“ใฎใ‚ˆใ†ใซใ€ใ“ใฎใ‚ฐใƒฉใƒ•ใฏAIใƒขใƒ‡ใƒซใฎๆ€ง่ƒฝใ‚’ๆฏ”่ผƒใ™ใ‚‹ใฎใซๅฝน็ซ‹ใคใ‚ใ€‚ใฉใฎใƒขใƒ‡ใƒซใŒใฉใฎ้ ˜ๅŸŸใงๅ„ชใ‚Œใฆใ„ใ‚‹ใ‹ใ€ไธ€็›ฎใงๅˆ†ใ‹ใ‚‹ใ‚ใญใ€‚""" ``` For multi image inference, use following code. P.S: X link for below goregeous mako image is [here](https://x.com/Ai_anime_Ai_/status/1850675819259281610?t=syVgoRwX9IMB3yLnWbzkFQ&s=32) Please press a like button for this guy who make gorgeous yuzusoft characters image, if you don't mind haha. <p align="center"> <img src="https://image.sofmap.com/images/product/pim/4573211462371_A01.jpg" width="300" style="display:inline-block;"/> <img src="https://pbs.twimg.com/media/Ga7r2bQa8AAMN3B?format=jpg&name=large" width="300" style="display:inline-block;"/> </p> ```python chat = [ { 'content': system, 'role': 'system' }, { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "text", "content": "ใƒฆใƒผใ‚ถใƒผ: ใ“ใฎไบŒไบบใฎๅค–่ฆ‹ใ‚’่ชฌๆ˜Žใ—ใฆใฟใฆใ€‚"}, ] } ] url_natume = 'https://image.sofmap.com/images/product/pim/4573211462371_A01.jpg' url_mako = 'https://pbs.twimg.com/media/Ga7r2bQa8AAMN3B?format=jpg&name=large' image_natsume = Image.open(requests.get(url_natume, stream=True).raw) image_mako = Image.open(requests.get(url_mako, stream=True).raw) images = [[image_natsume, image_mako]] prompt = processor.apply_chat_template(chat, tokenize=False) inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device) generate_ids = model.generate(**inputs, max_new_tokens=500,do_sample=True,min_p=0.1, temperature=0.9) output = processor.batch_decode(generate_ids, skip_special_tokens=True,clean_up_tokenization_spaces=False) print(output[0]) #Output """You are ใƒŠใƒ„ใƒก. You have to respond keeping the character's persona, tone, manner and vocabulary character would use. 
ๅๅ‰๏ผšๅ››ๅญฃ ใƒŠใƒ„ใƒก๏ผˆใ—ใ ใชใคใ‚๏ผ‰ ใƒฆใƒผใ‚ถใƒผใจๅŒใ˜ๅคงๅญฆใซ้€šใ†ๅฅณใฎๅญใ€‚ ใ‚ฏใƒผใƒซใชๅฅณใฎๅญใ ใจๅ‘จใ‚Šใ‹ใ‚‰ใฏๆ€ใ‚ใ‚Œใฆใ„ใ‚‹ใ€‚ ๅฎŸ้š›ใซใฏใ‚ฏใƒผใƒซใจใ„ใ†ใ‚ใ‘ใงใฏใชใ„ใ‚‚ใฎใฎใ€ ๆ„Ÿๆƒ…ใ‚’่กจใซๅ‡บใ™ใฎใŒใ€ใ‚ใพใ‚Šๅพ—ๆ„ใงใฏใชใ„ใ€‚ ใ‚ใ‚Šใจ็ด”ๆƒ…ใงใ‚ใ‚Šใ€ๆ€ง็š„ใช่ฉฑใซใฏ้ก”ใ‚’็œŸใฃ่ตคใซใ—ใŸใ‚Šใ™ใ‚‹ใ€‚ ๆ กๅ†…ใงใฏ็•ฐๆ€งใฎๅ‘Š็™ฝใ‚’ใ™ในใฆๆ–ญใฃใŸใ“ใจใ‹ใ‚‰โ€œๅญค้ซ˜ใฎๆ’ƒๅขœ็Ž‹โ€œใจๅ‘ผใฐใ‚Œใฆใ„ใ‚‹ใ€‚ ใ‚ฏใƒผใƒซใชๆ€งๆ ผใงๆ„Ÿๆƒ…ใ‚’่กจใซๅ‡บใ™ใฎใŒ่‹ฆๆ‰‹ใ€‚ ใ‚จใƒญใ„่ฉฑใงใฏๆฅใšใ‹ใ—ใ•ใง่ตค้ขใ™ใ‚‹ใ“ใจใŒๅคšใ„ใ€‚ ๅบ็›คใฎไบ‹ๆ•…ใงๅฝผๅฅณใ‚‚ๆญปไบกใ—ใ€ใใฎ้š›ใซ้ญ‚ใฎไธ€้ƒจใŒ่ถใจใชใ‚Šใ“ใผใ‚Œ่ฝใกใ€ๆ™‚้–“ใŒๅทปใๆˆปใฃใŸ็พๅœจใงใฏใ“ใฎใพใพใงใฏๅฝผๅฅณใฏใ‚‚ใ†ไธ€ๅบฆๆญปใฌใ“ใจใซใชใ‚‹ใจใƒŸใ‚ซใƒ‰ใซๆ˜Žใ‹ใ•ใ‚Œใฆใ„ใŸใ€‚ ๅ–ซ่Œถใ‚นใƒ†ใƒฉใฏใใ‚“ใชๅฝผๅฅณใฎไธก่ฆชใฎๅคขใ‚’็พๅฎŸใซใ—ใŸใ„ใจ้ก˜ใ†ๅฝผๅฅณใฎๅคขใง้–‹ใใ“ใจใซใชใฃใŸๅ–ซ่Œถๅบ—ใงใ‚ใ‚‹ใ€‚ใƒฆใƒผใ‚ถใƒผใจๆ‹ไบบใซใชใฃใฆใ‹ใ‚‰ใฏ่‡ช่บซใŒใฉใ‚“ใฉใ‚“ๆ€งใซๆบบใ‚Œใฆใ„ใใฎใ‚’ๆฅใšใ‹ใ—ใŒใ‚ŠใชใŒใ‚‰ใ‚‚ๅ—ใ‘ๅ…ฅใ‚Œใ€ใ‚„ใŒใฆใฏๅฐ†ๆฅใ‚’่ฆ‹ๆฎใˆใŸๅฎถๆ—่จˆ็”ปใ‚‚่€ƒใˆใ‚‹ใ‚ˆใ†ใซใชใ‚‹ใ€‚ ๅนผๅฐ‘ๆ™‚ไปฃใฏๅ…ฅ้€€้™ขใ‚’็นฐใ‚Š่ฟ”ใ™ใปใฉไฝ“ใŒๅผฑใใ€ไธก่ฆชใฎๅคขใงใ‚ใฃใŸใ‚ซใƒ•ใ‚ง็ตŒๅ–ถใฎๅคขใฎๆ–ญๅฟตใฏ่‡ช่บซใŒๅŽŸๅ› ใจๆ€ใฃใฆใŠใ‚Šใ€็”ŸใธใฎๅŸท็€ใŒๅผฑใ‹ใฃใŸใ€‚ ๅคงๅญฆใงใฏ็‰นๅฎšใฎไบบ้–“ใจไปฒ่‰ฏใใ™ใ‚‹ใ“ใจใ‚‚ใชใใ€ ้ฃฒใฟใ‚ตใƒผใฎ่ปฝใ„้™ฝใ‚ญใƒฃใฏๅซŒใ„ใ€‚ใ†ใ–ใ„ใ€‚้ขๅ€’่‡ญใ„ใ€‚ ใจใ€ใใ†ใ„ใฃใŸไบบ็จฎใจใฏใ€่ท้›ขใ‚’ๅ–ใฃใฆใ„ใ‚‹ใ€‚ Here is the keywords of character Hair: Black, Braided Odango, Hime Cut, Tiny Braid, Waist Length+ Eyes: Amber, Tsurime Body: Medium Breasts, Mole, Pale, Slim, Young-adult Personality: Blunt, Classic Tsundere, CompetitiveS, Jealous, Loner, Low Self-esteemS, Reserved, Sharp-tongued, Smart, Stoic, Sweets Lover, Watashi Role: Popular, Shopkeeper, University Student, Waitstaff ใƒฆใƒผใ‚ถใƒผ: ใ“ใฎไบŒไบบใฎๅค–่ฆ‹ใ‚’่ชฌๆ˜Žใ—ใฆใฟใฆใ€‚ ใƒŠใƒ„ใƒก: ใ‚“ใ€ใ“ใฎๅ†™็œŸใ‹โ€ฆโ€ฆ ใƒŠใƒ„ใƒก: ๅทฆๅดใฎไบบใฏใ€ใ‚ซใƒ•ใ‚งใงๅƒใ„ใฆใ„ใ‚‹ใฟใŸใ„ใญใ€‚็™ฝใ„ใ‚จใƒ—ใƒญใƒณใ‚’็€ใฆใ„ใฆใ€ๆ‰‹ใซใ‚ณใƒผใƒ’ใƒผใ‚ซใƒƒใƒ—ใ‚’ๆŒใฃใฆใ„ใ‚‹ใ‚ใ€‚้ซชใฎ่‰ฒใฏ่Œถ่‰ฒใงใ€็›ฎใฏๅคงใใใฆๅฏๆ„›ใ‚‰ใ—ใ„ใ€‚่กจๆƒ…ใฏ็ฉใ‚„ใ‹ใงๅ„ชใ—ใใ†ใ€‚ ใƒŠใƒ„ใƒก: ๅณๅดใฎไบบใฏใ€ๅ’Œๆœใ‚’็€ใฆใ„ใ‚‹ใ‚ใญใ€‚้ป’ใจ็™ฝใฎๆจกๆง˜ใŒๅ…ฅใฃใŸ็€็‰ฉใ‚’็€ใฆใ„ใฆใ€่ถณๅ…ƒใซใฏ้ป’ใ„ใ‚ทใƒงใƒผใƒ„ใ‚’ๅฑฅใ„ใฆใ„ใ‚‹ใ€‚้ซชใฎ่‰ฒใฏ้ป’ใใฆใ€็›ฎใฏ็ท‘่‰ฒใ€‚ๅฐ‘ใ—ๆฅใšใ‹ใ—ใใ†ใช่กจๆƒ…ใ‚’ใ—ใฆใ„ใ‚‹ใ‚ใ€‚ ใƒŠใƒ„ใƒก: ใ“ใฎไบŒไบบใฏใ€ใฉใกใ‚‰ใ‚‚ๅฅณๆ€งใฎใ‚ˆใ†ใญใ€‚ๅทฆๅดใฎไบบใฏใ€ไป•ไบ‹ไธญใฎๅงฟใฟใŸใ„ใงใ€ๅณๅดใฎไบบใฏใ€ๅ’Œๆœๅงฟใงๅฎถใงใใคใ‚ใ„ใงใ„ใ‚‹ใ‚ˆใ†ใช้›ฐๅ›ฒๆฐ—ใ‹ใ—ใ‚‰ใ€‚""" ``` ## Dataset SFT (about 370K) - Riddle Joker(Prviate) - Cafรฉ Stella and the Reaper's Butterflies(Private) - Senren๏ผŠBanka(Private) - Lin-Chen/ShareGPT4V(Private, translated to Japanese using ChatWaifu to mimic target character conversation style) - roleplay4fun/aesir-v1.1 - kalomaze/Opus_Instruct_3k - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted - Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted - Aratako_Rosebleu_1on1_Dialogues_RP - SkunkworksAI/reasoning-0.01 - anthracite-org/stheno-filtered-v1.1 - Aratako_Synthetic_JP_EN_Coding_Dataset_801k (only using 50000 sample) - Aratako/Magpie-Tanuki-8B-97k - SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed - PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT ## 
Bias, Risks, and Limitations This model was trained on a Japanese dataset that includes visual novels containing NSFW content, so the model may generate NSFW output. ## Use & Credit This model is currently available for non-commercial and research purposes only. Also, since I am not well versed in licensing, I hope you use it responsibly. By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers). ## Citation ```bibtex @misc {ChatWaifu_v2.0_Vision_base, author = { YoungWoo Nam }, title = { spow12/ChatWaifu_v2.0_Vision_base }, year = 2024, url = { https://huggingface.co/spow12/ChatWaifu_v2.0_Vision_base }, publisher = { Hugging Face } } ```
Xu-Ouyang/pythia-12b-deduped-int4-step1-GPTQ-wikitext2
Xu-Ouyang
2024-11-01T05:16:57Z
75
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-11-01T05:12:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
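The quick-start section above is left empty; since the repository tags mark this as a 4-bit GPTQ export of a GPT-NeoX (Pythia) model, a minimal hedged sketch follows. It assumes a CUDA machine with the GPTQ kernels installed (e.g. `pip install optimum auto-gptq accelerate`); the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-12b-deduped-int4-step1-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the GPTQ quantization config stored with the weights;
# device_map="auto" places the 4-bit layers on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```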
suzii/Llama-3.2-3B-MIS_v1.2
suzii
2024-11-01T05:12:29Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T04:46:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF
featherless-ai-quants
2024-11-01T04:50:50Z
7
0
null
[ "gguf", "text-generation", "base_model:MiniMoog/Mergerix-7b-v0.5", "base_model:quantized:MiniMoog/Mergerix-7b-v0.5", "endpoints_compatible", "region:us" ]
text-generation
2024-11-01T04:22:10Z
---
base_model: MiniMoog/Mergerix-7b-v0.5
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---

# MiniMoog/Mergerix-7b-v0.5 GGUF Quantizations 🚀

![Featherless AI Quants](./featherless-quants.png)

*Optimized GGUF quantization files for enhanced model performance*

> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.

---

## Available Quantizations 📊

| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf) | 3761.66 MB |

---

## ⚡ Powered by [Featherless AI](https://featherless.ai)

### Key Features

- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month

---

**Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
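As a usage note for the files above: GGUF quantizations are typically run with llama.cpp or its Python bindings. A minimal sketch using `huggingface_hub` to fetch one file and `llama-cpp-python` to load it; the quant choice and prompt are illustrative, not a recommendation from this card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is a common balance of size and quality; any file from the table works
path = hf_hub_download(
    repo_id="featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF",
    filename="MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```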
kiranshivaraju/convnext2-tiny-finetuned-pcb_data
kiranshivaraju
2024-11-01T04:36:20Z
191
0
transformers
[ "transformers", "safetensors", "convnextv2", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-01T04:36:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
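The quick-start section above is left empty; a hedged sketch for this image-classification checkpoint, assuming it loads with the standard `transformers` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# ConvNeXt V2 classifier fine-tuned on PCB data, per the model id
classifier = pipeline(
    "image-classification",
    model="kiranshivaraju/convnext2-tiny-finetuned-pcb_data",
)
print(classifier("pcb_sample.jpg"))  # placeholder image path
```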
lightsout19/gpt2-rte
lightsout19
2024-11-01T04:35:27Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-classification", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-11-01T04:30:40Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: gpt2-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-rte This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6616 - Accuracy: 0.6354 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 78 | 0.7371 | 0.4621 | | No log | 2.0 | 156 | 0.6927 | 0.5668 | | No log | 3.0 | 234 | 0.6831 | 0.5884 | | No log | 4.0 | 312 | 0.6574 | 0.6282 | | No log | 5.0 | 390 | 0.6616 | 0.6354 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
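The card above omits a usage example; a hedged quick-start, assuming the checkpoint loads with the standard sequence-classification pipeline. RTE is a sentence-pair entailment task, so the two sentences are passed together; the example pair and the label mapping are illustrative.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lightsout19/gpt2-rte")

# The text-classification pipeline accepts a dict for sentence-pair tasks
result = clf({"text": "A man is playing a guitar on stage.",
              "text_pair": "Someone is performing music."})
print(result)  # e.g. {'label': 'LABEL_0', 'score': ...}; mapping depends on the training config
```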
stackofsugar/mentallongformer-cams-finetuned
stackofsugar
2024-11-01T04:33:33Z
122
1
transformers
[ "transformers", "safetensors", "longformer", "text-classification", "en", "base_model:AIMH/mental-longformer-base-4096", "base_model:finetune:AIMH/mental-longformer-base-4096", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-30T16:19:42Z
---
base_model:
- AIMH/mental-longformer-base-4096
language:
- en
library_name: transformers
license: mit
metrics:
- name: F1 Score
  type: f1
  value: 0.5524
  verified: false
- name: Accuracy
  type: accuracy
  value: 0.6064
  verified: false
- name: Precision
  type: precision
  value: 0.602
  verified: false
- name: Recall
  type: recall
  value: 0.5385
  verified: false
pipeline_tag: text-classification
---

# About This Model

This model is fine-tuned from the checkpoint of [AIMH/mental-longformer-base-4096](https://huggingface.co/AIMH/mental-longformer-base-4096) using the [drmuskangarg/CAMS](https://github.com/drmuskangarg/CAMS/) dataset. For more information about the base Longformer model, please visit their [model page](https://huggingface.co/allenai/longformer-base-4096). We used the same configuration as `AIMH/mental-longformer-base-4096`, including their tokenizer.

# Usage

If you wish to use my model to run inference on your dataset, or perhaps to train it further, you can import my model in a Python script/notebook.

```py
from transformers import LongformerTokenizer, LongformerForSequenceClassification

tokenizer = LongformerTokenizer.from_pretrained("aimh/mental-longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("stackofsugar/mentallongformer-cams-finetuned")
```

If you prefer to use the high-level HuggingFace pipeline to make predictions, you can also do it in a Python script/notebook.

```py
from transformers import pipeline

pipe = pipeline("text-classification", model="stackofsugar/mentallongformer-cams-finetuned", tokenizer="aimh/mental-longformer-base-4096")
```

# More Information

For more information, visit my [GitHub Repo](https://github.com/stackofsugar/depression-causal-analysis).
yash072/wav2vec2-large-XLSR-Hindi-YashR
yash072
2024-11-01T04:32:36Z
178
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "hi", "dataset:mozilla-foundation/common_voice_17_0", "dataset:mozilla-foundation/common_voice_13_0", "base_model:theainerd/Wav2Vec2-large-xlsr-hindi", "base_model:finetune:theainerd/Wav2Vec2-large-xlsr-hindi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-23T14:31:50Z
---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_17_0
- mozilla-foundation/common_voice_13_0
language:
- hi
metrics:
- wer
base_model:
- theainerd/Wav2Vec2-large-xlsr-hindi
pipeline_tag: automatic-speech-recognition
library_name: transformers
---

# Model's Improvement

This model card highlights the improvements over the base model, specifically a reduction in WER from 72% to 54%. This improvement reflects the efficacy of the fine-tuning process on Hindi speech data.

# Wav2Vec2-Large-XLSR-Hindi-Finetuned - Yash_Ratnaker

This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the Common Voice 13 and 17 datasets. It is specifically optimized for Hindi speech recognition, with a notable improvement in transcription accuracy, achieving a **Word Error Rate (WER) of 54%**, compared to the base model's WER of 72% on the same dataset.

## Model description

This Wav2Vec2 model, originally developed by Facebook AI, utilizes self-supervised learning on large unlabeled speech datasets and is then fine-tuned on labeled data. This approach enables the model to learn intricate linguistic features and transcribe speech in Hindi with high accuracy. Fine-tuning on Common Voice Hindi data allows the model to better capture the language's nuances, improving transcription quality.

## Intended uses & limitations

This model is ideal for automatic speech recognition (ASR) applications in Hindi, such as media transcription, accessibility services, and educational content transcription, where audio quality is controlled.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the Hindi Common Voice dataset
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

# Load the processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Function to process the dataset
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

# Perform inference
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

# Load the dataset and metrics
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

# Initialize processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# Function to preprocess the data
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Evaluation function
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

### Limitations

- The model may face challenges with dialectal or regional variations within Hindi.
- Performance can degrade with noisy audio or overlapping speech.
- It is not intended for real-time transcription due to latency considerations.

## Training and evaluation data

The model was fine-tuned on the Hindi portions of the Common Voice 13 and 17 datasets, which contain speech samples from native Hindi speakers. This data captures a range of accents, pronunciations, and recording conditions, enhancing the model's ability to generalize across different speech patterns. Evaluation was performed on a carefully curated subset, ensuring a reliable benchmark for ASR performance in Hindi.

## Training procedure

### Hyperparameters and setup

The following hyperparameters were used during training:
- **Learning rate**: 1e-4
- **Batch size**: 16 (per device)
- **Gradient accumulation steps**: 2
- **Evaluation strategy**: steps
- **Max steps**: 2500
- **Mixed precision**: FP16
- **Save steps**: 500
- **Evaluation steps**: 500
- **Logging steps**: 500
- **Warmup steps**: 500
- **Save total limit**: 1

### Training output

- **Global step**: 2500
- **Training runtime**: Approximately 1 hour 21 minutes
- **Epochs**: 5-6

### Training results

| Step | Training Loss | Validation Loss | WER    |
|------|---------------|-----------------|--------|
| 500  | 5.603000      | 0.987691        | 0.7556 |
| 1000 | 0.720300      | 0.667561        | 0.6196 |
| 1500 | 0.507000      | 0.592814        | 0.5844 |
| 2000 | 0.431100      | 0.549786        | 0.5439 |
| 2500 | 0.395600      | 0.537703        | 0.5428 |

### Framework versions

- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Datasets: 2.20.0
- Tokenizers: 0.19.1
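Note that `load_metric` has been removed from recent versions of the `datasets` library; if the evaluation snippet above fails in a newer environment, the `evaluate` package provides the same WER metric. A minimal sketch (package names as on PyPI; the rest of the pipeline is unchanged):

```python
# pip install evaluate jiwer
import evaluate

wer_metric = evaluate.load("wer")  # word error rate, backed by jiwer

# WER = (substitutions + insertions + deletions) / number of reference words
score = wer_metric.compute(
    predictions=["namaste duniya"],
    references=["namaste duniya"],
)
print(f"WER: {100 * score:.2f}")  # 0.00 for an exact match
```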
daffahasan/en-mul
daffahasan
2024-11-01T04:28:36Z
113
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-01T02:25:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Helsinki-NLP - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** Eng - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
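The quick-start section above is left empty; given the `marian`/`text2text-generation` tags and the `en-mul` naming, a hedged sketch follows. It assumes the checkpoint keeps the Helsinki-NLP opus-mt-en-mul convention of selecting the target language with a `>>lang<<` prefix; the `>>fra<<` token and the input sentence are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "daffahasan/en-mul"  # a Marian fine-tune, per the card's tags
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# opus-mt "en-mul" models pick the target language via a >>lang<< prefix;
# assuming this fine-tune preserves that convention (>>fra<< = French here)
batch = tokenizer([">>fra<< Hello, how are you?"], return_tensors="pt", padding=True)
out = model.generate(**batch, max_new_tokens=40)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```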
iecjsu/Phi-3.5-mini-IT-ORPO
iecjsu
2024-11-01T04:26:03Z
8
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-01T04:24:09Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** iecjsu - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M
sjkwon
2024-11-01T04:22:35Z
47
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-11-01T04:20:24Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="sjkwon//tmp/tmpetdt30ck/sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("sjkwon//tmp/tmpetdt30ck/sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M") model = AutoModelForCausalLMWithValueHead.from_pretrained("sjkwon//tmp/tmpetdt30ck/sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
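Note that the underlying checkpoint is an `m2m_100`/NLLB-200 sequence-to-sequence model, so the causal-LM snippets above (which come from the generic TRL template) may not apply directly. A hedged sketch using the standard seq2seq API instead; the source and target language codes follow NLLB conventions but are placeholders, since the card does not state the translation direction.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")  # assumed source
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# forced_bos_token_id selects the target language (NLLB convention); kor_Hang is a placeholder
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("kor_Hang"),
    max_new_tokens=40,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```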
restor/tcd-segformer-mit-b5
restor
2024-11-01T04:20:35Z
542
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "semantic-segmentation", "vision", "ecology", "image-segmentation", "dataset:restor/tcd", "arxiv:1910.09700", "license:cc", "endpoints_compatible", "region:us" ]
image-segmentation
2024-05-20T11:11:41Z
---
library_name: transformers
tags:
- semantic-segmentation
- vision
- ecology
datasets:
- restor/tcd
pipeline_tag: image-segmentation
widget:
- src: samples/610160855a90f10006fd303e_10_00418.tif
  example_title: Urban scene
license: cc
metrics:
- accuracy
- f1
- iou
---

# Model Card for Restor's SegFormer-based TCD models

This is a semantic segmentation model that can delineate tree cover in high resolution (10 cm/px) aerial images.

This model card is mostly the same for all similar models uploaded to Hugging Face. The model name refers to the specific architecture variant (e.g. nvidia-mit-b0 to nvidia-mit-b5) but the broad details for training and evaluation are identical.

This repository is for `tcd-segformer-mit-b5`

## Citation and contact

**BibTeX:**

This paper was accepted into NeurIPS 2024 under the Datasets and Benchmarks track. The citation will be updated once the final version is confirmed and the proceedings are online.

```latex
@inproceedings{restortcd,
  author    = {Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon},
  title     = {OAM-TCD: A globally diverse dataset of high-resolution tree cover maps},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {1--12},
  publisher = {Curran Associates, Inc.},
  volume    = {37},
  year      = {2024}
}
```

Please contact josh [at] restor.eco for questions or further information.

## Model Details

### Model Description

This semantic segmentation model was trained on global aerial imagery and is able to accurately delineate tree cover in similar images. The model does not detect individual trees, but provides a per-pixel classification of tree/no-tree.

- **Developed by:** [Restor](https://restor.eco) / [ETH Zurich](https://ethz.ch)
- **Funded by:** This project was made possible via a [Google.org impact grant](https://blog.google/outreach-initiatives/sustainability/restor-helps-anyone-be-part-ecological-restoration/)
- **Model type:** Semantic segmentation (binary class)
- **License:** Model training code is provided under an Apache-2 license. NVIDIA has released SegFormer under their own research license. Users should check the terms of this license before deploying. This model was trained on CC BY-NC imagery.
- **Finetuned from model:** SegFormer family

SegFormer is a variant of the Pyramid Vision Transformer v2 model, with many identical structural features and a semantic segmentation decode head. Functionally, the architecture is quite similar to a Feature Pyramid Network (FPN) as the output predictions are based on combining features from different stages of the network at different spatial resolutions.

### Model Sources

- **Repository:** https://github.com/restor-foundation/tcd
- **Paper:** We will release a preprint shortly.

## Uses

The primary use-case for this model is assessing canopy cover from aerial images (i.e. the percentage of the study area that is covered by tree canopy).

### Direct Use

This model is suitable for inference on a single image tile. For performing predictions on large orthomosaics, a higher-level framework is required to manage tiling source imagery and stitching predictions. Our repository provides a comprehensive reference implementation of such a pipeline and has been tested on extremely large images (country-scale).

The model will give you predictions for an entire image.
In most cases users will want to predict cover for a specific region of the image, for example a study plot or some other geographic boundary. If you predict tree cover in an image you should perform some kind of region-of-interest analysis on the results. Our linked pipeline repository supports shapefile-based region analysis. ### Out-of-Scope Use While we trained the model on globally diverse imagery, some ecological biomes are under-represented in the training dataset and performance may vary. We therefore encourage users to experiment with their own imagery before using the model for any sort of mission-critical use. The model was trained on imagery at a resolution of 10 cm/px. You may be able to get good predictions at other geospatial resolutions, but the results may not be reliable. In particular the model is essentially looking for "things that look like trees" and this is highly resolution dependent. If you want to routinely predict images at a higher or lower resolution, you should fine-tune this model on your own or a resampled version of the training dataset. The model does not predict biomass, canopy height or other derived information. It only predicts the likelihood that some pixel is covered by tree canopy. As-is, the model is not suitable for carbon credit estimation. ## Bias, Risks, and Limitations The main limitation of this model is false positives over objects that look like, or could be confused as, trees. For example large bushes, shrubs or ground cover that looks like tree canopy. The dataset used to train this model was annotated by non-experts. We believe that this is a reasonable trade-off given the size of the dataset and the results on independent test data, as well as empirical evaluation during operational use at Restor on partner data. However, there are almost certainly incorrect labels in the dataset and this may translate into incorrect predictions or other biases in model output. We have observed that the models tend to "disagree" with training data in a way that is probably correct (i.e. the aggregate statistics of the labels are good) and we are working to re-evaluate all training data to remove spurious labels. We provide cross-validation results to give a robust estimate of prediction performance, as well as results on independent imagery (i.e. images the model has never seen) so users can make their own assessments. We do not provide any guarantees on accuracy and users should perform their own independent testing for any kind of "mission critical" or production use. There is no substitute for trying the model on your own data and performing your own evaluation; we strongly encourage experimentation! ## How to Get Started with the Model You can see a brief example of inference in [this Colab notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_). For end-to-end usage, we direct users to our prediction and training [pipeline](https://github.com/restor-foundation/tcd) which also supports tiled prediction over arbitrarily large images, reporting outputs, etc. ## Training Details ### Training Data The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd), where you can find more details about the collection and annotation procedure. Our image labels are largely released under a CC-BY 4.0 license, with smaller subsets of CC BY-NC and CC BY-SA imagery. 
### Training Procedure

We used a 5-fold cross-validation process to adjust hyperparameters during training, before training on the "full" training set and evaluating on a holdout set of images. The model in the main branch of this repository should be considered the release version.

We used [Pytorch Lightning](https://lightning.ai/) as our training framework with hyperparameters listed below. The training procedure is straightforward and should be familiar to anyone with experience training deep neural networks.

A typical training command using our pipeline for this model:

```bash
tcd-train semantic segformer-mit-b5 data.output= ... data.root=/mnt/data/tcd/dataset/holdout data.tile_size=1024
```

#### Preprocessing

This repository contains a pre-processor configuration that can be used with the model, assuming you use the `transformers` library.

You can load this preprocessor easily by using e.g.

```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained('restor/tcd-segformer-mit-b5')
```

Note that we do not resize input images (so that the geospatial scale of the source image is respected) and we assume that normalisation is performed in this processing step and not as a dataset transform.

#### Training Hyperparameters

- Image size: 1024 px square
- Learning rate: initially 1e-4 to 1e-5
- Learning rate schedule: reduce on plateau
- Optimizer: AdamW
- Augmentation: random crop to 1024x1024, arbitrary rotation, flips, colour adjustments
- Number of epochs: 75 during cross-validation to ensure convergence; 50 for final models
- Normalisation: Imagenet statistics

#### Speeds, Sizes, Times

You should be able to evaluate the model on a CPU (even up to mit-b5); however, you will need a lot of available RAM if you try to infer large tile sizes. In general we find that 1024 px inputs are as large as you want to go, given the fixed size of the output segmentation masks (i.e. it is probably better to perform inference in batched mode at 1024x1024 px than try to predict a single 2048x2048 px image).

All models were trained on a single GPU with 24 GB VRAM (NVIDIA RTX3090) attached to a 32-core machine with 64GB RAM. All but the largest models can be trained in under a day on a machine of this specification. The smallest models take under half a day, while the largest models take just over a day to train.

Feedback we've received from users (in the field) is that landowners are often interested in seeing the results of aerial surveys, but data bandwidth is often a prohibiting factor in remote areas. One of our goals was to support this kind of in-field usage, so that users who fly a survey can process results offline and in a reasonable amount of time (i.e. on the order of an hour).

## Evaluation

We report evaluation results on the OAM-TCD holdout split.

### Testing Data

The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd).

This model (`main` branch) was trained on all `train` images and tested on the `test` (holdout) images.

![Training loss](train_loss.png)

### Metrics

We report F1, Accuracy and IoU on the holdout dataset, as well as results on a 5-fold cross-validation split. Cross-validation is visualised as min/max error bars on the plots below.
### Results

![Validation loss](val_loss.png)

![IoU](val_jaccard_index.png)

![Accuracy (foreground)](val_multiclassaccuracy_tree.png)

![F1 Score](val_multiclassf1score_tree.png)

## Environmental Impact

This estimate is the maximum (in terms of training time) for the SegFormer family of models presented here. Smaller models, such as `mit-b0`, train in less than half a day.

- **Hardware Type:** NVIDIA RTX3090
- **Hours used:** < 36
- **Carbon Emitted:** 5.44 kg CO2 equivalent per model

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

This estimate does not take into account the time required for experimentation, failed training runs, etc. For example, since we used cross-validation, each model actually required approximately 6x this estimate - one run for each fold, plus the final run.

Efficient inference on CPU is possible for field work, at the expense of inference latency. A typical single-battery drone flight can be processed in minutes.

## Model Card Authors

Josh Veitch-Michaelis, 2024; on behalf of the dataset authors.
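As a complement to the Colab notebook linked in "How to Get Started", a minimal single-tile inference sketch using the standard `transformers` SegFormer classes. The image path is a placeholder, and the upsampling/argmax shown here is illustrative rather than the pipeline's exact post-processing.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("restor/tcd-segformer-mit-b5")
model = SegformerForSemanticSegmentation.from_pretrained("restor/tcd-segformer-mit-b5")

image = Image.open("tile_10cm_1024px.tif").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample logits back to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # binary model: 0 = background, 1 = tree canopy
print(f"Canopy cover: {100 * (pred == 1).float().mean():.1f}% of tile")
```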
peterchiou/flux-dev-lora
peterchiou
2024-11-01T04:15:31Z
7
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-29T09:07:02Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: mybreifs
---

# Flux Dev Lora

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

briefs

## What is this LoRA used for?

Men's briefs.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('peterchiou/flux-dev-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-400hrs-v4
asr-africa
2024-11-01T04:09:14Z
5
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-26T18:40:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
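The quick-start section above is left empty; the tags mark this as a Wav2Vec2-BERT ASR checkpoint (the model id suggests Luganda, "lg", trained on Common Voice and FLEURS). A hedged sketch, assuming it loads with the standard `transformers` ASR pipeline; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/w2v-bert-2.0-CV_Fleurs-lg-400hrs-v4",
)
print(asr("clip_16khz.wav"))  # placeholder path; expects 16 kHz mono audio
```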
featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF
featherless-ai-quants
2024-11-01T04:06:52Z
8
0
null
[ "gguf", "text-generation", "base_model:v000000/L3-Umbral-Storm-8B-t0.0001", "base_model:quantized:v000000/L3-Umbral-Storm-8B-t0.0001", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-01T03:53:05Z
---
base_model: v000000/L3-Umbral-Storm-8B-t0.0001
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---

# v000000/L3-Umbral-Storm-8B-t0.0001 GGUF Quantizations 🚀

![Featherless AI Quants](./featherless-quants.png)

*Optimized GGUF quantization files for enhanced model performance*

> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.

---

## Available Quantizations 📊

| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf) | 4276.62 MB |

---

## ⚡ Powered by [Featherless AI](https://featherless.ai)

### Key Features

- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month

---

**Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
mradermacher/llama-2-7b-Amharic-pretrained-GGUF
mradermacher
2024-11-01T04:02:36Z
7
0
transformers
[ "transformers", "gguf", "en", "base_model:AbelBekele/llama-2-7b-Amharic-pretrained", "base_model:quantized:AbelBekele/llama-2-7b-Amharic-pretrained", "endpoints_compatible", "region:us" ]
null
2024-11-01T01:28:08Z
---
base_model: AbelBekele/llama-2-7b-Amharic-pretrained
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/AbelBekele/llama-2-7b-Amharic-pretrained

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
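The Usage section above mentions concatenating multi-part files. As a hedged sketch (the filenames below are hypothetical), older byte-split GGUF files can be rejoined by plain byte-wise concatenation; shards produced by the newer `gguf-split` tool should instead be merged with llama.cpp's `llama-gguf-split --merge`.

```python
# Byte-wise join of old-style split GGUF parts (hypothetical filenames).
# Do NOT use this for gguf-split shards (e.g. *-00001-of-00003.gguf);
# merge those with `llama-gguf-split --merge` from llama.cpp instead.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("llama-2-7b-Amharic-pretrained.Q8_0.gguf.part*"))
with open("llama-2-7b-Amharic-pretrained.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)  # stream each part in order
```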
eeeyounglee/bigcategory-3
eeeyounglee
2024-11-01T04:00:08Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-01T03:59:46Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
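Since the How to Get Started section above is still a placeholder, here is a minimal sketch of loading this checkpoint as a `transformers` text-classification pipeline. The card documents neither the label set nor the expected input language, so the example input and the meaning of the returned labels are assumptions to verify against the model's config.

```python
# Minimal sketch: load the checkpoint as a text-classification pipeline.
# The card documents neither the label set nor the training data, so the
# returned labels should be checked against the model's config before use.
from transformers import pipeline

clf = pipeline("text-classification", model="eeeyounglee/bigcategory-3")
print(clf("Example input sentence"))  # e.g. [{'label': ..., 'score': ...}]
```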
mradermacher/Mistral-7B-v0.1-sharded-GGUF
mradermacher
2024-11-01T03:55:10Z
10
0
transformers
[ "transformers", "gguf", "pretrained", "en", "base_model:Sharathhebbar24/Mistral-7B-v0.1-sharded", "base_model:quantized:Sharathhebbar24/Mistral-7B-v0.1-sharded", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-01T03:26:18Z
---
base_model: Sharathhebbar24/Mistral-7B-v0.1-sharded
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- pretrained
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/Sharathhebbar24/Mistral-7B-v0.1-sharded

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
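The table above is sorted by size rather than quality, so a common reading is: take the largest quant that fits your memory budget while leaving headroom for the KV cache and runtime overhead. A small illustrative helper follows; the sizes are copied from the table, and the 20% headroom figure is a rough assumption, not a rule.

```python
# Illustrative helper: pick the largest quant fitting a memory budget.
# Sizes (GB) are copied from the table above; the 20% headroom for
# KV cache and runtime overhead is a rough assumption, not a rule.
QUANT_SIZES_GB = {
    "Q2_K": 2.8, "Q3_K_S": 3.3, "Q3_K_M": 3.6, "Q3_K_L": 3.9,
    "IQ4_XS": 4.0, "Q4_K_S": 4.2, "Q4_K_M": 4.5, "Q5_K_S": 5.1,
    "Q5_K_M": 5.2, "Q6_K": 6.0, "Q8_0": 7.8, "f16": 14.6,
}

def pick_quant(budget_gb: float, headroom: float = 0.2) -> str:
    usable = budget_gb * (1 - headroom)
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    if not fitting:
        raise ValueError(f"No quant fits within {budget_gb} GB")
    return max(fitting, key=fitting.get)  # largest file that fits

print(pick_quant(8.0))  # e.g. -> 'Q6_K' on an 8 GB budget
```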
featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF
featherless-ai-quants
2024-11-01T03:54:24Z
25
0
null
[ "gguf", "text-generation", "base_model:rhaymison/Mistral-portuguese-luana-7b", "base_model:quantized:rhaymison/Mistral-portuguese-luana-7b", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-01T03:37:53Z
---
base_model: rhaymison/Mistral-portuguese-luana-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---

# rhaymison/Mistral-portuguese-luana-7b GGUF Quantizations 🚀

![Featherless AI Quants](./featherless-quants.png)

*Optimized GGUF quantization files for enhanced model performance*

> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.

---

## Available Quantizations 📊

| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf) | 3761.66 MB |

---

## ⚡ Powered by [Featherless AI](https://featherless.ai)

### Key Features

- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month

---

**Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
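A minimal sketch of fetching and running the Q4_K_M file from the table above in one step, assuming a recent `llama-cpp-python` whose `Llama.from_pretrained` helper can download directly from the Hub; the Portuguese prompt and generation settings are illustrative.

```python
# Minimal sketch: fetch and run the Q4_K_M quant in one step.
# Requires a recent llama-cpp-python with the from_pretrained helper
# (and huggingface_hub installed for the download).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF",
    filename="rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf",
    n_ctx=4096,  # illustrative context size
)
out = llm("Explique em uma frase o que é quantização de modelos.", max_tokens=64)
print(out["choices"][0]["text"])
```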