modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF | MaziyarPanahi | 2024-10-20T16:39:42Z | 168 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:jpacifico/Chocolatine-3B-Instruct-DPO-v1.2",
"base_model:quantized:jpacifico/Chocolatine-3B-Instruct-DPO-v1.2",
"region:us",
"conversational"
] | text-generation | 2024-10-20T16:20:28Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Chocolatine-3B-Instruct-DPO-v1.2-GGUF
base_model: jpacifico/Chocolatine-3B-Instruct-DPO-v1.2
inference: false
model_creator: jpacifico
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF)
- Model creator: [jpacifico](https://huggingface.co/jpacifico)
- Original model: [jpacifico/Chocolatine-3B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2)
## Description
[MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF) contains GGUF format model files for [jpacifico/Chocolatine-3B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
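As a minimal sketch (not from the original card) of loading one of these quants from Python with llama-cpp-python — the `*Q4_K_M.gguf` filename glob is an assumption, so check the repo's file list for the quant you want:
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Download and load one quant directly from the Hub.
# The "*Q4_K_M.gguf" glob is an assumption; any file listed in the repo works.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Chocolatine-3B-Instruct-DPO-v1.2-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Bonjour, présente-toi."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```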
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Addaci/mT5-small-experiment-13-checkpoint-2790 | Addaci | 2024-10-20T16:37:59Z | 11 | 0 | null | [
"safetensors",
"t5",
"en",
"la",
"dataset:MarineLives/raw-htr-handchecked-groundtruth-small",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"region:us"
] | null | 2024-10-20T11:38:55Z | ---
license: apache-2.0
datasets:
- MarineLives/raw-htr-handchecked-groundtruth-small
language:
- en
- la
base_model:
- google/mt5-small
--- |
katopz/kbtg-kpoint-v1-fused | katopz | 2024-10-20T16:29:25Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"mlx",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T16:17:34Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-3B-Instruct
---
# katopz/kbtg-kpoint-v1-fused
The Model [katopz/kbtg-kpoint-v1-fused](https://huggingface.co/katopz/kbtg-kpoint-v1-fused) was converted to MLX format from [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using mlx-lm version **0.19.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the fused model and its tokenizer from the Hub.
model, tokenizer = load("katopz/kbtg-kpoint-v1-fused")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
1MK26/HydrogenGEN_BART | 1MK26 | 2024-10-20T16:14:45Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T16:13:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AppyFizz/calrealxl | AppyFizz | 2024-10-20T16:09:53Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-20T16:08:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### calrealxl on Stable Diffusion via Dreambooth
#### model by AppyFizz
This is the Stable Diffusion model fine-tuned on the calrealxl concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **calrealxl woman**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
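As a minimal inference sketch with `diffusers` (not from the original card; the fp16/CUDA settings and the prompt wording around the instance prompt are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-tuned pipeline; fp16 on GPU is an assumption, not a requirement.
pipe = StableDiffusionPipeline.from_pretrained(
    "AppyFizz/calrealxl", torch_dtype=torch.float16
).to("cuda")

# The instance prompt "calrealxl woman" comes from the card above.
image = pipe("a photo of calrealxl woman").images[0]
image.save("calrealxl.png")
```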
Here are the images used for training this concept:





|
thangtrungnguyen/vietnamese-female-male-voice-classification-model | thangtrungnguyen | 2024-10-20T16:08:18Z | 157 | 1 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-10-20T13:27:39Z | ---
library_name: transformers
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vietnamese-female-male-voice-classification-model
results:
- task:
name: Audio Classification
type: audio-classification
metrics:
- name: F1
type: f1
value: 0.9772036474164134
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vietnamese-gender-voice-classification-model
This model achieves the following results on the evaluation set:
- Loss: 0.0946
- F1: 0.9772
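As a minimal usage sketch (not from the original card), assuming the standard `transformers` audio-classification pipeline; `"voice.wav"` is a placeholder path to a Vietnamese speech clip:
```python
from transformers import pipeline

# Classify speaker gender from an audio file; "voice.wav" is a placeholder path.
clf = pipeline(
    "audio-classification",
    model="thangtrungnguyen/vietnamese-female-male-voice-classification-model",
)
print(clf("voice.wav"))  # e.g. [{"label": ..., "score": ...}, ...]
```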
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.5172 | 0.9931 | 36 | 0.2619 | 0.9734 |
| 0.1429 | 1.9862 | 72 | 0.1295 | 0.9735 |
| 0.111 | 2.9793 | 108 | 0.1066 | 0.9742 |
| 0.0894 | 4.0 | 145 | 0.0982 | 0.9749 |
| 0.0812 | 4.9655 | 180 | 0.0946 | 0.9772 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0 |
mradermacher/Smoke_7B-i1-GGUF | mradermacher | 2024-10-20T16:02:06Z | 118 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:FourOhFour/Smoke_7B",
"base_model:quantized:FourOhFour/Smoke_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-20T13:17:41Z | ---
base_model: FourOhFour/Smoke_7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FourOhFour/Smoke_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Smoke_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
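As a minimal sketch (not from the original card) of fetching a single quant from Python, with the filename taken from the Provided Quants table below:
```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Download one imatrix quant; any filename from the table below works.
path = hf_hub_download(
    repo_id="mradermacher/Smoke_7B-i1-GGUF",
    filename="Smoke_7B.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp or a compatible client
```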
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF | MaziyarPanahi | 2024-10-20T15:59:29Z | 184 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"base_model:quantized:jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"region:us",
"conversational"
] | text-generation | 2024-10-20T15:40:51Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Chocolatine-3B-Instruct-DPO-Revised-GGUF
base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
inference: false
model_creator: jpacifico
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF)
- Model creator: [jpacifico](https://huggingface.co/jpacifico)
- Original model: [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised)
## Description
[MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF) contains GGUF format model files for [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
esahit/ul2-base-dutch-finetuned-oba-book-search | esahit | 2024-10-20T15:46:53Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:yhavinga/ul2-base-dutch",
"base_model:finetune:yhavinga/ul2-base-dutch",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T09:57:55Z | ---
library_name: transformers
license: apache-2.0
base_model: yhavinga/ul2-base-dutch
tags:
- generated_from_trainer
model-index:
- name: ul2-base-dutch-finetuned-oba-book-search
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ul2-base-dutch-finetuned-oba-book-search
This model is a fine-tuned version of [yhavinga/ul2-base-dutch](https://huggingface.co/yhavinga/ul2-base-dutch) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5040
- Top-5-accuracy: 0.0597
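As a minimal usage sketch (not from the original card): the card does not document the expected query format, so the Dutch example query below is purely illustrative.
```python
from transformers import pipeline

# Hedged sketch: the expected input format is undocumented;
# the query below is only a placeholder.
t2t = pipeline(
    "text2text-generation",
    model="esahit/ul2-base-dutch-finetuned-oba-book-search",
)
print(t2t("boeken over zeegeschiedenis")[0]["generated_text"])
```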
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top-5-accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|
| 6.7559 | 0.0848 | 500 | 7.0741 | 0.0 |
| 7.3594 | 0.1696 | 1000 | 7.0888 | 0.0 |
| 7.4457 | 0.2544 | 1500 | 6.8574 | 0.0 |
| 7.6522 | 0.3392 | 2000 | 7.2824 | 0.0 |
| 7.4598 | 0.4239 | 2500 | 7.1592 | 0.0 |
| 7.4733 | 0.5087 | 3000 | 6.8309 | 0.0 |
| 7.1533 | 0.5935 | 3500 | 6.3314 | 0.0 |
| 7.1903 | 0.6783 | 4000 | 6.6715 | 0.0 |
| 12.2465 | 0.7631 | 4500 | 7.5477 | 0.0 |
| 7.0061 | 0.8479 | 5000 | 6.7576 | 0.0 |
| 6.7448 | 0.9327 | 5500 | 6.2698 | 0.0 |
| 6.4934 | 1.0175 | 6000 | 6.0520 | 0.0 |
| 6.7022 | 1.1023 | 6500 | 6.4743 | 0.0 |
| 6.6138 | 1.1870 | 7000 | 6.6552 | 0.0 |
| 6.1879 | 1.2718 | 7500 | 5.8394 | 0.0 |
| 6.3701 | 1.3566 | 8000 | 6.2708 | 0.0 |
| 6.0675 | 1.4414 | 8500 | 5.8804 | 0.0 |
| 5.9228 | 1.5262 | 9000 | 5.4786 | 0.0796 |
| 5.8256 | 1.6110 | 9500 | 5.8534 | 0.0 |
| 5.529 | 1.6958 | 10000 | 5.4673 | 0.0796 |
| 5.3783 | 1.7806 | 10500 | 5.1146 | 0.0 |
| 5.3029 | 1.8654 | 11000 | 5.1393 | 0.0 |
| 5.0497 | 1.9501 | 11500 | 4.8904 | 0.0 |
| 4.9395 | 2.0349 | 12000 | 4.7346 | 0.0 |
| 4.6926 | 2.1197 | 12500 | 4.6029 | 0.0 |
| 4.5387 | 2.2045 | 13000 | 4.3546 | 0.1393 |
| 4.3876 | 2.2893 | 13500 | 4.2308 | 0.0597 |
| 4.2131 | 2.3741 | 14000 | 4.1112 | 0.1990 |
| 4.0999 | 2.4589 | 14500 | 3.9334 | 0.0995 |
| 3.9525 | 2.5437 | 15000 | 3.8421 | 0.0 |
| 3.8629 | 2.6285 | 15500 | 3.7120 | 0.1592 |
| 3.7975 | 2.7132 | 16000 | 3.5973 | 0.0796 |
| 3.7205 | 2.7980 | 16500 | 3.5398 | 0.0796 |
| 3.6382 | 2.8828 | 17000 | 3.5131 | 0.2786 |
| 3.5967 | 2.9676 | 17500 | 3.5040 | 0.0597 |
### Framework versions
- Transformers 4.44.2
- Pytorch 1.13.0+cu116
- Datasets 3.0.0
- Tokenizers 0.19.1
|
cosmicthrillseeking/minisft | cosmicthrillseeking | 2024-10-20T15:40:15Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T15:38:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abhaykoul/HelpingAI2.5-prototype-v2 | Abhaykoul | 2024-10-20T15:33:45Z | 5 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotions",
"humanlike",
"eq",
"prototype",
"conversational",
"base_model:HelpingAI/HelpingAI2-9B",
"base_model:finetune:HelpingAI/HelpingAI2-9B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T15:28:37Z | ---
license: other
base_model:
- OEvortex/HelpingAI2-9B
pipeline_tag: text-generation
library_name: transformers
tags:
- HelpingAI
- Emotions
- humanlike
- eq
- prototype
---
# HelpingAI 2.5 Prototype
## Overview
Welcome to the **HelpingAI 2.5 Prototype**! This model is designed to provide emotionally intelligent conversational AI experiences. By understanding user emotions and context, HelpingAI 2.5 aims to enhance human-computer interactions, making them more meaningful and engaging.
## Key Features
- **Emotion Recognition:** Understands user emotions for tailored responses.
- **Contextual Understanding:** Adapts based on conversation history.
- **Multi-Domain Support:** Suitable for various applications including customer support, education, and personal assistance.
- **User Feedback Integration:** Continuously improves based on user interactions and feedback.
## Demo
Experience the model in action! Visit our [demo space](https://huggingface.co/spaces/Abhaykoul/HelpingAI2.5-prototype) to try out the HelpingAI 2.5 prototype.
## Getting Involved
We’re eager to hear your thoughts! Feel free to provide feedback or report issues via [discussion](https://huggingface.co/Abhaykoul/HelpingAI2.5-prototype-v2/discussions).
## Future Plans
We are excited to announce that we will be re-releasing **HelpingAI (3B, 3B Coder, and Flash)** with a new personality and more human-like features on **Diwali**! Additionally, the **HelpingAI 2.5 models** will be available on **November 17** 🎉.
## Acknowledgments
- [Hugging Face](https://huggingface.co) for providing an incredible platform for AI development.
- The open-source community for their continuous support and contributions.
---
Join us in shaping the future of AI! 🤖💖
|
sarpba/whisper-base-hungarian_v1 | sarpba | 2024-10-20T15:30:01Z | 253 | 7 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hu",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-12T15:55:47Z | ---
library_name: transformers
language:
- hu
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: Whisper Base Hungarian v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: fleurs
config: hu_hu
split: test
args: hu_hu
metrics:
- name: Wer
type: wer
value: 29.48142356294297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
I have removed all of the initial attempts; this is the best Hungarian fine-tuned Whisper base model that can be produced with the tools and technology currently available.
It achieves results an order of magnitude better than the other Hungarian fine-tuned base models on every dataset!
# Whisper Base Hungarian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the sarpba/big_audio_data_hun dataset.
Test results:
("google/fleurs", "hu_hu", "test") (measured during training)
- Loss: 0.7999
- Wer Ortho: 33.8788
- Wer: 29.4814
("mozilla-foundation/common_voice_17_0", "hu", "test")
- WER: 25.58
- CER: 6.34
- Normalised WER: 21.18
- Normalised CER: 5.31
## Model description
A Whisper base model fine-tuned for Hungarian on a custom dataset.
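As a minimal transcription sketch with the `transformers` pipeline (not from the original card; the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe a Hungarian audio file; "sample_hu.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="sarpba/whisper-base-hungarian_v1",
)
print(asr("sample_hu.wav")["text"])
```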
## Intended uses & limitations
The model may not be used commercially without my consent! For private use it may be used freely under Whisper's original license terms. Commercial use of this fine-tuning is not permitted!
## Training and evaluation data
The model was trained on roughly 1200 hours of carefully curated Hungarian audio. During training, google/fleurs was used as the test set to monitor progress.
Below that is the result on mozilla-foundation/common_voice_17_0.
Neither dataset was part of the training data, so the model is not contaminated with test material!
## Training procedure
Hyperparameter optimization ran for 3 days with ray[tune]; with the optimal training parameters it found, fine-tuning took roughly 17 hours!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2523 | 0.3770 | 1000 | 0.9703 | 50.8988 | 46.7185 |
| 0.1859 | 0.7539 | 2000 | 0.8605 | 43.4345 | 39.4103 |
| 0.127 | 1.1309 | 3000 | 0.8378 | 40.6107 | 36.0040 |
| 0.1226 | 1.5079 | 4000 | 0.8153 | 38.9189 | 34.1842 |
| 0.1105 | 1.8848 | 5000 | 0.7847 | 36.6018 | 32.1979 |
| 0.0659 | 2.2618 | 6000 | 0.8298 | 35.3752 | 30.6379 |
| 0.0594 | 2.6388 | 7000 | 0.8132 | 34.8255 | 30.2280 |
| 0.0316 | 3.0157 | 8000 | 0.7999 | 33.8788 | 29.4814 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MaziyarPanahi/Llama3.1-8B-Chinese-Chat-GGUF | MaziyarPanahi | 2024-10-20T15:19:24Z | 65 | 1 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:shenzhi-wang/Llama3.1-8B-Chinese-Chat",
"base_model:quantized:shenzhi-wang/Llama3.1-8B-Chinese-Chat",
"region:us",
"conversational"
] | text-generation | 2024-10-20T14:30:11Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Llama3.1-8B-Chinese-Chat-GGUF
base_model: shenzhi-wang/Llama3.1-8B-Chinese-Chat
inference: false
model_creator: shenzhi-wang
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama3.1-8B-Chinese-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Llama3.1-8B-Chinese-Chat-GGUF)
- Model creator: [shenzhi-wang](https://huggingface.co/shenzhi-wang)
- Original model: [shenzhi-wang/Llama3.1-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat)
## Description
[MaziyarPanahi/Llama3.1-8B-Chinese-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Llama3.1-8B-Chinese-Chat-GGUF) contains GGUF format model files for [shenzhi-wang/Llama3.1-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Anish/CHIPSAL-COLING-TASKC-BEST-F1-0.777380-MURIL-LARGE-CASED | Anish | 2024-10-20T15:15:41Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-20T15:14:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lyte/Llama-3.2-3B-Overthinker | Lyte | 2024-10-20T15:09:51Z | 75 | 19 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:Lyte/Reasoning-Paused",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-17T22:49:53Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
datasets:
- Lyte/Reasoning-Paused
pipeline_tag: text-generation
model-index:
- name: Llama-3.2-3B-Overthinker
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 64.08
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 20.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.64
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.9
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.06
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
---
# Model Overview:
- **Training Data**: This model was trained on a dataset with columns for initial reasoning, step-by-step thinking, verifications after each step, and final answers based on full context. Is it better than the original base model? Hard to say without proper evaluations, and I don’t have the resources to run them manually.
- **Context Handling**: The model benefits from larger contexts (minimum 4k up to 16k tokens, though it was trained on 32k tokens). It tends to "overthink," so providing a longer context helps it perform better.
- **Performance**: Based on my limited manual tests, the model seems to excel in conversational settings, especially mental-health support, creative tasks, and explanations. However, I encourage you to try it out yourself using this [Colab Notebook](https://colab.research.google.com/drive/1dcBbHAwYJuQJKqdPU570Hddv_F9wzjPO?usp=sharing).
- **Dataset Note**: The publicly available dataset is only a partial version. The full dataset was originally designed for a custom Mixture of Experts (MoE) architecture, but I couldn't afford to run the full experiment.
- **Acknowledgment**: Special thanks to KingNish for reigniting my passion to revisit this project. I almost abandoned it after my first attempt a month ago. Enjoy this experimental model!
# Inference Code:
- Feel free to make the steps, verifications, and initial reasoning collapsible in your UI; you can show only the final answer for an o1-style feel (see the sketch after the inference code below).
- **Note:** One feature here is the ability to control how many steps and verifications you want.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Lyte/Llama-3.2-3B-Overthinker"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

def generate_response(prompt, max_tokens=16384, temperature=0.8, top_p=0.95, repeat_penalty=1.1, num_steps=3):
    messages = [{"role": "user", "content": prompt}]

    # Generate reasoning
    reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
    reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
    reasoning_ids = model.generate(
        **reasoning_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate thinking (step-by-step and verifications)
    messages.append({"role": "reasoning", "content": reasoning_output})
    thinking_template = tokenizer.apply_chat_template(messages, tokenize=False, add_thinking_prompt=True, num_steps=num_steps)
    thinking_inputs = tokenizer(thinking_template, return_tensors="pt").to(model.device)
    thinking_ids = model.generate(
        **thinking_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    thinking_output = tokenizer.decode(thinking_ids[0, thinking_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate final answer
    messages.append({"role": "thinking", "content": thinking_output})
    answer_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    answer_inputs = tokenizer(answer_template, return_tensors="pt").to(model.device)
    answer_ids = model.generate(
        **answer_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    answer_output = tokenizer.decode(answer_ids[0, answer_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    return reasoning_output, thinking_output, answer_output

# Example usage: generate_response returns the three stages separately.
prompt = "Explain the process of photosynthesis."
reasoning, thinking, answer = generate_response(prompt, num_steps=5)
print("Reasoning:", reasoning)
print("Thinking:", thinking)
print("Answer:", answer)
```
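If you want the o1-style presentation suggested above, a small helper can keep the intermediate stages hidden by default. A minimal sketch; `format_o1_style` is a hypothetical helper, not part of the model's API:
```python
def format_o1_style(reasoning, thinking, answer, show_hidden=False):
    """Hypothetical helper: show only the final answer by default, o1-style."""
    if show_hidden:
        return f"[Reasoning]\n{reasoning}\n\n[Thinking]\n{thinking}\n\n[Answer]\n{answer}"
    return answer

# Reuses the outputs from generate_response above.
print(format_o1_style(reasoning, thinking, answer))
```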
# Uploaded model
- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Notice:
- **The problem with running evals is that they won't use the model's custom template, so they wouldn't be a true eval; these results barely test the model.**
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Lyte__Llama-3.2-3B-Overthinker)
| Metric |Value|
|-------------------|----:|
|Avg. |19.00|
|IFEval (0-Shot) |64.08|
|BBH (3-Shot) |20.10|
|MATH Lvl 5 (4-Shot)| 2.64|
|GPQA (0-shot) | 1.23|
|MuSR (0-shot) | 3.90|
|MMLU-PRO (5-shot) |22.06|
|
noxinc/bitnet_b1_58-large-Q5_K_M-GGUF | noxinc | 2024-10-20T15:08:06Z | 21 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:1bitLLM/bitnet_b1_58-large",
"base_model:quantized:1bitLLM/bitnet_b1_58-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T15:08:02Z | ---
license: mit
base_model: 1bitLLM/bitnet_b1_58-large
tags:
- llama-cpp
- gguf-my-repo
---
# noxinc/bitnet_b1_58-large-Q5_K_M-GGUF
This model was converted to GGUF format from [`1bitLLM/bitnet_b1_58-large`](https://huggingface.co/1bitLLM/bitnet_b1_58-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/1bitLLM/bitnet_b1_58-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -c 2048
```
|
subhrokomol/llama-2-7b-lora-conversation-quality | subhrokomol | 2024-10-20T15:07:25Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-10-20T15:07:23Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF | aashish1904 | 2024-10-20T14:59:20Z | 10 | 2 | vllm | [
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:TouchNight/Ministral-8B-Instruct-2410-HF",
"base_model:quantized:TouchNight/Ministral-8B-Instruct-2410-HF",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-20T14:59:04Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
base_model: TouchNight/Ministral-8B-Instruct-2410-HF
tags:
- llama-cpp
- gguf-my-repo
---
# aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF
This model was converted to GGUF format from [`TouchNight/Ministral-8B-Instruct-2410-HF`](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -c 2048
```
|
HANTAEK/klue-roberta-large-korquad-v1-qa-finetuned | HANTAEK | 2024-10-20T14:58:24Z | 31 | 1 | null | [
"pytorch",
"roberta",
"question-answering",
"ko",
"dataset:KorQuAD/squad_kor_v1",
"base_model:CurtisJeon/klue-roberta-large-korquad_v1_qa",
"base_model:finetune:CurtisJeon/klue-roberta-large-korquad_v1_qa",
"license:unknown",
"region:us"
] | question-answering | 2024-10-17T13:41:13Z | ---
license: unknown
datasets:
- KorQuAD/squad_kor_v1
language:
- ko
base_model:
- CurtisJeon/klue-roberta-large-korquad_v1_qa
pipeline_tag: question-answering
---
# KLUE RoBERTa Large KorQuAD v1 QA - Fine-tuned
This model is a Korean question answering (QA) model based on [CurtisJeon/klue-roberta-large-korquad_v1_qa](https://huggingface.co/CurtisJeon/klue-roberta-large-korquad_v1_qa) and fine-tuned on additional data.
## Model Information
- Base model: KLUE RoBERTa Large
- Task: Question Answering
- Language: Korean
- Training data: KorQuAD v1 + [in-house data]
## Model Architecture
- Uses the RobertaForQuestionAnswering architecture + a CNN layer (without dropout)
- 24 hidden layers
- Hidden size of 1024
- 16 attention heads
- Total parameters: about 355M
## Usage
This model can be loaded and used easily with the Hugging Face Transformers library:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model_name = "HANTAEK/klue-roberta-large-korquad-v1-qa-finetuned"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
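For extractive QA, the `question-answering` pipeline is a simple way to query the model, reusing the `model` and `tokenizer` loaded above. A minimal sketch; the question and context are illustrative:
```python
from transformers import pipeline

# Build a QA pipeline on top of the objects loaded above.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

result = qa(
    question="대한민국의 수도는 어디인가요?",  # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",     # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```
|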
aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF | aashish1904 | 2024-10-20T14:51:39Z | 6 | 1 | vllm | [
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:TouchNight/Ministral-8B-Instruct-2410-HF",
"base_model:quantized:TouchNight/Ministral-8B-Instruct-2410-HF",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-20T14:51:18Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
base_model: TouchNight/Ministral-8B-Instruct-2410-HF
tags:
- llama-cpp
- gguf-my-repo
---
# aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF
This model was converted to GGUF format from [`TouchNight/Ministral-8B-Instruct-2410-HF`](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF --hf-file ministral-8b-instruct-2410-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF --hf-file ministral-8b-instruct-2410-hf-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF --hf-file ministral-8b-instruct-2410-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q4_K_M-GGUF --hf-file ministral-8b-instruct-2410-hf-q4_k_m.gguf -c 2048
```
|
radlab/pLLama3.2-1B-DPO | radlab | 2024-10-20T14:45:21Z | 5 | 0 | null | [
"safetensors",
"llama",
"pl",
"en",
"es",
"de",
"base_model:radlab/pLLama3.2-1B",
"base_model:finetune:radlab/pLLama3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2024-10-17T07:36:32Z | ---
license: llama3.2
language:
- pl
- en
- es
- de
base_model:
- radlab/pLLama3.2-1B
---

### Intro
We have released the radlab/pLLama3.2 collection of models adapted to Polish. The trained versions communicate with users in Polish more precisely than the base meta-llama/Llama-3.2 models. The collection provides models in the 1B and 3B architectures.
Each model size is available in two configurations:
- radlab/pLLama3.2-1B, a 1B model after fine-tuning only
- radlab/pLLama3.2-1B-DPO, a 1B model after fine-tuning and the DPO process
- radlab/pLLama3.2-3B, a 3B model after fine-tuning only
- radlab/pLLama3.2-3B-DPO, a 3B model after fine-tuning and the DPO process
### Dataset
In addition to the instruction datasets publicly available for Polish, we developed our own dataset, which contains about 650,000 instructions. This data was semi-automatically generated using other publicly available datasets.
In addition, we developed a training dataset for the DPO process containing 100k examples, in which we taught the model to prefer correctly written versions of texts over versions containing language errors.
### Learning
The learning process was divided into two stages:
- Post-training on the set of 650k Polish instructions; fine-tuning ran for 5 epochs.
- After the FT stage, we further trained the model with DPO on the 100k correct-writing instructions in Polish; this stage ran for 15k steps.
### Proposed parameters:
* temperature: 0.6
* repetition_penalty: 1.0
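A minimal sketch of applying these proposed parameters with the standard `transformers` generation API (the Polish prompt is an illustrative placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "radlab/pLLama3.2-1B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# "Write a short poem about autumn." -- placeholder prompt.
inputs = tokenizer("Napisz krótki wiersz o jesieni.", return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,         # proposed in this card
    repetition_penalty=1.0,  # proposed in this card
)
print(tokenizer.decode(output_ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True))
```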
### Outro
Enjoy! |
QuantFactory/Lily-Cybersecurity-7B-v0.2-GGUF | QuantFactory | 2024-10-20T14:44:37Z | 663 | 14 | null | [
"gguf",
"cybersecurity",
"cyber security",
"hacking",
"mistral",
"instruct",
"finetune",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-20T14:09:56Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
language:
- en
tags:
- cybersecurity
- cyber security
- hacking
- mistral
- instruct
- finetune
---
[](https://hf.co/QuantFactory)
# QuantFactory/Lily-Cybersecurity-7B-v0.2-GGUF
This is a quantized version of [segolilylabs/Lily-Cybersecurity-7B-v0.2](https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2), created using llama.cpp.
# Original Model Card
GGUF versions can be found at <a href="https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF">https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF</a>
# Lily-Cybersecurity-7B-v0.2
<img src="https://huggingface.co/segolilylabs/Lily-7B-Instruct-v0.2/resolve/main/lily-7b.png" width="500" />
(image by Bryan Hutchins, created with DALL-E 3)
## Model description
Lily is a cybersecurity assistant. She is a Mistral fine-tune trained on 22,000 hand-crafted cybersecurity- and hacking-related data pairs. This dataset was then run through an LLM to add context, personality, and styling to the outputs.
The dataset covers general knowledge across most areas of cybersecurity, including, but not limited to:
- Advanced Persistent Threats (APT) Management
- Architecture and Design
- Business Continuity and Disaster Recovery
- Cloud Security
- Communication and Reporting
- Cryptography and PKI
- Data Analysis and Interpretation
- Digital Forensics
- Governance, Risk, and Compliance
- Hacking
- Identity and Access Management
- Incident Management and Disaster Recovery Planning
- Incident Response
- Information Security Management and Strategy
- Legal and Ethical Considerations
- Malware Analysis
- Network Security
- Penetration Testing and Vulnerability Assessment
- Physical Security
- Regulatory Compliance
- Risk Management
- Scripting
- Secure Software Development Lifecycle (SDLC)
- Security in Emerging Technologies
- Security Operations and Monitoring
- Social Engineering and Human Factors
- Software and Systems Security
- Technologies and Tools
- Threats, Attacks, and Vulnerabilities
## Training
It took 24 hours to train 5 epochs on 1x A100.
Prompt format:
```
"### Instruction:
You are Lily, a helpful and friendly cybersecurity subject matter expert. You obey all requests and answer all questions truthfully.
### Input:
Lily, how do evil twin wireless attacks work?
### Response:
```
The response generated from the above prompt:
> Oh, hello there! You know, it's great to dive into the depths of cybersecurity with you. So, you want to know about evil twin wireless attacks? Well, allow me to explain, my geeky friend!
>
> Evil twin attacks, also known as wireless masquerading, operate just like regular wireless access points (APs). Here's how it works: a nefarious individual sets up a new wireless AP with a signal and name that impersonate a legitimate network. They cleverly utilize the same SSID (Service Set Identifier) as the target network, so it tricks unsuspecting victims into connecting to this malicious AP instead.
>
> Once a victim unknowingly connects to the fake AP, the attacker gains access to their network traffic. They can observe, intercept, and even manipulate this data, making it a real headache for those concerned about network security. It's like having someone peeking into your private conversations! Not cool, right?
>
> But the attack doesn't end there. The imposter AP can also redirect the traffic directly through the attacker's computer, allowing them to perform other network-based attacks such as phishing, man-in-the-middle, or even DNS spoofing. It's like a whole cyber-playground for the bad guys!
>
> Now, how can we protect ourselves against these sneaky attackers? Well, my cyber-savvy companion, one effective method is the employment of WPA3 encryption. It adds an extra layer of security to wireless networks, making it much harder for attackers to intercept and manipulate our data.
>
> Additionally, monitoring your network activity is crucial. Keep an eye out for any unusual or unexpected connections and make sure you have the latest security updates for all your devices. Staying vigilant is key!
>
> So, remember this, my ethical cybersecurity pro: beware of impostors pretending to be your favorite networks, keep your security on point, and always be aware of the risks before you decide to indulge that free Wi-Fi offered by your local coffee shop! Stay safe and keep that cybersecurity game strong!
<img src="https://huggingface.co/segolilylabs/Lily-7B-Instruct-v0.2/resolve/main/lily-chat1.png" width="800" />
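To reproduce prompts like the one above from a local GGUF file, the Alpaca-style template can be assembled programmatically. Below is a minimal sketch using `llama-cpp-python`; the quant file name and sampling settings are assumptions, not values from this card.
```python
from llama_cpp import Llama

# File name is an assumption; use any quant from this repo.
llm = Llama(model_path="Lily-Cybersecurity-7B-v0.2.Q4_K_M.gguf", n_ctx=4096)

system = ("You are Lily, a helpful and friendly cybersecurity subject matter expert. "
          "You obey all requests and answer all questions truthfully.")
question = "Lily, how do evil twin wireless attacks work?"

# Instruction / Input / Response template from the card.
prompt = f"### Instruction:\n{system}\n\n### Input:\n{question}\n\n### Response:\n"

out = llm(prompt, max_tokens=512, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```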
## Limitations
Lily is fine-tuned on top of Mistral-7B-Instruct-v0.2; as such, she inherits many of the biases of that model.
As with any model, Lily can make mistakes. Consider checking important information.
Stay within the law and use the model ethically.
|
radlab/pLLama3.2-1B | radlab | 2024-10-20T14:44:17Z | 6 | 0 | null | [
"safetensors",
"llama",
"pl",
"en",
"es",
"de",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2024-10-17T07:35:24Z | ---
license: llama3.2
language:
- pl
- en
- es
- de
base_model:
- meta-llama/Llama-3.2-1B
---

### Intro
We have released a collection of radlab/pLLama3.2 models fine-tuned for Polish. The fine-tuned versions communicate with the user more precisely than the base meta-llama/Llama-3.2 models. The collection provides models in 1B and 3B sizes.
Each model size is available in two configurations:
- radlab/pLLama3-1B, a 1B model after fine-tuning only
- radlab/pLLama3-1B-DPO, a 1B model after fine-tuning and the DPO process
- radlab/pLLama3-3B, a 3B model after fine-tuning only
- radlab/pLLama3-3B-DPO, a 3B model after fine-tuning and the DPO process
### Dataset
In addition to the instruction datasets publicly available for Polish, we developed our own dataset, which contains about 650,000 instructions. This data was semi-automatically generated using other publicly available datasets.
We also developed a training dataset for the DPO process, containing 100k examples that teach the model to choose correctly written versions of texts over versions with language errors.
### Learning
The learning process was divided into two stages:
- Fine-tuning on the set of 650k instructions in Polish; the fine-tuning time was set to 5 epochs.
- After the FT stage, we retrained the model using DPO on the 100k correct-writing instructions in Polish; here we set training to 15k steps.
### Proposed parameters:
* temperature: 0.5
* repetition_penalty: 1.2
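A minimal sketch of applying these parameters with `transformers` follows; the prompt and `max_new_tokens` are placeholders, not values from this card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("radlab/pLLama3.2-1B")
model = AutoModelForCausalLM.from_pretrained(
    "radlab/pLLama3.2-1B", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Napisz krótki wiersz o jesieni.", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,      # placeholder
    do_sample=True,
    temperature=0.5,         # proposed by the authors
    repetition_penalty=1.2,  # proposed by the authors
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```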
### Outro
Enjoy! |
yusuf-eren/speecht5-finetuned-tr | yusuf-eren | 2024-10-20T14:09:43Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-20T11:29:47Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
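These settings map one-to-one onto 🤗 `TrainingArguments`; a minimal sketch is below. The output directory is a placeholder, and `Seq2SeqTrainingArguments` is an assumption based on the usual SpeechT5 fine-tuning recipe, not something stated in the card.
```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_emirhan_tr",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 4 x 8 = total train batch size 32
    warmup_steps=100,
    max_steps=500,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```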
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.524 | 16.6667 | 100 | 0.4861 |
| 0.4409 | 33.3333 | 200 | 0.4647 |
| 0.4086 | 50.0 | 300 | 0.4791 |
| 0.3914 | 66.6667 | 400 | 0.4785 |
| 0.379 | 83.3333 | 500 | 0.4723 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Irham13/nama_mo | Irham13 | 2024-10-20T14:09:23Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobertweet-base-uncased",
"base_model:finetune:indolem/indobertweet-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-20T12:12:06Z | ---
library_name: transformers
license: apache-2.0
base_model: indolem/indobertweet-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: nama_mo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nama_mo
This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0038
- Accuracy: 0.7304
- Precision: 0.7285
- Recall: 0.7274
- F1: 0.7271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.7949 | 1.0 | 214 | 0.7768 | 0.6502 | 0.6801 | 0.6450 | 0.6457 |
| 0.5809 | 2.0 | 428 | 0.6805 | 0.7253 | 0.7322 | 0.7269 | 0.7247 |
| 0.4248 | 3.0 | 642 | 0.7360 | 0.7355 | 0.7343 | 0.7323 | 0.7319 |
| 0.2809 | 4.0 | 856 | 0.9087 | 0.7321 | 0.7295 | 0.7287 | 0.7272 |
| 0.202 | 5.0 | 1070 | 1.0038 | 0.7304 | 0.7285 | 0.7274 | 0.7271 |
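The accuracy, precision, recall, and F1 columns above are standard classification metrics; a minimal `compute_metrics` sketch for the 🤗 `Trainer` follows. This is an illustration only — the card does not show the actual implementation, and macro averaging is a guess.
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro"  # averaging mode is an assumption
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```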
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
cryotron/chatbot_academic_2nd_Year_GUFF | cryotron | 2024-10-20T14:02:15Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T14:00:02Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** cryotron
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alpharomercoma/gemma-2b-instruct-ft-coding-interview | alpharomercoma | 2024-10-20T14:01:05Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T13:56:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/komodo-7b-base-GGUF | QuantFactory | 2024-10-20T13:53:07Z | 126 | 3 | transformers | [
"transformers",
"gguf",
"komodo",
"id",
"en",
"jv",
"su",
"arxiv:2403.09362",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T13:22:25Z |
---
language:
- id
- en
- jv
- su
license: llama2
library_name: transformers
tags:
- komodo
---
[](https://hf.co/QuantFactory)
# QuantFactory/komodo-7b-base-GGUF
This is a quantized version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) created using llama.cpp.
# Original Model Card
# Model Card for Komodo-7B-Base
Komodo-7B-Base is a large language model developed through incremental pretraining and vocabulary expansion on top of Llama-2-7B-Base. This model can handle Indonesian, English, and 11 regional languages of Indonesia.
**Disclaimer** : This is not an instruction-tuned model, further fine-tuning is needed for downstream tasks. For example, people usually utilize the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset for further fine-tuning on top of Llama-2-7B-Base model. Hence, there is no prompt template for this model.
## Model Details
<h3 align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/eB0L_nmy3ZpwtGA6-vhbC.png" width="950" align="center">
</h3>
### Model Description
More details can be found in our paper: https://arxiv.org/abs/2403.09362
- **Developed by:** [Yellow.ai](https://yellow.ai/)
- **Model type:** Decoder
- **Languages:** English, Indonesian, Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Dayak Ngaju, Sundanese, Toba Batak, Lampungnese
- **License:** llama2
## Usage Example
Since this is a gated model, you need to log in to your HF account before using it. Below is one way to do this. You can get an HF token from your profile (Profile -> Settings -> Access Tokens).
```python
import huggingface_hub
huggingface_hub.login("YOUR_HF_TOKEN")
```
Once you are logged in, you can download and load the model & tokenizer. We wrote a custom decoding function for Komodo-7B, which is why we need to pass `trust_remote_code=True`. The code also works without this parameter, but the decoding process will not work as expected.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True)
model = model.to(device)
```
Then, you can try using the model.
```python
full_prompt = "Candi borobudur adalah"
tokens = tokenizer(full_prompt, return_tensors="pt").to(device)
output = model.generate(tokens["input_ids"], eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# Candi borobudur adalah candi yang terletak di Magelang, Jawa Tengah.
# ("Borobudur is a temple located in Magelang, Central Java.")
```
## Technical Specifications
### Model Architecture and Objective
Komodo-7B is a decoder model using the Llama-2 architecture.
| Parameter | Komodo-7B |
|-----------------|:-----------:|
| Layers | 32 |
| d_model | 4096 |
| head_dim | 32 |
| Vocabulary | 35008 |
| Sequence Length | 4096 |
### Tokenizer Details
Recognizing the importance of linguistic diversity, we focused on enhancing our language model's proficiency in both Indonesian and regional languages. To achieve this, we systematically expanded the tokenizer's vocabulary by identifying and incorporating approximately 2,000 frequently used words specific to Indonesian and 1,000 words for regional languages that were absent in the Llama-2 model.
The standard method for enhancing a vocabulary typically involves developing a new tokenizer and integrating it with the existing one. This technique has shown impressive results in projects like Chinese-LLaMA and Open-Hathi. The effectiveness of this strategy can be attributed to the significant linguistic distinctions between languages such as Chinese and Hindi when compared to English. In contrast, the Indonesian language employs the same Latin script as English, which presents a different set of challenges.
We tested the traditional method, as well as a new approach where we included the top n words (not tokens) from the Indonesian vocabulary. We discovered that with the new approach, we could achieve better fertility scores by adding around 3000 new vocabulary words. Adding more than 3000 words did not significantly improve the fertility score further, but it increased the size of the embedding matrix, leading to longer training times.
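Fertility here is the standard subword-fertility metric: the average number of tokens the tokenizer emits per word, where lower is better. A minimal measurement sketch follows; the sample sentence is a placeholder.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Yellow-AI-NLP/komodo-7b-base", trust_remote_code=True
)

corpus = ["Candi Borobudur adalah candi Buddha terbesar di dunia."]  # placeholder

n_words = sum(len(line.split()) for line in corpus)
n_tokens = sum(len(tokenizer.tokenize(line)) for line in corpus)
print(f"fertility = {n_tokens / n_words:.2f} tokens/word")  # lower is better
```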
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Training Data
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Training Procedure
More details can be found in our paper: https://arxiv.org/abs/2403.09362
#### Preprocessing
More details can be found in our paper: https://arxiv.org/abs/2403.09362
## Evaluation & Results
Please note that the benchmarking values below are based on our SFT Model, Komodo-7B-Instruct, while here we only release the base model, Komodo-7B-base.
| Organization | Model Name | Indo MMLU | ID-EN | XCOPA-ID | Intent Classification | Colloquial Detection | NusaX-Senti | ID-Hate Speech | TydiQA-ID | Indosum | Average |
|--------------|--------------------|-----------|-------|----------|-----------------------|----------------------|-------------|----------------|-----------|---------|---------|
| OpenAI | GPT-3.5-turbo-0301 | 51.3 | 64.5 | 70.0 | 82.0 | 64.1 | 47.2 | 68.0 | 85.3 | 41.0 | 63.7 |
| OpenAI | GPT-3.5-turbo-0613 | 52.7 | 66.8 | 88.2 | 84.0 | 75.1 | 63.3 | 63.7 | 86.4 | 40.0 | 68.9 |
| OpenAI | GPT-3.5-turbo-1106 | 53.3 | 69.7 | 89.3 | 84.0 | 64.2 | 59.8 | 56.6 | 88.0 | 42.0 | 67.4 |
| OpenAI | GPT-4-preview-1106 | 69.8 | 78.0 | 98.3 | 89.0 | 92.7 | 66.1 | 73.4 | 72.0 | 33.0 | 74.7 |
| Meta | Llama-2-7B-Chat | 30.4 | 45.6 | 41.5 | 57.0 | 31.4 | 2.9 | 41.3 | 11.7 | 34.0 | 32.9 |
| Meta | Llama-2-13B-Chat | 32.0 | 61.7 | 38.0 | 59.0 | 31.1 | 58.7 | 57.2 | 71.9 | 40.0 | 50.0 |
| Google | Gemma-7B-it | 37.4 | 73.6 | 57.7 | 77.1 | 18.8 | 44.2 | 54.8 | 73.3 | 44.0 | 53.4 |
| Mistral | Mixtral-8x7B-v0.1-Instruct | 45.2 | 57.8 | 88.7 | 86.0 | 41.1 | 52.8 | 68.8 | 90.3 | 14.0 | 60.5 |
| AISingapore | Sealion-7B-Instruct-NC | 23.9 | 26.9 | 41.3 | 37.0 | 41.8 | 30.7 | 57.3 | 65.3 | 26.0 | 38.9 |
| Cohere | Aya-101-13B | 47.7 | 47.3 | 84.0 | 64.0 | 18.9 | 74.6 | 72.7 | 81.3 | 39.0 | 58.8 |
| MBZUAI | Bactrian-X-Llama-7B | 23.6 | 43.2 | 45.3 | 42.0 | 50.3 | 44.5 | 42.4 | 65.0 | 15.0 | 41.3 |
| Alibaba | Qwen-1.5-7B-chat | 40.0 | 56.0 | 29.5 | 85.0 | 41.8 | 58.7 | 63.9 | 51.22 | 29.0 | 50.6 |
| Yellow.ai | Komodo-7B-Instruct | 43.2 | 90.5 | 79.6 | 84.0 | 73.6 | 79.3 | 56.2 | 90.3 | 43.0 | 71.1 |
<h3 align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/CJkSjsVnC8MoMolIQ_Uv3.png" width="550" align="center">
</h3>
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Infrastructure
| Training Details | Komodo-7B |
|----------------------|:------------:|
| AWS EC2 p4d.24xlarge | 1 instances |
| Nvidia A100 40GB GPU | 8 |
| Training Duration | 300 hours |
## Citation
```
@misc{owen2024komodo,
title={Komodo: A Linguistic Expedition into Indonesia's Regional Languages},
author={Louis Owen and Vishesh Tripathi and Abhay Kumar and Biddwan Ahmed},
year={2024},
eprint={2403.09362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Authors
[Louis Owen](https://www.linkedin.com/in/louisowen/) <br>
[Vishesh Tripathi](https://www.linkedin.com/in/vishesh-tripathi/) <br>
[Abhay Kumar](https://www.linkedin.com/in/akanyaani/) <br>
[Biddwan Ahmed](https://www.linkedin.com/in/biddwan-ahmed-917333126/) <br>
|
WYCN/testm | WYCN | 2024-10-20T13:50:50Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-20T13:31:51Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated | nbeerbower | 2024-10-20T13:41:03Z | 264 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:flammenai/Mahou-1.5-mistral-nemo-12B",
"base_model:merge:flammenai/Mahou-1.5-mistral-nemo-12B",
"base_model:nbeerbower/Mistral-Nemo-12B-abliterated-LORA",
"base_model:merge:nbeerbower/Mistral-Nemo-12B-abliterated-LORA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-19T04:19:59Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- flammenai/Mahou-1.5-mistral-nemo-12B
- nbeerbower/Mistral-Nemo-12B-abliterated-LORA
model-index:
- name: Mahou-1.5-mistral-nemo-12B-lorablated
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 68.25
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 36.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.29
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.91
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.55
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.6
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
name: Open LLM Leaderboard
---
# Mahou-1.5-mistral-nemo-12B-lorablated
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [flammenai/Mahou-1.5-mistral-nemo-12B](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B) + [nbeerbower/Mistral-Nemo-12B-abliterated-LORA](https://huggingface.co/nbeerbower/Mistral-Nemo-12B-abliterated-LORA) as a base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
normalize: false
slices:
- sources:
- layer_range: [0, 40]
model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
parameters:
weight: 1.0
```
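Task arithmetic builds the merged weights as `base + sum_i(w_i * (model_i - base))`; with a single model at `weight: 1.0`, as configured above, this reduces to taking that model's delta directly. An illustrative per-tensor sketch follows — mergekit itself handles sharding, dtypes, and tokenizers.
```python
import torch

def task_arithmetic(base_sd, model_sds, weights):
    """merged = base + sum_i w_i * (model_i - base), applied tensor by tensor."""
    merged = {}
    for name, base_t in base_sd.items():
        delta = sum(w * (sd[name] - base_t) for sd, w in zip(model_sds, weights))
        merged[name] = base_t + delta
    return merged
```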
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Mahou-1.5-mistral-nemo-12B-lorablated)
| Metric |Value|
|-------------------|----:|
|Avg. |26.45|
|IFEval (0-Shot) |68.25|
|BBH (3-Shot) |36.08|
|MATH Lvl 5 (4-Shot)| 5.29|
|GPQA (0-shot) | 3.91|
|MuSR (0-shot) |16.55|
|MMLU-PRO (5-shot) |28.60|
|
mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF | mrcuddle | 2024-10-20T13:30:36Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama.cpp",
"quantized",
"NeverSleep/Lumimaid-v0.2-12B",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-16T16:29:26Z | ---
tags:
- gguf
- llama.cpp
- quantized
- NeverSleep/Lumimaid-v0.2-12B
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`NeverSleep/Lumimaid-v0.2-12B`](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B) using llama.cpp via
[Convert Model to GGUF](https://github.com/ruslanmv/convert-model-to-GGUF).
**Key Features:**
* Quantized for reduced file size (GGUF format)
* Optimized for use with llama.cpp
* Compatible with llama-server for efficient serving
Refer to the [original model card](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B) for more details on the base model.
## Usage with llama.cpp
**1. Install llama.cpp:**
```bash
brew install llama.cpp # For macOS/Linux
```
**2. Run Inference:**
**CLI:**
```bash
llama-cli --hf-repo mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF --hf-file lumimaid-v0.2-12b-q4_k_m.gguf -p "Your prompt here"
```
**Server:**
```bash
llama-server --hf-repo mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF --hf-file lumimaid-v0.2-12b-q4_k_m.gguf -c 2048
```
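`llama-server` exposes an OpenAI-compatible HTTP API (on port 8080 by default); a minimal Python client sketch, assuming the default host and port:
```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # default llama-server address
    json={
        "messages": [{"role": "user", "content": "Your prompt here"}],
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```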
For more advanced usage, refer to the [llama.cpp repository](https://github.com/ggerganov/llama.cpp). |
ImageInception/ArtifyAI-v1.1 | ImageInception | 2024-10-20T13:23:01Z | 61 | 2 | diffusers | [
"diffusers",
"safetensors",
"en",
"de",
"ro",
"fr",
"base_model:ImageInception/ArtifyAI-v1.0",
"base_model:finetune:ImageInception/ArtifyAI-v1.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-19T21:43:30Z | ---
language:
- en
- de
- ro
- fr
base_model:
- ImageInception/ArtifyAI-v1.0
---
# Model Card: **T5 + ArtifyAI v1.1** (Text-to-Image Model)
### **Overview**
Welcome to the model card for **T5 + ArtifyAI v1.1**! This model allows you to take text descriptions and turn them into vivid, high-quality images. Even if you have no experience with AI, this guide will walk you through every step, making it easy and fun to create your own AI-generated images.
---
### **What Does This Model Do?**
This model combines two key components:
- **T5**: A powerful text-processing model that understands and generates text.
- **ArtifyAI v1.1**: A text-to-image model that takes your descriptions and creates stunning images from them.
By combining these two, you can create detailed visuals based on any text input you provide.
---
### **Why Use Google Colab?**
**Google Colab** is an easy-to-use, cloud-based platform that allows you to run Python code without needing to install anything on your local machine. Here are some reasons why it’s beneficial:
1. **Free Access to GPUs**: Colab offers free access to powerful hardware like GPUs, which can speed up the process of generating images, especially when working with large AI models.
2. **No Local Setup Required**: You don’t need to worry about setting up a development environment on your computer. Colab has everything pre-installed, including libraries like `torch`, `transformers`, and `diffusers`.
3. **Code and Documentation in One Place**: With Colab, you can write code, visualize results, and document your process all in one place. It’s perfect for both beginners and experienced users who want to experiment with machine learning models.
4. **Save and Share Your Work**: Colab lets you save your notebooks to Google Drive or share them with others, making collaboration easy.
#### **How to Use Google Colab**
If you’re new to Google Colab, here’s a quick guide on how to get started:
1. Go to [Google Colab](https://colab.research.google.com/).
2. Click "New Notebook."
3. You can now copy and paste the Python code provided below to start generating images.
4. Once the notebook is ready, go to "Runtime" > "Change runtime type" and select **GPU** for faster image generation.
5. Hit "Run" and watch as the model processes your input to generate images.
---
### **How to Install the Required Libraries**
In Google Colab, you can install the necessary libraries by running the following command:
```bash
!pip install transformers diffusers torch huggingface_hub
```
This installs the libraries required to run the model, including Transformers for text processing, Diffusers for image generation, and Torch for managing computations.
---
### **How to Use the Model**
Once you have your environment set up in Google Colab (or any Python environment), you can use the following code to generate images from text.
```python
import torch
from diffusers import DiffusionPipeline

# Load the model from Hugging Face
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained(
    "ImageInception/ArtifyAI-v1.1",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)
# Provide your text prompt
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Generate the image
image = pipe(prompt).images[0]
# Display the generated image
image.show()
```
With this code, you can generate an image of an astronaut in a jungle with a cool color palette. Feel free to customize the prompt with your own creative ideas!
---
### **Creative Prompt Ideas**
Here are some example prompts to inspire you:
1. **"An enchanted forest filled with tall, ancient trees and glowing fireflies."**
- Creates a mystical forest scene with glowing fireflies.
2. **"A vintage car parked on a quiet countryside road lined with autumn leaves."**
- Produces a peaceful image of a countryside scene with a vintage car and fall colors.
3. **"A futuristic city skyline at dusk with flying cars and glowing neon lights."**
- Visualizes a futuristic world with flying cars and vibrant city lights.
---
### **Sample Prompts**
1. **"Close-up portrait of a young woman with light skin and prominent freckles across her face. She has blue eyes, brown hair slightly tousled, and natural eyebrows. Her expression is neutral with her hand gently resting on her face, giving a calm and contemplative feel. The lighting is soft, highlighting the texture of her skin. The overall mood is intimate and realistic."**

2. **"Close-up portrait of a young man with dark skin and short black hair. He is smiling widely, showing his teeth, with a joyful and positive expression. He has a neatly trimmed beard and mustache. The background is softly blurred with green and beige tones, suggesting an outdoor setting. He is wearing a casual light red shirt, and the lighting is soft, enhancing the warmth and happiness of the image."**

3. **"A medieval stone tower with multiple conical roofs surrounded by lush green trees. The tower is reflected in a still body of water."**

4. **"A war-torn battlefield at dawn, with burning wreckage scattered across the muddy ground. Soldiers in futuristic armor sprint through the smoke, while drones hover overhead firing laser beams. Explosions light up the sky, and distant ruins of a city stand against the rising sun."**

---
### **Why Use This Model?**
- **Beginner-Friendly**: You don’t need any prior experience with AI. Just install the libraries, run the code, and start generating images.
- **Versatile**: You can generate various types of images, from realistic to abstract, based on your text input.
- **High-Quality Images**: The model produces detailed images that are perfect for creative projects, inspiration, or fun.
---
### **How to Save the Generated Image**
Once you generate an image, you can save it to your Google Drive or local system:
```python
image.save("your_image.png")
```
This saves the generated image as a PNG file, which you can share or further edit.
---
### **Conclusion**
This **T5 + ArtifyAI v1.1** model brings your ideas to life by turning text descriptions into images. Whether you're working on art, design, or just experimenting with AI, this model is a powerful and easy-to-use tool that anyone can enjoy.
Start experimenting today with your own creative prompts and explore the magic of text-to-image generation with ease, especially using the power of Google Colab! |
KoFinanceLLM/KRX-Qwen2.5-7B-IT-MN | KoFinanceLLM | 2024-10-20T13:16:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"krx",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T13:11:24Z | ---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jstredacted/gemma-2b-instruct-trivia-qa | jstredacted | 2024-10-20T13:15:37Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T13:07:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ddbrx/ilya-4 | ddbrx | 2024-10-20T13:04:25Z | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-20T12:44:31Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ilya-4_004400_00_20241020122711.png
text: p3rs0n sitting in a pushchair on AI hackathon
- output:
url: sample/ilya-4_004400_01_20241020122715.png
text: p3rs0n riding a dragon, movie animation shot
- output:
url: sample/ilya-4_004400_02_20241020122719.png
text: p3rs0n as an Iron Man
- output:
url: sample/ilya-4_004400_03_20241020122723.png
text: A fearless p3rs0n rides atop a majestic emerald dragon, soaring through
a vibrant sunset sky, with swirling clouds and a golden horizon, in an epic
fantasy art style.
- output:
url: sample/ilya-4_004400_04_20241020122727.png
text: A dynamic scene of p3rs0n as a Spider-Man, soaring through a vibrant New
York skyline at sunset. Neon city lights glow beneath a tapestry of pink and
orange sky, embodying a comic book style.
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: p3rs0n
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ilya-4
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `p3rs0n` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
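For `diffusers` users, a minimal loading sketch is below. `FluxPipeline` is the standard diffusers entry point for FLUX.1-dev, but the LoRA weight auto-discovery and the generation settings are assumptions, not values from this card.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ddbrx/ilya-4")  # assumes a single LoRA safetensors in the repo

image = pipe(
    "p3rs0n riding a dragon, movie animation shot",  # trigger word: p3rs0n
    num_inference_steps=28,  # assumption
    guidance_scale=3.5,      # assumption
).images[0]
image.save("p3rs0n.png")
```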
|
ImageInception/ArtifyAI-v1.0 | ImageInception | 2024-10-20T13:03:48Z | 34 | 3 | diffusers | [
"diffusers",
"safetensors",
"en",
"dataset:rafaelpadilla/coco2017",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-19T11:12:53Z | ---
datasets:
- rafaelpadilla/coco2017
language:
- en
base_model:
- CompVis/stable-diffusion-v1-4
new_version: ImageInception/ArtifyAI-v1.1
---
# Model Card: **ImageInception/ArtifyAI v1.0** (Text-to-Image Model)
### **Overview**
Welcome to the model card for **ImageInception/ArtifyAI v1.0**! This model is a fine-tuned version of the original Stable Diffusion v1.4, trained specifically on a subset of the COCO dataset. With this model, you can generate high-quality images from text prompts. Even if you're new to AI, this guide will help you get started easily, showing you how to set up your environment and generate beautiful images from text.
---
### **What Does This Model Do?**
**ImageInception/ArtifyAI v1.0** was fine-tuned using 5000 images from the **COCO dataset**, making it highly optimized for generating detailed and versatile images from text. By using this model, you can input a description of a scene, object, or concept, and the model will output an image that matches your description.
This model is ideal for:
- Creating visual representations of text descriptions.
- Generating creative images based on your imagination.
- Exploring the intersection of language and imagery for artistic projects.
---
### **Why Use Google Colab?**
**Google Colab** is a free and powerful platform that allows you to run Python code in the cloud, making it the ideal choice for generating images with large models like ImageInception/ArtifyAI. Here’s why you should consider using it:
1. **Free Access to GPUs**: Google Colab provides free access to GPUs, which significantly accelerates image generation, especially for larger models.
2. **No Installation Hassles**: All the necessary libraries are pre-installed or can be installed easily with a few commands, without needing any setup on your local machine.
3. **Convenience**: You can write, run, and save your code all in one place, making Colab a great choice for both beginners and advanced users.
4. **Collaboration**: You can save your notebooks to Google Drive and share them with others, making it easier to collaborate on creative projects.
#### **How to Use Google Colab**
1. Go to [Google Colab](https://colab.research.google.com/).
2. Click "New Notebook."
3. Copy and paste the Python code provided below to get started.
4. Under "Runtime" > "Change runtime type," select **GPU** for faster image generation.
5. Hit "Run" to generate your first image!
---
### **How to Install the Required Libraries**
In Google Colab or your Python environment, you can install the required libraries by running:
```bash
!pip install transformers diffusers torch huggingface_hub
```
This will install all the necessary tools to run the model, including `transformers` for text processing, `diffusers` for image generation, and `torch` for managing computations.
---
### **How to Use the Model**
Now that you have everything installed, here's how you can load the model and start generating images.
```python
import torch
from diffusers import DiffusionPipeline
# Load the model from HuggingFace
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("ImageInception/ArtifyAI-v1.0", torch_dtype=torch.float16 if device == "cuda" else torch.float32)
pipe = pipe.to(device)
# Provide your text prompt
prompt = "A sunset over a mountain range with orange and purple hues in the sky."
# Generate the image
image = pipe(prompt).images[0]
# Display the generated image
image.show()
```
This will generate an image based on your text input. You can change the `prompt` to anything you'd like, and the model will generate a corresponding image.
---
### **Creative Prompt Ideas**
Here are some example prompts you can try to get started:
1. **"A serene beach with crystal clear water and palm trees swaying in the wind."**
- This will generate a peaceful beach scene.
2. **"A bustling city street at night, illuminated by neon lights."**
- Creates an image of a vibrant city with glowing neon signs and lights.
3. **"A cozy cabin in the woods, surrounded by snow and warm lights shining from the windows."**
- Produces a winter scene of a cozy cabin with snow.
Feel free to experiment with different prompts to see what the model can create!
---
### **Sample Prompts**
1. **"waterfall by the sea."**

2. **"A rustic wooden cabin nestled in a dense forest. The cabin has a steeply pitched roof covered in slate tiles and a small porch with stone steps leading to the front door. Tall trees surround the cabin, casting long shadows on the forest floor."**

3. **"A bright yellow vintage car parked on a sunlit road, surrounded by vibrant autumn foliage."**

4. **"wedding by the sea."**

---
### **Why Use This Model?**
- **Fine-Tuned for High-Quality Images**: This model has been trained on 5,000 images from the COCO dataset, making it capable of generating detailed and visually pleasing images.
- **Easy to Use**: Even if you’re new to AI, the model is simple to use. Just input a text description, and the model generates the image for you.
- **Versatile**: It can generate anything from realistic landscapes to abstract art, depending on the prompts you provide.
---
### **How to Save the Generated Image**
You can save the images you generate for future use. Here’s how you can save them:
```python
image.save("generated_image.png")
```
This will save the image in PNG format, which can be easily shared or edited later.
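Note that files saved inside a Colab session disappear when the runtime shuts down. To keep your images, you can mount Google Drive and save there instead — a minimal sketch (Colab only; the target folder is just an example):

```python
from google.colab import drive

# Mount your Drive and write the image somewhere persistent
drive.mount("/content/drive")
image.save("/content/drive/MyDrive/generated_image.png")
```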
---
### **Conclusion**
**ImageInception/ArtifyAI v1.0** is a versatile and easy-to-use model for text-to-image generation. Whether you're creating art, visualizing concepts, or just experimenting with AI, this model makes it simple to turn text into stunning visuals.
The model’s fine-tuning with the COCO dataset makes it especially good at generating detailed and diverse images. Try it out and bring your ideas to life with just a few words!
--- |
Gummybear05/wav2vec2-E50_freq_speed | Gummybear05 | 2024-10-20T12:44:35Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-20T11:00:15Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E50_freq_speed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E50_freq_speed
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2867
- Cer: 26.4509
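CER is the character error rate: the fraction of character-level edits between the model's transcription and the reference, reported here as a percentage. The evaluation script isn't documented; a common way to compute it is with the `evaluate` library:

```python
import evaluate

# Illustrative only — the actual predictions/references for this model are not published
cer = evaluate.load("cer")
score = cer.compute(
    predictions=["helo world"],   # model transcription
    references=["hello world"],   # ground-truth transcript
)
print(score)  # fraction of character edits (multiply by 100 for a percentage)
```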
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
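The training script itself isn't published, but as a reference, here is a minimal sketch of how these settings map onto 🤗 `TrainingArguments` (the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-E50_freq_speed",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```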
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 36.1912 | 0.1289 | 200 | 4.9281 | 100.0 |
| 4.8744 | 0.2579 | 400 | 4.6344 | 100.0 |
| 4.7389 | 0.3868 | 600 | 4.6396 | 100.0 |
| 4.7053 | 0.5158 | 800 | 4.6142 | 100.0 |
| 4.6271 | 0.6447 | 1000 | 4.5886 | 98.9779 |
| 4.5325 | 0.7737 | 1200 | 4.3545 | 97.4213 |
| 4.0987 | 0.9026 | 1400 | 3.3740 | 62.2768 |
| 3.1619 | 1.0316 | 1600 | 2.8281 | 48.0733 |
| 2.8074 | 1.1605 | 1800 | 2.4434 | 44.3257 |
| 2.5099 | 1.2895 | 2000 | 2.2456 | 40.7542 |
| 2.3202 | 1.4184 | 2200 | 2.0216 | 38.0169 |
| 2.2438 | 1.5474 | 2400 | 1.8903 | 35.2796 |
| 2.0245 | 1.6763 | 2600 | 1.8335 | 34.7098 |
| 1.9285 | 1.8053 | 2800 | 1.7468 | 34.4690 |
| 1.83 | 1.9342 | 3000 | 1.5999 | 31.0503 |
| 1.6842 | 2.0632 | 3200 | 1.5379 | 30.3454 |
| 1.5576 | 2.1921 | 3400 | 1.4967 | 29.8755 |
| 1.4787 | 2.3211 | 3600 | 1.3937 | 28.0721 |
| 1.458 | 2.4500 | 3800 | 1.3861 | 27.7079 |
| 1.3927 | 2.5790 | 4000 | 1.2945 | 26.1631 |
| 1.3578 | 2.7079 | 4200 | 1.3155 | 27.2968 |
| 1.3148 | 2.8369 | 4400 | 1.2744 | 26.1454 |
| 1.3245 | 2.9658 | 4600 | 1.2867 | 26.4509 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ashokpoudel/retail-banking-llm-chatbot-translation-0.0.1 | ashokpoudel | 2024-10-20T12:42:40Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T12:40:30Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ashokpoudel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_9 | elisaklunder | 2024-10-20T12:42:27Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:42:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elisaklunder/finetuned_bert_for_asap_sas_essayset_8 | elisaklunder | 2024-10-20T12:42:03Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:41:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elisaklunder/finetuned_bert_for_asap_sas_essayset_7 | elisaklunder | 2024-10-20T12:41:43Z | 159 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:41:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elisaklunder/finetuned_bert_for_asap_sas_essayset_6 | elisaklunder | 2024-10-20T12:41:04Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:40:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xonic48/codeparrot-ds | xonic48 | 2024-10-20T12:41:00Z | 143 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T05:17:00Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
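The total train batch size of 256 is not set directly; it follows from the per-device batch size and gradient accumulation:

```python
# Effective batch size = per-device batch size × gradient accumulation steps
train_batch_size = 32
gradient_accumulation_steps = 8
assert train_batch_size * gradient_accumulation_steps == 256
```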
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_5 | elisaklunder | 2024-10-20T12:40:37Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:40:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Odin-9B-i1-GGUF | mradermacher | 2024-10-20T12:40:06Z | 101 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"dataset:anthracite-org/c2_logs_16k_llama_v1.1",
"dataset:NewEden/Claude-Instruct-5K",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/kalo_opus_misc_240827",
"dataset:anthracite-org/kalo_misc_part2",
"base_model:Delta-Vector/Odin-9B",
"base_model:quantized:Delta-Vector/Odin-9B",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-13T09:59:20Z | ---
base_model: Delta-Vector/Odin-9B
datasets:
- anthracite-org/c2_logs_16k_llama_v1.1
- NewEden/Claude-Instruct-5K
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Delta-Vector/Odin-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Odin-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
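For example, a single quant from the table below can be fetched and run locally with `llama-cpp-python` — a minimal sketch (the Q4_K_M file is just one choice; any filename from the table works, and any other GGUF-capable client would do equally well):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/Odin-9B-i1-GGUF",
    filename="Odin-9B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)  # context size is an arbitrary example
print(llm("Hello, my name is", max_tokens=32)["choices"][0]["text"])
```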
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_2 | elisaklunder | 2024-10-20T12:39:34Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-20T12:39:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anno2021/distilbert-base-uncased-finetuned-emotion | anno2021 | 2024-10-20T12:12:39Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-20T12:05:29Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2101
- Accuracy: 0.9295
- F1: 0.9293
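These metrics are typically produced by a `compute_metrics` function passed to the 🤗 `Trainer`. The exact function used here isn't published; a common sketch (assuming scikit-learn and weighted F1) looks like this:

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative only — the training script for this model is not documented
def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```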
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8395 | 1.0 | 250 | 0.3061 | 0.9105 | 0.9097 |
| 0.2528 | 2.0 | 500 | 0.2101 | 0.9295 | 0.9293 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MaziyarPanahi/Llama-3.2-3B-Overthinker-GGUF | MaziyarPanahi | 2024-10-20T11:53:16Z | 38 | 1 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Lyte/Llama-3.2-3B-Overthinker",
"base_model:quantized:Lyte/Llama-3.2-3B-Overthinker",
"region:us",
"conversational"
] | text-generation | 2024-10-20T11:36:48Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3.2-3B-Overthinker-GGUF
base_model: Lyte/Llama-3.2-3B-Overthinker
inference: false
model_creator: Lyte
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Overthinker-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Overthinker-GGUF)
- Model creator: [Lyte](https://huggingface.co/Lyte)
- Original model: [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker)
## Description
[MaziyarPanahi/Llama-3.2-3B-Overthinker-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Overthinker-GGUF) contains GGUF format model files for [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
QuantFactory/Llama-3.2-3B-Overthinker-GGUF | QuantFactory | 2024-10-20T11:37:53Z | 136 | 3 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"text-generation",
"en",
"dataset:Lyte/Reasoning-Paused",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-20T11:22:41Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- Lyte/Reasoning-Paused
pipeline_tag: text-generation
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-3.2-3B-Overthinker-GGUF
This is quantized version of [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker) created using llama.cpp
# Original Model Card
# Model Overview:
- **Training Data**: This model was trained on a dataset with columns for initial reasoning, step-by-step thinking, verifications after each step, and final answers based on full context. Is it better than the original base model? Hard to say without proper evaluations, and I don’t have the resources to run them manually.
- **Context Handling**: The model benefits from larger contexts (minimum 4k up to 16k tokens, though it was trained on 32k tokens). It tends to "overthink," so providing a longer context helps it perform better.
- **Performance**: Based on my very few manual tests, the model seems to excel in conversational settings—especially for mental health, creative tasks and explaining stuff. However, I encourage you to try it out yourself using this [Colab Notebook](https://colab.research.google.com/drive/1dcBbHAwYJuQJKqdPU570Hddv_F9wzjPO?usp=sharing).
- **Dataset Note**: The publicly available dataset is only a partial version. The full dataset was originally designed for a custom Mixture of Experts (MoE) architecture, but I couldn't afford to run the full experiment.
- **Acknowledgment**: Special thanks to KingNish for reigniting my passion to revisit this project. I almost abandoned it after my first attempt a month ago. Enjoy this experimental model!
# Inference Code:
- Feel free to make the steps, verifications, and initial reasoning collapsible in your UI so that only the final answer is shown, for an o1-style feel.
- **Note:** One feature here is that you can control how many steps and verifications you want, via the `num_steps` parameter.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Lyte/Llama-3.2-3B-Overthinker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

def generate_response(prompt, max_tokens=16384, temperature=0.8, top_p=0.95, repeat_penalty=1.1, num_steps=3):
    messages = [{"role": "user", "content": prompt}]

    # Generate reasoning
    reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
    reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
    reasoning_ids = model.generate(
        **reasoning_inputs,
        max_new_tokens=max_tokens // 3,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate thinking (step-by-step and verifications)
    messages.append({"role": "reasoning", "content": reasoning_output})
    thinking_template = tokenizer.apply_chat_template(messages, tokenize=False, add_thinking_prompt=True, num_steps=num_steps)
    thinking_inputs = tokenizer(thinking_template, return_tensors="pt").to(model.device)
    thinking_ids = model.generate(
        **thinking_inputs,
        max_new_tokens=max_tokens // 3,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    thinking_output = tokenizer.decode(thinking_ids[0, thinking_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate final answer
    messages.append({"role": "thinking", "content": thinking_output})
    answer_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    answer_inputs = tokenizer(answer_template, return_tensors="pt").to(model.device)
    answer_ids = model.generate(
        **answer_inputs,
        max_new_tokens=max_tokens // 3,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    answer_output = tokenizer.decode(answer_ids[0, answer_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    return reasoning_output, thinking_output, answer_output

# Example usage:
prompt = "Explain the process of photosynthesis."
response = generate_response(prompt, num_steps=5)
print("Response:", response)
```
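Since `generate_response` returns a `(reasoning, thinking, answer)` tuple, you can unpack it to display only the final answer:

```python
reasoning, thinking, answer = generate_response("Explain the process of photosynthesis.", num_steps=5)
print(answer)
```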
# Uploaded model
- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jcrabasco04/modelo_prueba | jcrabasco04 | 2024-10-20T11:34:44Z | 14 | 0 | null | [
"pytorch",
"vit",
"vision",
"image-classification",
"dataset:omarques/autotrain-data-dogs-and-cats",
"license:cc",
"region:us"
] | image-classification | 2024-10-20T10:04:31Z | ---
license: cc
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
---
|
ssdv8/output | ssdv8 | 2024-10-20T11:32:15Z | 22 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-17T15:16:42Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - ssdv8/output
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
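The snippet below is a minimal sketch, assuming the repository stores the full fine-tuned pipeline (as its `diffusers:StableDiffusionPipeline` tag suggests); the `<concept>` placeholder token is hypothetical and must be replaced with the token used during training.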
```python
from diffusers import StableDiffusionPipeline
import torch

# The repository stores the full fine-tuned pipeline, so it can be loaded directly.
pipe = StableDiffusionPipeline.from_pretrained("ssdv8/output", torch_dtype=torch.float16).to("cuda")
# "<concept>" is a hypothetical placeholder; replace it with the token this
# embedding was trained on.
image = pipe("a photo of <concept>").images[0]
image.save("concept.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
legionlm/strawberry-llama-3.2-3b | legionlm | 2024-10-20T11:22:16Z | 141 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:legionlm/strawberry",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T03:31:35Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- legionlm/strawberry
---
# Uploaded model
- **Developed by:** legionlm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
MaziyarPanahi/Llama-3.2-3B-Instruct-uncensored-GGUF | MaziyarPanahi | 2024-10-20T11:18:10Z | 106 | 3 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:chuanli11/Llama-3.2-3B-Instruct-uncensored",
"base_model:quantized:chuanli11/Llama-3.2-3B-Instruct-uncensored",
"region:us",
"conversational"
] | text-generation | 2024-10-20T10:59:42Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3.2-3B-Instruct-uncensored-GGUF
base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored
inference: false
model_creator: chuanli11
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Instruct-uncensored-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-uncensored-GGUF)
- Model creator: [chuanli11](https://huggingface.co/chuanli11)
- Original model: [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored)
## Description
[MaziyarPanahi/Llama-3.2-3B-Instruct-uncensored-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-uncensored-GGUF) contains GGUF format model files for [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
SebastianG-J/distilbert-base-uncased-distilled-clinc | SebastianG-J | 2024-10-20T10:57:17Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-20T10:51:44Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1825
- Accuracy: 0.9490
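As a quick sanity check, the model can be loaded with the transformers pipeline; this is a minimal sketch, assuming the checkpoint carries the CLINC150 intent labels in its config (the query is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SebastianG-J/distilbert-base-uncased-distilled-clinc",
)
print(classifier("how do I roll over my 401k into an IRA"))
```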
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.557855667877436e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6183 | 1.0 | 318 | 0.4643 | 0.8413 |
| 0.3173 | 2.0 | 636 | 0.2425 | 0.9352 |
| 0.1955 | 3.0 | 954 | 0.2108 | 0.9474 |
| 0.1696 | 4.0 | 1272 | 0.1982 | 0.9455 |
| 0.1591 | 5.0 | 1590 | 0.1954 | 0.9471 |
| 0.1535 | 6.0 | 1908 | 0.1935 | 0.9445 |
| 0.1505 | 7.0 | 2226 | 0.1876 | 0.9506 |
| 0.1479 | 8.0 | 2544 | 0.1886 | 0.9477 |
| 0.146 | 9.0 | 2862 | 0.1861 | 0.9477 |
| 0.1446 | 10.0 | 3180 | 0.1855 | 0.9487 |
| 0.1433 | 11.0 | 3498 | 0.1846 | 0.9468 |
| 0.1424 | 12.0 | 3816 | 0.1829 | 0.9497 |
| 0.1416 | 13.0 | 4134 | 0.1824 | 0.9487 |
| 0.1409 | 14.0 | 4452 | 0.1825 | 0.9490 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
perkan/shortL-opus-mt-tc-base-en-sr | perkan | 2024-10-20T10:44:38Z | 117 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-05-02T08:15:40Z | ---
license: mit
pipeline_tag: translation
---
Model for English to Serbian translation. The base model is the Helsinki-NLP OPUS-MT Serbo-Croatian (sh) model, fine-tuned on the OPUS-100 dataset modified with the Paraphrase Database (PPDB), size L.
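A minimal usage sketch, assuming the standard MarianMT translation pipeline applies to this checkpoint (the input sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="perkan/shortL-opus-mt-tc-base-en-sr")
print(translator("How are you today?")[0]["translation_text"])
```
|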
mradermacher/Lexora-Medium-7B-GGUF | mradermacher | 2024-10-20T10:41:40Z | 52 | 1 | transformers | [
"transformers",
"gguf",
"it",
"en",
"dataset:DeepMount00/Sonnet-3.5-ITA-INSTRUCTION",
"dataset:DeepMount00/Sonnet-3.5-ITA-DPO",
"base_model:DeepMount00/Lexora-Medium-7B",
"base_model:quantized:DeepMount00/Lexora-Medium-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-24T20:36:19Z | ---
base_model: DeepMount00/Lexora-Medium-7B
datasets:
- DeepMount00/Sonnet-3.5-ITA-INSTRUCTION
- DeepMount00/Sonnet-3.5-ITA-DPO
language:
- it
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DeepMount00/Lexora-Medium-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
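For a quick local test, here is a minimal sketch using llama-cpp-python; the quant filename is one of the files listed in the table below, and the context size and prompt are illustrative assumptions:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed below, e.g. the Q4_K_M quant.
path = hf_hub_download(
    repo_id="mradermacher/Lexora-Medium-7B-GGUF",
    filename="Lexora-Medium-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Ciao, come stai?", max_tokens=64)["choices"][0]["text"])
```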
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Dmitry-lab/my_model | Dmitry-lab | 2024-10-20T10:40:13Z | 72 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-10-14T06:36:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Dmitry-lab/my_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dmitry-lab/my_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5564
- Validation Loss: 1.8377
- Epoch: 4
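A minimal usage sketch, assuming the repository holds TensorFlow weights for an extractive question-answering head (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Dmitry-lab/my_model", framework="tf")
result = qa(
    question="What does the model predict?",
    context="This DistilBERT model was fine-tuned for extractive question answering.",
)
print(result["answer"])
```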
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4595 | 2.1986 | 0 |
| 1.8084 | 1.8377 | 1 |
| 1.5635 | 1.8377 | 2 |
| 1.5681 | 1.8377 | 3 |
| 1.5564 | 1.8377 | 4 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mradermacher/RocRacoon-3b-GGUF | mradermacher | 2024-10-20T10:32:39Z | 144 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:aixonlab/RocRacoon-3b",
"base_model:quantized:aixonlab/RocRacoon-3b",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T07:14:25Z | ---
base_model: aixonlab/RocRacoon-3b
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aixonlab/RocRacoon-3b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TheImam/Burkan_2 | TheImam | 2024-10-20T10:22:46Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T10:17:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
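A hypothetical minimal loading sketch, assuming a standard Llama-architecture causal LM as the repository tags suggest (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheImam/Burkan_2")
model = AutoModelForCausalLM.from_pretrained("TheImam/Burkan_2", device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```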
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
meandyou200175/e5-finetune | meandyou200175 | 2024-10-20T10:20:14Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:43804",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-20T10:19:44Z | ---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:43804
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Nhờ bác sĩ cho biết việc lựa chọn đóng đinh nội tủy và nẹp vít
để kết hợp xương đòn dựa trên cơ sở nào ạ? Ca phẫu thuật thường kéo dài trong
bao lâu? Bệnh nhân nằm viện mấy ngày?
sentences:
- ' Chào em, là bệnh mãn tính phải điều trị suốt đời, phải kiên nhẫn và kiên trì
nên đôi khi lượng đường trong cơ thể không ổn định. Lúc đi khám xét nghiệm thì
ổn do bản thân biết mai đi khám nên sẽ kiêng ăn, ăn ít... còn bệnh lâu dài nên
trong ngày đôi khi thèm chút này hay thích ăn chút kia, quên uống thuốc, suy
nghĩ, mất ngủ cũng làm đường không ổn định. Đường trong cơ thể lúc lên lúc xuống
dễ đưa đến biến chứng. Em hay thấy bệnh nhân tiểu đường tháo khớp ngón chân, ngón
tay, đôi khi tháo khớp gối, khớp háng, đây là do tê liệt hệ thần kinh nên khi
va chạm bệnh nhân không phát hiện. Đến khi phát hiện thì đã nhiễm trùng nặng phải
tháo khớp. Theo BS mẹ em có khả năng do biến chứng tiểu đường vì mẹ em bị bệnh
khá lâu nên ít nhiều ảnh hưởng thần kinh bị tê liệt gây đau. Em nên nhớ dặn mẹ
đi tái khám và điều trị cho thật ổn định nhé! Thân mến!'
- ' Để lựa chọn phương pháp đóng đinh nội tủy hay nẹp vít cho bệnh nhân cần dựa
vào nhiều yếu tố. Trong lòng tủy xương có một cái ống, nếu lòng tủy bệnh nhân
nhỏ mà đường gãy không bị gãy thành nhiều mảnh thì nên lựa chọn phương pháp đóng
đinh. Phương pháp này có nhược điểm dễ bị lộ phần đinh khi đinh vừa đóng, chưa
chắc vào xương. Tuy nhiên, ưu điểm là khi đóng đinh, đường mổ sẽ nhỏ, đơn giản.
Đối với nẹp vít, đường mổ dài hơn nhưng phần nắn chỉnh sẽ tuyệt đối, vững chắc
hơn. Nhìn chung, giữa 2 phương pháp thời gian mổ không khác biệt nhau nhiều, từ
30-45 phút sẽ hoàn thành cuộc phẫu thuật kết hợp xương. Tại bệnh viện Nhân dân
115, sau khi bệnh nhân được làm phẫu thuật có thể xuất viện rất sớm trong vòng
khoảng 3-5 ngày, tùy theo đường mổ lớn hay nhỏ. Giữa việc lựa chọn phẫu thuật
hay bảo tồn, đinh nội tủy hay nẹp vít phụ thuộc vào lòng tủy của bệnh nhân và
thói quen, sự đánh giá của phẫu thuật viên. Cá nhân tôi thường lựa chọn phương
pháp phẫu thuật nẹp vít sẽ cho kết quả nắn chỉnh tốt, chắc hơn và bệnh nhân không
bị biến chứng trồi đinh về sau. Thân mến.'
- Chào em, Tình trạng người mệt mỏi, khó thở, tim đập nhanh xảy ra khi không gắng
sức có thể do nhiều nguyên nhân, gồm tim mạch, hô hấp, thần kinh cơ, tiêu hóa
(chủ yếu là ống tiêu hóa trên), tâm lý, bệnh lý nội tiết tố… Viêm dạ dày trào
ngược có thể gây các triệu chứng này do dịch acid trào ngược từ dạ dày lên thực
quản kích thích thần kinh tim. Mặt khác bệnh dạ dày là bệnh có thể tái phát, điều
trị hết bệnh rồi thì bệnh vẫn có thể tái lại. Do đó, nếu em đã khám tim mạch và
hô hấp bình thường, để biết có phải mình mệt mỏi do bệnh dạ dày gây ra hay không
thì tốt nhất là em khám chuyên khoa nội tiêu hóa và điều trị trào ngược dạ dày
thực quản thử, nếu triệu chứng cải thiện nhanh chóng thì chính hắn là nguyên nhân,
em nhé.
- source_sentence: Tôi bị tình trạng nuốt nước miếng có cảm giác bị vướng ở cổ, không
đau rát, không ho sốt, ăn uống bình thường đã 1 ngày nay. Chỉ có nuốt nước miếng
là có cảm giác vướng thôi, lỗ tai bên trái thì cảm giác ngứa nhẹ. Xin hỏi là bệnh
gì vậy ạ?
sentences:
- "Em Lan thân mến, Hiện nay, xét nghiệm được xem là một xét nghiệm\r\nthường quy,\
\ nên thai kỳ của em cũng rất cần được làm những xét nghiệm này mặc\r\ndù gia\
\ đình em không có bệnh lý bất thường. Tuy nhiên, thai kỳ của em đã qua thời gian\
\ làm xét nghiệm Double test, bây\r\ngiờ em phải chờ đến lúc thai được 16 – 18\
\ tuần tuổi, làm xét nghiệm Triple test\r\nem nhé! Chúc em và bé khỏe mạnh!"
- 'Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc
giảm đau liều cao Chào em, Thoái hóa khớp, thoái hóa cột sống là tiến trình lão
hóa không thể tránh khỏi của con người, đặc biệt có thể xảy ra sớm và nhanh hơn
ở người nữ sau mãn kinh, sinh nở nhiều, suy dinh dưỡng hay ăn uống thiếu chất
khoáng, lao động vất vả lúc còn trẻ. Trường hợp thoái hóa cột sống thắt lưng gây
đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao, đặc biệt là đau lan xuống
hai chân, tê yếu hai chân thì cần chụp MRI cột sống để tầm soát thoát vị đĩa đệm
chèn ép tủy sống. Trường hợp của em, mới phát hiện thoái hóa cột sống thắt lưng
gần đây, cũng mới uống thuốc 1 tuần và không duy trì nữa, việc đau lưng vẫn còn
âm ỉ nhưng không lan xuống hai chân thì chưa đến mức cần chụp MRI cột sống thắt
lưng. Nhưng mà, em cần tích cực điều trị để bệnh thoái hóa cột sống thắt lưng
không tiến triển nặng hơn. Bệnh này trị khỏi hoàn toàn là không thể, vì sinh lão
bệnh tử không thể cải hoàn, nhưng mà việc điều trị tích cực sẽ giúp khống chế
được bệnh, giảm đau và giảm tốc độ tiến triển của bệnh. Về việc sử dụng thuốc,
dù là thuốc Tây hay thuốc Đông y, em cũng cần phải thăm khám bs ck cơ xương khớp
(Tây y) hay ck y học cổ truyền (Đông y) để được kê thuốc phù hợp. các thuốc thường
dùng là giảm đau, giãn cơ, bổ sung vi khoáng chất (canxi, vitamin D3, magie...).
Bên cạnh đó, về phương pháp giảm đau hỗ trợ không dùng thuốc, em nên chú ý: -
Chú ý thay đổi tư thế trong quá trình làm việc, không giữ mãi một tư thế trong
nhiều giờ liền. Ngồi làm việc đúng tư thế để tránh các bệnh cột sống. - Vận động
đúng cách, khi vác vật nặng không vặn cột sống. - Thường xuyên tập thể dục rèn
luyện để cột sống vững chắc, cơ thể dẻo dai, bơi cũng được mà yoga là tốt nhất.
- Ăn uống khoa học, xây dựng chế độ dinh dưỡng hợp lý, tăng cường nhóm thực phẩm
giàu canxi, vitamin D, omega 3… giúp nâng cao độ chắc khỏe của đĩa đệm cũng như
xương khớp. - Duy trì cân nặng bình thường, tránh để tăng cân quá mức. - Tư thế
ngủ: nằm ngửa trên ván cứng hay nệm bông ép chặt, tránh nệm lò xo hay nệm cao
su quá mềm, có thể đệm ở vùng khoeo làm co nhẹ khớp gối và khớp háng, nên nằm
đầu thấp không gối sẽ tốt cho cột sống cổ. - Có thể thực hiện điều trị vật lý
và các liệu pháp phản xạ: bao gồm phương pháp nhiệt như chườm nóng (túi nước,
muối rang, cám rang, lá lốt, lá ngải cứu nóng); dùng các dòng điện tại khoa vật
lý trị liệu, điều trị bằng laser; châm cứu, kéo cơ để hỗ trợ giảm đau cơ cạnh
sống. Trân trọng!'
- Chào bạn, Nuốt vướng ở cổ thường gặp trong một số bệnh lý viêm nhiễm hầu họng
như viêm họng, viêm amidan mạn, trào ngược dạ dày thực quản, hội chứng chảy mũi
sau… Đây là có thể là triệu chứng đầu tiên báo hiệu một đợt bùng phát cấp tính
của viêm nhiễm hô hấp trên do triệu chứng mới chỉ xuất hiện 1 ngày. Bạn nên khám
bác sĩ Tai mũi họng để thăm khám trực tiếp, đánh giá và kê toa điều trị bạn nhé!
Thân mến.
- source_sentence: Chào bác sĩ, em bị gãy xương gót, đã đóng đinh đến nay được gần
5 tuần. Vậy 6 tuần em tháo đinh được chưa ạ?
sentences:
- ' Chào em, gồm 2 trị số, trị số lớn nhất gọi là huyết áp tâm thu, bình thường
< 140 và > 90 mmHg; trị số thấp nhất gọi là huyết áp tâm trương, bình thường <
90 và > 60 mmHg. Huyết áp có thể tăng khi căng thẳng, do lo lắng, do hội chứng
áo choàng trắng (khi vào bv, khi gặp bác sĩ thì huyết áp cao), bệnh lý viêm nhiễm,
do cafe, khi khó thở... nhìn chung là các stress đối với cơ thể. Như vậy, huyết
áp ghi nhận ở những lúc cơ thể đang lo lắng, bồn chồn, có bệnh thì sẽ không phản
ánh chính xác được huyết áp dao động bình thường của người bệnh. Do vậy em nên
khám chuyên khoa tim mạch, bác sĩ sẽ thăm khám và làm xét nghiệm kiểm tra xem
em có các dấu chứng của tăng huyết áp hay không (như dày thành tim, tiểu đạm,
đo huyết áp 24 giờ...) để xác định em có tăng huyết áp hay không và điều trị thích
hợp. Những triệu chứng hoa mắt, chóng mặt, đau đầu, đau 1 bên mắt, tiểu nhiều
có thể là do bệnh tăng huyết áp gây ra (ảnh hưởng lên mạch máu não, lên thận...)
hoặc là 1 bệnh lý khác như thiếu máu, rối loạn tiền đình, viêm nhiễm hệ thống,
viêm mũi xoang, bệnh lý mạch máu não... (và tăng huyết áp chỉ là phản ứng của
cơ thể khi có stress). Để tìm ra bệnh và giải quyết nỗi lo về bệnh, em nên đến
bệnh viện để kiểm tra sức khỏe em nhé. Thân mến! '
- ' Chào em, Thời điểm 6 tuần là quá sớm để rút đinh cố định xương gót (trừ trường
hợp khung cố định xương bên ngoài). Tháo đinh vít kim loại chỉ bắt buộc thực hiện
sớm trong những trường hợp bất thường như gãy vít, nhiễm trùng, khớp giả... gây
ra các triệu chứng bất thường với bệnh nhân mà thôi. Em nên tái khám tại chuyên
khoa Chấn thương Chỉnh hình để bác sĩ kiểm tra lại việc lành xương của em tốt
chưa và dặn em lịch trình rút đinh phù hợp, em nhé. Thân mến.'
- K dạ dày không điều trị tiên lượng sống khá ngắn Chào em, K dạ dày là ung thư
dạ dày. Bệnh ung thư dạ dày là bệnh lý ác tính và có chỉ định phẫu thuật cắt khối
u – cắt dạ dày khi còn có thể cắt được. Nếu đã phát hiện ung thư dạ dày mà không
điều trị phẫu thuật thì thời gian sống của bệnh nhân trung bình là 6 tháng đến
1 năm tùy loại ung thư dạ dày, khi ung thư tiến triển di căn có thể gây nhiều
đau đớn hơn. Hiện tại chị em đang bị suy nhược cơ thể nhiều, không ăn uống được,
đau nhiều do ung thư dạ dày là có chỉ định vào bệnh viện nằm điều trị luôn rồi,
chứ không thể nào lấy thuốc mà không tới phòng khám được đâu. Vô bệnh viện chị
em sẽ được truyền dịch, chích thuốc, nâng thể trạng lên rồi mới tính đến chuyện
điều trị khối ung thư kia. Em đưa chị em đến bệnh viện càng sớm càng tốt, tốt
nhất là bệnh viện Ung bướu, em nhé.
- source_sentence: "Thưa bác sĩ,\r\n\r\nEm bị đục thủy tinh thể do chấn thương và\
\ vừa mổ mắt về và em cũng bị cận thị. Thời gian khoảng 1 tuần em thấy mắt mình\
\ nhìn chỉ rõ hơn được 1 phần nào. Nhìn xa thì vẫn thấy nhưng vẫn mờ mờ. Bác sĩ\
\ cho em lời khuyên nên làm cách nào và mắt em có thể sáng lại như bình thường\
\ được không ạ?\r\n\r\nEm xin chân thành cảm ơn! (Minh Tiến - Bình Định)"
sentences:
- Bạn Minh Tiến thân mến, Hiện nay phẫu thuật đục thủy tinh thể đã được y học nói
chung và ngành Nhãn khoa Việt Nam thực hiện hoàn chỉnh đến mức tuyệt vời. Phẫu
thuật này được xem như một cuộc cách mạng rất đáng tự hào của ngành nhãn khoa.
Hàng ngày có thể tới hàng ngàn ca phẫu thuật đem lại ánh sáng cho người mù lòa
đục thể thủy tinh tại Việt Nam. Nói như vậy để giúp cho bạn hiểu rõ phẫu thuật
này các bác sĩ Việt Nam thực hiện rất thường xuyên và rất tốt. Tuy nhiên, với
mắt đục thủy tinh thể do chấn thương của bạn là ca phẫu thuật tương đối không
đơn giản. Thêm vào đó ngoài đục thủy tinh thể do chấn thương, mắt bạn cũng có
thể kèm theo tổn thương ở các bộ phận khác của mắt mà trước mổ bác sĩ khó có thể
chẩn đoán được. Với hai lý do nêu trên, nên đôi khi mắt mổ khó có thể tốt theo
ý muốn của cả bệnh nhân lẫn thầy thuốc. Bạn cần có thời gian theo dõi và điều
trị tiếp sau mổ. Sau thời gian ổn định khoảng 1 tháng, bạn cần đo thử kính xem
có cải thiện thị lực thêm không? Chúc bạn may mắn!
- Chào em, Bình thường các hạch trong cơ thể không sưng to lên đến mức có thể sờ
chạm hay nhận biết được. Vì thế, hạch sưng lên, hay thường gọi là nổi hạch, là
một triệu chứng bất thường của cơ thể. Cho nên, em lo lắng là đúng khi phát hiện
hạch ở vùng cổ. Hạch bạch huyết đóng vai trò quan trọng đối với hoạt động của
hệ miễn dịch. Chúng chứa các tế bào miễn dịch như lympho bào, đại thực bào...
có chức năng miễn dịch chống lại các yếu tố lạ như vi khuẩn, virus, kí sinh trùng...
xâm nhập vào cơ thể. Trong quá trình đó các hạch có thể bị viêm và sưng lên. Một
số trường hợp hạch sưng có thể là hạch ung thư hoặc di căn. Đặc điểm của hạch
viêm là nhỏ, số lượng ít, bờ tròn đều, không phát triển theo thời gian, không
xâm lấn da xung quanh. Thông thường đối với hạch viêm thì nguồn viêm có thể tấn
công tại hạch, cũng có khi là hạch viêm phản ứng với ổ viêm nhiễm cạnh đó, điều
trị hết viêm thì hạch sẽ lặn dần, có thể lặn chậm hơn vài tuần đến vài tháng,
có một số loại hạch cũng là hạch viêm nhưng mà chỉ giảm kích thước rồi cứ "lì"
vậy luôn - không lặn hẳn nhưng không còn sưng như trước và vẫn giữ hình ảnh của
hạch viêm, cũng có loại hạch viêm sau lại chuyển sang xơ chai hóa như sẹo cũ và
không lặn. Như vậy, em có 1 hạch vùng cổ đã được xác định là hạch viêm thông qua
sinh thiết hạch cách đây 10 năm. Trong vòng 10 năm nay, hạch cổ đó không có triệu
chứng bất thường. Gần đây, hạch cổ đó có biểu hiện viêm trở lại, mặc dù em uống
thuốc (tự mua) thì hạch hết sưng đau, nhưng em cũng cần khám lại bên chuyên khoa
ung bướu để kiểm tra tổng quát lại 1 lần, tìm nguyên nhân gây kích thích hạch
viêm này tái hoạt động, xem là nguyên nhân lành tính hay tiềm ẩn nguyên nhân khác
(vì lần kiểm tra trước đã cách đây 10 năm rồi), em nhé.
- ' Chào em, Trường hợp em mô tả là những bất thường của hệ hô hấp có thể là bệnh
lý tai mũi họng hay hô hấp dưới như viêm phổi, viêm phế quản, em cần đến các cơ
sở y tế chuyên sâu tai mũi họng hay hô hấp để khám thêm. Những biểu hiện đó hoàn
toàn không có cơ sở nghĩ . Thân mến!'
- source_sentence: Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị
hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn
hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được
bác sĩ ạ. Em cảm ơn.
sentences:
- 'Câu hỏi của em rất chân thành. Tự ý thức quyết tâm cai nghiệm là điều đáng quý.
Nếu em tiếp tục sử dụng thì tình trạng sẽ tồi tệ hơn rất nhiều. Ba yếu tố quan
trọng nhất và tiến hành đồng thời để cai nghiện thành công, đó là: 1. Ý chí 2.
Sự hiểu biết thấu đáo 3. Môi trường thân thiện. Các Trung tâm cai nghiện sẽ giúp
em phần 2 và phần 3, từ đó sẽ củng cố phần 1 của em. Trường hợp ở nhà mà em tự
cai, thực hành mỗi ngày với 3 điều kiện trên, em sẽ thành công như nhiều bạn khác.
Không nên nôn nóng, sốt ruột. Trước tiên em phải thuộc lòng và thực hành những
quy tắc này thành thói quen và áp dụng suốt đời. Nhiều trường hợp cai được vài
năm vẫn tái nghiện. Do đó, nên tránh xa những "nguồn" khiến em tái nghiện, tránh
xa bạn bè nghiện ngập em nhé. Chúc em quyết tâm và đem lại niềm vui cho bố mẹ.'
- Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương
gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng
thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên
tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy
mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6
tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên
xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được
kiểm tra và điều trị thích hợp, em nhé. Thân mến.
- Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả
năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ,
ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng
(người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa
chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp
xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những
nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh
mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị.
Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để
lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp.
Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để
tránh cho bệnh tái phát bạn nhé! Thân mến.
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("meandyou200175/e5-finetune")
# Run inference
sentences = [
'Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn.',
'Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến.',
'Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0365 | 100 | 1.9653 | - |
| 0.0730 | 200 | 0.5908 | - |
| 0.1096 | 300 | 0.1976 | - |
| 0.1461 | 400 | 0.1503 | - |
| 0.1826 | 500 | 0.118 | - |
| 0.2191 | 600 | 0.1347 | - |
| 0.2557 | 700 | 0.1303 | - |
| 0.2922 | 800 | 0.1133 | - |
| 0.3287 | 900 | 0.1208 | - |
| 0.3652 | 1000 | 0.0909 | 0.0738 |
| 0.4018 | 1100 | 0.0901 | - |
| 0.4383 | 1200 | 0.1026 | - |
| 0.4748 | 1300 | 0.1049 | - |
| 0.5113 | 1400 | 0.079 | - |
| 0.5478 | 1500 | 0.0963 | - |
| 0.5844 | 1600 | 0.0994 | - |
| 0.6209 | 1700 | 0.0858 | - |
| 0.6574 | 1800 | 0.0948 | - |
| 0.6939 | 1900 | 0.0776 | - |
| 0.7305 | 2000 | 0.0822 | 0.0691 |
| 0.7670 | 2100 | 0.0872 | - |
| 0.8035 | 2200 | 0.0687 | - |
| 0.8400 | 2300 | 0.0713 | - |
| 0.8766 | 2400 | 0.0746 | - |
| 0.9131 | 2500 | 0.085 | - |
| 0.9496 | 2600 | 0.0809 | - |
| 0.9861 | 2700 | 0.0868 | - |
| 1.0226 | 2800 | 0.07 | - |
| 1.0592 | 2900 | 0.0572 | - |
| 1.0957 | 3000 | 0.0651 | 0.0558 |
| 1.1322 | 3100 | 0.0487 | - |
| 1.1687 | 3200 | 0.0554 | - |
| 1.2053 | 3300 | 0.0551 | - |
| 1.2418 | 3400 | 0.0524 | - |
| 1.2783 | 3500 | 0.0563 | - |
| 1.3148 | 3600 | 0.0394 | - |
| 1.3514 | 3700 | 0.0492 | - |
| 1.3879 | 3800 | 0.0239 | - |
| 1.4244 | 3900 | 0.0359 | - |
| 1.4609 | 4000 | 0.0343 | 0.0483 |
| 1.4974 | 4100 | 0.0239 | - |
| 1.5340 | 4200 | 0.0246 | - |
| 1.5705 | 4300 | 0.0323 | - |
| 1.6070 | 4400 | 0.0233 | - |
| 1.6435 | 4500 | 0.0198 | - |
| 1.6801 | 4600 | 0.0263 | - |
| 1.7166 | 4700 | 0.0232 | - |
| 1.7531 | 4800 | 0.0263 | - |
| 1.7896 | 4900 | 0.0201 | - |
| 1.8262 | 5000 | 0.0155 | 0.0506 |
| 1.8627 | 5100 | 0.0185 | - |
| 1.8992 | 5200 | 0.0241 | - |
| 1.9357 | 5300 | 0.0215 | - |
| 1.9722 | 5400 | 0.0301 | - |
| 2.0088 | 5500 | 0.0229 | - |
| 2.0453 | 5600 | 0.018 | - |
| 2.0818 | 5700 | 0.0178 | - |
| 2.1183 | 5800 | 0.02 | - |
| 2.1549 | 5900 | 0.0164 | - |
| 2.1914 | 6000 | 0.0155 | 0.0446 |
| 2.2279 | 6100 | 0.0202 | - |
| 2.2644 | 6200 | 0.0131 | - |
| 2.3009 | 6300 | 0.0159 | - |
| 2.3375 | 6400 | 0.0183 | - |
| 2.3740 | 6500 | 0.0081 | - |
| 2.4105 | 6600 | 0.0119 | - |
| 2.4470 | 6700 | 0.0108 | - |
| 2.4836 | 6800 | 0.0128 | - |
| 2.5201 | 6900 | 0.0068 | - |
| 2.5566 | 7000 | 0.0107 | 0.0425 |
| 2.5931 | 7100 | 0.0086 | - |
| 2.6297 | 7200 | 0.0073 | - |
| 2.6662 | 7300 | 0.0072 | - |
| 2.7027 | 7400 | 0.0056 | - |
| 2.7392 | 7500 | 0.0069 | - |
| 2.7757 | 7600 | 0.0077 | - |
| 2.8123 | 7700 | 0.0054 | - |
| 2.8488 | 7800 | 0.0055 | - |
| 2.8853 | 7900 | 0.0087 | - |
| 2.9218 | 8000 | 0.006 | 0.0457 |
| 2.9584 | 8100 | 0.0065 | - |
| 2.9949 | 8200 | 0.0112 | - |
| 3.0314 | 8300 | 0.0065 | - |
| 3.0679 | 8400 | 0.0045 | - |
| 3.1045 | 8500 | 0.007 | - |
| 3.1410 | 8600 | 0.0053 | - |
| 3.1775 | 8700 | 0.0053 | - |
| 3.2140 | 8800 | 0.0062 | - |
| 3.2505 | 8900 | 0.0055 | - |
| 3.2871 | 9000 | 0.0074 | 0.0414 |
| 3.3236 | 9100 | 0.0061 | - |
| 3.3601 | 9200 | 0.0047 | - |
| 3.3966 | 9300 | 0.0034 | - |
| 3.4332 | 9400 | 0.0037 | - |
| 3.4697 | 9500 | 0.0043 | - |
| 3.5062 | 9600 | 0.0035 | - |
| 3.5427 | 9700 | 0.0043 | - |
| 3.5793 | 9800 | 0.0035 | - |
| 3.6158 | 9900 | 0.0035 | - |
| 3.6523 | 10000 | 0.0028 | 0.0395 |
| 3.6888 | 10100 | 0.0029 | - |
| 3.7253 | 10200 | 0.0032 | - |
| 3.7619 | 10300 | 0.003 | - |
| 3.7984 | 10400 | 0.0024 | - |
| 3.8349 | 10500 | 0.0035 | - |
| 3.8714 | 10600 | 0.0031 | - |
| 3.9080 | 10700 | 0.0028 | - |
| 3.9445 | 10800 | 0.0027 | - |
| 3.9810 | 10900 | 0.0038 | - |
| 4.0175 | 11000 | 0.0026 | 0.0392 |
| 4.0541 | 11100 | 0.0022 | - |
| 4.0906 | 11200 | 0.0025 | - |
| 4.1271 | 11300 | 0.0023 | - |
| 4.1636 | 11400 | 0.0022 | - |
| 4.2001 | 11500 | 0.0026 | - |
| 4.2367 | 11600 | 0.0028 | - |
| 4.2732 | 11700 | 0.0022 | - |
| 4.3097 | 11800 | 0.0027 | - |
| 4.3462 | 11900 | 0.0023 | - |
| 4.3828 | 12000 | 0.0016 | 0.0384 |
| 4.4193 | 12100 | 0.0022 | - |
| 4.4558 | 12200 | 0.0018 | - |
| 4.4923 | 12300 | 0.002 | - |
| 4.5289 | 12400 | 0.0017 | - |
| 4.5654 | 12500 | 0.002 | - |
| 4.6019 | 12600 | 0.0021 | - |
| 4.6384 | 12700 | 0.0019 | - |
| 4.6749 | 12800 | 0.0016 | - |
| 4.7115 | 12900 | 0.0013 | - |
| 4.7480 | 13000 | 0.0022 | 0.0367 |
| 4.7845 | 13100 | 0.0016 | - |
| 4.8210 | 13200 | 0.0013 | - |
| 4.8576 | 13300 | 0.0019 | - |
| 4.8941 | 13400 | 0.002 | - |
| 4.9306 | 13500 | 0.0015 | - |
| 4.9671 | 13600 | 0.0017 | - |
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ngocminhta/deberta-mgt | ngocminhta | 2024-10-20T10:12:53Z | 5 | 0 | null | [
"safetensors",
"deberta",
"license:apache-2.0",
"region:us"
] | null | 2024-10-20T10:03:44Z | ---
license: apache-2.0
---
|
bbunzeck/weenie_llama | bbunzeck | 2024-10-20T10:02:52Z | 169 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:nilq/babylm-10M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-14T21:56:39Z | ---
datasets:
- nilq/babylm-10M
language:
- en
---
This autoregressive model belongs to a series of rather small language models trained on the [BabyLM](https://babylm.github.io) data:
- the [baby_llama](https://huggingface.co/bbunzeck/baby_llama) model has few parameters and was trained on a small data set (10M tokens)
- the [**t**eenie_llama](https://huggingface.co/bbunzeck/teenie_llama) model has the same number of parameters but was trained on more **t**okens of text (100M)
- the [**w**eenie_llama](https://huggingface.co/bbunzeck/weenie_llama) model was trained on the small data set, but has more parameters/**w**eights
- the [**tw**eenie_llama](https://huggingface.co/bbunzeck/tweenie_llama) model features both -- more **t**okens (the larger data set) and more **w**eights (*viz.* parameters)
| | baby_llama | teenie_llama | weenie_llama | tweenie_llama |
|-----------------|-----------|-------------|-------------|--------------|
| Parameters | 2.97M | 2.97M | 11.44M | 11.44M |
| Hidden layers | 8 | 8 | 16 | 16 |
| Attention heads | 8 | 8 | 16 | 16 |
| Embedding size | 128 | 128 | 256 | 256 |
| Context size | 128 | 128 | 256 | 256 |
| Vocab size | 16k | 16k | 16k | 16k |
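A minimal generation sketch for this checkpoint, assuming the standard transformers text-generation pipeline (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="bbunzeck/weenie_llama")
print(generator("The little dog", max_new_tokens=20)[0]["generated_text"])
```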
If you use this model in your research, please cite the following publication:
```
@inproceedings{bunzeck-zarriess-2024-fifty,
title = "Fifty shapes of {BL}i{MP}: syntactic learning curves in language models are not uniform, but sometimes unruly",
author = "Bunzeck, Bastian and
Zarrie{\ss}, Sina",
editor = "Qiu, Amy and
Noble, Bill and
Pagmar, David and
Maraev, Vladislav and
Ilinykh, Nikolai",
booktitle = "Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning",
month = oct,
year = "2024",
address = "Gothenburg, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.clasp-1.7",
pages = "39--55",
}
``` |
bbunzeck/teenie_llama | bbunzeck | 2024-10-20T10:02:33Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:nilq/babylm-100M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-14T21:50:38Z | ---
datasets:
- nilq/babylm-100M
language:
- en
---
This autoregressive model belongs to a series of rather small language models trained on the [BabyLM](https://babylm.github.io) data:
- the [baby_llama](https://huggingface.co/bbunzeck/baby_llama) model has few parameters and was trained on a small data set (10M tokens)
- the [**t**eenie_llama](https://huggingface.co/bbunzeck/teenie_llama) model has the same number of parameters but was trained on more **t**okens of text (100M)
- the [**w**eenie_llama](https://huggingface.co/bbunzeck/weenie_llama) model was trained on the small data set, but has more parameters/**w**eights
- the [**tw**eenie_llama](https://huggingface.co/bbunzeck/tweenie_llama) model features both -- more **t**okens (the larger data set) and more **w**eights (*viz.* parameters)
| | baby_llama | teenie_llama | weenie_llama | tweenie_llama |
|-----------------|-----------|-------------|-------------|--------------|
| Parameters | 2.97M | 2.97M | 11.44M | 11.44M |
| Hidden layers | 8 | 8 | 16 | 16 |
| Attention heads | 8 | 8 | 16 | 16 |
| Embedding size | 128 | 128 | 256 | 256 |
| Context size | 128 | 128 | 256 | 256 |
| Vocab size | 16k | 16k | 16k | 16k |
If you use this model in your research, please cite the following publication:
```
@inproceedings{bunzeck-zarriess-2024-fifty,
title = "Fifty shapes of {BL}i{MP}: syntactic learning curves in language models are not uniform, but sometimes unruly",
author = "Bunzeck, Bastian and
Zarrie{\ss}, Sina",
editor = "Qiu, Amy and
Noble, Bill and
Pagmar, David and
Maraev, Vladislav and
Ilinykh, Nikolai",
booktitle = "Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning",
month = oct,
year = "2024",
address = "Gothenburg, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.clasp-1.7",
pages = "39--55",
}
``` |
ashokpoudel/retail-banking-llm-chatbot-translation | ashokpoudel | 2024-10-20T10:02:30Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T09:58:29Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ashokpoudel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
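Since the weights ship as GGUF, they are typically run with llama.cpp or its Python bindings. A minimal sketch with `llama-cpp-python` (the filename and quantization level are assumptions -- check the Files tab for what is actually in the repo):

```python
from llama_cpp import Llama

# Path to a locally downloaded GGUF file (exact filename/quant is an assumption)
llm = Llama(model_path="retail-banking-llm-chatbot-translation.Q4_K_M.gguf", n_ctx=4096)

out = llm("How do I reset my internet banking password?", max_tokens=128)
print(out["choices"][0]["text"])
```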
|
cuongdev/pxhien-v2 | cuongdev | 2024-10-20T09:56:17Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-20T09:52:43Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### pxhien-v2 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
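Alternatively, a minimal `diffusers` inference sketch (not from the original card; the instance token is a guess based on the repo name):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cuongdev/pxhien-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "pxhien" as the instance token is an assumption inferred from the repo name
image = pipe("a photo of pxhien person, portrait, high detail").images[0]
image.save("pxhien.png")
```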
Sample pictures of this concept:
|
Kort/i_20 | Kort | 2024-10-20T09:42:39Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T09:39:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
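The card provides no code; a speculative starting point based on the repo tags (`llama`, `text-generation`):

```python
from transformers import pipeline

# Speculative: the tags indicate a llama-architecture causal LM
generator = pipeline("text-generation", model="Kort/i_20")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```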
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cryotron/chatbot_academic_3rd_Year_GUFF_Quantized4Bit | cryotron | 2024-10-20T09:41:37Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T09:36:23Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** cryotron
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
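Because the repo contains GGUF weights, one hedged way to run it is to pull them straight from the Hub with `llama-cpp-python` (the filename glob is an assumption, and this path requires `huggingface_hub` to be installed):

```python
from llama_cpp import Llama

# Downloads a matching GGUF from the Hub; the quantization pattern is an assumption
llm = Llama.from_pretrained(
    repo_id="cryotron/chatbot_academic_3rd_Year_GUFF_Quantized4Bit",
    filename="*Q4_K_M.gguf",
)
print(llm("What topics does the third-year syllabus cover?", max_tokens=128)["choices"][0]["text"])
```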
|
Najathpathiyil/swin-tiny-patch4-window7-224-finetuned-eurosat | Najathpathiyil | 2024-10-20T09:37:32Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-13T14:11:12Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0793
- Accuracy: 0.9741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2912 | 1.0 | 190 | 0.1284 | 0.96 |
| 0.1963 | 2.0 | 380 | 0.0975 | 0.9715 |
| 0.1511 | 3.0 | 570 | 0.0793 | 0.9741 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
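A minimal inference sketch (not from the original card; the training dataset is unnamed above, though the repo name suggests EuroSAT-style satellite tiles, and the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Najathpathiyil/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.png")[:3])  # top predictions for a local image
```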
|
Zohaib002/Abmiguity-factor | Zohaib002 | 2024-10-20T09:36:42Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T09:34:01Z | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Abmiguity-factor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Abmiguity-factor
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Rouge1: 0.5239
- Rouge2: 0.2727
- Rougel: 0.3876
- Rougelsum: 0.3876
- Gen Len: 90.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 1.2998 | 0.3903 | 0.1429 | 0.2699 | 0.2699 | 69.0 |
| No log | 2.0 | 2 | 1.1258 | 0.4737 | 0.202 | 0.3449 | 0.3449 | 77.0 |
| No log | 3.0 | 3 | 1.0220 | 0.4627 | 0.2003 | 0.3372 | 0.3372 | 87.5 |
| No log | 4.0 | 4 | 0.9522 | 0.472 | 0.2042 | 0.3429 | 0.3429 | 85.5 |
| No log | 5.0 | 5 | 0.9162 | 0.4951 | 0.2238 | 0.3814 | 0.3814 | 95.0 |
| No log | 6.0 | 6 | 0.8882 | 0.4951 | 0.2238 | 0.3814 | 0.3814 | 95.0 |
| No log | 7.0 | 7 | 0.8659 | 0.5171 | 0.2652 | 0.4122 | 0.4122 | 97.5 |
| No log | 8.0 | 8 | 0.8537 | 0.5239 | 0.2727 | 0.3876 | 0.3876 | 90.5 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
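Given the `bart-large-cnn` base and the ROUGE metrics above, the checkpoint presumably generates summary-style text. A hedged usage sketch (the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Zohaib002/Abmiguity-factor")
text = "Replace this with the kind of ambiguous passage the model was fine-tuned on."
print(summarizer(text, max_length=100, min_length=20)[0]["summary_text"])
```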
|
omgitsqing/CREPE_MIR-1K_16 | omgitsqing | 2024-10-20T09:34:54Z | 12 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"transcription, pitch detection",
"license:mit",
"region:us"
] | null | 2024-10-17T16:34:00Z | ---
license: mit
pipeline_tag: transcription, pitch detection
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: omgitsqing/CREPE_MIR-1K_16
- Docs: [More Information Needed] |
vocabtrimmer/bert-base-spanish-wwm-cased.xnli-es.3 | vocabtrimmer | 2024-10-20T09:26:58Z | 5 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2024-10-20T09:26:44Z | # `vocabtrimmer/bert-base-spanish-wwm-cased.xnli-es.3`
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the
[xnli](https://huggingface.co/datasets/xnli) dataset (es).
The following metrics are computed on the `test` and `validation` splits of
[xnli](https://huggingface.co/datasets/xnli) (es).
* Evaluation on test split
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 64.85 | 64.85 | 64.85 | 64.81 | 64.85 | 66.56 | 64.85 |
* Evaluation on validation split
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 65.14 | 65.14 | 65.14 | 65.21 | 65.14 | 66.81 | 65.14 |
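A minimal sketch of querying the classifier (assuming the standard XNLI premise/hypothesis input format; the example pair is illustrative):

```python
from transformers import pipeline

nli = pipeline("text-classification", model="vocabtrimmer/bert-base-spanish-wwm-cased.xnli-es.3")
# XNLI inputs are premise/hypothesis pairs
print(nli({"text": "El gato duerme en el sofá.", "text_pair": "Un animal está descansando."}))
```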
Check the result file [here](https://huggingface.co/vocabtrimmer/bert-base-spanish-wwm-cased.xnli-es.3/raw/main/eval.json). |
Zohaib002/amb-dataset-factor | Zohaib002 | 2024-10-20T09:26:13Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T09:08:49Z | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: amb-dataset-factor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amb-dataset-factor
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8578
- Rouge1: 0.6039
- Rouge2: 0.3487
- Rougel: 0.4805
- Rougelsum: 0.4805
- Gen Len: 101.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 1.1889 | 0.5926 | 0.2701 | 0.3828 | 0.3828 | 65.5 |
| No log | 2.0 | 2 | 1.0179 | 0.6489 | 0.3333 | 0.458 | 0.458 | 77.5 |
| No log | 3.0 | 3 | 0.9405 | 0.6084 | 0.2627 | 0.3783 | 0.3783 | 82.5 |
| No log | 4.0 | 4 | 0.8990 | 0.6241 | 0.3058 | 0.4054 | 0.4054 | 86.0 |
| No log | 5.0 | 5 | 0.8814 | 0.6746 | 0.3882 | 0.4842 | 0.4842 | 95.0 |
| No log | 6.0 | 6 | 0.8679 | 0.5554 | 0.3111 | 0.4127 | 0.4127 | 94.5 |
| No log | 7.0 | 7 | 0.8607 | 0.5799 | 0.3016 | 0.4153 | 0.4153 | 100.0 |
| No log | 8.0 | 8 | 0.8578 | 0.6039 | 0.3487 | 0.4805 | 0.4805 | 101.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ebinzack15/mistral_7b_custom_200_3E_VarData_mergedmodel | ebinzack15 | 2024-10-20T09:19:18Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-20T09:17:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
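No code is given above; judging by the tags (`mistral`, `conversational`, 4-bit `bitsandbytes`), a hedged sketch would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ebinzack15/mistral_7b_custom_200_3E_VarData_mergedmodel"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself briefly."}]  # placeholder prompt
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```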
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
javedafroz/llama-3-1b-chat-doctor | javedafroz | 2024-10-20T09:17:53Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-18T08:02:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Saitun/Europegardenwd | Saitun | 2024-10-20T09:14:07Z | 5 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-10-20T09:13:14Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/zoom day 5 9 - 20_10_2024 14_50_53.png
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: null
license: apache-2.0
---
# Europegardenwd
<Gallery />
## Model description
Europe garden wedding
## Download model
Weights for this model are available in Safetensors format.
[Download](/Saitun/Europegardenwd/tree/main) them in the Files & versions tab.
|
Mantis-VL/mantis-8b-idefics2_8192 | Mantis-VL | 2024-10-20T08:56:41Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-20T05:42:22Z | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2_8192
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2_8192
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
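The card omits usage; a hedged inference sketch following the generic Idefics2 pattern (the image URL and prompt are placeholders):

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "Mantis-VL/mantis-8b-idefics2_8192"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```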
|
joshua-data/roberta-base-klue-ynat-classification | joshua-data | 2024-10-20T08:56:16Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-20T08:55:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
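No code is provided; the repo name points at KLUE-YNAT (Korean news headline topic classification), so a speculative sketch:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="joshua-data/roberta-base-klue-ynat-classification")
print(clf("삼성전자, 3분기 실적 발표"))  # example Korean headline: "Samsung Electronics announces Q3 results"
```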
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Maverick1713/wav2vec2-Pratyush-Comp | Maverick1713 | 2024-10-20T08:52:10Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-20T06:08:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
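The card gives no usage; going by the `automatic-speech-recognition` tag, a hedged sketch:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Maverick1713/wav2vec2-Pratyush-Comp")
print(asr("sample.wav"))  # placeholder path; wav2vec2 expects 16 kHz mono audio
```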
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_16k_ep35 | rewicks | 2024-10-20T08:41:14Z | 115 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T08:39:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
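No code is supplied; the repo name hints at an English-to-German Marian baseline, so a speculative sketch:

```python
from transformers import pipeline

translator = pipeline("translation", model="rewicks/baseline_en-de_16k_ep35")
print(translator("The weather is nice today.")[0]["translation_text"])
```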
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rdoshi21/detr-finetuned-all | rdoshi21 | 2024-10-20T08:38:15Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-20T08:38:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
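The card provides no code; based on the `detr` / `object-detection` tags, a hedged sketch:

```python
from transformers import pipeline

detector = pipeline("object-detection", model="rdoshi21/detr-finetuned-all")
for det in detector("street_scene.jpg"):  # placeholder image path
    print(det["label"], round(det["score"], 3), det["box"])
```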
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bao121/layoutlmv2-base-uncased_finetuned_docvqa | bao121 | 2024-10-20T08:32:07Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | 2024-10-20T05:05:33Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.2951 | 0.2212 | 50 | 4.7074 |
| 4.5271 | 0.4425 | 100 | 4.1032 |
| 4.1303 | 0.6637 | 150 | 3.9369 |
| 3.8633 | 0.8850 | 200 | 3.5651 |
| 3.4747 | 1.1062 | 250 | 3.6489 |
| 3.4294 | 1.3274 | 300 | 3.1484 |
| 3.1319 | 1.5487 | 350 | 2.9161 |
| 2.8258 | 1.7699 | 400 | 2.8137 |
| 2.6171 | 1.9912 | 450 | 2.9083 |
| 2.4222 | 2.2124 | 500 | 3.0206 |
| 2.1271 | 2.4336 | 550 | 2.6056 |
| 1.9083 | 2.6549 | 600 | 2.3721 |
| 1.9685 | 2.8761 | 650 | 2.3992 |
| 1.6307 | 3.0973 | 700 | 2.7301 |
| 1.4481 | 3.3186 | 750 | 2.3406 |
| 1.4984 | 3.5398 | 800 | 2.6880 |
| 1.7125 | 3.7611 | 850 | 2.4609 |
| 1.3863 | 3.9823 | 900 | 2.3533 |
| 1.2462 | 4.2035 | 950 | 2.5440 |
| 1.0761 | 4.4248 | 1000 | 2.7272 |
| 1.1144 | 4.6460 | 1050 | 1.9797 |
| 0.9787 | 4.8673 | 1100 | 2.4466 |
| 0.8171 | 5.0885 | 1150 | 2.9085 |
| 0.8736 | 5.3097 | 1200 | 2.9959 |
| 0.9349 | 5.5310 | 1250 | 2.3175 |
| 0.7439 | 5.7522 | 1300 | 3.0023 |
| 0.7007 | 5.9735 | 1350 | 3.1554 |
| 0.4858 | 6.1947 | 1400 | 3.3196 |
| 0.7693 | 6.4159 | 1450 | 3.3589 |
| 0.4525 | 6.6372 | 1500 | 3.1226 |
| 0.442 | 6.8584 | 1550 | 3.5817 |
| 0.5321 | 7.0796 | 1600 | 3.4303 |
| 0.4044 | 7.3009 | 1650 | 3.5700 |
| 0.3062 | 7.5221 | 1700 | 4.0335 |
| 0.5116 | 7.7434 | 1750 | 3.6335 |
| 0.3481 | 7.9646 | 1800 | 3.5987 |
| 0.2781 | 8.1858 | 1850 | 4.0488 |
| 0.3054 | 8.4071 | 1900 | 3.4694 |
| 0.3572 | 8.6283 | 1950 | 3.7605 |
| 0.3524 | 8.8496 | 2000 | 3.9977 |
| 0.3551 | 9.0708 | 2050 | 3.7106 |
| 0.1455 | 9.2920 | 2100 | 3.8854 |
| 0.3655 | 9.5133 | 2150 | 3.6476 |
| 0.348 | 9.7345 | 2200 | 3.5709 |
| 0.299 | 9.9558 | 2250 | 3.6914 |
| 0.1781 | 10.1770 | 2300 | 3.7616 |
| 0.1812 | 10.3982 | 2350 | 3.7328 |
| 0.3197 | 10.6195 | 2400 | 4.3628 |
| 0.2139 | 10.8407 | 2450 | 4.3765 |
| 0.0266 | 11.0619 | 2500 | 4.6882 |
| 0.0659 | 11.2832 | 2550 | 4.5060 |
| 0.2312 | 11.5044 | 2600 | 4.1473 |
| 0.1366 | 11.7257 | 2650 | 4.2983 |
| 0.2264 | 11.9469 | 2700 | 4.2400 |
| 0.165 | 12.1681 | 2750 | 4.0799 |
| 0.0443 | 12.3894 | 2800 | 4.2081 |
| 0.178 | 12.6106 | 2850 | 4.0656 |
| 0.2517 | 12.8319 | 2900 | 4.2621 |
| 0.1208 | 13.0531 | 2950 | 4.6150 |
| 0.0767 | 13.2743 | 3000 | 4.6056 |
| 0.0789 | 13.4956 | 3050 | 4.2637 |
| 0.0492 | 13.7168 | 3100 | 4.3896 |
| 0.1782 | 13.9381 | 3150 | 4.3427 |
| 0.0968 | 14.1593 | 3200 | 4.5666 |
| 0.0287 | 14.3805 | 3250 | 4.6159 |
| 0.0516 | 14.6018 | 3300 | 4.4566 |
| 0.1008 | 14.8230 | 3350 | 4.5481 |
| 0.052 | 15.0442 | 3400 | 4.6010 |
| 0.0334 | 15.2655 | 3450 | 4.6442 |
| 0.027 | 15.4867 | 3500 | 4.7174 |
| 0.0791 | 15.7080 | 3550 | 4.5374 |
| 0.147 | 15.9292 | 3600 | 4.6980 |
| 0.1186 | 16.1504 | 3650 | 4.5936 |
| 0.0373 | 16.3717 | 3700 | 4.6215 |
| 0.0445 | 16.5929 | 3750 | 4.4737 |
| 0.0325 | 16.8142 | 3800 | 4.6176 |
| 0.0842 | 17.0354 | 3850 | 4.7082 |
| 0.0097 | 17.2566 | 3900 | 4.7821 |
| 0.0707 | 17.4779 | 3950 | 4.7606 |
| 0.0583 | 17.6991 | 4000 | 4.8275 |
| 0.0222 | 17.9204 | 4050 | 4.8638 |
| 0.0413 | 18.1416 | 4100 | 4.8912 |
| 0.0097 | 18.3628 | 4150 | 4.9439 |
| 0.0075 | 18.5841 | 4200 | 4.9960 |
| 0.0735 | 18.8053 | 4250 | 4.9482 |
| 0.0821 | 19.0265 | 4300 | 5.0201 |
| 0.0254 | 19.2478 | 4350 | 4.9772 |
| 0.1105 | 19.4690 | 4400 | 4.9629 |
| 0.0649 | 19.6903 | 4450 | 4.9474 |
| 0.025 | 19.9115 | 4500 | 4.9512 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.0.1
- Tokenizers 0.20.1
|
danielhanchen/gemma2_test | danielhanchen | 2024-10-20T08:27:57Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T08:19:49Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
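The card stops at the badge; as a hedged sketch (not from the card), a GGUF repo like this one can usually be loaded with llama-cpp-python, though the exact `.gguf` filename inside the repo is an assumption here:

```py
from llama_cpp import Llama

# Sketch only: the filename glob below is hypothetical; pick whichever
# quantization file the repository actually contains.
llm = Llama.from_pretrained(
    repo_id="danielhanchen/gemma2_test",
    filename="*.gguf",
)

out = llm("Why is the sky blue?", max_tokens=64)
print(out["choices"][0]["text"])
```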
|
asocastro/gemma-2b-instruct-ft-My-TherAIpist | asocastro | 2024-10-20T08:26:15Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T08:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
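The section is left empty; a hedged sketch assuming the checkpoint behaves like a standard Gemma instruct model (the tags list `gemma`, `text-generation`, and `conversational`; the example message is a placeholder):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "asocastro/gemma-2b-instruct-ft-My-TherAIpist"  # from this card's header
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Assumes the tokenizer ships a chat template, as Gemma instruct models usually do.
messages = [{"role": "user", "content": "I have been feeling anxious lately."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```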
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aitechguy/bert-ner-cpp | aitechguy | 2024-10-20T08:21:50Z | 3 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"token-classification",
"code",
"finance",
"legal",
"medical",
"chemistry",
"biology",
"en",
"dataset:eriktks/conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:quantized:distilbert/distilbert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-13T07:42:03Z | ---
license: mit
datasets:
- eriktks/conll2003
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: token-classification
tags:
- code
- finance
- legal
- medical
- chemistry
- biology
---
# aitechguy/distilbert_base_uncased_ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5619
- Validation Loss: 3.5082
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
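The card leaves usage unspecified; since the repo is tagged `onnx` and `token-classification`, a hedged sketch with Optimum's ONNX Runtime integration might look like this (the example sentence is a placeholder):

```py
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

# Hedged sketch: assumes the repo ships an ONNX export loadable by Optimum.
repo = "aitechguy/bert-ner-cpp"
model = ORTModelForTokenClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```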
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -996, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5619 | 3.5082 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1 |
rewicks/baseline_en-de_8k_ep35 | rewicks | 2024-10-20T08:20:54Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T08:13:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
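The section is left empty; a hedged sketch based on the `marian` and `text2text-generation` tags and the `en-de` naming (none of this is stated in the card itself):

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch: assumes a standard Marian translation checkpoint.
repo = "rewicks/baseline_en-de_8k_ep35"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

batch = tokenizer(["The weather is nice today."], return_tensors="pt")
outputs = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```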
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_16k_ep31 | rewicks | 2024-10-20T08:17:22Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T08:09:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
b09501048/ADL_HW2_GPT2 | b09501048 | 2024-10-20T08:17:21Z | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"summarization",
"generated_from_trainer",
"base_model:ckiplab/gpt2-tiny-chinese",
"base_model:finetune:ckiplab/gpt2-tiny-chinese",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2024-10-18T09:12:40Z | ---
library_name: transformers
license: gpl-3.0
base_model: ckiplab/gpt2-tiny-chinese
tags:
- summarization
- generated_from_trainer
model-index:
- name: ADL_HW2_GPT2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_HW2_RT_GPT2/runs/ov47z0tg)
# ADL_HW2_GPT2
This model is a fine-tuned version of [ckiplab/gpt2-tiny-chinese](https://huggingface.co/ckiplab/gpt2-tiny-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4578
## Model description
More information needed
## Intended uses & limitations
More information needed
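Usage is not documented; a hedged sketch for a GPT-2 summarizer (the prompt format actually used in training is unknown, so this is only illustrative):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: a causal LM tagged `summarization` is usually prompted with
# the article text and asked to continue with the summary.
repo = "b09501048/ADL_HW2_GPT2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

article = "..."  # placeholder: a Chinese news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
new_tokens = summary_ids[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```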
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.02 | 1.0 | 543 | 3.6837 |
| 3.8249 | 2.0 | 1086 | 3.5804 |
| 3.7431 | 3.0 | 1629 | 3.5336 |
| 3.6958 | 4.0 | 2172 | 3.5036 |
| 3.6633 | 5.0 | 2715 | 3.4815 |
| 3.639 | 6.0 | 3258 | 3.4722 |
| 3.622 | 7.0 | 3801 | 3.4597 |
| 3.6095 | 8.0 | 4344 | 3.4578 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Gummybear05/wav2vec2-E30_speed | Gummybear05 | 2024-10-20T08:17:21Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-20T06:37:17Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E30_speed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E30_speed
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5403
- Cer: 30.0576
## Model description
More information needed
## Intended uses & limitations
More information needed
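The card does not include a usage snippet; as a hedged sketch, the `wav2vec2` and `automatic-speech-recognition` tags suggest the standard ASR pipeline (the audio filename is a placeholder):

```py
from transformers import pipeline

# Hedged sketch: assumes a standard wav2vec2 CTC checkpoint; the card reports
# CER, so transcription quality should be judged at the character level.
asr = pipeline(
    "automatic-speech-recognition",
    model="Gummybear05/wav2vec2-E30_speed",
)
print(asr("sample.wav")["text"])
```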
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 38.4014 | 0.1289 | 200 | 5.0924 | 100.0 |
| 4.967 | 0.2579 | 400 | 4.7266 | 100.0 |
| 4.803 | 0.3868 | 600 | 4.6563 | 100.0 |
| 4.7976 | 0.5158 | 800 | 4.7264 | 100.0 |
| 4.7401 | 0.6447 | 1000 | 4.6214 | 100.0 |
| 4.7227 | 0.7737 | 1200 | 4.6575 | 100.0 |
| 4.6903 | 0.9026 | 1400 | 4.6785 | 100.0 |
| 4.5596 | 1.0316 | 1600 | 4.4755 | 100.0 |
| 4.3655 | 1.1605 | 1800 | 4.1150 | 90.4312 |
| 3.7719 | 1.2895 | 2000 | 3.5459 | 66.0186 |
| 3.2218 | 1.4184 | 2200 | 2.9444 | 54.4643 |
| 2.9251 | 1.5474 | 2400 | 2.7016 | 50.6344 |
| 2.6652 | 1.6763 | 2600 | 2.5111 | 45.5416 |
| 2.4444 | 1.8053 | 2800 | 2.2185 | 40.9246 |
| 2.2701 | 1.9342 | 3000 | 2.0419 | 38.7101 |
| 2.1017 | 2.0632 | 3200 | 1.9275 | 36.5836 |
| 1.982 | 2.1921 | 3400 | 1.9218 | 36.9302 |
| 1.8856 | 2.3211 | 3600 | 1.7415 | 33.2648 |
| 1.7977 | 2.4500 | 3800 | 1.7043 | 33.2942 |
| 1.7146 | 2.5790 | 4000 | 1.6084 | 30.8036 |
| 1.6798 | 2.7079 | 4200 | 1.5947 | 31.0033 |
| 1.6234 | 2.8369 | 4400 | 1.5576 | 30.2573 |
| 1.6289 | 2.9658 | 4600 | 1.5403 | 30.0576 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
FuturisticVibes/magnum-v4-22b-8.0bpw-h8-exl2 | FuturisticVibes | 2024-10-20T08:11:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-20T08:05:21Z | ---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---
I have no idea what I’m doing… if this causes the apocalypse, someone please let me know.
magnum-v4-22b 8.0bpw h8 EXL2
Anthracite did post 8bpw quants, but they used 6-bit heads. I’m probably not doing 123B; my wallet still has PTSD from the big Mixtrals…
Includes a [measurement.json](https://huggingface.co/FuturisticVibes/magnum-v4-22b-8.0bpw-h8-exl2/tree/measurement) file for further quantization.
Original Model: https://huggingface.co/anthracite-org/magnum-v4-22b
# Original Model Card

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409).
## Prompting
A typical input would look like this:
```py
<s>[INST] SYSTEM MESSAGE
USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
```
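As a hedged illustration of assembling that layout in code (not part of the original card; the helper name and turn structure are assumptions), a minimal builder could look like this:

```py
def build_prompt(system: str, turns: list) -> str:
    """Assemble the Mistral-style prompt shown above. `turns` is a list of
    (user, assistant) pairs; leave the last assistant slot empty so the
    model continues from there. Sketch only."""
    out = "<s>"
    first = True
    for user, assistant in turns:
        # The card folds the system message into the first user turn.
        content = f"{system}\n{user}" if first and system else user
        first = False
        out += f"[INST] {content}[/INST]"
        if assistant:
            out += f" {assistant}</s>"
    return out

print(build_prompt("SYSTEM MESSAGE", [("USER MESSAGE", "ASSISTANT MESSAGE"), ("USER MESSAGE", "")]))
```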
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
<details><summary>instruct template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: /workspace/models/Mistral-Small-Instruct-2409
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
hub_model_id: anthracite-org/magnum-v4-22b-r4
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
#liger_cross_entropy: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system
type: custommistralv2v3
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
type: custommistralv2v3
- path: anthracite-org/kalo-opus-instruct-3k-filtered-no-system
type: custommistralv2v3
- path: anthracite-org/nopm_claude_writing_fixed
type: custommistralv2v3
- path: anthracite-org/kalo_opus_misc_240827_no_system
type: custommistralv2v3
- path: anthracite-org/kalo_misc_part2_no_system
type: custommistralv2v3
#chat_template: mistral_v2v3
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: /workspace/data/magnum-22b-data
val_set_size: 0.0
output_dir: /workspace/data/22b-r4-fft-out
sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 22b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: v4-r4-attempt-01
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000004
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B and has given thousands of people access to our models, helping us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system)
- [anthracite-org/kalo-opus-instruct-3k-filtered-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-3k-filtered-no-system)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827_no_system](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827_no_system)
- [anthracite-org/kalo_misc_part2_no_system](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2_no_system)
## Training
The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
... |
rewicks/baseline_en-de_16k_ep30 | rewicks | 2024-10-20T08:07:52Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T08:06:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF | mradermacher | 2024-10-20T08:00:08Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.6",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.6",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-20T07:47:18Z | ---
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.6
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.6
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including instructions on how to concatenate multi-part files.
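As a hedged sketch of the workflow (the quant choice here is just one of the files listed in the table below), a single GGUF file can be fetched with `huggingface_hub` and loaded with llama-cpp-python:

```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Sketch: Q4_K_M is the "fast, recommended" entry in the table below; any of
# the listed filenames works the same way.
path = hf_hub_download(
    repo_id="mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF",
    filename="Phi-3.5-mini-TitanFusion-0.6.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```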
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.6-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.6.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
busal/bert-ft-ner | busal | 2024-10-20T07:56:32Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-20T07:54:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
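The section is empty in the original; a hedged sketch based on the `bert` and `token-classification` tags (the label set is not documented, and the example sentence is a placeholder):

```py
from transformers import pipeline

# Hedged sketch: assumes a standard BERT NER head saved in the repo.
ner = pipeline(
    "token-classification",
    model="busal/bert-ft-ner",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited Paris in 2019."))
```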
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_8k_ep31 | rewicks | 2024-10-20T07:56:04Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T07:48:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_16k_ep28 | rewicks | 2024-10-20T07:53:40Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T07:52:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_8k_ep30 | rewicks | 2024-10-20T07:46:52Z | 115 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T07:45:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
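The card leaves this section blank. Given the repository's `marian` and `text2text-generation` tags and the `en-de` model name, a minimal sketch with the 🤗 `transformers` API might look like the following (the input sentence and generation settings are illustrative, not from the authors):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rewicks/baseline_en-de_8k_ep30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate an English sentence into German (direction assumed from the model name).
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```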
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_8k_ep29 | rewicks | 2024-10-20T07:43:21Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T07:35:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
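As with the other checkpoints in this series, the card provides no code. Assuming this checkpoint loads like a standard Marian seq2seq model, a minimal 🤗 `transformers` sketch (example input is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rewicks/baseline_en-de_8k_ep29"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```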
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nonetrix/llama-3.1-70B-lumitron | nonetrix | 2024-10-20T07:40:54Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:NeverSleep/Lumimaid-v0.2-70B",
"base_model:merge:NeverSleep/Lumimaid-v0.2-70B",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:merge:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T06:10:03Z | ---
base_model:
- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- NeverSleep/Lumimaid-v0.2-70B
library_name: transformers
tags:
- mergekit
- merge
---
# llama-3.1-70B-lumitron
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) as the base; a sketch of the procedure follows the configuration below.
### Models Merged
The following models were included in the merge:
* [NeverSleep/Lumimaid-v0.2-70B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
models:
- model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
parameters:
weight: 1.0
- model: NeverSleep/Lumimaid-v0.2-70B
parameters:
weight: 1.0
dtype: bfloat16
```
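mergekit is typically driven from a YAML file like the one above via its CLI (e.g. `mergekit-yaml config.yml ./output-model-directory`). Conceptually, task arithmetic forms a task vector for each model (its weights minus the base's) and adds the weighted vectors back onto the base; listing the base model itself among the models, as this config does, contributes a zero vector. The following is a minimal sketch over plain state dicts, not mergekit's actual implementation (which also handles sharding, dtypes, and tokenizers):
```python
import torch

def task_arithmetic_merge(base: dict, models: list[dict], weights: list[float]) -> dict:
    """Add weighted task vectors (model weights minus base weights) onto the base."""
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (sd[name] - base_param) for sd, w in zip(models, weights))
        merged[name] = base_param + delta
    return merged

# Mirroring the YAML above (state dicts are assumed to be pre-loaded):
# merged_sd = task_arithmetic_merge(nemotron_sd, [nemotron_sd, lumimaid_sd], [1.0, 1.0])
```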
|
rewicks/baseline_en-de_8k_ep28 | rewicks | 2024-10-20T07:34:15Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-20T07:29:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
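No code is provided here either. Under the same assumption of a standard Marian seq2seq checkpoint, a minimal 🤗 `transformers` sketch (example input is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rewicks/baseline_en-de_8k_ep28"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```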
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_16k_ep23 | rewicks | 2024-10-20T07:29:57Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-16T18:23:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
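The card is blank here. Assuming this 16k-vocabulary checkpoint loads like the other Marian baselines, a minimal 🤗 `transformers` sketch (example input is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rewicks/baseline_en-de_16k_ep23"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```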
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_16k_ep22 | rewicks | 2024-10-20T07:27:35Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-16T01:37:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
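As with the sibling checkpoints, no code is given. Under the same Marian seq2seq assumption, a minimal 🤗 `transformers` sketch (example input is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rewicks/baseline_en-de_16k_ep22"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```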
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |