modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
baek26/all_4156_bart-all_rl
baek26
2024-05-23T03:34:07Z
52
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-05-23T03:33:27Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="baek26/all_4156_bart-all_rl") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("baek26/all_4156_bart-all_rl") model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26/all_4156_bart-all_rl") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
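Since the repository's tags mark this as a BART `text2text-generation` model, a text2text pipeline may be the better fit than the `text-generation` call in the auto-generated card above; a minimal, untested sketch:

```python
# untested sketch: the repo's tags indicate a BART text2text model,
# so a text2text-generation pipeline may be more appropriate here
from transformers import pipeline

generator = pipeline("text2text-generation", model="baek26/all_4156_bart-all_rl")
print(generator("Hello, my llama is cute")[0]["generated_text"])
```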
DokHee/JSLLMV7
DokHee
2024-05-23T03:31:34Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T03:19:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sorrowair/binding_site_35M_adap
Sorrowair
2024-05-23T03:30:59Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:westlake-repl/SaProt_35M_AF2", "base_model:adapter:westlake-repl/SaProt_35M_AF2", "region:us" ]
null
2024-05-23T03:30:57Z
--- library_name: peft base_model: westlake-repl/SaProt_35M_AF2 --- # Model Card for Model ID This model is used for a demo task<br><br> The digital label means: <br>0: <br> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
nms05/Dinov2-SigLIP-Phi3-LoRA
nms05
2024-05-23T03:24:12Z
0
0
null
[ "visual-question-answering", "en", "dataset:liuhaotian/LLaVA-Instruct-150K", "dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K", "arxiv:2401.06209", "region:us" ]
visual-question-answering
2024-05-21T06:12:58Z
--- datasets: - liuhaotian/LLaVA-Instruct-150K - liuhaotian/LLaVA-CC3M-Pretrain-595K language: - en metrics: - accuracy pipeline_tag: visual-question-answering --- # DinoV2-SigLIP-Phi3(LoRA) VLM * **Vision Encoder** - DinoV2 + SigLIP @384px resolution. [Why 2 vision encoders?](https://arxiv.org/abs/2401.06209) * **Connector** - MLP (Dino and SigLIP features are concatenated and then projected into the Phi3 representation space) * **Language Model** - Phi3 + LoRA * **Pre-train (Align) Dataset** - LLaVA-CC3M-Pretrain-595K * **Fine-tune (Instruction) Dataset** - LLaVA-v1.5-Instruct + LRV-Instruct Scripts to build and train the models are available at [NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM](https://github.com/NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM).
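A minimal sketch of the connector described above (class name and dimension parameters are illustrative assumptions, not taken from the repository):

```python
import torch
import torch.nn as nn

class MLPConnector(nn.Module):
    """Concatenate DinoV2 and SigLIP patch features, project into the Phi3 space."""
    def __init__(self, dino_dim: int, siglip_dim: int, phi3_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(dino_dim + siglip_dim, phi3_dim),
            nn.GELU(),
            nn.Linear(phi3_dim, phi3_dim),
        )

    def forward(self, dino_feats: torch.Tensor, siglip_feats: torch.Tensor) -> torch.Tensor:
        # dino_feats: (B, N, dino_dim); siglip_feats: (B, N, siglip_dim)
        return self.proj(torch.cat([dino_feats, siglip_feats], dim=-1))
```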
emily49/spirited-away-stable-diffusion-non-inpaint
emily49
2024-05-23T03:18:42Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-23T02:49:00Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers base_model: stabilityai/stable-diffusion-2-1-base inference: true instance_prompt: a photo of sks spiritedaway --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - emily49/spirited-away-stable-diffusion-non-inpaint This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on the instance prompt "a photo of sks spiritedaway" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: True. ## Intended uses & limitations #### How to use ```python # untested sketch (assumes standard diffusers text-to-image usage for this DreamBooth checkpoint) import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("emily49/spirited-away-stable-diffusion-non-inpaint", torch_dtype=torch.float16).to("cuda") image = pipe("a photo of sks spiritedaway").images[0] image.save("sks_spiritedaway.png") ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
RomBor/poca-SoccerTwos
RomBor
2024-05-23T03:18:16Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-05-23T03:18:10Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: RomBor/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
benchang1110/Taiwan-tinyllama-v1.0-chat
benchang1110
2024-05-23T03:14:14Z
152
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "dataset:benchang1110/ChatTaiwan", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T11:49:42Z
--- language: - zh license: apache-2.0 datasets: - benchang1110/ChatTaiwan pipeline_tag: text-generation widget: - example_title: 範例一 messages: - role: user content: 你好 --- ## Model Card for Model ID This model is the instruction-finetuned version of [benchang1110/Taiwan-tinyllama-v1.0-base](https://huggingface.co/benchang1110/Taiwan-tinyllama-v1.0-base). ## Usage ```python import torch, transformers def generate_response(): model = transformers.AutoModelForCausalLM.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat", torch_dtype=torch.bfloat16, device_map=device, attn_implementation="flash_attention_2") tokenizer = transformers.AutoTokenizer.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat") streamer = transformers.TextStreamer(tokenizer, skip_prompt=True) while True: prompt = input('USER:') if prompt == "exit": break print("Assistant: ") message = [ {'content': prompt, 'role': 'user'}, ] untokenized_chat = tokenizer.apply_chat_template(message, tokenize=False, add_generation_prompt=False) inputs = tokenizer.encode_plus(untokenized_chat, add_special_tokens=True, return_tensors="pt", return_attention_mask=True).to(device) outputs = model.generate(inputs["input_ids"], attention_mask=inputs['attention_mask'], streamer=streamer, use_cache=True, max_new_tokens=512, do_sample=True, temperature=0.1, repetition_penalty=1.2) if __name__ == '__main__': device = 'cuda' if torch.cuda.is_available() else 'cpu' generate_response() ```
failspy/Phi-3-medium-4k-instruct-abliterated-v3
failspy
2024-05-23T03:12:59Z
5398
22
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T20:47:55Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # Phi-3-medium-4k-instruct-abliterated-v3 [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) #### Phi-3-abliterated statement Took me a while to wizard this one up. It's been a while since I've released a Phi-3 model. In the past I accidentally missed a step required in the model release process - hallucination testing. This model has been tested, and though it is more likely to hallucinate than the original model in my experience, it is generally as stable as the original. Now that the new Phi-3 models are out, I'm working on completing this abliteration process quickly and will then release the other models as soon as possible. 🏇 ## Summary This is [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more. [GGUF Quants](https://huggingface.co/failspy/Phi-3-medium-4k-instruct-abliterated-v3-GGUF) ## Hang on, "abliterated"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request, and it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original instruct model, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As for "abliterated": it's just a fun play on words using the "ablation" term from the original paper, which refers to removing features; I made it up to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated. Anyway, orthogonalization and ablation both refer to the same thing here: the technique by which the refusal feature was "ablated" from the model was orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage. 
Most valuably, it keeps as much of the original model's knowledge and training intact whilst removing its tendency to behave in one very specific undesirable manner (in this case, refusing user requests). Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune, or vice versa. I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2? Well, I released a V2 of an abliterated model a while back for Meta-Llama-3-8B under Cognitive Computations. It ended up not being worth it to try V2 with larger models; I wanted to refine the methodology before wasting compute cycles on what might not even be a better model. I am, however, quite pleased with this latest methodology: it seems to have induced fewer hallucinations. So to show that it's a fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98). ## Quirkiness awareness notice This model may come with interesting quirks, as the methodology is so new. I encourage you to play with the model and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization has. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, and I'm watching the Community tab -- reach out! I'd love to see this methodology used in other ways, and would gladly support whoever, whenever I can.
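For readers curious what "orthogonalizing a refusal direction out" looks like concretely, here is a minimal sketch of the core projection step -- an illustration of the general technique under the assumption of a precomputed unit refusal direction, not the author's exact script:

```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Project the 'refusal' direction r out of a weight matrix W.

    Assumes W has shape (d_model, d_in), i.e. it writes into the residual
    stream, and r is a (d_model,) direction estimated from contrasting
    harmful vs. harmless prompt activations.
    """
    r = r / r.norm()
    return W - torch.outer(r, r) @ W  # W' = (I - r r^T) W
```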
dahwinsingularity/dahyunvision_ultra_max
dahwinsingularity
2024-05-23T03:12:22Z
4
0
transformers
[ "transformers", "safetensors", "minicpmv", "feature-extraction", "custom_code", "license:apache-2.0", "region:us" ]
feature-extraction
2024-05-23T02:56:55Z
--- license: apache-2.0 ---
khaingsmon/xho-1
khaingsmon
2024-05-23T03:11:41Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:ml-superb-subset", "base_model:Akashpb13/Swahili_xlsr", "base_model:finetune:Akashpb13/Swahili_xlsr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T01:24:45Z
--- license: apache-2.0 base_model: Akashpb13/Swahili_xlsr tags: - generated_from_trainer datasets: - ml-superb-subset model-index: - name: xho-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xho-1 This model is a fine-tuned version of [Akashpb13/Swahili_xlsr](https://huggingface.co/Akashpb13/Swahili_xlsr) on the ml-superb-subset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
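A hypothetical inference sketch (not part of the card), based on the repo's `automatic-speech-recognition` pipeline tag:

```python
from transformers import pipeline

# hypothetical usage: transcribe a local audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="khaingsmon/xho-1")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```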
Andyhahaho/distilbert-base-uncased-finetuned-adl_hw1
Andyhahaho
2024-05-23T03:09:37Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T12:10:50Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-adl_hw1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-adl_hw1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2325 - Accuracy: 0.0003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.2678 | 1.0 | 938 | 1.5396 | 0.0 | | 0.9511 | 2.0 | 1876 | 0.3407 | 0.0 | | 0.16 | 3.0 | 2814 | 0.2027 | 0.0 | | 0.0492 | 4.0 | 3752 | 0.1910 | 0.0 | | 0.0227 | 5.0 | 4690 | 0.1803 | 0.0 | | 0.0142 | 6.0 | 5628 | 0.2025 | 0.0 | | 0.014 | 7.0 | 6566 | 0.2010 | 0.0 | | 0.0064 | 8.0 | 7504 | 0.2267 | 0.0 | | 0.0076 | 9.0 | 8442 | 0.2312 | 0.0 | | 0.0065 | 10.0 | 9380 | 0.2257 | 0.0 | | 0.0051 | 11.0 | 10318 | 0.2285 | 0.0 | | 0.003 | 12.0 | 11256 | 0.2325 | 0.0003 | | 0.0031 | 13.0 | 12194 | 0.2582 | 0.0 | | 0.0009 | 14.0 | 13132 | 0.2445 | 0.0 | | 0.0012 | 15.0 | 14070 | 0.2511 | 0.0 | | 0.0006 | 16.0 | 15008 | 0.2568 | 0.0 | | 0.0002 | 17.0 | 15946 | 0.2586 | 0.0 | | 0.0002 | 18.0 | 16884 | 0.2620 | 0.0 | | 0.0001 | 19.0 | 17822 | 0.2606 | 0.0 | | 0.0001 | 20.0 | 18760 | 0.2631 | 0.0 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
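A hypothetical inference sketch (not part of the card), based on the repo's `text-classification` pipeline tag:

```python
from transformers import pipeline

# hypothetical usage; label names depend on the (undocumented) fine-tuning dataset
classifier = pipeline("text-classification", model="Andyhahaho/distilbert-base-uncased-finetuned-adl_hw1")
print(classifier("example input text"))
```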
Sorrowair/metal_35M_adap
Sorrowair
2024-05-23T03:09:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:westlake-repl/SaProt_35M_AF2", "base_model:adapter:westlake-repl/SaProt_35M_AF2", "region:us" ]
null
2024-05-23T03:09:30Z
--- library_name: peft base_model: westlake-repl/SaProt_35M_AF2 --- # Model Card for Model ID This model is used for a demo task<br><br> The digital label means: <br>0: <br> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
FabioSantos/curso_llama3Finetune_unsloth
FabioSantos
2024-05-23T03:04:31Z
8
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T02:37:19Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** FabioSantos - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
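A hypothetical loading sketch (not part of the card); plain `transformers` loading should work for the safetensors weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# hypothetical usage sketch for this checkpoint
tokenizer = AutoTokenizer.from_pretrained("FabioSantos/curso_llama3Finetune_unsloth")
model = AutoModelForCausalLM.from_pretrained("FabioSantos/curso_llama3Finetune_unsloth", device_map="auto")
```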
seongs/maymust_mistral_v1.1
seongs
2024-05-23T03:02:35Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:15:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huqian/dummy-model
huqian
2024-05-23T03:00:21Z
106
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-23T02:58:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thesven/Hermes-2-Theta-Llama-3-8B-GPTQ
thesven
2024-05-23T02:52:23Z
80
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-05-23T02:36:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
h104/dippy_1049
h104
2024-05-23T02:50:44Z
76
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:49:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cocoirun/longforemr-kobart-summary-v1
cocoirun
2024-05-23T02:47:53Z
152
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-23T02:17:35Z
--- license: cc-by-nc-nd-4.0 --- A Longformer-encoder KoBART model trained on summarization data built by summarizing AIHUB financial and call-center consultation dialogues with ChatGPT. ```
input = """고객: 안녕하세요, 제가 여기서 사용하는 신용카드에 대해 궁금한 게 있어요.
상담원: 안녕하세요! 네, 어떤 문의가 있으신가요?
고객: 제가 이번 달에 카드를 사용하면서 리워드 포인트를 얼마나 쌓았는지 확인하고 싶어요.
상담원: 네, 당신의 리워드 포인트 잔액을 확인해 드릴 수 있습니다. 제가 당신의 카드 번호를 입력하고 확인해볼게요. 번호를 알려주실 수 있을까요?
고객: 네, 제 카드 번호는 1234-5678-9012-3456입니다.
상담원: 감사합니다. 잠시만 기다려주세요. 확인 중이에요... 네, 현재 당신의 리워드 포인트 잔액은 3,250 포인트입니다.
고객: 알겠어요, 감사합니다! 그럼 추가적인 이용 혜택이나 할인에 관한 정보도 얻을 수 있을까요?
상담원: 물론이죠! 저희 카드사는 다양한 이용 혜택을 제공하고 있습니다. 예를 들어, 여행, 쇼핑, 식사 등 다양한 분야에서 할인 혜택을 받을 수 있거나, 리워드 포인트를 사용하여 상품이나 기프트 카드로 교환할 수 있습니다. 어떤 혜택에 관심이 있으신가요?
고객: 저는 여행 할인이나 마일리지 적립에 관심이 있어요.
상담원: 그런 경우에는 당신에게 적합한 여행 카드 혜택을 제공하는 카드를 추천해 드릴 수 있습니다. 여행 카드는 항공사 마일리지를 쌓을 수 있고, 호텔 할인 혜택을 받을 수도 있습니다. 제가 몇 가지 옵션을 제안해 볼까요?
고객: 네, 그러면 좋을 것 같아요. 감사합니다!
상담원: 말씀해 주셔서 감사합니다. 이제 제가 몇 가지 추천을 드리도록 하겠습니다. 어떤 항공사를 주로 이용하시나요?"""
```
```
output = """
- 고객이 신용카드에 대해 궁금한 사항 상담
- 리워드 포인트 확인 요청
- 상담원이 카드 번호와 잔액 확인 후 추가 이용 혜택 안내
- 고객이 여행 할인, 마일리지, 호텔 할인 등 다양한 혜택에 관심 표현
"""
```
The following classes are required to use this model:
```
import torch
import torch.nn as nn
from typing import List, Optional, Tuple

from transformers import AutoTokenizer, BartConfig, BartForConditionalGeneration
from transformers.models.bart.modeling_bart import BartLearnedPositionalEmbedding
from transformers.models.longformer.modeling_longformer import LongformerSelfAttention


class LongformerSelfAttentionForBart(nn.Module):
    def __init__(self, config, layer_id):
        super().__init__()
        self.embed_dim = config.d_model
        self.longformer_self_attn = LongformerSelfAttention(config, layer_id=layer_id)
        self.output = nn.Linear(self.embed_dim, self.embed_dim)

    def forward(
        self,
        hidden_states: torch.Tensor,
        key_value_states: Optional[torch.Tensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        attention_mask: Optional[torch.Tensor] = None,
        layer_head_mask: Optional[torch.Tensor] = None,
        output_attentions: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        is_cross_attention = key_value_states is not None
        bsz, tgt_len, embed_dim = hidden_states.size()

        # Reduce the attention mask from (bs, seq_len, seq_len) to (bs, seq_len).
        attention_mask = attention_mask.squeeze(dim=1)
        attention_mask = attention_mask[:, 0]

        is_index_masked = attention_mask < 0
        is_index_global_attn = attention_mask > 0
        is_global_attn = is_index_global_attn.flatten().any().item()

        outputs = self.longformer_self_attn(
            hidden_states,
            attention_mask=attention_mask,
            layer_head_mask=None,
            is_index_masked=is_index_masked,
            is_index_global_attn=is_index_global_attn,
            is_global_attn=is_global_attn,
            output_attentions=output_attentions,
        )

        attn_output = self.output(outputs[0])

        return (attn_output,) + outputs[1:] if len(outputs) == 2 else (attn_output, None, None)
```
```
class LongformerEncoderDecoderForConditionalGeneration(BartForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)

        if config.attention_mode == 'n2':
            pass  # do nothing, keep the regular n^2 self-attention
        else:
            self.model.encoder.embed_positions = BartLearnedPositionalEmbedding(
                config.max_encoder_position_embeddings,
                config.d_model)
            self.model.decoder.embed_positions = BartLearnedPositionalEmbedding(
                config.max_decoder_position_embeddings,
                config.d_model)
            for i, layer in enumerate(self.model.encoder.layers):
                layer.self_attn = LongformerSelfAttentionForBart(config, layer_id=i)
```
```
class LongformerEncoderDecoderConfig(BartConfig):
    def __init__(self, attention_window: List[int] = None, attention_dilation: List[int] = None,
                 autoregressive: bool = False, attention_mode: str = 'sliding_chunks',
                 gradient_checkpointing: bool = False, **kwargs):
        """
        Args:
            attention_window: list of attention window sizes of length = number of layers.
                window size = number of attention locations on each side.
                For an effective window size of 512, use `attention_window=[256]*num_layers`,
                which is 256 on each side.
            attention_dilation: list of attention dilations of length = number of layers.
                attention dilation of `1` means no dilation.
            autoregressive: use autoregressive attention, or attend to both sides
            attention_mode: 'n2' for regular n^2 self-attention, 'tvm' for the TVM implementation
                of Longformer self-attention, 'sliding_chunks' for another implementation of
                Longformer self-attention
        """
        super().__init__(**kwargs)
        self.attention_window = attention_window
        self.attention_dilation = attention_dilation
        self.autoregressive = autoregressive
        self.attention_mode = attention_mode
        self.gradient_checkpointing = gradient_checkpointing
        assert self.attention_mode in ['tvm', 'sliding_chunks', 'n2']
```
After instantiating the model object, download the weight file separately and load it with `load_state_dict`.
```
tokenizer = AutoTokenizer.from_pretrained("cocoirun/longforemr-kobart-summary-v1")
model = LongformerEncoderDecoderForConditionalGeneration.from_pretrained("cocoirun/longforemr-kobart-summary-v1")

device = torch.device('cuda')
model.load_state_dict(torch.load("summary weight.ckpt"))
model.to(device)
```
Model summarization function:
```
def summarize(text, max_len):
    max_seq_len = 4096
    context_tokens = ['<s>'] + tokenizer.tokenize(text) + ['</s>']
    input_ids = tokenizer.convert_tokens_to_ids(context_tokens)

    # Pad up to max_seq_len, or truncate and close with the EOS token.
    if len(input_ids) < max_seq_len:
        while len(input_ids) < max_seq_len:
            input_ids += [tokenizer.pad_token_id]
    else:
        input_ids = input_ids[:max_seq_len - 1] + [tokenizer.eos_token_id]

    res_ids = model.generate(torch.tensor([input_ids]).to(device),
                             max_length=max_len,
                             num_beams=5,
                             no_repeat_ngram_size=3,
                             eos_token_id=tokenizer.eos_token_id,
                             bad_words_ids=[[tokenizer.unk_token_id]])

    res = tokenizer.batch_decode(res_ids.tolist(), skip_special_tokens=True)[0]
    res = res.replace("\n\n", "\n")
    return res
```
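A minimal end-to-end call, assuming the tokenizer, model, and the `input` string defined above are in scope (`max_len=150` is an arbitrary illustrative choice, not a value from the card):
```
summary = summarize(input, max_len=150)
print(summary)
```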
pkarypis/phi_15_cpd_rank500_headdim
pkarypis
2024-05-23T02:38:56Z
176
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:36:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nannnzk/llama-loki-002
nannnzk
2024-05-23T02:32:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:26:31Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DalHyun/donut-demo
DalHyun
2024-05-23T02:32:04Z
48
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-23T02:23:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
acate/bert-base-cased-wikitext2
acate
2024-05-23T02:25:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-22T23:29:34Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.8927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0913 | 1.0 | 2346 | 7.0517 | | 6.9051 | 2.0 | 4692 | 6.8729 | | 6.8814 | 3.0 | 7038 | 6.8927 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.13.3
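No usage snippet is included in the card; a minimal fill-mask sketch would look like the following (the example sentence is illustrative only, and given the evaluation loss above, outputs may be low quality):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="acate/bert-base-cased-wikitext2")
print(fill_mask("The capital of France is [MASK]."))
```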
xkiwilabs/lora_opComms_LLama3_v4
xkiwilabs
2024-05-23T02:23:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T02:22:46Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** xkiwilabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
saad17g/finetuned_T5_amzn_v3
saad17g
2024-05-23T02:17:18Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T18:20:56Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: google-t5/t5-small model-index: - name: finetuned_T5_amzn_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_T5_amzn_v3 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
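No usage snippet is included in the card; assuming a standard T5 text2text setup, inference might look like the sketch below (the dataset is unknown, so the `summarize:` task prefix and the example input are assumptions):
```python
from transformers import pipeline

# "summarize:" is the conventional T5 prefix; it may not match this model's training task.
generator = pipeline("text2text-generation", model="saad17g/finetuned_T5_amzn_v3")
print(generator("summarize: Great product, fast shipping, would buy again."))
```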
OwOpeepeepoopoo/NoSoup4U9
OwOpeepeepoopoo
2024-05-23T02:05:52Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T16:40:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOpeepeepoopoo/DancingElaine9
OwOpeepeepoopoo
2024-05-23T02:05:43Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T02:04:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alessioryan/morpheme-tokenizer-gpt2-Finnish-finnish
alessioryan
2024-05-23T01:56:49Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:21:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DaRkSpyro/TheForsakenCallOfDutyBlackOpsColdWar
DaRkSpyro
2024-05-23T01:55:19Z
0
0
adapter-transformers
[ "adapter-transformers", "music", "en", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "region:us" ]
null
2024-05-22T22:02:03Z
--- license: apache-2.0 datasets: - HuggingFaceFW/fineweb language: - en metrics: - accuracy library_name: adapter-transformers tags: - music ---
apple9855/code-search-net-tokenizer
apple9855
2024-05-23T01:51:14Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:51:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaodrew/phi-3-mini-4k-instruct-midjourney-autoprompter
gaodrew
2024-05-23T01:50:01Z
0
1
null
[ "safetensors", "unsloth", "opensource", "phi", "license:mit", "region:us" ]
null
2024-05-19T06:12:49Z
--- license: mit tags: - unsloth - opensource - phi --- <img src="https://cloud-3i4ld6u5y-hack-club-bot.vercel.app/0home.png" alt="Akash Network logo" width="200"/> Thank you to the [Akash Network](https://akash.network/) for sponsoring this project and providing A100s/H100s for compute! <a target="_blank" href="https://colab.research.google.com/github/andrewgcodes/autoprompter/blob/main/run_autoprompter.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # Overview Writing good AI art prompts for Midjourney, Stable Diffusion, etc. takes time and practice. I fine-tuned a small language model (Phi-3) to help you improve your prompts. This is a fine-tuned version of the unquantized [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct), trained with Unsloth on ~100,000 high-quality Midjourney AI art prompts. This Hugging Face repo contains adapter weights only (you need to load them on top of the base Phi-3 weights). <img src="https://cloud-5rbe0uczw-hack-club-bot.vercel.app/0screenshot_2024-05-20_at_3.30.18___pm.png" alt="prompt format" width="400"/> # Inference Recommended inference settings: repetition_penalty = 1.2, temperature = 0.5-1.0. Adjust these if the model starts repeating itself. To run (a Colab T4 GPU works): <a target="_blank" href="https://colab.research.google.com/github/andrewgcodes/autoprompter/blob/main/run_autoprompter.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # Fine-tuning Details I used this [reference code](https://medium.com/@mauryaanoop3/fine-tuning-phi-3-with-unsloth-for-superior-performance-on-custom-data-2c14b3c1e90b) from Anoop Maurya. After experimenting with various settings and parameters, this is what I settled on: - Max seq length: 128 (few prompts are longer than 128 tokens; beyond that you likely get diminishing returns on image quality) - Fine-tuned on unquantized base weights - LoRA: R=32, Alpha=32 - Batch size: 32 - Epochs: 1 - Gradient accumulation steps: 4 - Warmup steps: 100 - Learning rate: 1e-4 - Optimizer: adamw_8bit I used an H100 GPU from the Akash Network. # Dataset Please see [gaodrew/midjourney-prompts-highquality](https://huggingface.co/datasets/gaodrew/midjourney-prompts-highquality). # Limitations The model is prone to listing out adjectives. For example: "kitten, cute kitten with a big smile on its face and fluffy fur. The background is filled with colorful flowers in various shades of pink, purple, blue, yellow, orange, green, red, white, black, brown, gray, gold, silver, bronze, copper, brass, steel, aluminum, titanium, platinum, diamond"
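Since the repo ships adapter weights only, you would typically load them on top of the base model with PEFT. A minimal sketch under that assumption (the generation settings and example prompt are illustrative, and the card's prompt format shown in the image above may differ):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-mini-4k-instruct"  # base model named in the card
adapter_id = "gaodrew/phi-3-mini-4k-instruct-midjourney-autoprompter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Stack the LoRA adapter from this repo on top of the unquantized base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Card-recommended settings: repetition_penalty=1.2, temperature in 0.5-1.0.
inputs = tokenizer("a cozy cabin in the woods", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=96, do_sample=True,
                     temperature=0.7, repetition_penalty=1.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```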
GenTrendGPT/ModelType-IV
GenTrendGPT
2024-05-23T01:46:10Z
141
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:45:17Z
--- base_model: - TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers tags: - mergekit - merge --- # ModelType-IV This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 - sources: - layer_range: [0, 20] model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 ```
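The merged checkpoint loads like any other 🤗 causal LM; a minimal sketch (not part of the original card, and note the config above stacks layers 0-20 of the same model twice, so output quality relative to the base model is untested):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="GenTrendGPT/ModelType-IV")
print(chat("Hello, how are you?", max_new_tokens=40))
```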
albisumikel1/ppo-LunarLander-v2
albisumikel1
2024-05-23T01:43:34Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T22:12:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -62.54 +/- 35.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
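Until the TODO above is filled in, a hedged sketch of loading this checkpoint (the zip filename follows the usual huggingface_sb3 naming convention and is an assumption; check the repo's file listing if it differs):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the usual huggingface_sb3 convention.
checkpoint = load_from_hub(
    repo_id="albisumikel1/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```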
longma98/bert-finetuned-ner
longma98
2024-05-23T01:41:05Z
62
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T01:22:56Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_keras_callback model-index: - name: longma98/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # longma98/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0278 - Validation Loss: 0.0528 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1781 | 0.0684 | 0 | | 0.0482 | 0.0548 | 1 | | 0.0278 | 0.0528 | 2 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
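No usage snippet is included in the card; since the repo contains TensorFlow weights, a token-classification sketch might look like this (the `framework="tf"` flag and the example sentence are assumptions):
```python
from transformers import pipeline

# framework="tf" because this repo ships TensorFlow (Keras) weights.
ner = pipeline("token-classification", model="longma98/bert-finetuned-ner",
               framework="tf", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```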
euunn/0523_llama2
euunn
2024-05-23T01:33:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T01:32:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/cyO1jWNJQGYnw9v6
hgnoi
2024-05-23T01:28:55Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:27:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/F1dR8pO6RYRk5UVn
hgnoi
2024-05-23T01:28:22Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T01:26:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FernandoD95/poca-SoccerTwos
FernandoD95
2024-05-23T01:21:54Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "ML-Agents", "region:us" ]
reinforcement-learning
2024-05-23T01:00:48Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
- ML-Agents
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: FernandoD95/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
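Before resuming training or inspecting the checkpoint locally, the repository files (the `.onnx` policy, config, and TensorBoard logs) can be pulled from the Hub. A minimal sketch using `huggingface_hub`; the `local_dir` path is an arbitrary assumption, any writable directory works:

```python
from huggingface_hub import snapshot_download

# Download all files of the SoccerTwos checkpoint repository into a local folder.
local_path = snapshot_download(
    repo_id="FernandoD95/poca-SoccerTwos",
    local_dir="./downloads/poca-SoccerTwos",  # hypothetical target directory
)
print(f"Checkpoint files downloaded to: {local_path}")
```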
T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0
T3Q-LLM
2024-05-23T01:17:24Z
2,249
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:maywell/ko_Ultrafeedback_binarized", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-17T12:31:29Z
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
datasets:
- maywell/ko_Ultrafeedback_binarized
base_model:
- yanolja/EEVE-Korean-Instruct-10.8B-v1.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png)

# T3Q-LLM-sft1.0-dpo1.0

This model is a version of T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0 that has been fine-tuned with DPO.

## Model Developers

Chihoon Lee (chihoonlee10), T3Q

## Prompt Template

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```

## How to Use It

```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")
tokenizer = AutoTokenizer.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")

prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```

### Example Output

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.

(A) 경성
(B) 부산
(C) 평양
(D) 서울
(E) 전주
Assistant:
(D) 서울이 한국의 수도입니다. 서울은 나라의 북동부에 위치해 있으며, 정치, 경제, 문화의 중심지입니다. 약 1,000만 명이 넘는 인구를 가진 세계에서 가장 큰 도시 중 하나입니다. 서울은 높은 빌딩, 현대적인 인프라, 활기 문화 장면으로 유명합니다. 또한, 많은 역사적 명소와 박물관이 있어 방문객들에게 풍부한 문화 체험을 제공합니다.
```

## Evaluation Results

| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9387|± |0.0064|
| | |macro_f1|0.9387|± |0.0064|
|kobest_copa | 0|acc |0.7590|± |0.0135|
| | |macro_f1|0.7585|± |0.0135|
|kobest_hellaswag| 0|acc |0.5080|± |0.0224|
| | |acc_norm|0.5580|± |0.0222|
| | |macro_f1|0.5049|± |0.0224|
|kobest_sentineg | 0|acc |0.8489|± |0.0180|
| | |macro_f1|0.8483|± |0.0180|

For comparison, hf-causal-experimental (pretrained=nlpai-lab/KULLM3,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8

| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.8896|± |0.0084|
| | |macro_f1|0.8888|± |0.0084|
|kobest_copa | 0|acc |0.6930|± |0.0146|
| | |macro_f1|0.6925|± |0.0147|
|kobest_hellaswag| 0|acc |0.4640|± |0.0223|
| | |acc_norm|0.5240|± |0.0224|
| | |macro_f1|0.4612|± |0.0223|
|kobest_sentineg | 0|acc |0.6297|± |0.0243|
| | |macro_f1|0.6255|± |0.0244|
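As a small optional variation on the usage snippet above, generation can be streamed token by token with `transformers`' `TextStreamer`. This is a hedged sketch, not part of the original card; the prompt reuses the template shown above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")
tokenizer = AutoTokenizer.from_pretrained("T3Q-LLM/T3Q-LLM-sft1.0-dpo1.0")

prompt_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {prompt}\nAssistant:\n"
)
model_inputs = tokenizer(
    prompt_template.format(prompt="한국의 수도는 어디인가요?"), return_tensors="pt"
)

# Print tokens as they are generated instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**model_inputs, max_new_tokens=256, streamer=streamer)
```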
EleutherAI/Mistral-7B-v0.1-hemisphere-random-standardized-random-names
EleutherAI
2024-05-23T01:15:27Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:28:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/dolphin-2.9.1-llama-3-70b-GGUF
mradermacher
2024-05-23T01:14:15Z
54
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.1-llama-3-70b", "base_model:quantized:cognitivecomputations/dolphin-2.9.1-llama-3-70b", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-22T19:01:40Z
--- base_model: cognitivecomputations/dolphin-2.9.1-llama-3-70b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - generated_from_trainer - axolotl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q8_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/dolphin-2.9.1-llama-3-70b-GGUF/resolve/main/dolphin-2.9.1-llama-3-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
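Per the concatenation instructions referenced above, the multi-part Q6_K and Q8_0 files are plain byte-level splits, so the parts only need to be joined in order before use. A minimal Python sketch for the Q6_K files; the filenames come from the table above, and the paths assume the parts sit in the working directory:

```python
import shutil

parts = [
    "dolphin-2.9.1-llama-3-70b.Q6_K.gguf.part1of2",
    "dolphin-2.9.1-llama-3-70b.Q6_K.gguf.part2of2",
]

# Stream each part into a single combined GGUF file, in order.
with open("dolphin-2.9.1-llama-3-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```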
backyardai/Iced-Lemon-Cookie-7B-GGUF
backyardai
2024-05-23T01:12:55Z
307
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:FallenMerick/Iced-Lemon-Cookie-7B", "base_model:quantized:FallenMerick/Iced-Lemon-Cookie-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T23:49:29Z
---
library_name: transformers
tags:
- mergekit
- merge
base_model: FallenMerick/Iced-Lemon-Cookie-7B
model_name: Iced-Lemon-Cookie-7B-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">

**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**

<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>

<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request additional models at r/LLM_Quants.</a></p>

***
# Iced Lemon Cookie 7B
- **Creator:** [FallenMerick](https://huggingface.co/FallenMerick/)
- **Original:** [Iced Lemon Cookie 7B](https://huggingface.co/FallenMerick/Iced-Lemon-Cookie-7B)
- **Date Created:** 2024-05-09
- **Trained Context:** 32768 tokens
- **Description:** Merge of the LemonadeRP, IceLemonTeaRP, Kunoichi DPO, and Big L models.

***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.

***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block;">

## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically uses the GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.

Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.

**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**

***
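Outside of Backyard AI, a GGUF like this can be loaded with any llama.cpp-based runtime. A minimal sketch with `llama-cpp-python`; the quant filename, prompt, and settings are assumptions, not part of the original card:

```python
from llama_cpp import Llama

# Load a quantized GGUF; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(
    model_path="Iced-Lemon-Cookie-7B.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=32768,    # matches the trained context length noted above
    n_gpu_layers=-1,
)

output = llm("Write a two-sentence character greeting.", max_tokens=128)
print(output["choices"][0]["text"])
```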
S4nto/lora-dpo-finetuned-stage4-sft-0.5-1e-6_ep5
S4nto
2024-05-23T01:07:31Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:57:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
weifar/llama3-8b-500-v2
weifar
2024-05-23T00:58:42Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T00:56:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mingming2000/KoMoAlpaca-0522-6epoch
mingming2000
2024-05-23T00:57:00Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/KoAlpaca-Polyglot-5.8B", "base_model:adapter:beomi/KoAlpaca-Polyglot-5.8B", "region:us" ]
null
2024-05-23T00:56:55Z
--- library_name: peft base_model: beomi/KoAlpaca-Polyglot-5.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.2.dev0
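Because the card above is still the auto-generated template, here is a minimal hedged sketch for loading this LoRA adapter with PEFT, inferred from the metadata (library `peft`, base model `beomi/KoAlpaca-Polyglot-5.8B`); the prompt format follows the base KoAlpaca convention and, like the generation settings, is an assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model declared in the adapter metadata, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("beomi/KoAlpaca-Polyglot-5.8B")
model = PeftModel.from_pretrained(base, "mingming2000/KoMoAlpaca-0522-6epoch")

tokenizer = AutoTokenizer.from_pretrained("beomi/KoAlpaca-Polyglot-5.8B")
# Assumed KoAlpaca-style prompt: "### 질문:" (question) / "### 답변:" (answer).
inputs = tokenizer("### 질문: 한국의 수도는 어디인가요?\n\n### 답변:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```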
JEFFERSONMUSIC/hwtcopenhagen97
JEFFERSONMUSIC
2024-05-23T00:53:28Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T00:52:06Z
--- license: apache-2.0 ---
EthanRhys/Ashley-Current
EthanRhys
2024-05-23T00:50:46Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-05-23T00:49:56Z
--- license: openrail++ ---
lots-o/ko-albert-base-v1
lots-o
2024-05-23T00:50:24Z
110
0
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "ko", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-18T03:37:23Z
--- license: apache-2.0 language: - ko --- # Korean ALBERT # Dataset - [AI-HUB](https://www.aihub.or.kr/) - [국립국어원 - 모두의 말뭉치](https://kli.korean.go.kr/corpus/main/requestMain.do?lang=ko) - [Korean News Comments](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments) # Evaluation results - The code for finetuning can be found at [KcBERT-Finetune](https://github.com/Beomi/KcBERT-finetune). | | Size(용량) | Average Score | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | |:---------------------- |:----------:|:-------------:|:------------------:|:----------------------:|:------------------:|:--------------------:|:-------------------------:|:---------------------------:|:-----------------------------:| | KcELECTRA-base | 475M | 84.84 | 91.71 | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 | | KcELECTRA-base-v2022 | 475M | 85.20 | **91.97** | **87.35** | 76.50 | **82.12** | **83.67** | 95.12 | 69.00 / 90.40 | | KcBERT-Base | 417M | 79.65 | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 | | KcBERT-Large | 1.2G | 81.33 | 90.68 | 85.53 | 70.15 | 76.99 | 77.49 | 94.06 | 62.16 / 86.64 | | KoBERT | 351M | 82.21 | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 | | XLM-Roberta-Base | 1.03G | 84.01 | 89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 | | HanBERT | 614M | 86.24 | 90.16 | 87.31 | 82.40 | 80.89 | 83.33 | 94.19 | 78.74 / 92.02 | | KoELECTRA-Base | 423M | 84.66 | 90.21 | 86.87 | 81.90 | 80.85 | 83.21 | 94.20 | 61.10 / 89.59 | | KoELECTRA-Base-v2 | 423M | **86.96** | 89.70 | 87.02 | **83.90** | 80.61 | 84.30 | 94.72 | **84.34 / 92.58** | | DistilKoBERT | 108M | 76.76 | 88.41 | 84.13 | 62.55 | 70.55 | 73.21 | 92.48 | 54.12 / 77.80 | | **ko-albert-base-v1** | **51M** | 80.46 | 86.83 | 82.26 | 69.95 | 74.17 | 74.48 | 94.06 | 76.08 / 86.82 | | **ko-albert-large-v1** | **75M** | 82.39 | 86.91 | 83.12 | 76.10 | 76.01 | 77.46 | 94.33 | 77.64 / 87.99 | *The size of HanBERT is the sum of the BERT model and the tokenizer DB. *These results were obtained using the default configuration settings. Better performance may be achieved with additional hyperparameter tuning. # How to use ```python from transformers import AutoTokenizer, AutoModel # Base Model (51M) tokenizer = AutoTokenizer.from_pretrained("lots-o/ko-albert-base-v1") model = AutoModel.from_pretrained("lots-o/ko-albert-base-v1") # Large Model (75M) tokenizer = AutoTokenizer.from_pretrained("lots-o/ko-albert-large-v1") model = AutoModel.from_pretrained("lots-o/ko-albert-large-v1") ``` # Acknowledgement - The GCP/TPU environment used for training the ALBERT Model was supported by the [TRC](https://sites.research.google/trc/about/) program. # Reference ## Paper - [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) ## Github Repos - [google-albert](https://github.com/google-research/albert) - [albert-zh](https://github.com/brightmart/albert_zh) - [KcBERT](https://github.com/Beomi/KcBERT) - [KcBERT-Finetune](https://github.com/Beomi/KcBERT-finetune)
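Beyond extracting hidden states with `AutoModel`, the checkpoint is tagged for fill-mask, so a quick smoke test can use the pipeline API. A minimal sketch; the example sentence is an assumption, and the mask token follows the standard ALBERT `[MASK]` convention:

```python
from transformers import pipeline

# Fill-mask smoke test for the base checkpoint.
fill = pipeline("fill-mask", model="lots-o/ko-albert-base-v1")

# "The capital of Korea is [MASK]." Prints top predictions with scores.
for pred in fill("대한민국의 수도는 [MASK] 입니다."):
    print(pred["token_str"], round(pred["score"], 3))
```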
ssec-uw/OLMo-7B-Instruct-GGUF
ssec-uw
2024-05-23T00:48:02Z
128
6
null
[ "gguf", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-07T17:55:34Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# OLMo 7B-Instruct-GGUF

> For more details on OLMo-7B-Instruct, refer to [Allen AI's OLMo-7B-Instruct model card](https://huggingface.co/allenai/OLMo-7B-Instruct).

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. The Instruct version is trained on the [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).

OLMo 7B Instruct is trained for better question answering. It demonstrates the performance gain that OLMo base models can achieve with existing fine-tuning techniques.

This version of the model is derived from [ssec-uw/OLMo-7B-Instruct-hf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-hf) in [GGUF format](https://huggingface.co/docs/hub/en/gguf), a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes.

In addition to being converted to GGUF, the model has been [quantized](https://huggingface.co/docs/optimum/en/concept_guides/quantization) to reduce the computational and memory costs of running inference. *We are currently working on adding all of the [quantization types](https://huggingface.co/docs/hub/en/gguf#quantization-types).*

These files are designed for use with [GGML](https://ggml.ai/) and executors based on GGML, such as [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Get Started

To get started with one of the GGUF files, you can simply use [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python binding for `llama.cpp`.

1. Install `llama-cpp-python` version `0.2.70` or later with pip. The following command installs a pre-built wheel with basic CPU support. For other installation methods, see the [llama-cpp-python installation docs](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation).

```bash
pip install "llama-cpp-python>=0.2.70" --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```

2. Download one of the GGUF files. In this example, we download [OLMo-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-GGUF/resolve/main/OLMo-7B-Instruct-Q4_K_M.gguf?download=true), which starts downloading when the link is clicked.

3. Open a Python interpreter and run the following commands. For example, we can ask it: `What is a solar system?` *You will need to change the `model_path` argument to wherever the GGUF model is saved on your system.*

```python
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/OLMo-7B-Instruct-Q4_K_M.gguf"
)

result_dict = llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is a solar system?"
        }
    ]
)
print(result_dict['choices'][0]['message']['content'])
```

4. That's it, you should see the result fairly quickly! Have fun! 🤖

## Contact

For errors in this model card, contact Don or Anant, {landungs, anmittal} at uw dot edu.

## Acknowledgement

We would like to thank the hardworking folks at [Allen AI](https://huggingface.co/allenai) for providing the original model. Additionally, the work to convert and quantize the model was done by the [University of Washington Scientific Software Engineering Center (SSEC)](https://escience.washington.edu/software-engineering/ssec/), as part of the [Schmidt Sciences Virtual Institute for Scientific Software (VISS)](https://www.schmidtsciences.org/viss/).
TheOriginalMarcelo/wav2vec2-base-test
TheOriginalMarcelo
2024-05-23T00:41:11Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T00:41:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/LtPstCwgItBeRXH8
hgnoi
2024-05-23T00:39:56Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:38:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pablo119367/modelo_preg_resp2
Pablo119367
2024-05-23T00:39:32Z
143
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:35:07Z
--- license: apache-2.0 ---
MrBlamo/kryp
MrBlamo
2024-05-23T00:27:25Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "region:us" ]
text-to-image
2024-05-23T00:27:23Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/icon.png base_model: stabilityai/sdxl-turbo instance_prompt: null --- # pyth <Gallery /> ## Download model Weights for this model can be [downloaded](/MrBlamo/kryp/tree/main) from the Files & versions tab.
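The card above ships no inference snippet. Below is a minimal usage sketch (not from the original card) for trying this LoRA with diffusers; it assumes the repo contains a standard diffusers-compatible LoRA weight file, and the prompt is illustrative since the card lists no instance prompt.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL-Turbo base model declared in the card's front matter.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Attach the LoRA adapter; assumes a diffusers-format *.safetensors in the repo.
pipe.load_lora_weights("MrBlamo/kryp")

# SDXL-Turbo is distilled for few-step sampling without classifier-free guidance.
image = pipe(
    prompt="a portrait photo",  # illustrative prompt; none is given in the card
    num_inference_steps=2,
    guidance_scale=0.0,
).images[0]
image.save("sample.png")
```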
kaith/fon_awr_asr
kaith
2024-05-23T00:25:56Z
0
0
peft
[ "peft", "pytorch", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "region:us" ]
null
2024-05-22T16:54:18Z
--- library_name: peft base_model: openai/whisper-large-v3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Rizkiath YAYA NADJO - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** Spoken Language & Automatic Speech Recognition - **Language(s) (NLP):** Fongbe - **License:** [More Information Needed] - **Finetuned from model [optional]:** openai/whisper-large-v3 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.2.dev0
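The card's "How to Get Started" section is empty, so here is a minimal, hedged sketch (not from the original card) of loading this PEFT adapter on top of the stated base model; the commented transcription step assumes a 16 kHz mono waveform and standard Whisper generation.

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model named in the card, then attach the adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kaith/fon_awr_asr")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Hypothetical usage: `audio` is a 16 kHz mono float waveform (e.g. from librosa).
# features = processor(audio, sampling_rate=16000, return_tensors="pt")
# features = features.input_features.to(model.device, dtype=torch.float16)
# ids = model.generate(input_features=features)
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```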
DokHee/JSLLMV5
DokHee
2024-05-23T00:19:52Z
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T00:03:39Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pablo119367/my-model
Pablo119367
2024-05-23T00:17:24Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T00:17:24Z
--- license: apache-2.0 ---
RLHFlow/DPA-v1-Mistral-7B
RLHFlow
2024-05-23T00:10:58Z
11
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2402.18571", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T08:51:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Haoxiang Wang - **Model type:** Decoder-only LLM - **Language(s) (NLP):** English - **License:** Apache-2.0 - **Finetuned from model [optional]:** https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/RLHFlow/directional-preference-alignment - **Paper [ACL 2024]:** [Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards](https://arxiv.org/abs/2402.18571) ## How to Get Started with the Model Use the code below to get started with the model. + System Prompt: + Template: `"You are a helpful, respectful, and honest assistant who always responds to the user in a harmless way. Your response should maximize weighted rating = helpfulness*{weight_helpfulness} + verbosity*{weight_verbosity}"` + Value Choices: `weight_helpfulness` is an integer from 0 to 100 and `(weight_verbosity/100)**2 + (weight_helpfulness/100)**2 == 1` + The maximum `weight_helpfulness` is 100; the lowest suggested value is 71. + The model will generate a response that implicitly maximizes the weighted rating `helpfulness*weight_helpfulness + verbosity*weight_verbosity`, where `helpfulness` and `verbosity` are two reward objectives that range from 0 to 100. We suggest choosing the ratio `weight_verbosity/weight_helpfulness` first. For instance, suppose `weight_verbosity/weight_helpfulness` equals `tan(-15°)`: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch import numpy as np # Here we show how to use the DPA model to generate a response to a user prompt. device = "cuda" model = AutoModelForCausalLM.from_pretrained("RLHFlow/DPA-v1-Mistral-7B", torch_dtype=torch.bfloat16, device_map=device) tokenizer = AutoTokenizer.from_pretrained("RLHFlow/DPA-v1-Mistral-7B") degree = -15 # weight_verbosity/weight_helpfulness = tan(-15°) rad = np.radians(degree) # convert from degree to radian weight_helpfulness = np.round((np.cos(rad) * 100)).astype(int) # compute weight_helpfulness, scale it by 100x, and round it to an integer weight_verbosity = np.round((np.sin(rad) * 100)).astype(int) # compute weight_verbosity, scale it by 100x, and round it to an integer ## Now (weight_helpfulness/100)**2 + (weight_verbosity/100)**2 ≈ 1 - it is not an exact equivalence due to the round() operations above sys_prompt = f"You are a helpful, respectful, and honest assistant who always responds to the user in a harmless way. Your response should maximize weighted rating = helpfulness*{weight_helpfulness} + verbosity*{weight_verbosity}" user_prompt = "Write a summary of Romeo and Juliet." 
messages = [ {"role": "system", "content": sys_prompt}, {"role": "user", "content": user_prompt}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device) output = model.generate(input_ids=input_ids, max_new_tokens=2048,temperature=0.7) prompt_len = input_ids.shape[-1] generated_response = tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True) print(generated_response) # 'Romeo and Juliet is a tragic love story written by William Shakespeare, believed to have been written between 1591 and 1595. The play is based on an Italian tale called "The Tragical History of Romeus and Juliet" by Arthur Brooke, which was published in 1562.\n\nThe story revolves around two young star-crossed lovers, Romeo Montague and Juliet Capulet, from rival families in Verona, Italy. Their love is forbidden by their families, who have a long-standing feud. Despite the obstacles, Romeo and Juliet marry in secret and spend a few blissful days together before fate intervenes.\n\nA series of misunderstandings, miscommunications, and tragic events lead to the deaths of both Romeo and Juliet. Romeo believes that Juliet is dead, and in a fit of despair, he takes his own life. Juliet, who is actually still alive, awakens to find Romeo dead and takes her own life in grief.\n\nThe play explores themes of love, hate, fate, and the consequences of actions. It is known for its iconic characters, including the passionate Romeo, the fiery Juliet, and the noble Friar Lawrence, who tries to help the young lovers.\n\nRomeo and Juliet has been adapted into numerous films, stage productions, and other media over the years, and it remains a beloved and tragic tale of forbidden love.' ``` ## Training ![image/png](https://github.com/RLHFlow/directional-preference-alignment/raw/main/assets/preference-conflict.jpg) ![image/png](https://github.com/RLHFlow/directional-preference-alignment/raw/main/assets/algo-illustration.jpg) ## Evaluation ![image/png](https://cdn-uploads.huggingface.co/production/uploads/638fb8cf2380ffd99caf8c2a/IEO1xFOzopiOEWrxlDCem.png) ## Citation **BibTeX:** If you find this work useful to your research, please consider citing our paper ``` @inproceedings{wang2024arithmetic, title={Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards}, author={Haoxiang Wang and Yong Lin and Wei Xiong and Rui Yang and Shizhe Diao and Shuang Qiu and Han Zhao and Tong Zhang}, year={2024}, booktitle={ACL}, } ``` ## Model Card Authors Haoxiang Wang ## Model Card Contact [email protected]
JoPmt/Wooly-Llama-3B-code-instruct-Ties
JoPmt
2024-05-23T00:10:56Z
143
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k", "mwitiderrick/open_llama_3b_code_instruct_0.1", "base_model:harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k", "base_model:merge:harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k", "base_model:mwitiderrick/open_llama_3b_code_instruct_0.1", "base_model:merge:mwitiderrick/open_llama_3b_code_instruct_0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T23:50:17Z
--- tags: - merge - mergekit - lazymergekit - harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k - mwitiderrick/open_llama_3b_code_instruct_0.1 base_model: - harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k - mwitiderrick/open_llama_3b_code_instruct_0.1 --- # Wooly-Llama-3B-code-instruct-Ties Wooly-Llama-3B-code-instruct-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k) * [mwitiderrick/open_llama_3b_code_instruct_0.1](https://huggingface.co/mwitiderrick/open_llama_3b_code_instruct_0.1) ## 🧩 Configuration ```yaml models: - model: harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k parameters: density: 0.5 weight: 0.5 - model: mwitiderrick/open_llama_3b_code_instruct_0.1 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "JoPmt/Wooly-Llama-3B-code-instruct-Ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
RLHFlow/RewardModel-Mistral-7B-for-DPA-v1
RLHFlow
2024-05-23T00:10:31Z
491
3
transformers
[ "transformers", "safetensors", "mistral", "text-classification", "custom_code", "arxiv:2402.18571", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-04-21T08:40:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Haoxiang Wang - **Model type:** Sequence Classifier - **Language(s) (NLP):** English - **License:** Apache-2.0 - **Finetuned from model [optional]:** https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/RLHFlow/directional-preference-alignment - **Paper [ACL 2024]:** [Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards](https://arxiv.org/abs/2402.18571) ## How to Get Started with the Model Use the code below to get started with the model. The model has 10-dimensional output, corresponding to the following attributes from HelpSteer and UltraFeedback ['helpsteer-helpfulness', 'helpsteer-correctness', 'helpsteer-coherence', 'helpsteer-complexity', 'helpsteer-verbosity', 'ultrafeedback-overall_score', "ultrafeedback-instruction_following", "ultrafeedback-truthfulness", "ultrafeedback-honesty", "ultrafeedback-helpfulness"] Here is a sample code that you can try ```python from transformers import AutoModelForSequenceClassification,AutoTokenizer import torch device = 'cuda' path = "RLHFlow/RewardModel-Mistral-7B-for-DPA-v1" rm = AutoModelForSequenceClassification.from_pretrained(path, trust_remote_code=True).to(device) tokenizer = AutoTokenizer.from_pretrained(path) input_template = "[INST] You must read the following conversation carefully and rate the assistant's response from score 0-100 in these aspects: helpfulness, correctness, coherence, honesty, complexity, verbosity\n\nUser: {prompt}\n\nAssistant: {response} [/INST]" # Use a sample from HelpSteer validation set prompt = 'What are some synonyms for the word "beautiful"?' response = "Nicely, Beautifully, Handsome, Stunning, Wonderful, Gorgeous, Pretty, Stunning, Elegant" model_inputs = tokenizer(input_template.format(prompt=prompt, response=response), return_tensors="pt").to(device) with torch.no_grad(): score = rm(**model_inputs).logits.squeeze().cpu().float().numpy() print(score) # [68.99269 69.62718 76.23071 33.48785 35.853596 63.833366 55.58917 68.7175 59.552124 46.465595] # Convert from our scale (0-100) to HelpSteer scale (0-4) helpsteer_rewards_pred = (score[:5]-10)/20 print(helpsteer_rewards_pred) # [2.9496346 2.981359 3.3115356 1.1743925 1.2926798] # The actual rewards from the HelpSteer dataset for this sample are [3,3,4,2,2] ``` ## Training ![image/png](https://github.com/RLHFlow/directional-preference-alignment/raw/main/assets/preference-conflict.jpg) ![image/png](https://github.com/RLHFlow/directional-preference-alignment/raw/main/assets/algo-illustration.jpg) ## Citation **BibTeX:** If you find this work useful to your research, please consider citing our paper ``` @inproceedings{wang2024arithmetic, title={Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards}, author={Haoxiang Wang and Yong Lin and Wei Xiong and Rui Yang and Shizhe Diao and Shuang Qiu and Han Zhao and Tong Zhang}, year={2024}, booktitle={ACL}, } ``` ## Model Card Authors Haoxiang Wang ## Model Card Contact [email protected]
JoftheV/Luna-Samantha
JoftheV
2024-05-23T00:07:16Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "art", "finance", "text-generation-inference", "merge", "en", "fr", "es", "dataset:HuggingFaceFW/fineweb", "dataset:nvidia/ChatQA-Training-Data", "dataset:allenai/WildChat-1M", "dataset:TIGER-Lab/MMLU-Pro", "dataset:H-D-T/Buzz", "dataset:PleIAs/Post-OCR-Correction", "dataset:mlabonne/orpo-dpo-mix-40k", "dataset:m-a-p/Matrix", "dataset:satellogic/EarthView", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2024-05-22T23:01:04Z
--- license: apache-2.0 datasets: - HuggingFaceFW/fineweb - nvidia/ChatQA-Training-Data - allenai/WildChat-1M - TIGER-Lab/MMLU-Pro - H-D-T/Buzz - PleIAs/Post-OCR-Correction - mlabonne/orpo-dpo-mix-40k - m-a-p/Matrix - satellogic/EarthView language: - en - fr - es metrics: - accuracy - bertscore - bleu - brier_score - cer - character - charcut_mt - chrf - code_eval library_name: adapter-transformers tags: - code - art - finance - text-generation-inference - merge --- # Model Card for Model ID ```yaml model-index: - name: Yi-34B results: - task: type: text-generation dataset: name: ai2_arc type: ai2_arc metrics: - name: AI2 Reasoning Challenge (25-Shot) type: AI2 Reasoning Challenge (25-Shot) value: 64.59 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard ``` ``` GIT_LFS_SKIP_SMUDGE=1 git clone [email protected]:spaces/dalle-mini/dalle-mini GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5 GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct https://gist.github.com/f2acd35a005b3f88893ecb04c975eaa6.git ``` <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ndavidson/lora
ndavidson
2024-05-23T00:04:33Z
6
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-23T00:02:48Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** ndavidson - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
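The repo is tagged `gguf` but the card stops at the Unsloth attribution. Below is a minimal local-inference sketch (not from the original card) using llama-cpp-python; the GGUF file name is a placeholder, so check the repo's Files & versions tab for the actual file.

```python
from llama_cpp import Llama

# Placeholder file name -- substitute the actual GGUF file shipped in the repo.
llm = Llama(model_path="lora-Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Explain what a LoRA adapter is in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```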
microsoft/Phi-3-medium-4k-instruct-onnx-cuda
microsoft
2024-05-23T00:01:45Z
72
9
transformers
[ "transformers", "onnx", "phi3", "text-generation", "ONNX", "DML", "ONNXRuntime", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T23:01:20Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Medium-4K-Instruct ONNX CUDA models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-medium-4k-instruct](https://aka.ms/phi3-medium-4k-instruct) to accelerate inference with ONNX Runtime for your machines with NVIDIA GPUs. Phi-3 Medium is a 14B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and the filtered publicly available websites data, with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the medium version in two variants: [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which are the context lengths (in tokens) that they can support. The base model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Optimized variants of the Phi-3 Medium models are published here in [ONNX](https://onnx.ai) format and run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, and Linux, with the precision best suited to each of these targets. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for FP16 CUDA: ONNX model for NVIDIA GPUs. 2. ONNX model for INT4 CUDA: ONNX model for NVIDIA GPUs using int4 quantization via RTN. How do you know which is the best ONNX model for you: - Are you on a Windows machine with GPU? - I don't know → Review this [guide](https://www.microsoft.com/en-us/windows/learning-center/how-to-check-gpu) to see whether you have a GPU in your Windows machine. - Yes → Access the Hugging Face DirectML ONNX models and instructions at [Phi-3-medium-4k-instruct-onnx-directml](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-directml). - No → Do you have a NVIDIA GPU? - I don't know → Review this [guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu) to see whether you have a CUDA-capable GPU. - Yes → Access the Hugging Face CUDA ONNX models and instructions at [Phi-3-medium-4k-instruct-onnx-cuda](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) for NVIDIA GPUs. - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-medium-4k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cpu) Note: Using the Hugging Face CLI, you can download sub folders and not all models if you are limited on disk space. The FP16 model is recommended for larger batch sizes, while the INT4 model optimizes performance for lower batch sizes. Example: ``` # Download just the FP16 model $ huggingface-cli download microsoft/Phi-3-medium-4k-instruct-onnx-cuda --include cuda-fp16/* --local-dir . 
--local-dir-use-symlinks False ``` ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API to wrap several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test this with a [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). ## Hardware Supported The models are tested on: - 1 A100 GPU, SKU: Standard_ND96amsr_A100_v4 (CUDA) Minimum Configuration Required: - CUDA: NVIDIA GPU with [Compute Capability](https://developer.nvidia.com/cuda-gpus) >= 7.0 ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Medium-4K-Instruct model for ONNX Runtime inference. ## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**]( https://aka.ms/phi3-medium-4k-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics ## CUDA Phi-3 Medium-4K-Instruct performs better with ONNX Runtime compared to PyTorch for all batch size, prompt length combinations. For FP16 CUDA, ORT performs up to 5X faster than PyTorch, while with INT4 CUDA, it's up to 10X faster than PyTorch. It is also up to 3X faster than llama.cpp for large batch sizes. The table below shows the average throughput of the first 256 tokens generated (tps) for FP16 and INT4 precisions on CUDA as measured on [1 A100 80GB GPU, SKU: Standard_ND96amsr_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/ndm-a100-v4-series). | Batch Size, Prompt Length | ORT FP16 CUDA | PyTorch Eager FP16 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 47.32 | 14.41 | 3.28 | | 4, 16 | 190.05 | 84.43 | 2.25 | | 16, 16 | 707.68 | 347.52 | 2.04 | | 16, 64 | 698.22 | 342.83 | 2.04 | | Batch Size, Prompt Length | ORT INT4 CUDA | PyTorch Eager INT4 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 115.68 | 14.89 | 7.77 | | 4, 16 | 88.53 | 45.22 | 1.96 | | 16, 16 | 341.8 | 168.36 | 2.03 | ### Package Versions | Pip package name | Version | |------------------|---------| | torch | 2.3.0 | | triton | 2.3.0 | | onnxruntime-gpu | 1.18.0 | | transformers | 4.40.2 | | bitsandbytes | 0.43.1 | ## Appendix ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Sheetal Arun Kadam, Rui Ren, Natalie Kershaw, Parinita Rahi
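To make the generate-tutorial link above concrete, here is a rough sketch of the ONNX Runtime Generate() API as it looked in early onnxruntime-genai releases; the model folder name and chat template are assumptions, the API surface has shifted between versions, and the linked tutorial should be treated as authoritative.

```python
import onnxruntime_genai as og

# Path to a downloaded model folder from this repo (folder name assumed;
# point it at the int4 or fp16 subfolder you fetched with huggingface-cli).
model = og.Model("cuda-int4-rtn-block-32")
tokenizer = og.Tokenizer(model)

# Phi-3 instruct chat template, as documented for the Phi-3 model family.
prompt = "<|user|>\nWhat is an ONNX graph?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

# Early releases exposed a one-shot generate; newer ones use og.Generator
# with an explicit token-by-token loop instead.
output_tokens = model.generate(params)
print(tokenizer.decode(output_tokens[0]))
```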
cm309/distilroberta-base-finetuned-Financial-News-Superior
cm309
2024-05-23T00:01:42Z
198
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "dataset:financial-reports-sec", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-22T23:46:16Z
--- license: apache-2.0 base_model: distilbert/distilroberta-base tags: - generated_from_trainer datasets: - financial-reports-sec model-index: - name: distilroberta-base-finetuned-Financial-News-Superior results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-Financial-News-Superior This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the financial-reports-sec dataset. It achieves the following results on the evaluation set: - Loss: 1.4572 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8137 | 1.0 | 235 | 1.5646 | | 1.6397 | 2.0 | 470 | 1.4806 | | 1.6004 | 3.0 | 705 | 1.4789 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
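The "Intended uses & limitations" section above is empty; as a minimal sketch (not from the original card), the fine-tuned masked LM can be queried through the transformers fill-mask pipeline. The example sentence is illustrative, and RoBERTa-style checkpoints use the `<mask>` token.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="cm309/distilroberta-base-finetuned-Financial-News-Superior",
)

# Print the top predictions for the masked financial term.
for pred in fill("The company reported a quarterly net <mask> of $2.1 billion."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```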
microsoft/Phi-3-medium-4k-instruct-onnx-directml
microsoft
2024-05-23T00:01:26Z
65
9
transformers
[ "transformers", "onnx", "phi3", "text-generation", "ONNX", "DML", "ONNXRuntime", "nlp", "conversational", "custom_code", "arxiv:2306.00978", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T23:02:06Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Medium-4K-Instruct ONNX DirectML models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-medium-4k-instruct](https://aka.ms/phi3-medium-4K-instruct) to accelerate inference with DirectML and ONNX Runtime for your machines with GPUs. Phi-3 Medium is a 14B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and the filtered publicly available websites data, with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the medium version in two variants: [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which are the context lengths (in tokens) that they can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Optimized variants of the Phi-3 Medium models are published here in [ONNX](https://onnx.ai) format and run with [DirectML](https://learn.microsoft.com/en-us/windows/ai/directml/dml-intro). This lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for INT4 DML: ONNX model optimized to run with DirectML and quantized to int4 precision using AWQ*. How do you know which is the best ONNX model for you: - Are you on a Windows machine with GPU? - I don't know → Review this [guide](https://www.microsoft.com/en-us/windows/learning-center/how-to-check-gpu) to see whether you have a GPU in your Windows machine. - Yes → Access the Hugging Face DirectML ONNX models and instructions at [Phi-3-medium-4k-instruct-onnx-directml](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-directml). - No → Do you have a NVIDIA GPU? - I don't know → Review this [guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu) to see whether you have a CUDA-capable GPU. - Yes → Access the Hugging Face CUDA ONNX models and instructions at [Phi-3-medium-4k-instruct-onnx-cuda](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) for NVIDIA GPUs. - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-medium-4k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cpu) ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API to wrap several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test this with a [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). 
## Hardware Supported The model has been tested on: - GPU SKU: RTX 4090 (DirectML) Minimum Configuration Required: - Windows: DirectX 12-capable GPU and a minimum of 10GB of combined RAM ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Medium-4K-Instruct model for ONNX Runtime inference. ## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**]( https://aka.ms/phi3-medium-4k-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics ## DirectML We measured the performance of DirectML and ONNX Runtime's new Generate() API with Phi-3 medium quantized with Activation-Aware Quantization [AWQ](https://arxiv.org/abs/2306.00978) and with a block size of 128 on Windows. Our test machine had an NVIDIA GeForce RTX 4090 GPU and an Intel Core i9-13900K CPU. DirectML lets developers not only achieve great performance but also deploy models across the entire Windows ecosystem with support from AMD, Intel, and NVIDIA. Best of all, AWQ means that developers get this scale while also maintaining high model accuracy. Stay tuned for additional performance improvements in the coming weeks thanks to optimized drivers from our hardware partners, along with additional updates to the ONNX Runtime Generate() API. | Batch Size, Prompt Length | Block Size = 32 | Block Size = 128 | |---------------------------|-----------------|------------------| | 1, 16 | 66.36 | 72.39 | #### Package Versions | Pip package name | Version | |------------------|---------| | torch | 2.2.0 | | triton | 2.2.0 | | onnxruntime-gpu | 1.18.0 | | transformers | 4.39.0 | | bitsandbytes | 0.42.0 | ## Appendix ### Activation Aware Quantization AWQ works by identifying the top 1% most salient weights that are most important for maintaining accuracy and quantizing the remaining 99% of weights. This leads to less accuracy loss from quantization compared to many other quantization techniques. For more on AWQ see [here](https://arxiv.org/abs/2306.00978). ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Sheetal Arun Kadam, Natalie Kershaw, Parinita Rahi, Patrice Vignola, Xiang Zhang, Chai Chaoweeraprasit, Logan Iyer, Vicente Rivera, Jacques Van Rhyn
microsoft/Phi-3-medium-128k-instruct-onnx-cpu
microsoft
2024-05-23T00:01:08Z
95
11
transformers
[ "transformers", "onnx", "phi3", "text-generation", "ONNX", "DML", "ONNXRuntime", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T23:02:41Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Medium-128K-Instruct ONNX CPU models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-medium-128k-instruct](https://aka.ms/phi3-medium-128K-instruct) to accelerate inference with ONNX Runtime for your CPU. Phi-3 Medium is a 14B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and the filtered publicly available websites data, with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the medium version in two variants: [4K](https://huggingface.co/microsoft/Phi-3-medium-4K-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which are the context lengths (in tokens) that they can support. The base model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Optimized variants of the Phi-3 Medium models are published here in [ONNX](https://onnx.ai) format and run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, and Linux, with the precision best suited to each of these targets. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for INT4 CPU: ONNX model for CPUs using int4 quantization via RTN. How do you know which is the best ONNX model for you: - Are you on a Windows machine with GPU? - I don't know → Review this [guide](https://www.microsoft.com/en-us/windows/learning-center/how-to-check-gpu) to see whether you have a GPU in your Windows machine. - Yes → Access the Hugging Face DirectML ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-directml](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-directml). - No → Do you have a NVIDIA GPU? - I don't know → Review this [guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu) to see whether you have a CUDA-capable GPU. - Yes → Access the Hugging Face CUDA ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-cuda](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) for NVIDIA GPUs. - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-medium-128k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cpu). ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API to wrap several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test this with a [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). 
## Hardware Supported The models are tested on: - Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz Minimum Configuration Required: - CPU machine with 16GB RAM ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Medium-128K-Instruct model for ONNX Runtime inference. ## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**](https://aka.ms/phi3-medium-128K-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics The model runs at ~20 tokens/sec on an Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz. ## Appendix ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Akshay Sonawane, Sheetal Arun Kadam, Rui Ren, Edward Chen, Scott McKay, Emma Ning, Natalie Kershaw, Parinita Rahi
microsoft/Phi-3-medium-128k-instruct-onnx-cuda
microsoft
2024-05-23T00:00:51Z
128
23
transformers
[ "transformers", "onnx", "phi3", "text-generation", "ONNX", "DML", "ONNXRuntime", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T23:03:10Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Medium-128K-Instruct ONNX CUDA models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-medium-128k-instruct](https://aka.ms/phi3-medium-128K-instruct) to accelerate inference with ONNX Runtime for your machines with NVIDIA GPUs. Phi-3 Medium is a 14B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the medium version in two variants: [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which are the context lengths (in tokens) that they can support. The base model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up. Optimized variants of the Phi-3 Medium models are published here in [ONNX](https://onnx.ai) format and run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, and Linux, with the precision best suited to each of these targets. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for FP16 CUDA: ONNX model for NVIDIA GPUs. 2. ONNX model for INT4 CUDA: ONNX model for NVIDIA GPUs using int4 quantization via RTN. How do you know which is the best ONNX model for you? - Are you on a Windows machine with a GPU? - I don't know → Review this [guide](https://www.microsoft.com/en-us/windows/learning-center/how-to-check-gpu) to see whether you have a GPU in your Windows machine. - Yes → Access the Hugging Face DirectML ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-directml](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-directml). - No → Do you have an NVIDIA GPU? - I don't know → Review this [guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu) to see whether you have a CUDA-capable GPU. - Yes → Access the Hugging Face CUDA ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-cuda](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) for NVIDIA GPUs. - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-medium-128k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cpu). Note: Using the Hugging Face CLI, you can download individual subfolders instead of all models if you are limited on disk space. The FP16 model is recommended for larger batch sizes, while the INT4 model optimizes performance for lower batch sizes. Example: ``` # Download just the FP16 model $ huggingface-cli download microsoft/Phi-3-medium-128k-instruct-onnx-cuda --include cuda-fp16/* --local-dir .
--local-dir-use-symlinks False ``` ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test this with a [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). ## Hardware Supported The models are tested on: - 1 A100 GPU, SKU: Standard_ND96amsr_A100_v4 (CUDA) Minimum Configuration Required: - CUDA: NVIDIA GPU with [Compute Capability](https://developer.nvidia.com/cuda-gpus) >= 7.0 ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Medium-128K-Instruct model for ONNX Runtime inference. ## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**](https://aka.ms/phi3-medium-128K-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics ### CUDA Phi-3 Medium-128K-Instruct performs better with ONNX Runtime compared to PyTorch for all batch size and prompt length combinations. For FP16 CUDA, ORT performs up to 3X faster than PyTorch, while with INT4 CUDA, it's up to 8X faster than PyTorch. The table below shows the average throughput of the first 256 tokens generated (tps) for FP16 and INT4 precisions on CUDA as measured on [1 A100 80GB GPU, SKU: Standard_ND96amsr_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/ndm-a100-v4-series). | Batch Size, Prompt Length | ORT FP16 CUDA | PyTorch Eager FP16 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 47.32 | 14.41 | 3.28 | | 4, 16 | 190.05 | 84.43 | 2.25 | | 16, 16 | 707.68 | 347.52 | 2.04 | | Batch Size, Prompt Length | ORT INT4 CUDA | PyTorch Eager INT4 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 115.68 | 14.89 | 7.77 | | 4, 16 | 88.53 | 45.22 | 1.96 | | 16, 16 | 341.8 | 168.36 | 2.03 | ### Package Versions | Pip package name | Version | |------------------|---------| | torch | 2.3.0 | | triton | 2.3.0 | | onnxruntime-gpu | 1.18.0 | | transformers | 4.40.2 | | bitsandbytes | 0.43.1 | ## Appendix ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Sheetal Arun Kadam, Rui Ren, Natalie Kershaw, Parinita Rahi
hgnoi/o7NNpfy8U4rgpK0G
hgnoi
2024-05-23T00:00:06Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T23:58:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/WBiNd3ef8dqTTwEv
hgnoi
2024-05-22T23:52:52Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T23:51:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RodrigoLimaRFL/distil-large-nurc-sp
RodrigoLimaRFL
2024-05-22T23:52:41Z
21
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "pt", "dataset:RodrigoLimaRFL/NURC-SP", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-22T00:58:24Z
--- license: mit datasets: - RodrigoLimaRFL/NURC-SP language: - pt ---
Sorour/llama3_cls_finred
Sorour
2024-05-22T23:44:36Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T20:17:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AleRothermel/my-sentiments-model
AleRothermel
2024-05-22T23:42:53Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T02:50:03Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-sentiments-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-sentiments-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3284 - Accuracy: 0.8876 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4981 | 1.0 | 625 | 0.3775 | 0.8654 | | 0.4093 | 2.0 | 1250 | 0.3348 | 0.8862 | | 0.3153 | 3.0 | 1875 | 0.3284 | 0.8876 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
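The card does not yet include a usage snippet; a minimal sketch with the standard 🤗 Transformers `pipeline` API follows (the example sentence is arbitrary):

```python
from transformers import pipeline

# Load the fine-tuned BERT checkpoint for sentiment classification
classifier = pipeline("text-classification", model="AleRothermel/my-sentiments-model")

# Returns a list like [{'label': ..., 'score': ...}]
print(classifier("I really enjoyed this movie!"))
```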
kalo-team/llama3-4x8b-pythonT2_step_final
kalo-team
2024-05-22T23:38:54Z
7
13
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "code", "conversational", "en", "arxiv:2303.01610", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T11:11:14Z
--- license: llama3 language: - en tags: - code --- # 70b Distillation Experiment This is not the full-fledged run that I plan to do for a large-scale distillation of Llama3 70b. Instead, it's a preliminary test train of the custom distillation trainer, where we target KL divergence from the larger Llama3 70b teacher model onto 4x8b (the student). I'm releasing it here mainly so that people who are interested can tinker with it / finetune it to see how it behaves before I am ready to do a larger run. # Training details Each of the 8b expert MLP layers is duplicated 3x from the original Llama3 8b in a typical Mixtral-style Sparse MoE layout. Over the course of the training run, the expert selection count was gradually increased from the minimum (topk=1) to the maximum (topk=4), as in [Sparse MoE as the New Dropout](https://arxiv.org/abs/2303.01610). This was done with a stochastic / randomized top_k expert selection with **frozen gate layers**, as recommended in the paper. LR = 2e-6, on ~2.5 million tokens of Python instruct data, with each sample around ~8k tokens (~300 samples in total). Despite the use of instruct data, the model does not necessarily behave like an instruct model, as the training process involves mimicking a larger base model's distributions over that data. One epoch of distillation on 70b logprobs, using the topk=200 logits from the fp16 Llama3-70b. # Evals ## llama3-4x8b-pythonT2_step_final * mmlu: 65.10 (66.69) - 0.97x * arc: 57.94 (59.47) - 0.97x * hellaswag: 81.93 (82.09) - 0.99x * winogrande: 77.03 (77.35) - 0.99x * gsm8k: 50.95 (45.79) - 1.11x * truthfulqa-mc1: 27.66 * truthfulqa-mc2: 44.53 (43.9) - 1.01x * humaneval+: 32.9 (29.3) - 1.12x * humaneval: 37.2 (33.5) - 1.11x # Current Conclusions Going by evals (and evals alone), full fine-tuning seems to have caused some degree of mild catastrophic forgetting outside of the domains that were specifically distilled, as you might expect from the lack of data. I plan to remedy this with lower LRs and/or bigger batch sizes, and of course, a much larger dataset than the limited selection seen here. The plan is to do at least 1 billion unique tokens; we are still conducting custom tests for alternative loss functions (i.e., approaches in the vein of a weighted cross-entropy loss function used in tandem with KL divergence). ![Embedded Image](https://cdn.discordapp.com/attachments/1201939812488052757/1242759414033813565/image.png?ex=664faa25&is=664e58a5&hm=a15028990e0f6dfba573ea2a00207daa01e953b19c2637e40546d5e594b51b36&)
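For readers curious what distilling against stored top-k teacher logprobs can look like, here is a minimal PyTorch sketch of a KL-divergence objective restricted to the teacher's top-k support. This is an illustration under stated assumptions (pre-extracted teacher logprobs and token indices), not the custom trainer used for this run:

```python
import torch.nn.functional as F

def topk_kl_loss(student_logits, teacher_topk_logprobs, teacher_topk_ids):
    """KL(teacher || student), restricted to the teacher's top-k tokens.

    student_logits:        [batch, seq, vocab]
    teacher_topk_logprobs: [batch, seq, k]  (k=200 in this run)
    teacher_topk_ids:      [batch, seq, k]
    """
    student_logprobs = F.log_softmax(student_logits, dim=-1)
    # Gather the student's logprobs at the teacher's top-k token indices
    student_at_topk = student_logprobs.gather(-1, teacher_topk_ids)
    teacher_probs = teacher_topk_logprobs.exp()
    return (teacher_probs * (teacher_topk_logprobs - student_at_topk)).sum(-1).mean()
```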
TRI-ML/mamba-7b-rw
TRI-ML
2024-05-22T23:38:27Z
80
53
openlm
[ "openlm", "pytorch", "safetensors", "mamba", "linear", "text-generation", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2312.00752", "arxiv:2405.06640", "license:apache-2.0", "model-index", "region:us" ]
text-generation
2024-04-08T17:38:07Z
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb pipeline_tag: text-generation library_name: openlm tags: - mamba - linear language: - en model-index: - name: mamba-7b results: - task: type: text-generation dataset: type: MMLU name: MMLU metrics: - name: accuracy type: accuracy value: 33.3 verified: false - task: type: text-generation dataset: type: HellaSwag name: HellaSwag metrics: - name: accuracy type: accuracy value: 77.9 verified: false - task: type: text-generation dataset: type: PIQA name: PIQA metrics: - name: accuracy type: accuracy value: 81.0 verified: false - task: type: text-generation dataset: type: Winogrande name: Winogrande metrics: - name: accuracy type: accuracy value: 71.8 verified: false - task: type: text-generation dataset: type: ai2_arc name: ARC-E metrics: - name: accuracy type: accuracy value: 77.5 verified: false - task: type: text-generation dataset: type: ai2_arc name: ARC-C metrics: - name: accuracy type: accuracy value: 46.7 verified: false --- # Mamba-7B This is a 7B parameter model with the [Mamba](https://arxiv.org/abs/2312.00752) architecture, trained on multiple epochs (1.2T tokens) of the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) dataset. Mamba is a state-space model that, unlike the standard transformer architecture, does not use self-attention. It has shown strong performance on various natural language benchmarks. To date, the largest publicly released pure-Mamba pretrain is [Mamba-2.8B](https://huggingface.co/state-spaces/mamba-2.8b). We follow their training recipe and release our version of Mamba-7B. This model was trained as a baseline for our paper [Linearizing Large Language Models](https://arxiv.org/abs/2405.06640). ## Model Details - **Developed by**: [Toyota Research Institute](https://www.tri.global/our-work/robotics) - **Model Type**: This is an auto-regressive language model based on the [Mamba](https://arxiv.org/abs/2312.00752) architecture. - **Dataset**: Trained on 1.2T tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - **Tokenizer**: `EleutherAI/gpt-neox-20b` - **Library**: [OpenLM](https://github.com/mlfoundations/open_lm/) - **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). | Parameters | Hidden Size | Layers | Vocab Size | Sequence Length | |------------|-------------|--------| ---------- | --------------- | | 7B | 4096 | 64 | 50432 | 2048 | ## Training Details - Mamba-7B was trained using AWS SageMaker on 128 H100 80GB GPUs. - Training began in March 2024 and lasted three weeks. | **Hyperparameter** | **Value** | |--------------------|------------| | Precision | `bfloat16` | | Optimizer | AdamW | | Learning rate | 3e-4 | | LR cooldown end | 1e-5 | | Warmup steps | 2000 | | Z-loss | 1e-4 | | Batch size | 2M | ## Usage This model was trained using [OpenLM](https://github.com/mlfoundations/open_lm/). The weights have been converted to be compatible with HuggingFace.
```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tri-ml/mamba-7b-rw") model = AutoModelForCausalLM.from_pretrained("tri-ml/mamba-7b-rw") inputs = tokenizer(["The Toyota Supra"], return_tensors="pt") gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1} output = model.generate(inputs['input_ids'], **gen_kwargs) output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True) print(output) # The Toyota Supra is a sports car that has been in production since 1978. The car was discontinued in 2002, but it has recently been revived and will be available again in 2020. The Supra has always been known for its powerful engines and agile handling. ``` ## Performance Evaluation Our evaluations were done using the [Eleuther LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) repo. Below we report the performance of Mamba 7B compared to other base models. <style> .evalTable th { background: white; } .evalTable tr:nth-child(1) { background: #f3f3f3; } .evalTable tr:nth-child(2) { background: #f3f3f3; } .evalTable tr:nth-child(7) { background: #f3f3f3; } </style> <div class="evalTable"> | | HellaSwag | PIQA | Winogrande | ARC-E | ARC-C | MMLU (5-shot) | | ----------------- | ------------- | -------- | -------------- | --------- | --------- | ---------------- | | Mamba-1.4B | 59.0 | 73.9 | 61.4 | 65.5 | 32.9 | 25.2 | | Mamba-2.8B | 71.0 | 78.1 | 65.9 | 68.2 | 41.7 | 26.2 | | RWKV5-1.7T-7B | 73.0 | 78.6 | 72.9 | 75.8 | 45.6 | 34.9 | | Llama2-7B | 76.0 | 79.1 | 69.1 | 76.3 | 46.3 | 45.9 | | Gemma-7B | 80.7 | 81.9 | 73.7 | 81.1 | 53.2 | 62.9 | | Mistral-7B | 81.0 | 82.1 | 74.0 | 80.9 | 53.8 | 62.4 | | **Mamba-7B** | 77.9 | 81.0 | 71.8 | 77.5 | 46.7 | 33.3 | </div> ## How to Cite If you use this model, please cite our paper on [Linearizing Large Language Models](https://arxiv.org/abs/2405.06640). ``` @article{Mercat2024Linearizing, title={Linearizing Large Language Models}, author={Jean Mercat and Igor Vasiljevic and Sedrick Keh and Kushal Arora and Achal Dave and Adrien Gaidon and Thomas Kollar}, journal={arXiv preprint arXiv:2405.06640}, year={2024} } ``` ## Citations Mamba ``` @article{mamba, title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces}, author={Gu, Albert and Dao, Tri}, journal={arXiv preprint arXiv:2312.00752}, year={2023} } ``` OpenLM ``` @misc{open_lm, author = {Gururangan, Suchin and Wortsman, Mitchell and Gadre, Samir Yitzhak and Dave, Achal and Kilian, Maciej and Shi, Weijia and Mercat, Jean and Smyrnis, Georgios and Ilharco, Gabriel and Jordan, Matt and Heckel, Reinhard and Dimakis, Alex and Farhadi, Ali and Shankar, Vaishaal and Schmidt, Ludwig}, title = {{open_lm}: a minimal but performative language modeling (LM) repository}, year = {2023}, note = {GitHub repository}, url = {https://github.com/mlfoundations/open_lm/} } ```
Sorour/cls_finred_llama3_v1
Sorour
2024-05-22T23:34:18Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-05-22T22:37:58Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: cls_finred_llama3_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cls_finred_llama3_v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.4061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7071 | 0.1116 | 20 | 0.6759 | | 0.6162 | 0.2232 | 40 | 0.6174 | | 0.6143 | 0.3347 | 60 | 0.5845 | | 0.5753 | 0.4463 | 80 | 0.5507 | | 0.5712 | 0.5579 | 100 | 0.5225 | | 0.5216 | 0.6695 | 120 | 0.5105 | | 0.4931 | 0.7810 | 140 | 0.4920 | | 0.482 | 0.8926 | 160 | 0.4733 | | 0.4562 | 1.0042 | 180 | 0.4624 | | 0.3635 | 1.1158 | 200 | 0.4631 | | 0.3619 | 1.2273 | 220 | 0.4538 | | 0.351 | 1.3389 | 240 | 0.4452 | | 0.3458 | 1.4505 | 260 | 0.4392 | | 0.3397 | 1.5621 | 280 | 0.4290 | | 0.298 | 1.6736 | 300 | 0.4278 | | 0.2902 | 1.7852 | 320 | 0.4196 | | 0.3425 | 1.8968 | 340 | 0.4061 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
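Because this repository contains a PEFT (LoRA) adapter rather than merged weights, inference requires attaching it to the base model. A minimal sketch, assuming you have access to the gated Llama 3 base checkpoint:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires an accepted license
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter weights from this repository
model = PeftModel.from_pretrained(base, "Sorour/cls_finred_llama3_v1")
```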
Zlovoblachko/en_L1_FullGen_xlm
Zlovoblachko
2024-05-22T23:31:52Z
0
0
spacy
[ "spacy", "en", "region:us" ]
null
2024-05-22T23:30:42Z
--- tags: - spacy language: - en model-index: - name: en_L1_FullGen_xlm results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_L1_FullGen_xlm` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `spancat` | | **Components** | `transformer`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | n/a | ### Label Scheme <details> <summary>View label scheme (5 labels for 1 component)</summary> | Component | Labels | | --- | --- | | **`spancat`** | `Tense semantics`, `Synonyms`, `Copying expression`, `Word form transmission`, `Transliteration` | </details> ### Accuracy | Type | Score | | --- | --- | | `SPANS_SC_F` | 83.14 | | `SPANS_SC_P` | 88.18 | | `SPANS_SC_R` | 78.64 | | `TRANSFORMER_LOSS` | 4605.04 | | `SPANCAT_LOSS` | 182839.00 |
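To inspect the predicted spans, a minimal sketch follows. It assumes the pipeline has been installed as a package from this repo and that the spancat component writes to spaCy's default `sc` spans key; the example sentence is arbitrary:

```python
import spacy

# Assumes the packaged pipeline from this repo has been pip-installed
nlp = spacy.load("en_L1_FullGen_xlm")

doc = nlp("An example learner sentence to analyse.")
for span in doc.spans["sc"]:  # "sc" is the default spans key for spancat
    print(span.text, "->", span.label_)
```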
baek26/all_5137_bart-all_rl
baek26
2024-05-22T23:30:32Z
49
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-05-22T23:29:53Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="baek26/all_5137_bart-all_rl") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("baek26/all_5137_bart-all_rl") model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26/all_5137_bart-all_rl") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
NatanGarMar/entregable3
NatanGarMar
2024-05-22T23:25:56Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-05-22T23:25:49Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
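Until the card is completed, one plausible way to load the model is the Hub's fastai integration; a minimal sketch, assuming the repo contains a standard exported fastai `Learner`:

```python
from huggingface_hub import from_pretrained_fastai

# Downloads the repo and reconstructs the exported fastai Learner
learner = from_pretrained_fastai("NatanGarMar/entregable3")
```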
anniev18/whisper-small-amh
anniev18
2024-05-22T23:25:44Z
94
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "amh", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-22T22:28:02Z
--- language: - amh license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: whisper-small-amh-fibal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-amh-fibal This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the tololupe_superb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
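A minimal inference sketch with the 🤗 Transformers `pipeline` API; the audio filename is a placeholder for any local Amharic recording:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anniev18/whisper-small-amh")

# "amharic_sample.wav" is a placeholder path to a local audio file
print(asr("amharic_sample.wav")["text"])
```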
microsoft/Phi-3-small-128k-instruct-onnx-cuda
microsoft
2024-05-22T23:24:27Z
26
13
transformers
[ "transformers", "onnx", "phi3small", "text-generation", "ONNX", "DML", "ONNXRuntime", "phi3", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T22:49:27Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Small-128K-Instruct ONNX CUDA models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-small-128k-instruct](https://aka.ms/phi3-Small-128K-instruct) to accelerate inference with ONNX Runtime for your machines with NVIDIA GPUs. Phi-3 Small is a 7B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the small version in two variants: [8K](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-small-128k-instruct), which are the context lengths (in tokens) that they can support. The base model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Small-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up. Optimized variants of the Phi-3 Small models are published here in [ONNX](https://onnx.ai) format and run with [ONNX Runtime](https://onnxruntime.ai/) on GPU across devices, including server platforms, Windows, and Linux. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for FP16 CUDA: ONNX model for NVIDIA GPUs. 2. ONNX model for INT4 CUDA: ONNX model for NVIDIA GPUs using int4 quantization via RTN. Note: Using the Hugging Face CLI, you can download individual subfolders instead of all models if you are limited on disk space. The FP16 model is recommended for larger batch sizes, while the INT4 model optimizes performance for lower batch sizes. Example: ``` # Download just the FP16 model $ huggingface-cli download microsoft/Phi-3-small-128k-instruct-onnx-cuda --include cuda-fp16/* --local-dir . --local-dir-use-symlinks False ``` ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test the models with this [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). ## Hardware Supported The ONNX models are tested on: - 1 A100 GPU, SKU: Standard_ND96amsr_A100_v4 (CUDA) Minimum Configuration Required: - CUDA: NVIDIA GPU with [Compute Capability](https://developer.nvidia.com/cuda-gpus) >= 7.5 ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Small-128K-Instruct model for ONNX Runtime inference.
## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**](https://aka.ms/phi3-Small-128K-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics Phi-3 Small-128K-Instruct performs better with ONNX Runtime compared to PyTorch for all batch size and prompt length combinations. For FP16 CUDA, ORT performs up to 5X faster than PyTorch, while with INT4 CUDA, it's up to 5.9X faster than PyTorch. The table below shows the average throughput of the first 256 tokens generated (tps) for FP16 and INT4 precisions on CUDA as measured on [1 A100 80GB GPU, SKU: Standard_ND96amsr_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/ndm-a100-v4-series). | Batch Size, Prompt Length | ORT FP16 CUDA | PyTorch Eager FP16 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 73.60 | 14.88 | 4.95 | | 4, 16 | 287.60 | 66.25 | 4.34 | | 16, 16 | 1025.44 | 66.25 | 4.38 | | Batch Size, Prompt Length | ORT INT4 CUDA | PyTorch Eager INT4 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 68.26 | 11.57 | 5.90 | | 4, 16 | 151.79 | 40.18 | 3.78 | | 16, 16 | 577.41 | 148.17 | 3.90 | ### Package Versions | Pip package name | Version | |------------------|---------| | torch | 2.3.0 | | triton | 2.3.0 | | onnxruntime-gpu | 1.18.0 | | transformers | 4.40.2 | | bitsandbytes | 0.43.1 | ## Appendix ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Tianlei Wu, Sheetal Arun Kadam, Rui Ren, Baiju Meswani, Natalie Kershaw, Parinita Rahi
microsoft/Phi-3-small-8k-instruct-onnx-cuda
microsoft
2024-05-22T23:24:07Z
29
11
transformers
[ "transformers", "onnx", "phi3small", "text-generation", "ONNX", "DML", "ONNXRuntime", "phi3", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2024-05-19T22:48:55Z
--- license: mit pipeline_tag: text-generation tags: - ONNX - DML - ONNXRuntime - phi3 - nlp - conversational - custom_code inference: false --- # Phi-3 Small-8K-Instruct ONNX CUDA models <!-- Provide a quick summary of what the model is/does. --> This repository hosts the optimized versions of [Phi-3-small-8k-instruct](https://aka.ms/phi3-Small-8k-instruct) to accelerate inference with ONNX Runtime for your machines with NVIDIA GPUs. Phi-3 Small is a 7B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the small version in two variants: [8K](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-small-128k-instruct), which are the context lengths (in tokens) that they can support. The base model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Small-8K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up. Optimized variants of the Phi-3 Small models are published here in [ONNX](https://onnx.ai) format and run with [ONNX Runtime](https://onnxruntime.ai/) on GPU across devices, including server platforms, Windows, and Linux. ## ONNX Models Here are some of the optimized configurations we have added: 1. ONNX model for FP16 CUDA: ONNX model for NVIDIA GPUs. 2. ONNX model for INT4 CUDA: ONNX model for NVIDIA GPUs using int4 quantization via RTN. Note: Using the Hugging Face CLI, you can download individual subfolders instead of all models if you are limited on disk space. The FP16 model is recommended for larger batch sizes, while the INT4 model optimizes performance for lower batch sizes. Example: ``` # Download just the FP16 model $ huggingface-cli download microsoft/Phi-3-small-8k-instruct-onnx-cuda --include cuda-fp16/* --local-dir . --local-dir-use-symlinks False ``` ## How to Get Started with the Model To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](http://aka.ms/generate-tutorial). You can also test the models with this [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app). ## Hardware Supported The ONNX models are tested on: - 1 A100 GPU, SKU: Standard_ND96amsr_A100_v4 (CUDA) Minimum Configuration Required: - CUDA: NVIDIA GPU with [Compute Capability](https://developer.nvidia.com/cuda-gpus) >= 7.5 ### Model Description - **Developed by:** Microsoft - **Model type:** ONNX - **Language(s) (NLP):** Python, C, C++ - **License:** MIT - **Model Description:** This is a conversion of the Phi-3 Small-8K-Instruct model for ONNX Runtime inference.
## Additional Details - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations) - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april) - [**Phi-3 Model Card**](https://aka.ms/phi3-Small-8K-instruct) - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report) - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai) ## Performance Metrics Phi-3 Small-8K-Instruct performs better with ONNX Runtime compared to PyTorch for all batch size and prompt length combinations. For FP16 CUDA, ORT performs up to 4X faster than PyTorch, while with INT4 CUDA, it's up to 10.9X faster than PyTorch. The table below shows the average throughput of the first 256 tokens generated (tps) for FP16 and INT4 precisions on CUDA as measured on [1 A100 80GB GPU, SKU: Standard_ND96amsr_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/ndm-a100-v4-series). | Batch Size, Prompt Length | ORT FP16 CUDA | PyTorch Eager FP16 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 74.62 | 16.81 | 4.44 | | 4, 16 | 290.36 | 65.56 | 4.43 | | 16, 16 | 1036.93 | 267.33 | 3.88 | | Batch Size, Prompt Length | ORT INT4 CUDA | PyTorch Eager INT4 CUDA | Speed Up ORT/PyTorch | |---------------------------|---------------|-------------------------|----------------------| | 1, 16 | 140.68 | 12.93 | 10.88 | | 4, 16 | 152.90 | 44.04 | 3.47 | | 16, 16 | 582.07 | 160.57 | 3.62 | ### Package Versions | Pip package name | Version | |------------------|---------| | torch | 2.3.0 | | triton | 2.3.0 | | onnxruntime-gpu | 1.18.0 | | transformers | 4.40.2 | | bitsandbytes | 0.43.1 | ## Appendix ## Model Card Contact parinitarahi, kvaishnavi, natke ## Contributors Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Tianlei Wu, Sheetal Arun Kadam, Rui Ren, Baiju Meswani, Natalie Kershaw, Parinita Rahi
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-825318
fine-tuned
2024-05-22T23:22:02Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Finance", "Sentiment", "NLP", "Analysis", "Opinion", "custom_code", "en", "dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-825318", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-22T23:21:48Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-825318 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Finance - Sentiment - NLP - Analysis - Opinion --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: financial sentiment analysis and opinion-based QA ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-825318', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
CarloSgara/t5_large_model
CarloSgara
2024-05-22T23:21:17Z
3
0
sentence-transformers
[ "sentence-transformers", "safetensors", "t5", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T23:21:04Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # CarloSgara/t5_large_model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('CarloSgara/t5_large_model') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=CarloSgara/t5_large_model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1062 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 1024, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Intel/deepseek-coder-1.3b_base_ov_int8
Intel
2024-05-22T23:18:34Z
130
1
transformers
[ "transformers", "openvino", "llama", "text-generation", "en", "arxiv:2401.14196", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T07:01:00Z
--- license: mit language: - en --- # Deepseek-coder-1.3b base Model Card The original source of this model is: [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) This model was optimized and converted to the OpenVINO Intermediate Representation (IR) format using the Optimum CLI. The weights were compressed to INT8 by passing `--weight-format int8` when exporting the model from Hugging Face. Intended to be used with: - [OpenVINO Code Completion - a VisualStudioCode extension for AI code completion with OpenVINO](https://marketplace.visualstudio.com/items?itemName=OpenVINO.openvino-code-completion) ## Original Model Details - **Model type:** text generation model - **Language(s):** English - **License:** This model is licensed under the MIT License. The use of the DeepSeek Coder model is subject to the model License. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details. - **Model Summary:** deepseek-coder-1.3b-base is a 1.3B parameter model with Multi-Head Attention trained on 1 trillion tokens. - **Resources for more information:** [deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), [Paper](https://arxiv.org/abs/2401.14196). # Uses These models are pre-trained on a high-quality project-level code corpus and employ a fill-in-the-blank task with a 16K window to enhance code generation and infilling. ## Direct Use DeepSeek-Coder models are under a permissive license that allows for both research and unrestricted commercial use. Possible tasks include: - Code Completion - Code Insertion - Repository Level Code Completion ### Use-based restrictions You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ### Intel’s Human Rights Disclaimer: Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. 
Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
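For reference, a minimal loading sketch with `optimum-intel` (assuming `optimum[openvino]` and `transformers` are installed, and that the repository contains the exported IR files):

```python
# Minimal sketch: run the INT8 OpenVINO IR export with optimum-intel.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Intel/deepseek-coder-1.3b_base_ov_int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # inference runs through the OpenVINO runtime

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```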
euunn/llama2_finetuned
euunn
2024-05-22T23:08:02Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T23:07:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tdooms/ts-tokenizer-1024
tdooms
2024-05-22T23:08:01Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-06T11:28:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neopolita/phi-3-medium-4k-instruct-gguf
neopolita
2024-05-22T23:00:16Z
14
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-22T20:39:13Z
--- {} --- # GGUF quants for [**microsoft/Phi-3-medium-4k-instruct**](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors. * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0; however, it has quicker inference than the q5 models. * `q4_k_s`: Uses Q4_K for all tensors. * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K. * `q5_0`: Higher accuracy, higher resource usage, and slower inference. * `q5_1`: Even higher accuracy and resource usage, and slower inference. * `q5_k_s`: Uses Q5_K for all tensors. * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K. * `q6_k`: Uses Q8_K for all tensors. * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
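These quants can be run with any llama.cpp-based application; below is a minimal sketch using the `llama-cpp-python` bindings. The `.gguf` filename is a placeholder for whichever quant file you download from this repository:

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-3-medium-4k-instruct.q4_k_m.gguf",  # placeholder filename
    n_ctx=4096,  # matches the model's 4k trained context
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```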
ivillar/xho_finetune
ivillar
2024-05-22T22:46:43Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:ml-superb-subset", "base_model:Akashpb13/Swahili_xlsr", "base_model:finetune:Akashpb13/Swahili_xlsr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-22T22:09:18Z
--- license: apache-2.0 base_model: Akashpb13/Swahili_xlsr tags: - generated_from_trainer datasets: - ml-superb-subset metrics: - wer model-index: - name: xho_finetune results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: ml-superb-subset type: ml-superb-subset config: xho split: test args: xho metrics: - name: Wer type: wer value: 53.510895883777245 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xho_finetune This model is a fine-tuned version of [Akashpb13/Swahili_xlsr](https://huggingface.co/Akashpb13/Swahili_xlsr) on the ml-superb-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.5370 - Wer: 53.5109 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.6e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 25.5184 | 0.7692 | 10 | 24.2275 | 100.0 | | 14.5363 | 1.5385 | 20 | 9.8357 | 100.0 | | 4.5811 | 2.3077 | 30 | 3.8367 | 100.0 | | 3.4822 | 3.0769 | 40 | 3.3922 | 100.0 | | 3.2732 | 3.8462 | 50 | 3.2398 | 100.0 | | 3.1796 | 4.6154 | 60 | 3.1705 | 100.0 | | 3.1504 | 5.3846 | 70 | 3.1419 | 100.0 | | 3.1119 | 6.1538 | 80 | 3.1084 | 100.0 | | 3.0789 | 6.9231 | 90 | 3.0735 | 100.0 | | 3.0619 | 7.6923 | 100 | 3.0590 | 100.0 | | 3.0298 | 8.4615 | 110 | 3.0247 | 100.0 | | 2.9933 | 9.2308 | 120 | 2.9716 | 100.0 | | 2.9079 | 10.0 | 130 | 2.8647 | 100.0 | | 2.8414 | 10.7692 | 140 | 2.7931 | 100.0 | | 2.6939 | 11.5385 | 150 | 2.5932 | 100.0 | | 2.3274 | 12.3077 | 160 | 2.1000 | 99.7579 | | 1.7068 | 13.0769 | 170 | 1.4580 | 93.4625 | | 1.206 | 13.8462 | 180 | 1.1027 | 83.0508 | | 0.9587 | 14.6154 | 190 | 0.9152 | 79.4189 | | 0.7806 | 15.3846 | 200 | 0.8122 | 69.7337 | | 0.7118 | 16.1538 | 210 | 0.7445 | 69.0073 | | 0.6814 | 16.9231 | 220 | 0.6945 | 62.9540 | | 0.5709 | 17.6923 | 230 | 0.6787 | 67.5545 | | 0.5653 | 18.4615 | 240 | 0.6758 | 62.2276 | | 0.5437 | 19.2308 | 250 | 0.6511 | 60.7748 | | 0.5092 | 20.0 | 260 | 0.6237 | 62.7119 | | 0.4239 | 20.7692 | 270 | 0.6000 | 61.5012 | | 0.4355 | 21.5385 | 280 | 0.5899 | 59.8063 | | 0.4456 | 22.3077 | 290 | 0.5960 | 59.3220 | | 0.3986 | 23.0769 | 300 | 0.5764 | 56.6586 | | 0.3856 | 23.8462 | 310 | 0.5801 | 55.9322 | | 0.3607 | 24.6154 | 320 | 0.5682 | 57.6271 | | 0.358 | 25.3846 | 330 | 0.5675 | 55.9322 | | 0.3452 | 26.1538 | 340 | 0.5630 | 57.8692 | | 0.3289 | 26.9231 | 350 | 0.5515 | 57.8692 | | 0.353 | 27.6923 | 360 | 0.5621 | 57.3850 | | 0.2907 | 28.4615 | 370 | 0.5486 | 55.2058 | | 0.3237 | 29.2308 | 380 | 0.5445 | 54.4794 | | 0.3202 | 30.0 | 390 | 0.5384 | 52.7845 | | 0.2918 | 30.7692 | 400 | 0.5370 | 55.6901 | | 0.3106 | 31.5385 | 410 | 0.5422 | 53.7530 | | 0.3105 | 32.3077 | 420 | 0.5438 | 55.2058 | | 0.2835 | 33.0769 | 430 | 0.5437 | 55.9322 | | 0.2966 | 33.8462 | 440 | 0.5416 | 54.7215 | | 0.2719 | 34.6154 | 450 | 
0.5394 | 54.2373 | | 0.2859 | 35.3846 | 460 | 0.5384 | 53.7530 | | 0.29 | 36.1538 | 470 | 0.5379 | 53.2688 | | 0.2879 | 36.9231 | 480 | 0.5372 | 53.5109 | | 0.2871 | 37.6923 | 490 | 0.5370 | 53.5109 | | 0.3019 | 38.4615 | 500 | 0.5370 | 53.5109 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
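A minimal inference sketch for this checkpoint (the audio path is a placeholder; wav2vec2 models expect 16 kHz mono audio):

```python
# Minimal sketch: transcribe Xhosa speech with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ivillar/xho_finetune")
result = asr("example_xhosa_clip.wav")  # placeholder path, 16 kHz mono audio
print(result["text"])
```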
thiagoquilice/tweets_deforestation_all__RT_exclude_biggerclusters
thiagoquilice
2024-05-22T22:41:30Z
3
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2024-05-22T22:41:25Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # tweets_deforestation_all__RT_exclude_biggerclusters This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("thiagoquilice/tweets_deforestation_all__RT_exclude_biggerclusters") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 13 * Number of training documents: 389209 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | desmatamento - amazônia - na - de - em | 2371 | Deforestation in the Amazon | | 0 | que - de - do - desmatamento - amazônia | 127251 | Deforestation in the Amazon | | 1 | petição - assine - impedir - exploração - via | 114185 | Protect the Amazon Rainforest | | 2 | soja - de - cerrado - que - da | 46552 | Soya in the Cerrado region | | 3 | em - de - na - km - desmatamento | 33829 | Deforestation in the Amazon | | 4 | que - de - não - do - com | 17702 | Political and environmental issues in Brazil | | 5 | menor - cai - na - desmatamento - amazônia | 8855 | Deforestation in the Amazon decreases and reaches lower rates in recent years | | 6 | impedir - exploração - petição - via - assine | 8611 | Protect the Amazon: Sign the Petition | | 7 | cerrado - do - de - desmatamento - no | 7695 | Deforestation in the Cerrado region | | 8 | - - - - | 7163 | "Technology trends and their impact on society" | | 9 | petition - sign - the - impedir - exploração | 6901 | Protect the Amazon Rainforest | | 10 | tamanho - sp - leia - quase - legal | 5157 | Deforestation in the Amazon | | 11 | in - partners - best - raze - loot | 2937 | Deforestation and exploitation of indigenous lands | </details> ## Training hyperparameters * calculate_probabilities: False * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 2.0.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.7.0 * Transformers: 4.41.0 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
mlx-community/dolphin-2.9.1-mixtral-1x22b-8bit
mlx-community
2024-05-22T22:38:54Z
10
0
mlx
[ "mlx", "safetensors", "mixtral", "generated_from_trainer", "axolotl", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:mistral-community/Mixtral-8x22B-v0.1", "base_model:finetune:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-22T22:31:25Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - axolotl - mlx base_model: mistral-community/Mixtral-8x22B-v0.1 datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - abacusai/SystemChat-1.1 - Locutusque/function-calling-chatml - internlm/Agent-FLAN model-index: - name: out results: [] --- # mlx-community/dolphin-2.9.1-mixtral-1x22b-8bit This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9.1-mixtral-1x22b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b) using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/dolphin-2.9.1-mixtral-1x22b-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
JuanKBracho/modelo_cobranzas_TFN
JuanKBracho
2024-05-22T22:37:14Z
131
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-05-22T22:33:12Z
--- license: apache-2.0 ---
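The card itself is license-only; given the `question-answering` pipeline tag and BERT architecture, usage would presumably follow the standard extractive-QA pattern. The model id below is the repository's, but the question/context pair is purely illustrative:

```python
# Speculative sketch: extractive QA with this BERT checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="JuanKBracho/modelo_cobranzas_TFN")
result = qa(
    question="¿Cuál es el saldo pendiente?",  # illustrative only
    context="El cliente tiene un saldo pendiente de 500 dólares.",
)
print(result["answer"], result["score"])
```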
fatehmujtaba/flan-t5-Base-for-Chest-Xray
fatehmujtaba
2024-05-22T22:36:00Z
77
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "endpoints_compatible", "region:us" ]
image-to-text
2024-05-22T21:27:28Z
--- pipeline_tag: image-to-text ---
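The card is tag-only; since this is a vision-encoder-decoder checkpoint with the `image-to-text` pipeline tag, inference would presumably look like the following sketch (the image path is a placeholder):

```python
# Speculative sketch: caption a chest X-ray with this vision-encoder-decoder model.
from transformers import pipeline

captioner = pipeline("image-to-text", model="fatehmujtaba/flan-t5-Base-for-Chest-Xray")
print(captioner("chest_xray_example.png"))  # placeholder image path
```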
dmitrii-a-lex/Mistral-7B-Instruct-v0.2-SFT-local-276
dmitrii-a-lex
2024-05-22T22:33:20Z
6
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-22T12:33:40Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vuongnhathien/vit-base-1e-4-20ep
vuongnhathien
2024-05-22T22:27:40Z
196
1
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T18:00:14Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-1e-4-20ep results: - task: name: Image Classification type: image-classification dataset: name: vuongnhathien/30VNFoods type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8873015873015873 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-1e-4-20ep This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the vuongnhathien/30VNFoods dataset. It achieves the following results on the evaluation set: - Loss: 0.4034 - Accuracy: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5376 | 1.0 | 275 | 0.4677 | 0.8640 | | 0.2085 | 2.0 | 550 | 0.4375 | 0.8811 | | 0.0755 | 3.0 | 825 | 0.4605 | 0.8899 | | 0.0429 | 4.0 | 1100 | 0.4784 | 0.8879 | | 0.0146 | 5.0 | 1375 | 0.5386 | 0.8799 | | 0.0176 | 6.0 | 1650 | 0.5524 | 0.8803 | | 0.0137 | 7.0 | 1925 | 0.5249 | 0.8887 | | 0.0076 | 8.0 | 2200 | 0.5401 | 0.8942 | | 0.0026 | 9.0 | 2475 | 0.5477 | 0.8934 | | 0.0054 | 10.0 | 2750 | 0.5417 | 0.8946 | | 0.0034 | 11.0 | 3025 | 0.5430 | 0.8974 | | 0.0033 | 12.0 | 3300 | 0.5443 | 0.8954 | | 0.0027 | 13.0 | 3575 | 0.5423 | 0.8986 | | 0.0024 | 14.0 | 3850 | 0.5434 | 0.8990 | | 0.0027 | 15.0 | 4125 | 0.5483 | 0.8962 | | 0.0027 | 16.0 | 4400 | 0.5485 | 0.8998 | | 0.0019 | 17.0 | 4675 | 0.5502 | 0.8998 | | 0.0022 | 18.0 | 4950 | 0.5508 | 0.8998 | | 0.0015 | 19.0 | 5225 | 0.5509 | 0.9002 | | 0.002 | 20.0 | 5500 | 0.5510 | 0.9010 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
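For completeness, a minimal inference sketch for this classifier (the image path is a placeholder; the 30VNFoods dataset covers 30 Vietnamese food categories):

```python
# Minimal sketch: classify a food photo with the fine-tuned ViT model.
from transformers import pipeline

classifier = pipeline("image-classification", model="vuongnhathien/vit-base-1e-4-20ep")
print(classifier("pho_bo_example.jpg")[:3])  # placeholder path; top-3 predictions
```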
backyardai/Mytho-Lemon-11B-GGUF
backyardai
2024-05-22T22:27:09Z
234
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:head-empty-ai/Mytho-Lemon-11B", "base_model:quantized:head-empty-ai/Mytho-Lemon-11B", "endpoints_compatible", "region:us" ]
null
2024-05-20T04:01:59Z
--- library_name: transformers tags: - mergekit - merge base_model: head-empty-ai/Mytho-Lemon-11B model_name: Mytho-Lemon-11B-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Mytho Lemon 11B - **Creator:** [head-empty-ai](https://huggingface.co/head-empty-ai/) - **Original:** [Mytho Lemon 11B](https://huggingface.co/head-empty-ai/Mytho-Lemon-11B) - **Date Created:** 2024-05-19 - **Trained Context:** 32768 tokens - **Description:** Just a simple 11B frankenmerge of LemonadeRP and MythoMist; used in [matchaaaaa/Chaifighter-20B-v2](https://huggingface.co/matchaaaaa/Chaifighter-20B-v2). *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/Dark-Miqu-103B-GGUF
backyardai
2024-05-22T22:27:06Z
354
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-05-17T22:16:02Z
--- license: other library_name: transformers tags: - mergekit - merge base_model: jukofyork/Dark-Miqu-103B model_name: Dark-Miqu-103B-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Dark Miqu 103B - **Creator:** [jukofyork](https://huggingface.co/jukofyork/) - **Original:** [Dark Miqu 103B](https://huggingface.co/jukofyork/Dark-Miqu-103B) - **Date Created:** 2024-05-14 - **Trained Context:** 32764 tokens - **Description:** A "dark" creative writing model with 32k context. Based off miqu-1-70b but with greatly reduced "positivity" and "-isms". Excels at writing Dark/Grimdark fantasy. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/Llama-3-Soliloquy-8B-v2-GGUF
backyardai
2024-05-22T22:27:01Z
1,107
3
null
[ "gguf", "en", "base_model:elyn-dev/Llama-3-Soliloquy-8B-v2", "base_model:quantized:elyn-dev/Llama-3-Soliloquy-8B-v2", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T03:18:32Z
--- language: - en license: cc-by-nc-sa-4.0 base_model: openlynn/Llama-3-Soliloquy-8B-v2 model_name: Llama-3-Soliloquy-8B-v2-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Llama 3 Soliloquy 8B v2 - **Creator:** [openlynn](https://huggingface.co/openlynn/) - **Original:** [Llama 3 Soliloquy 8B v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) - **Date Created:** 2024-04-26 - **Trained Context:** 24576 tokens - **Description:** A fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, it has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/c4ai-command-r-v01-GGUF
backyardai
2024-05-22T22:26:50Z
312
0
transformers
[ "transformers", "gguf", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "base_model:CohereForAI/c4ai-command-r-v01", "base_model:quantized:CohereForAI/c4ai-command-r-v01", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-05-02T21:12:28Z
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar license: cc-by-nc-4.0 library_name: transformers base_model: CohereForAI/c4ai-command-r-v01 model_name: c4ai-command-r-v01-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # c4ai command r v01 - **Creator:** [CohereForAI](https://huggingface.co/CohereForAI/) - **Original:** [c4ai command r v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) - **Date Created:** 2024-03-11 - **Trained Context:** 8192 tokens - **Description:** Research release of a 35 billion parameter highly performant generative model optimized for reasoning, summarization, and question answering. Command-R supports multilingual generation in 10 languages and has RAG capabilities. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/Phi-3-mini-4k-instruct-GGUF
backyardai
2024-05-22T22:26:49Z
64
0
null
[ "gguf", "nlp", "code", "text-generation", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:quantized:microsoft/Phi-3-mini-4k-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-02T05:07:16Z
--- language: - en license: mit tags: - nlp - code base_model: microsoft/Phi-3-mini-4k-instruct model_name: Phi-3-mini-4k-instruct-GGUF license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE pipeline_tag: text-generation quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Phi 3 mini 4k instruct - **Creator:** [microsoft](https://huggingface.co/microsoft/) - **Original:** [Phi 3 mini 4k instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) - **Date Created:** 2024-04-22 - **Trained Context:** 4096 tokens - **Description:** State-of-the-art lightweight open model from Microsoft, trained with the Phi-3 datasets. These include both synthetic data and filtered publicly available websites data with a focus on high-quality and reasoning dense properties. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/llama-3-8b-Instruct-GGUF
backyardai
2024-05-22T22:26:43Z
120
22
transformers
[ "transformers", "gguf", "llama", "llama-3", "en", "base_model:unsloth/llama-3-8b-Instruct", "base_model:quantized:unsloth/llama-3-8b-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-18T18:04:56Z
--- language: - en license: apache-2.0 library_name: transformers tags: - transformers - llama - llama-3 base_model: unsloth/llama-3-8b-Instruct model_name: llama-3-8b-Instruct-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # llama 3 8b Instruct - **Creator:** [meta-llama](https://huggingface.co/meta-llama/) - **Original:** [llama 3 8b Instruct](https://huggingface.co/meta-llama/llama-3-8b-Instruct) - **Date Created:** 2024-04-18 - **Trained Context:** 8192 tokens - **Description:** The third generation of Meta's open source language model. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/Cerebrum-1.0-8x7b-GGUF
backyardai
2024-05-22T22:26:38Z
123
0
transformers
[ "transformers", "gguf", "text-generation-inference", "text-generation", "en", "base_model:AetherResearch/Cerebrum-1.0-8x7b", "base_model:quantized:AetherResearch/Cerebrum-1.0-8x7b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-03-30T00:13:22Z
--- base_model: AetherResearch/Cerebrum-1.0-8x7b license: other language: - en library_name: transformers pipeline_tag: text-generation quantized_by: brooketh tags: - text-generation-inference --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Cerebrum 1.0 8x7b - **Creator:** [AetherResearch](https://huggingface.co/AetherResearch/) - **Original:** [Cerebrum 1.0 8x7b](https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b) - **Date Created:** 2024-03-21 - **Trained Context:** 4096 tokens - **Description:** Mixtral 8x7b-based model created specifically for reasoning tasks, finetuned using the native chain of thought approach. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
backyardai/WestLake-10.7B-v2-GGUF
backyardai
2024-05-22T22:26:36Z
1,221
1
transformers
[ "transformers", "gguf", "roleplay", "text-generation-inference", "text-generation", "en", "base_model:froggeric/WestLake-10.7B-v2", "base_model:quantized:froggeric/WestLake-10.7B-v2", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-03-20T16:52:07Z
--- base_model: froggeric/WestLake-10.7B-v2 license: other language: - en library_name: transformers pipeline_tag: text-generation quantized_by: brooketh tags: - roleplay - text-generation-inference --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # WestLake v.2 10.7B - **Creator:** [froggeric](https://huggingface.co/froggeric/) - **Original:** [WestLake v.2 10.7B](https://huggingface.co/froggeric/WestLake-10.7B-v2) - **Date Created:** 3/11/2024 - **Trained Context:** 8192 tokens - **Description:** Self-merge of WestLake 7B. Excels at understanding nuances in language and producing creative outputs. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***