Dataset schema (column name, type, and observed range or number of classes):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-02 18:27:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (549 classes) | – | – |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | – | – |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-02 18:24:50 |
| card | string (length) | 11 | 1.01M |
NYTK/sentiment-hts5-xlm-roberta-hungarian
NYTK
2024-08-22T14:36:03Z
3,341
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - hu tags: - text-classification license: mit metrics: - accuracy widget: - text: Jó reggelt! majd küldöm az élményhozókat :). --- # Hungarian Sentence-level Sentiment Analysis Model with XLM-RoBERTa For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained model used: XLM-RoBERTa base - Finetuned on Hungarian Twitter Sentiment (HTS) Corpus - Labels: 0 (very negative), 1 (negative), 2 (neutral), 3 (positive), 4 (very positive) ## Limitations - max_seq_length = 128 ## Results | Model | HTS2 | HTS5 | | ------------- | ------------- | ------------- | | huBERT | 85.56 | **68.99** | | XLM-RoBERTa| 85.56 | 66.50 | ## Citation If you use this model, please cite the following paper: ``` @article {laki-yang-sentiment, author = {Laki, László János and Yang, Zijian Győző}, title = {Sentiment Analysis with Neural Models for Hungarian}, journal = {Acta Polytechnica Hungarica}, year = {2023}, publisher = {Obuda University}, volume = {20}, number = {5}, doi = {10.12700/APH.20.5.2023.5.8}, pages= {109--128}, url = {https://acta.uni-obuda.hu/Laki_Yang_134.pdf} } ```
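The card above does not include a runnable example. Below is a minimal usage sketch with the 🤗 Transformers `pipeline`; the example sentence reuses the card's widget text, and everything else (the pipeline call itself) is illustrative rather than taken from the card:

```python
from transformers import pipeline

# Load the 5-label Hungarian sentiment model; label ids follow the card:
# 0 = very negative, 1 = negative, 2 = neutral, 3 = positive, 4 = very positive.
classifier = pipeline(
    "text-classification",
    model="NYTK/sentiment-hts5-xlm-roberta-hungarian",
)

print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
```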
Artples/L-MChat-7b
Artples
2024-08-22T14:36:00Z
11,479
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "Nexusflow/Starling-LM-7B-beta", "FuseAI/FuseChat-7B-VaRM", "conversational", "base_model:FuseAI/FuseChat-7B-VaRM", "base_model:merge:FuseAI/FuseChat-7B-VaRM", "base_model:Nexusflow/Starling-LM-7B-beta", "base_model:merge:Nexusflow/Starling-LM-7B-beta", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-02T14:31:32Z
--- license: apache-2.0 tags: - merge - mergekit - Nexusflow/Starling-LM-7B-beta - FuseAI/FuseChat-7B-VaRM base_model: - Nexusflow/Starling-LM-7B-beta - FuseAI/FuseChat-7B-VaRM model-index: - name: L-MChat-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.94 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 52.97 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 24.2 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 7.93 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 7.38 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b 
name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 8.12 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 25.54 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Artples/L-MChat-7b name: Open LLM Leaderboard --- # L-MChat-7b <div style="text-align:center;width:250px;height:250px;"> <img src="https://priority.cdn.leunos.com/logo-l-mchat-rs.png" alt="L-MChat-Series-Logo"> </div> L-MChat-7b is a merge of the following models: * [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) * [FuseAI/FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) ## Configuration ```yaml slices: - sources: - model: Nexusflow/Starling-LM-7B-beta layer_range: [0, 32] - model: FuseAI/FuseChat-7B-VaRM layer_range: [0, 32] merge_method: slerp base_model: FuseAI/FuseChat-7B-VaRM parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Artples/L-MChat-7b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ## License Apache 2.0, but you cannot use this model to directly compete with OpenAI. ## How? Built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Artples__L-MChat-7b) | Metric |Value| |---------------------------------|----:| |Avg. |69.57| |AI2 Reasoning Challenge (25-Shot)|65.61| |HellaSwag (10-Shot) |84.59| |MMLU (5-Shot) |65.44| |TruthfulQA (0-shot) |50.94| |Winogrande (5-shot) |81.37| |GSM8k (5-shot) |69.45| # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Artples__L-MChat-7b) | Metric |Value| |-------------------|----:| |Avg. |21.02| |IFEval (0-Shot) |52.97| |BBH (3-Shot) |24.20| |MATH Lvl 5 (4-Shot)| 7.93| |GPQA (0-shot) | 7.38| |MuSR (0-shot) | 8.12| |MMLU-PRO (5-shot) |25.54|
NYTK/sentiment-hts2-xlm-roberta-hungarian
NYTK
2024-08-22T14:32:24Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "hu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - hu tags: - text-classification license: mit metrics: - accuracy widget: - text: Jó reggelt! majd küldöm az élményhozókat :). --- # Hungarian Sentence-level Sentiment Analysis Model with XLM-RoBERTa For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained model used: XLM-RoBERTa base - Finetuned on Hungarian Twitter Sentiment (HTS) Corpus - Labels: 0 (negative), 1 (positive) ## Limitations - max_seq_length = 128 ## Results | Model | HTS2 | HTS5 | | ------------- | ------------- | ------------- | | huBERT | 85.56 | 68.99 | | XLM-RoBERTa| **85.56** | 66.50 | ## Citation If you use this model, please cite the following paper: ``` @article {laki-yang-sentiment, author = {Laki, László János and Yang, Zijian Győző}, title = {Sentiment Analysis with Neural Models for Hungarian}, journal = {Acta Polytechnica Hungarica}, year = {2023}, publisher = {Obuda University}, volume = {20}, number = {5}, doi = {10.12700/APH.20.5.2023.5.8}, pages= {109--128}, url = {https://acta.uni-obuda.hu/Laki_Yang_134.pdf} } ```
GaetanMichelet/Gemma-2-2B_task-1_180-samples_config-2
GaetanMichelet
2024-08-22T14:24:39Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "gemma2", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:GaetanMichelet/chat-60_ft_task-1", "dataset:GaetanMichelet/chat-120_ft_task-1", "dataset:GaetanMichelet/chat-180_ft_task-1", "base_model:google/gemma-2-2b-it", "base_model:adapter:google/gemma-2-2b-it", "license:gemma", "4-bit", "bitsandbytes", "region:us" ]
null
2024-08-22T14:11:46Z
--- base_model: google/gemma-2-2b-it datasets: - GaetanMichelet/chat-60_ft_task-1 - GaetanMichelet/chat-120_ft_task-1 - GaetanMichelet/chat-180_ft_task-1 library_name: peft license: gemma tags: - alignment-handbook - trl - sft - generated_from_trainer model-index: - name: Gemma-2-2B_task-1_180-samples_config-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Gemma-2-2B_task-1_180-samples_config-2 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the GaetanMichelet/chat-60_ft_task-1, the GaetanMichelet/chat-120_ft_task-1 and the GaetanMichelet/chat-180_ft_task-1 datasets. It achieves the following results on the evaluation set: - Loss: 1.2884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 2.3482 | 0.9412 | 8 | 2.2693 | | 1.7809 | 2.0 | 17 | 1.6932 | | 1.553 | 2.9412 | 25 | 1.5041 | | 1.2823 | 4.0 | 34 | 1.3678 | | 1.1298 | 4.9412 | 42 | 1.3193 | | 1.027 | 6.0 | 51 | 1.2884 | | 0.8219 | 6.9412 | 59 | 1.3590 | | 0.6244 | 8.0 | 68 | 1.5296 | | 0.398 | 8.9412 | 76 | 1.7941 | | 0.2704 | 10.0 | 85 | 2.2475 | | 0.1592 | 10.9412 | 93 | 2.5631 | | 0.0797 | 12.0 | 102 | 2.7847 | | 0.0459 | 12.9412 | 110 | 2.9444 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.0 - Pytorch 2.1.2+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
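The auto-generated card above omits a usage snippet. Here is a minimal sketch of loading this PEFT/LoRA adapter on top of its base model; the repository ids come from the card's metadata, while the prompt and generation settings are illustrative assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2-2b-it"
adapter_id = "GaetanMichelet/Gemma-2-2B_task-1_180-samples_config-2"

# Load the base model, then attach the fine-tuned LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```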
diffusers/controlnet-zoe-depth-sdxl-1.0
diffusers
2024-08-22T14:23:17Z
1,509
35
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-22T09:43:51Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet inference: false --- # SDXL-controlnet: Zoe-Depth These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with zoe depth conditioning. [Zoe-depth](https://github.com/isl-org/ZoeDepth) is an open-source SOTA depth estimation model which produces high-quality depth maps, which are better suited for conditioning. You can find some example images in the following. ![images_0)](./zoe-depth-example.png) ![images_2](./zoe-megatron.png) ![images_3](./photo-woman.png) ## Usage Make sure first to install the libraries: ```bash pip install accelerate transformers safetensors diffusers ``` And then setup the zoe-depth model ```python import torch import matplotlib import matplotlib.cm import numpy as np torch.hub.help("intel-isl/MiDaS", "DPT_BEiT_L_384", force_reload=True) # Triggers fresh download of MiDaS repo model_zoe_n = torch.hub.load("isl-org/ZoeDepth", "ZoeD_NK", pretrained=True).eval() model_zoe_n = model_zoe_n.to("cuda") def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): if isinstance(value, torch.Tensor): value = value.detach().cpu().numpy() value = value.squeeze() if invalid_mask is None: invalid_mask = value == invalid_val mask = np.logical_not(invalid_mask) # normalize vmin = np.percentile(value[mask],2) if vmin is None else vmin vmax = np.percentile(value[mask],85) if vmax is None else vmax if vmin != vmax: value = (value - vmin) / (vmax - vmin) # vmin..vmax else: # Avoid 0-division value = value * 0. # squeeze last dim if it exists # grey out the invalid values value[invalid_mask] = np.nan cmapper = matplotlib.cm.get_cmap(cmap) if value_transform: value = value_transform(value) # value = value / value.max() value = cmapper(value, bytes=True) # (nxmx4) # img = value[:, :, :] img = value[...] img[invalid_mask] = background_color # gamma correction img = img / 255 img = np.power(img, 2.2) img = img * 255 img = img.astype(np.uint8) img = Image.fromarray(img) return img def get_zoe_depth_map(image): with torch.autocast("cuda", enabled=True): depth = model_zoe_n.infer_pil(image) depth = colorize(depth, cmap="gray_r") return depth ``` Now we're ready to go: ```python import torch import numpy as np from PIL import Image from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL from diffusers.utils import load_image controlnet = ControlNetModel.from_pretrained( "diffusers/controlnet-zoe-depth-sdxl-1.0", use_safetensors=True, torch_dtype=torch.float16, ) vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, variant="fp16", use_safetensors=True, torch_dtype=torch.float16, ) pipe.enable_model_cpu_offload() prompt = "pixel-art margot robbie as barbie, in a coupé . 
low-res, blocky, pixel art style, 8-bit graphics" negative_prompt = "sloppy, messy, blurry, noisy, highly detailed, ultra textured, photo, realistic" image = load_image("https://media.vogue.fr/photos/62bf04b69a57673c725432f3/3:2/w_1793,h_1195,c_limit/rev-1-Barbie-InstaVert_High_Res_JPEG.jpeg") controlnet_conditioning_scale = 0.55 depth_image = get_zoe_depth_map(image).resize((1088, 896)) generator = torch.Generator("cuda").manual_seed(978364352) images = pipe( prompt, image=depth_image, num_inference_steps=50, controlnet_conditioning_scale=controlnet_conditioning_scale, generator=generator ).images images[0].save("pixel-barbie.png") ``` ![images_1](./barbie.png) For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl). ### Training Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md). #### Training data and Compute The model is trained on 3M image-text pairs from LAION-Aesthetics V2. The model is trained for 700 GPU hours on 80GB A100 GPUs. #### Batch size Data parallel with a single GPU batch size of 8 for a total batch size of 256. #### Hyper Parameters Constant learning rate of 1e-5. #### Mixed precision fp16
briannnyee/ppo-Pyramids
briannnyee
2024-08-22T14:22:57Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-08-22T14:22:55Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: briannnyee/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
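As a small addition (not part of the original card), the trained agent files can also be pulled to a local folder with the Hub client; the destination path below is an arbitrary example:

```python
from huggingface_hub import snapshot_download

# Download the repository (run configuration + .onnx policy) locally.
local_dir = snapshot_download(repo_id="briannnyee/ppo-Pyramids", local_dir="./ppo-Pyramids")
print(local_dir)
```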
sayakpaul/fp8-dog-lora-flux
sayakpaul
2024-08-22T14:20:38Z
8
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T13:37:37Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora instance_prompt: A photo of sks dog in a bucket widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - sayakpaul/fp8-dog-lora-flux <Gallery /> ## Model description These are sayakpaul/fp8-dog-lora-flux DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md). Was LoRA for the text encoder enabled? False. FP8 training? True ## Trigger words You should use `A photo of sks dog in a bucket` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](sayakpaul/fp8-dog-lora-flux/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('sayakpaul/fp8-dog-lora-flux', weight_name='pytorch_lora_weights.safetensors') image = pipeline('A photo of sks dog in a bucket').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
abdumalikov/whisper-medium-14000
abdumalikov
2024-08-22T14:20:26Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-08-22T10:48:55Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: whisper-medium-14000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-14000 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.1 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
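The auto-generated card above lacks a usage example. A minimal sketch of transcribing audio with this checkpoint via the ASR pipeline (the audio file name is a placeholder, not something specified by the card):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="abdumalikov/whisper-medium-14000")

# "sample.wav" is a placeholder path; any supported audio file works here.
print(asr("sample.wav")["text"])
```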
pabRomero/BioMedRoBERTa-finetuned-ner-pablo-classifier-then-full
pabRomero
2024-08-22T14:18:05Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:pabRomero/BioMedRoBERTa-finetuned-ner-pablo-just-classifier", "base_model:finetune:pabRomero/BioMedRoBERTa-finetuned-ner-pablo-just-classifier", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-08-21T16:51:33Z
--- library_name: transformers license: mit base_model: pabRomero/BioMedRoBERTa-finetuned-ner-pablo-just-classifier tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: BioMedRoBERTa-finetuned-ner-pablo-classifier-then-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioMedRoBERTa-finetuned-ner-pablo-classifier-then-full This model is a fine-tuned version of [pabRomero/BioMedRoBERTa-finetuned-ner-pablo-just-classifier](https://huggingface.co/pabRomero/BioMedRoBERTa-finetuned-ner-pablo-just-classifier) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0824 - Precision: 0.7761 - Recall: 0.7831 - F1: 0.7796 - Accuracy: 0.9747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 2048 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.9697 | 16 | 0.1075 | 0.7084 | 0.7084 | 0.7084 | 0.9691 | | No log | 2.0 | 33 | 0.0972 | 0.7475 | 0.7397 | 0.7436 | 0.9712 | | No log | 2.9697 | 49 | 0.0922 | 0.7402 | 0.7483 | 0.7442 | 0.9725 | | No log | 4.0 | 66 | 0.0880 | 0.7618 | 0.7503 | 0.7560 | 0.9734 | | No log | 4.9697 | 82 | 0.0868 | 0.7612 | 0.7536 | 0.7573 | 0.9736 | | No log | 6.0 | 99 | 0.0865 | 0.7601 | 0.7572 | 0.7586 | 0.9737 | | No log | 6.9697 | 115 | 0.0863 | 0.7607 | 0.7588 | 0.7598 | 0.9737 | | No log | 8.0 | 132 | 0.0875 | 0.7513 | 0.7716 | 0.7613 | 0.9737 | | No log | 8.9697 | 148 | 0.0823 | 0.7706 | 0.7687 | 0.7696 | 0.9745 | | No log | 10.0 | 165 | 0.0827 | 0.7625 | 0.7752 | 0.7688 | 0.9738 | | No log | 10.9697 | 181 | 0.0824 | 0.7690 | 0.7739 | 0.7715 | 0.9746 | | No log | 12.0 | 198 | 0.0818 | 0.7739 | 0.7739 | 0.7739 | 0.9748 | | No log | 12.9697 | 214 | 0.0820 | 0.7718 | 0.7747 | 0.7732 | 0.9747 | | No log | 14.0 | 231 | 0.0818 | 0.7735 | 0.7773 | 0.7754 | 0.9749 | | No log | 14.9697 | 247 | 0.0820 | 0.7837 | 0.7757 | 0.7797 | 0.9754 | | No log | 16.0 | 264 | 0.0831 | 0.7734 | 0.7842 | 0.7788 | 0.9749 | | No log | 16.9697 | 280 | 0.0826 | 0.7683 | 0.7883 | 0.7782 | 0.9745 | | No log | 18.0 | 297 | 0.0826 | 0.7747 | 0.7835 | 0.7791 | 0.9747 | | No log | 18.9697 | 313 | 0.0824 | 0.7760 | 0.7830 | 0.7795 | 0.9747 | | No log | 19.3939 | 320 | 0.0824 | 0.7761 | 0.7831 | 0.7796 | 0.9747 | ### Framework versions - Transformers 4.44.1 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
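The auto-generated card above lacks a usage example. A minimal sketch of running this checkpoint as a token-classification (NER) pipeline; the example sentence is made up, and the entity labels depend on the unspecified training data:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pabRomero/BioMedRoBERTa-finetuned-ner-pablo-classifier-then-full",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

print(ner("The patient was started on 500 mg of amoxicillin twice daily."))
```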
Saegus/sentence-camembert-large-mean-pooling
Saegus
2024-08-22T14:16:32Z
175
0
sentence-transformers
[ "sentence-transformers", "safetensors", "camembert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-08-22T13:59:56Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- ## sentence-camembert-large-mean-pooling Model is loaded from [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large), and modified to use mean pooling. --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: CamembertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Saegus/sentence-camembert-base
Saegus
2024-08-22T14:15:55Z
106
0
sentence-transformers
[ "sentence-transformers", "safetensors", "camembert", "Text", "Sentence Similarity", "Sentence-Embedding", "camembert-base", "sentence-similarity", "fr", "dataset:stsb_multi_mt", "arxiv:1908.10084", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-08-22T12:42:04Z
--- pipeline_tag: sentence-similarity language: fr datasets: - stsb_multi_mt tags: - Text - Sentence Similarity - Sentence-Embedding - camembert-base license: apache-2.0 model-index: - name: sentence-camembert-base by Van Tuan DANG results: - task: name: Sentence-Embedding type: Text Similarity dataset: name: Text Similarity fr type: stsb_multi_mt args: fr metrics: - name: Test Pearson correlation coefficient type: Pearson_correlation_coefficient value: xx.xx library_name: sentence-transformers --- ## sentence-camembert-base Model is loaded from [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base). --- ## Pre-trained sentence embedding models are the state-of-the-art of Sentence Embeddings for French. Model is Fine-tuned using pre-trained [facebook/camembert-base](https://huggingface.co/camembert/camembert-base) and [Siamese BERT-Networks with 'sentences-transformers'](https://www.sbert.net/) on dataset [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) ## Usage The model can be used directly (without a language model) as follows: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("dangvantuan/sentence-camembert-base") sentences = ["Un avion est en train de décoller.", "Un homme joue d'une grande flûte.", "Un homme étale du fromage râpé sur une pizza.", "Une personne jette un chat au plafond.", "Une personne est en train de plier un morceau de papier.", ] embeddings = model.encode(sentences) ``` ## Evaluation The model can be evaluated as follows on the French test data of stsb. ```python from sentence_transformers import SentenceTransformer from sentence_transformers.readers import InputExample from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator from datasets import load_dataset def convert_dataset(dataset): dataset_samples=[] for df in dataset: score = float(df['similarity_score'])/5.0 # Normalize score to range 0 ... 
1 inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score) dataset_samples.append(inp_example) return dataset_samples # Loading the dataset for evaluation df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev") df_test = load_dataset("stsb_multi_mt", name="fr", split="test") # Convert the dataset for evaluation # For Dev set: dev_samples = convert_dataset(df_dev) val_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name='sts-dev') val_evaluator(model, output_path="./") # For Test set: test_samples = convert_dataset(df_test) test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test') test_evaluator(model, output_path="./") ``` **Test Results**: The performance is measured using Pearson and Spearman correlation: - On dev | Model | Pearson correlation | Spearman correlation | #params | | ------------- | ------------- | ------------- |------------- | | [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base)| 86.73 |86.54 | 110M | | [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 79.22 | 79.16|135M | - On test | Model | Pearson correlation | Spearman correlation | | ------------- | ------------- | ------------- | | [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base)| 82.36 | 81.64| | [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 78.62 | 77.48| ## Citation @article{reimers2019sentence, title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks}, author={Reimers, Nils and Gurevych, Iryna}, journal={arXiv preprint arXiv:1908.10084}, year={2019} } @article{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} }
eduardo-alvarez/huggingface-workshop-emotions-bert
eduardo-alvarez
2024-08-22T14:03:09Z
118
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T14:02:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
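Since the template above carries no usage code, here is a minimal, illustrative sketch of loading the checkpoint as a text-classification pipeline; the example sentence, and the assumption that it predicts emotion labels, are not stated in the card:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="eduardo-alvarez/huggingface-workshop-emotions-bert",
)

# Label names come from the (undocumented) fine-tuning data.
print(clf("I am so happy to see you!"))
```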
jvelja/gemma-2-2b-it_imdb_2bit_0
jvelja
2024-08-22T13:48:10Z
52
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-21T14:55:08Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja//tmp/tmp_w6i75jw/jvelja/gemma-2-2b-it_imdb_2bit_0") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmp_w6i75jw/jvelja/gemma-2-2b-it_imdb_2bit_0") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmp_w6i75jw/jvelja/gemma-2-2b-it_imdb_2bit_0") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf
RichardErkhov
2024-08-22T13:47:43Z
239
0
null
[ "gguf", "arxiv:2404.00376", "arxiv:2009.13081", "arxiv:2402.18060", "arxiv:2203.14371", "arxiv:2009.03300", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T11:56:52Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3-meerkat-8b-v1.0 - GGUF - Model creator: https://huggingface.co/dmis-lab/ - Original model: https://huggingface.co/dmis-lab/llama-3-meerkat-8b-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3-meerkat-8b-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q2_K.gguf) | Q2_K | 2.96GB | | [llama-3-meerkat-8b-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama-3-meerkat-8b-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama-3-meerkat-8b-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama-3-meerkat-8b-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama-3-meerkat-8b-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q3_K.gguf) | Q3_K | 3.74GB | | [llama-3-meerkat-8b-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama-3-meerkat-8b-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama-3-meerkat-8b-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama-3-meerkat-8b-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama-3-meerkat-8b-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama-3-meerkat-8b-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama-3-meerkat-8b-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q4_K.gguf) | Q4_K | 4.58GB | | [llama-3-meerkat-8b-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama-3-meerkat-8b-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama-3-meerkat-8b-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama-3-meerkat-8b-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | 
[llama-3-meerkat-8b-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q5_K.gguf) | Q5_K | 5.34GB | | [llama-3-meerkat-8b-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama-3-meerkat-8b-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama-3-meerkat-8b-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q6_K.gguf) | Q6_K | 6.14GB | | [llama-3-meerkat-8b-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf/blob/main/llama-3-meerkat-8b-v1.0.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: cc-by-nc-4.0 pipeline_tag: text-generation tags: - medical - small LM - instruction-tuned - usmle - synthetic data --- # Meerkat-8B (Version 1.0) 🚀 Meerkat-8B is a new instruction-tuned medical AI system of the Meerkat model family. The model was based on the Meta's Llama-3-8B-Instruct model and fine-tuned using our new synthetic dataset consisting of high-quality chain-of-thought reasoning paths sourced from 18 medical textbooks, along with diverse instruction-following datasets. This equips the model with high-level medical reasoning capabilities required for solving complex medical problems. For further insights into our model, please refer to our paper! 📄 **Paper**: [Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks](https://arxiv.org/abs/2404.00376) ## Quick Start ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "dmis-lab/llama-3-meerkat-8b-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, # You can choose to use this when there's not enough GPU memory available. device_map="auto", ) # Multi-turn dialogue example messages =[ {"role": "system", "content": "You are a helpful doctor or healthcare professional. Guide the conversation to provide useful, complete, and scientifically-grounded answers to user questions. You have the option to compose a concise, single-turn conversation if the user's input is comprehensive to provide accurate answers. However, if essential details are missing, you should engage in a multi-turn dialogue, asking follow-up questions to gather a thorough medical history and records.\n\n"}, {"role": "user", "content": "Hello, doctor. I'm really concerned about my 10-year-old son. We recently discovered a painless mass in his left testicle, so we brought him to the pediatrician."}, {"role": "assistant", "content": "I understand your concern. Let's gather some more information. Has your son experienced any other symptoms along with the mass?"}, {"role": "user", "content": "Other than the mass, my son hasn't shown any symptoms. 
He's been his usual self, playing and eating normally."} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=1000, eos_token_id=terminators, do_sample=True, temperature=0.7, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Prompt Details To reproduce the results reported in our paper, it is advisable to utilize the identical system messages used during model training. Please refer to the guidelines detailed below. ### USMLE When solving USMLE-style questions such as [MedQA](https://arxiv.org/abs/2009.13081) and [MedBullets](https://arxiv.org/abs/2402.18060), use the following system message: ``` messages = [ {"role": "system", "content": "The following is a multiple-choice question about medical knowledge. Solve this in a step-by-step fashion, starting by summarizing the available information. Output a single option from the given options as the final answer. You are strongly required to follow the specified output format; conclude your response with the phrase \"the answer is ([option_id]) [answer_string]\".\n\n"}, {"role": "user", "content": "Two weeks after undergoing an emergency cardiac catherization with stenting for unstable angina pectoris, a 61-year-old man has decreased urinary output and malaise. He has type 2 diabetes mellitus and osteoarthritis of the hips. Prior to admission, his medications were insulin and naproxen. He was also started on aspirin, clopidogrel, and metoprolol after the coronary intervention. His temperature is 38\u00b0C (100.4\u00b0F), pulse is 93/min, and blood pressure is 125/85 mm Hg. Examination shows mottled, reticulated purplish discoloration of the feet. Laboratory studies show:\nHemoglobin count 14 g/dL\nLeukocyte count 16,400/mm3\nSegmented neutrophils 56%\nEosinophils 11%\nLymphocytes 31%\nMonocytes 2%\nPlatelet count 260,000/mm3\nErythrocyte sedimentation rate 68 mm/h\nSerum\nUrea nitrogen 25 mg/dL\nCreatinine 4.2 mg/dL\nRenal biopsy shows intravascular spindle-shaped vacuoles. Which of the following is the most likely cause of this patient's symptoms?\" (A) Renal papillary necrosis (B) Cholesterol embolization (C) Eosinophilic granulomatosis with polyangiitis (D) Polyarteritis nodosa"}, ] ``` The model generates reasoning paths to solve the problem and then sequentially provides the predicted answers. Since the model ends its response with "the answer is," it is straightforward to extract the predicted answer for comparison with the actual answer. ### Multiple-choice Exams For other types of multiple-choice exams such as [MedMCQA](https://arxiv.org/abs/2203.14371) or [MMLU](https://arxiv.org/abs/2009.03300), use the following simple system message: ``` messages = [ {"role": "system", "content": "Answer the multiple-choice question about medical knowledge.\n\n"}, {"role": "user", "content": "In a Robertsonian translocation fusion occurs at the: (A) telomeres. (B) centromeres. (C) histones. (D) ends of the long arms."}, ] ``` ### Other Use Cases Our model was trained using the [AlpaCare](https://github.com/xzhang97666/alpacare) instruction dataset comprising 52K examples, to enhance its generalization capabilities across diverse user prompts. Feel free to design and test your prompts and to share your thoughts with us, whether the model exceeds expectations or falls short! 
## Reproducing MedQA Performance with vLLM Here is an example code for fast model evaluation in MedQA using vLLM. To adapt this code for other datasets like MedMCQA or MMLU, simply modify the instructions and update the dataset paths as needed. ```python import re from datasets import load_dataset from vllm import LLM, SamplingParams USMLE_INSTRUCTION = ( "The following is a multiple-choice question about medical knowledge. Solve this in" " a step-by-step fashion, starting by summarizing the available information. Output" " a single option from the given options as the final answer. You are strongly" " required to follow the specified output format; conclude your response with the" ' phrase "the answer is ([option_id]) [answer_string]".\n\n' ) llm = LLM( model="dmis-lab/llama-3-meerkat-8b-v1.0", dtype="bfloat16", gpu_memory_utilization=0.9, max_model_len=2048, trust_remote_code=True, tensor_parallel_size=1 ) tokenizer = llm.get_tokenizer() inputs, labels = [], [] for sample in load_dataset( "GBaker/MedQA-USMLE-4-options", split="test", trust_remote_code=True ): options = sorted(sample["options"].items()) options = " ".join(map(lambda x: f"({x[0]}) {x[1]}", options)) content = tokenizer.apply_chat_template( [{"role": "system", "content": USMLE_INSTRUCTION}, {"role": "user", "content": sample["question"] + " " + options}], add_generation_prompt=True, tokenize=False, ) inputs.append(content) labels.append(sample["answer_idx"]) generated = llm.generate( inputs, SamplingParams( temperature=0.0, stop_token_ids=[tokenizer.vocab["<|eot_id|>"]], max_tokens=1024, ), ) def extract_answer(text: str, options: str = "ABCD") -> str: return (re.findall(rf"he answer is \(([{options}])\)", text) or [options[0]])[-1] correctness = [] for g, l in zip(generated, labels): correctness.append(extract_answer(g.outputs[0].text) == l) print(sum(correctness) / len(correctness)) ``` ## Evaluation We tested models on seven medical benchmarks: [MedQA](https://arxiv.org/abs/2009.13081), [USMLE sample test](https://www.usmle.org/prepare-your-exam), [Medbullets-4](https://arxiv.org/abs/2402.18060), [Medbullets-5](https://arxiv.org/abs/2402.18060) , [MedMCQA](https://arxiv.org/abs/2203.14371), [MMLU-Medical](https://arxiv.org/abs/2009.03300), and [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060). | **Model** | **Average** | **MedQA** | **USMLE** | **Medbullets-4** | **Medbullets-5** | **MedMCQA** | **MMLU-Medical** | |:--------------------------------|:-----------:|:---------:|:---------:|:----------------:|:----------------:|:-----------:|:----------------:| | GPT-4 | 76.6 | 81.4 | 86.6 | 68.8 | 63.3 | 72.4 | 87.1 | | GPT-3.5 | 54.8 | 53.6 | 58.5 | 51.0 | 47.4 | 51.0 | 67.3 | | MediTron-70B (Ensemble, 5 runs) | - | 70.2 | - | - | - | 66.0 | 78.0 | |*Open-source (7B)*| | MediTron-7B | 51.0 | 50.2 | 44.6 | 51.1 | 45.5 | 57.9 | 56.7 | | BioMistral-7B | 55.4 | 54.3 | 51.4 | 52.3 | 48.7 | 61.1 | 64.6 | | Meerkat-7B | 62.6 | 70.6 | 70.3 | 58.7 | 52.9 | 60.6 | 70.5 | | Meerkat-8B (**New**) | **67.3** | **74.0** | **74.2** | **62.3** | **55.5** | **62.7** | **75.2** | Please note that the scores in MMLU-Medical were calculated based on the average accuracies across six medical-related subjects in the original MMLU benchmark, and each result for a single subject is presented below. 
| **Model** | **Average** | **Clinical Knowledge** | **Medical Genetics** | **Anatomy** | **Professional Medicine** | **College Biology** | **College Medicine** | |:--------------------------------|:-----------:|:--------------------:|:--------------------:|:-----------:|:-------------------------:|:-------------------:|:--------------------:| | GPT-4 | 87.1 | 86.4 | 92.0 | 80.0 | 93.8 | 93.8 | 76.3 | | GPT-3.5 | 67.3 | 68.7 | 68.0 | 60.7 | 69.9 | 72.9 | 63.6 | | MediTron-70B (Ensemble, 5 runs) | 78.0 | 75.5 | 85.9 | 69.4 | 82.3 | 86.7 | 68.0 | |*Open-source (7B)*| | MediTron-7B | 56.7 | 57.7 | 63.8 | 56.9 | 56.0 | 57.1 | 48.9 | | BioMistral-7B | 64.6 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | | Meerkat-7B | 70.5 | 71.6 | 74.8 | 63.2 | **77.3** | 70.8 | 65.2 | | Meerkat-8B (**New**) | **75.2** | **74.3** | **76.7** | **74.8** | 75.3 | **76.1** | **74.3** | ## Reference Please see the information below to cite our paper. ```bibtex @article{kim2024small, title={Small language models learn enhanced reasoning skills from medical textbooks}, author={Kim, Hyunjae and Hwang, Hyeon and Lee, Jiwoo and Park, Sihyeon and Kim, Dain and Lee, Taewhoo and Yoon, Chanwoong and Sohn, Jiwoong and Choi, Donghee and Kang, Jaewoo}, journal={arXiv preprint arXiv:2404.00376}, year={2024} } ``` ## Acknowledgement Research supported with Cloud TPUs from Google’s TPU Research Cloud (TRC). ## Contact Feel free to email `[email protected]` if you have any questions.
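The card documents usage of the original full-precision model; for the GGUF files listed in the table above, here is a minimal, illustrative sketch with `llama-cpp-python` (the choice of the Q4_K_M quant, the context size, and the sampling settings are arbitrary examples, not recommendations from the card):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/dmis-lab_-_llama-3-meerkat-8b-v1.0-gguf",
    filename="llama-3-meerkat-8b-v1.0.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are common causes of chest pain?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```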
vjosap/moralBERT-predict-degradation-in-text
vjosap
2024-08-22T13:45:22Z
228,544
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:44:39Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
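Since the card does not yet include loading code, below is a minimal sketch of the `PyTorchModelHubMixin` pattern referenced above. The `MoralScorer` class and its layer sizes are hypothetical placeholders, not the author's actual architecture; the real class definition is required for `from_pretrained` to restore the published weights.

```python
# Hedged sketch of the PyTorchModelHubMixin workflow mentioned above.
# `MoralScorer` is a hypothetical placeholder, NOT the author's real model class;
# from_pretrained only succeeds when the class matches the published weights.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MoralScorer(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 768, num_labels: int = 1):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_labels)  # placeholder classification head

    def forward(self, features):
        return self.head(features)

# Classes that inherit the mixin gain save_pretrained / push_to_hub / from_pretrained.
# With the author's real class definition, loading this repo would look like:
# model = MoralScorer.from_pretrained("vjosap/moralBERT-predict-degradation-in-text")
```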
vjosap/moralBERT-predict-purity-in-text
vjosap
2024-08-22T13:43:42Z
228,605
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:43:03Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
vjosap/moralBERT-predict-subversion-in-text
vjosap
2024-08-22T13:40:58Z
228,631
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:40:30Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
vjosap/moralBERT-predict-authority-in-text
vjosap
2024-08-22T13:39:48Z
228,640
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:38:59Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
vjosap/moralBERT-predict-loyalty-in-text
vjosap
2024-08-22T13:36:03Z
228,602
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:35:18Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
vjosap/moralBERT-predict-cheating-in-text
vjosap
2024-08-22T13:34:09Z
228,590
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:33:16Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
vjosap/moralBERT-predict-fairness-in-text
vjosap
2024-08-22T13:31:54Z
228,656
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:30:59Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
DW-ReCo/spot_llama3_training_ds_v15_merged_16bit
DW-ReCo
2024-08-22T13:29:55Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T13:26:33Z
--- base_model: spot_llama3_training_ds_v15_4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** DW-ReCo - **License:** apache-2.0 - **Finetuned from model :** spot_llama3_training_ds_v15_4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
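Since the card does not include an inference snippet, here is a minimal loading sketch, assuming the merged 16-bit weights load like any standard Llama checkpoint through `transformers`; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DW-ReCo/spot_llama3_training_ds_v15_merged_16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; adjust to your own use case.
inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```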
vjosap/moralBERT-predict-care-in-text
vjosap
2024-08-22T13:26:47Z
229,256
1
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:mit", "region:us" ]
text-classification
2024-08-22T13:25:58Z
--- license: mit pipeline_tag: text-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: your-repo-url - Docs: [More Information Needed]
YvanCarre/test
YvanCarre
2024-08-22T13:26:47Z
6
0
null
[ "pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-08-20T13:52:58Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
llmware/bling-phi-2-gguf
llmware
2024-08-22T13:20:57Z
10
1
transformers
[ "transformers", "gguf", "phi-2", "license:apache-2.0", "region:us" ]
null
2024-03-02T10:05:52Z
--- license: apache-2.0 inference: false --- # BLING-PHI-2-GGUF <!-- Provide a quick summary of what the model is/does. --> **bling-phi-2-gguf** is part of the BLING model series, RAG-instruct trained on top of a Microsoft Phi-2B base model. BLING models are fine-tuned with high-quality custom instruct datasets, designed for rapid prototyping in RAG scenarios. For other similar models with comparable size and performance in RAG deployments, please see: [**bling-phi-3-gguf**](https://huggingface.co/llmware/bling-phi-3-gguf) [**bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0) [**bling-sheared-llama-2.7b-0.1**](https://huggingface.co/llmware/bling-sheared-llama-2.7b-0.1) [**bling-red-pajamas-3b-0.1**](https://huggingface.co/llmware/bling-red-pajamas-3b-0.1) ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **93.0** correct out of 100 --Not Found Classification: 95.0% --Boolean: 85.0% --Math/Logic: 82.5% --Complex Questions (1-5): 3 (Above Average - multiple-choice, causal) --Summarization Quality (1-5): 3 (Above Average) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Phi-2B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Microsoft Phi-2B-Base ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The intended use of BLING models is two-fold: 1. Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow. 2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ range, and purposefully rolled-out across multiple base models to provide choices and "drop-in" replacements for RAG specific use cases. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. 
## How to Get Started with the Model

To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/bling-phi-2-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog
    model = ModelCatalog().load_model("bling-phi-2-gguf")
    response = model.inference(query, add_context=text_sample)

Note: please review [**config.json**](https://huggingface.co/llmware/bling-phi-2-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.

The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

1. Text Passage Context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}

## Model Card Contact

Darren Oberst & llmware team
digitalbrain79/dreamshaper-xl-lightning-4step-coreml-6bits-compiled
digitalbrain79
2024-08-22T13:17:29Z
50
2
diffusers
[ "diffusers", "sdxl", "coreml", "stablediffusion", "dreamshaper", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T11:19:16Z
--- license: openrail++ tags: - sdxl - coreml - stablediffusion - dreamshaper library_name: diffusers --- Converted [DreamShaper XL](https://civitai.com/models/112902/dreamshaper-xl?modelVersionId=354657) to CoreML. ## Usage [https://github.com/digitalbrain79/diffusers_coreml.git](https://github.com/digitalbrain79/diffusers_coreml.git)
BigHuggyD/anthracite-org_magnum-v2-123b_exl2_6.0bpw_h6
BigHuggyD
2024-08-22T13:16:53Z
5
0
null
[ "safetensors", "mistral", "chat", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "license:other", "6-bit", "exl2", "region:us" ]
text-generation
2024-08-22T12:01:15Z
--- license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md language: - en - fr - de - es - it - pt - ru - zh - ja pipeline_tag: text-generation tags: - chat --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hkPzhL-xYPeGGKCyAf3Qd.png) This is the sixth in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407). ## Prompting Model has been Instruct tuned with the Mistral formatting. A typical input would look like this: ```py <s>[INST] SYSTEM MESSAGE\nUSER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST] ``` We also provide SillyTavern presets for [Context](https://huggingface.co/anthracite-org/Magnum-123b-v1/resolve/main/Magnum-Mistral-Context.json) and [Instruct](https://huggingface.co/anthracite-org/Magnum-123b-v1/raw/main/Magnum-Mistral-Instruct.json) respectively. The Mistral preset included in SillyTavern seems to be misconfigured by default, so we recommend using these as a replacement. ## Credits - [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered) - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) This model has been a team effort, and the credits goes to all members of Anthracite. ## Training The training was done for 1.5 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. In addition to this, we noticed that Mistral Large models seemed much more sensitive to learning rate adjustments than other models: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/xCK3ISKF6pWcMyO7MEzTA.png) We hypothesize this is primarily due to the particularly narrow and low variance weight distributions typical of Mistral derived models regardless of their scale. In the end, due to the costs that would be involved in training another full 2 epochs run ($600) on an even lower rate, we settled on our third attempt: 2e-6 with an effective batch size of 64. We chose to publish the 1.5 epoch run after manually testing and comparing it. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/d9_cBy-DuWrdnoVBbAvRV.png) Also, we notice a correlation between the significance of the 2nd epoch loss drop and the strength of the learning rate, implying 4e-6 leads to more catastrophic forgetting. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
vanninh2101/bert_job_recommendation_model
vanninh2101
2024-08-22T13:15:55Z
113
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T13:15:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hhyun/gemma-9b-it-v1
hhyun
2024-08-22T13:11:41Z
5
0
null
[ "tensorboard", "safetensors", "gemma2", "llama-factory", "full", "generated_from_trainer", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "license:other", "region:us" ]
null
2024-08-22T13:08:34Z
--- license: other base_model: google/gemma-2-9b-it tags: - llama-factory - full - generated_from_trainer model-index: - name: sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the tax_qna_data_income_only dataset. It achieves the following results on the evaluation set: - Loss: 1.1982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.0 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.21.0 - Tokenizers 0.19.1
Aarushhh/SmolLM-360M-xlam-60k-merged-fp16
Aarushhh
2024-08-22T13:09:43Z
127
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:Salesforce/xlam-function-calling-60k", "base_model:HuggingFaceTB/SmolLM-360M", "base_model:finetune:HuggingFaceTB/SmolLM-360M", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-08-21T14:12:21Z
--- base_model: HuggingFaceTB/SmolLM-360M language: - en license: cc-by-sa-4.0 tags: - text-generation-inference - transformers - unsloth - llama - trl datasets: - Salesforce/xlam-function-calling-60k --- # FP16 merged version of Smollm for function calling FP16 merged version of [Smollm 360M finetuned with xlam 60k](Aarushhh/SmolLM-360M-xlam-60k) ## training data(example) ``` <|im_start|>system You are a helpful function calling assistant<|im_end|> <|im_start|>user Retrieve reviews for product ID '54321' in the UK with a language preference of 'fr' and starting from the 100th review. Available functions: <|tool_start|>{'name': 'loginuser', 'description': 'Logs a user into the system using the provided username and password.', 'parameters': {'username': {'description': "The user's username for login.", 'type': 'str', 'default': 'string'}, 'password': {'description': "The user's password for login in clear text.", 'type': 'str', 'default': 'string'}}}<|tool_end|><|tool_start|>{'name': 'product_reviews', 'description': 'Fetch product reviews from the Real-Time Product Search API, supporting infinite pagination and filtering options.', 'parameters': {'product_id': {'description': 'The product ID for which reviews are to be fetched.', 'type': 'str', 'default': '11577822456427762145'}, 'country': {'description': "ISO 3166-1 alpha-2 country code. Default is 'us'.", 'type': 'str, optional', 'default': 'us'}, 'language': {'description': "ISO 639-1 language code. Default is 'en'.", 'type': 'str, optional', 'default': 'en'}, 'offset': {'description': 'Number of reviews to skip. Valid values are integers from 0 to 30000. Default is None.', 'type': 'str, optional', 'default': ''}, 'rating': {'description': 'Minimum user rating of the reviews to be fetched. Valid values are 1-5. Default is None.', 'type': 'str, optional', 'default': ''}, 'limit': {'description': 'Maximum number of reviews to return. Valid values are integers from 0 to 100. Default is None.', 'type': 'str, optional', 'default': ''}}}<|tool_end|><|tool_start|>{'name': 'product_search', 'description': 'Search for products in a store based on a keyword.', 'parameters': {'store_id': {'description': 'The ID of the store to search in.', 'type': 'str', 'default': '1122'}, 'keyword': {'description': 'The keyword to search for products.', 'type': 'str', 'default': 'womens shoes'}, 'offset': {'description': "The starting point for the search results. Defaults to '0'.", 'type': 'str, optional', 'default': '0'}, 'count': {'description': "The maximum number of products to return. 
Defaults to '25'.", 'type': 'str, optional', 'default': '25'}}}<|tool_end|><|tool_start|>{'name': 'main_endpoint', 'description': 'Fetches product information from the Amazon Pricing and Product Info API using the given ASIN and domain.', 'parameters': {'asin': {'description': 'The Amazon Standard Identification Number of the product.', 'type': 'str', 'default': 'B07GR5MSKD'}, 'domain': {'description': "The domain from which to fetch the product information (e.g., 'com', 'co.uk').", 'type': 'str', 'default': 'de'}}}<|tool_end|><|tool_start|>{'name': 'product_details', 'description': 'Fetch product details from the given URL using the Axesso Kaufland Data Service API.', 'parameters': {'url': {'description': 'The URL of the product to look up.', 'type': 'str', 'default': 'https://www.kaufland.de/product/349303242/'}}}<|tool_end|><|im_end|> <|im_start|>assistant <|function_start|>{'name': 'product_reviews', 'arguments': {'product_id': '54321', 'country': 'uk', 'language': 'fr', 'offset': '100'}}<|function_end|><|im_end|> ``` ## Usage Use like any other Llama model ## License [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
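Expanding on the Usage section above, here is a minimal sketch assuming standard `transformers` loading for this Llama-architecture checkpoint; the system message and the single tool definition are illustrative and only mirror the prompt layout shown in the training data example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aarushhh/SmolLM-360M-xlam-60k-merged-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt layout mirrors the training example above; the tool JSON is illustrative.
prompt = (
    "<|im_start|>system\nYou are a helpful function calling assistant<|im_end|>\n"
    "<|im_start|>user\nWhat is the weather in Paris? Available functions: "
    "<|tool_start|>{'name': 'get_weather', 'description': 'Get current weather for a city.', "
    "'parameters': {'city': {'description': 'City name.', 'type': 'str'}}}<|tool_end|><|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the expected function call).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=False))
```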
zzzzz24/weejkDNqw
zzzzz24
2024-08-22T13:08:19Z
29
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T13:05:20Z
--- license: creativeml-openrail-m ---
RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf
RichardErkhov
2024-08-22T12:56:44Z
28
0
null
[ "gguf", "arxiv:2405.00675", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T10:57:55Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Gemma-2-9B-It-SPPO-Iter2 - GGUF - Model creator: https://huggingface.co/UCLA-AGI/ - Original model: https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Gemma-2-9B-It-SPPO-Iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q2_K.gguf) | Q2_K | 3.54GB | | [Gemma-2-9B-It-SPPO-Iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [Gemma-2-9B-It-SPPO-Iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.IQ3_S.gguf) | IQ3_S | 4.04GB | | [Gemma-2-9B-It-SPPO-Iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [Gemma-2-9B-It-SPPO-Iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.IQ3_M.gguf) | IQ3_M | 4.19GB | | [Gemma-2-9B-It-SPPO-Iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q3_K.gguf) | Q3_K | 4.43GB | | [Gemma-2-9B-It-SPPO-Iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [Gemma-2-9B-It-SPPO-Iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [Gemma-2-9B-It-SPPO-Iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | [Gemma-2-9B-It-SPPO-Iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q4_0.gguf) | Q4_0 | 5.07GB | | [Gemma-2-9B-It-SPPO-Iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [Gemma-2-9B-It-SPPO-Iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [Gemma-2-9B-It-SPPO-Iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q4_K.gguf) | Q4_K | 5.37GB | | [Gemma-2-9B-It-SPPO-Iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [Gemma-2-9B-It-SPPO-Iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q4_1.gguf) | Q4_1 | 5.55GB | | [Gemma-2-9B-It-SPPO-Iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q5_0.gguf) | Q5_0 | 6.04GB | | [Gemma-2-9B-It-SPPO-Iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | 
[Gemma-2-9B-It-SPPO-Iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q5_K.gguf) | Q5_K | 6.19GB | | [Gemma-2-9B-It-SPPO-Iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | [Gemma-2-9B-It-SPPO-Iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q5_1.gguf) | Q5_1 | 6.52GB | | [Gemma-2-9B-It-SPPO-Iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q6_K.gguf) | Q6_K | 7.07GB | | [Gemma-2-9B-It-SPPO-Iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Gemma-2-9B-It-SPPO-Iter2-gguf/blob/main/Gemma-2-9B-It-SPPO-Iter2.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- license: gemma datasets: - openbmb/UltraFeedback language: - en pipeline_tag: text-generation --- Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675) # Gemma-2-9B-It-SPPO-Iter2 This model was developed using [Self-Play Preference Optimization](https://arxiv.org/abs/2405.00675) at iteration 2, based on the [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) architecture as starting point. We utilized the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, splited to 3 parts for 3 iterations by [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic. **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it) ## Links to Other Models - [Gemma-2-9B-It-SPPO-Iter1](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter1) - [Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) - [Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) ### Model Description - Model type: A 8B parameter GPT-like model fine-tuned on synthetic datasets. - Language(s) (NLP): Primarily English - License: Apache-2.0 - Finetuned from model: google/gemma-2-9b-it ## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/) | Model | LC. Win Rate | Win Rate | Avg. Length | |-------------------------------------------|:------------:|:--------:|:-----------:| |[Llama-3-8B-SPPO Iter1](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter1) |48.70 |40.76 | 1669 |[Llama-3-8B-SPPO Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) |50.93 | 44.64 | 1759 |[Llama-3-8B-SPPO Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) |**53.27** |**47.74** | 1803 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - eta: 1000 - per_device_train_batch_size: 8 - gradient_accumulation_steps: 1 - seed: 42 - distributed_type: deepspeed_zero3 - num_devices: 8 - optimizer: RMSProp - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_train_epochs: 1.0 ## Citation ``` @misc{wu2024self, title={Self-Play Preference Optimization for Language Model Alignment}, author={Wu, Yue and Sun, Zhiqing and Yuan, Huizhuo and Ji, Kaixuan and Yang, Yiming and Gu, Quanquan}, year={2024}, eprint={2405.00675}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
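Since the files above are plain GGUF quantizations, here is a minimal local-inference sketch with `llama-cpp-python`; the chosen file name, context size, and sampling settings are assumptions, and any quant from the table can be substituted.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded locally.
llm = Llama(model_path="Gemma-2-9B-It-SPPO-Iter2.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize self-play preference optimization in one sentence."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```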
Balab2021/bbphi35ftv1
Balab2021
2024-08-22T12:53:43Z
120
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T12:51:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
suko/Meta-Llama-3-8B-CHT
suko
2024-08-22T12:48:33Z
18
0
transformers
[ "transformers", "gguf", "llama", "llama-3", "text-generation", "en", "zh", "dataset:erhwenkuo/alpaca-data-gpt4-chinese-zhtw", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T18:21:54Z
---
language:
- en
- zh
license: llama3
library_name: transformers
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- erhwenkuo/alpaca-data-gpt4-chinese-zhtw
pipeline_tag: text-generation
tags:
- llama-3
prompt_template: >-
  {{ if .System }}<|start_header_id|>system<|end_header_id|>
  {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
  {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
  {{ .Response }}<|eot_id|>
---

# LLAMA 3 8B capable of outputting Traditional Chinese

## ✨ Recommend using LMStudio for this model

I tried using Ollama to run it, but it became quite delulu. So for now, I'm sticking with LMStudio :) The performance isn't actually that great, but it's capable of answering some basic questions. Sometimes it just acts really dumb though :(

> LLAMA 3.1 can actually output Chinese pretty well, so this repo can be ignored.
rytus/my_awesome_food_model
rytus
2024-08-22T12:44:57Z
31
0
null
[ "tensorboard", "safetensors", "vit", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
null
2024-08-22T12:13:10Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_food_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6077 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.702 | 0.992 | 62 | 2.5162 | 0.839 | | 1.8216 | 2.0 | 125 | 1.7616 | 0.893 | | 1.5969 | 2.976 | 186 | 1.6077 | 0.9 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
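The card does not yet include an inference snippet; below is a minimal sketch using the `transformers` image-classification pipeline (the image path is illustrative).

```python
from transformers import pipeline

# Loads the fine-tuned ViT checkpoint and classifies a sample image.
classifier = pipeline("image-classification", model="rytus/my_awesome_food_model")
predictions = classifier("path/to/food_photo.jpg")  # illustrative local image path
print(predictions)
```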
Toridori/videomae-base-finetuned-ucf101-subset
Toridori
2024-08-22T12:38:37Z
5
0
null
[ "tensorboard", "safetensors", "videomae", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "region:us" ]
null
2024-08-22T12:23:23Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3756 - Accuracy: 0.8710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 148 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.1403 | 0.2568 | 38 | 1.7316 | 0.5143 | | 0.8363 | 1.2568 | 76 | 0.8783 | 0.7143 | | 0.3773 | 2.2568 | 114 | 0.4006 | 0.8143 | | 0.2545 | 3.2297 | 148 | 0.3346 | 0.8714 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
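No usage example is provided in the card; the sketch below assumes the `transformers` video-classification pipeline, which also needs a video decoding backend (e.g. `decord` or `av`) installed, and an illustrative clip path.

```python
from transformers import pipeline

# Classifies a short clip with the fine-tuned VideoMAE checkpoint.
video_classifier = pipeline("video-classification", model="Toridori/videomae-base-finetuned-ucf101-subset")
print(video_classifier("path/to/sample_clip.mp4"))  # illustrative local video path
```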
RefalMachine/ruadapt_llama3_8b_instruct_extended_lep_ft
RefalMachine
2024-08-22T12:38:34Z
479
3
null
[ "safetensors", "llama", "ru", "en", "dataset:IlyaGusev/saiga_scored", "base_model:RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_full_lr2e4_bs256", "base_model:finetune:RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_full_lr2e4_bs256", "region:us" ]
null
2024-08-22T09:56:05Z
---
base_model: RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_full_lr2e4_bs256
datasets:
- IlyaGusev/saiga_scored
language:
- ru
- en
---

# Model description

LoRA-tuned version of ruadapt llama 3 8B with an extended tokenizer, obtained after the LEP (Learned Embedding Propagation; paper coming soon) procedure and fine-tuned on the saiga_scored d7 dataset. Thanks to the extended tokenizer, the model works more efficiently with the Russian language.

# How to cite:

Tikhomirov M., Chernyshev D. Facilitating large language model Russian adaptation with Learned Embedding Propagation // 2024 (coming soon)

Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // 2023 Ivannikov Ispras Open Conference (ISPRAS). – IEEE, 2023. – pp. 163-168.
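A minimal usage sketch (not part of the original card), assuming standard `transformers` loading and that the repository's tokenizer ships a chat template; the Russian prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RefalMachine/ruadapt_llama3_8b_instruct_extended_lep_ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# "Hi! Briefly tell me about yourself."
messages = [{"role": "user", "content": "Привет! Кратко расскажи о себе."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```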
pixologyds/xmado
pixologyds
2024-08-22T12:36:23Z
14
0
diffusers
[ "diffusers", "flux", "lora", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-20T09:36:38Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image instance_prompt: xkang --- # Xmado Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `xkang` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('pixologyds/xmado', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf
RichardErkhov
2024-08-22T12:25:46Z
36
0
null
[ "gguf", "arxiv:2406.08464", "arxiv:2405.14734", "arxiv:2310.01377", "arxiv:2406.12845", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T10:42:22Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Magpie-Align-v0.1 - GGUF - Model creator: https://huggingface.co/Magpie-Align/ - Original model: https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Align-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Magpie-Align-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Magpie-Align-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Magpie-Align-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Magpie-Align-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Magpie-Align-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Magpie-Align-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Magpie-Align-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Magpie-Align-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Magpie-Align-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Magpie-Align-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Magpie-Align-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Magpie-Align-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Magpie-Align-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-8B-Magpie-Align-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Magpie-Align-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q4_1.gguf) | Q4_1 | 4.78GB | | 
[Llama-3-8B-Magpie-Align-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3-8B-Magpie-Align-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Magpie-Align-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Magpie-Align-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Magpie-Align-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Magpie-Align-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-8B-Magpie-Align-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Magpie-Align_-_Llama-3-8B-Magpie-Align-v0.1-gguf/blob/main/Llama-3-8B-Magpie-Align-v0.1.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: llama3 base_model: Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.1 tags: - alignment-handbook - axolotl - trl - dpo - sft - generated_from_trainer datasets: - princeton-nlp/llama3-ultrafeedback - Magpie-Align/Magpie-Pro-MT-300K-v0.1 model-index: - name: Llama-3-8B-Magpie-Align-v0.1 results: [] language: - en --- [![Magpie](magpie_logo.png)](https://huggingface.co/spaces/flydust/Chat-with-Magpie) ## 🔥 Chat with Magpie [Here](https://huggingface.co/spaces/flydust/Chat-with-Magpie)! # 🐦 Llama-3-8B-Magpie-Align-v0.1 Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/) Online Model Demo: [https://huggingface.co/spaces/flydust/Chat-with-Magpie](https://huggingface.co/spaces/flydust/Chat-with-Magpie) Arxiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464) Codes: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie) ## Model Overview This model is an aligned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). We apply the following pipeline: - We first use [Magpie-Align/Magpie-Pro-MT-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) dataset and perform SFT -> [Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.1](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.1) - We then perform DPO on the [princeton-nlp/llama3-ultrafeedback](https://huggingface.co/datasets/princeton-nlp/llama3-ultrafeedback) dataset. The overall performance is even better than the official Llama-3-8B-Instruct Model! - **Alpaca Eval 2 (vs GPT-4-Turbo-1106): 38.52 (LC), 38.47 (WR)** - **Alpaca Eval 2 (vs Llama-3-8B-Instruct): 69.37 (LC), 70.05 (WR)** - **Arena Hard: 32.4** - **WildBench: 39.3 ((was) Best <30B Model! 🏆)** - **Zero-Eval GSM: 54.62** ## Model Performance We compare our Llama-3-8B-Magpie-Align with official and other **open-aligned LLMs** that have been fine-tuned from base models and have publicly released their training datasets. 
The results are as follows: ``` +---------------------------------------------+--------------------+--------------------+-----------------------+------------+ | Aligned Model ID | MT-Bench | Alpaca Eval 2 | Alpaca Eval 2 | Arena Hard | | | | (GPT-4-Turbo-1106) | (Llama-3-8B-Instruct) | | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | | R1 | R2 | AVG | LC WR | WR | LC WR | WR | Score | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | meta-llama/Meta-Llama-3-8B-Instruct | 8.31 | 7.65 | 7.98 | 22.92 | 22.57 | 50 | 50 | 20.6 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | princeton-nlp/Llama-3-Base-8B-SFT-DPO | 8.12 | 7.23 | 7.67 | 17.71 | 15.34 | 43.73 | 38.80 | 14.8 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | NousResearch/Hermes-2-Pro-Llama-3-8B | 8.05 | 7.35 | 7.70 | 15.60 | 12.86 | 36.37 | 30.52 | 11.5 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | allenai/llama-3-tulu-2-dpo-8b | 7.71 | 7.15 | 7.43 | 14.89 | 14.80 | 35.43 | 35.42 | 11.7 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | cognitivecomputations/dolphin-2.9-llama3-8b | 7.97 | 6.98 | 7.47 | 12.50 | 8.79 | 32.67 | 22.80 | 8.2 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | openchat/openchat-3.6-8b-20240522 | 7.83 | 7.23 | 7.53 | 17.70 | 12.53 | 41.30 | 30.79 | 6.7 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | Magpie-Align/Llama-3-8B-Magpie-Align-v0.1 | 8.01 | 7.63 | 7.82 | 38.52 | 38.47 | 69.37 | 70.05 | 32.4 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ | Magpie-Align/Llama-3-8B-Magpie-Align-v0.2 | 7.81 | 7.64 | 7.73 | 49.86 | 51.98 | 75.17 | 78.20 | 37.5 | +---------------------------------------------+------+------+------+----------+---------+-----------+-----------+------------+ ``` ## 👀 Other Information **License**: Please follow [Meta Llama 3 Community License](https://llama.meta.com/llama3/license). **Conversation Template**: Please use Llama 3 **official chat template** for the best performance. **How to use it?** Please check the official [Llama 3 repository](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#how-to-use) for detailed instructions. Simply replace the original `model_id` with `Magpie-Align/Llama-3-8B-Magpie-Align-v0.1`. The detailed training pipeline is as follows. ## Stage 1: Supervised Fine-tuning We use [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for SFT. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8807 | 0.0007 | 1 | 0.9001 | | 0.5113 | 0.3337 | 464 | 0.5178 | | 0.4668 | 0.6673 | 928 | 0.4792 | | 0.4492 | 1.0010 | 1392 | 0.4582 | | 0.3498 | 1.3205 | 1856 | 0.4575 | | 0.3525 | 1.6542 | 2320 | 0.4555 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: Magpie-Align/Magpie-Pro-MT-300K-v0.1 type: sharegpt conversation: llama3 dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./out_Llama-3-8B-Magpie-Pro-300K-MT sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 3 eval_table_size: saves_per_epoch: 3 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><be> ## Stage 2: Direct Preference Optimization We use [alignment handbook](https://github.com/huggingface/alignment-handbook) for DPO. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.628 | 0.2138 | 100 | 0.6641 | -0.8806 | -1.0146 | 0.6240 | 0.1340 | -362.7133 | -343.6060 | -0.7539 | -0.7528 | | 0.6935 | 0.4275 | 200 | 0.6352 | -1.3660 | -1.6311 | 0.6545 | 0.2651 | -424.3628 | -392.1437 | -0.6649 | -0.6629 | | 0.6376 | 0.6413 | 300 | 0.6178 | -1.3533 | -1.6413 | 0.6748 | 0.2880 | -425.3859 | -390.8818 | -0.6753 | -0.6758 | | 0.5888 | 0.8550 | 400 | 0.6088 | -1.6321 | -1.9785 | 0.6829 | 0.3464 | -459.1051 | -418.7560 | -0.6440 | -0.6435 | It achieves the following results on the evaluation set: - Loss: 0.6084 - Rewards/chosen: -1.6265 - Rewards/rejected: -1.9735 - Rewards/accuracies: 0.6809 - Rewards/margins: 0.3470 - Logps/rejected: -458.6070 - Logps/chosen: -418.2021 - Logits/rejected: -0.6447 - Logits/chosen: -0.6439 ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1 <details><summary>See alignment handbook config</summary> ```yaml # Model arguments model_name_or_path: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1 torch_dtype: null # Data training arguments # For definitions, see: src/h4/training/config.py dataset_mixer: princeton-nlp/llama3-ultrafeedback: 1.0 dataset_splits: - train - test preprocessing_num_workers: 12 # DPOTrainer arguments bf16: true beta: 0.01 do_eval: true evaluation_strategy: steps eval_steps: 100 gradient_accumulation_steps: 16 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False hub_model_id: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO2 learning_rate: 1.0e-6 log_level: info logging_steps: 1 lr_scheduler_type: cosine max_length: 2048 max_prompt_length: 1800 num_train_epochs: 1 optim: adamw_torch output_dir: data/magpie-pro-mt-ultradpo-1e-6 per_device_train_batch_size: 2 per_device_eval_batch_size: 4 push_to_hub: true save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1 ``` </details><be> ## Downstream Performance | Datasets | Llama-3-8B-Magpie-Align-v0.1 | | :--- | :---: | | MMLU (5) | 64.61 | | ARC (25) | 62.03 | | HellaSwag (25) | 82.10 | | TruthfulQA (0) | 58.26 | | Winogrande (5) | 73.01 | ## Paper Abstract <details><summary>Click Here</summary> High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. 
Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench. </details><be> ## 📚 Citation If you find the model, data, or code useful, please cite our paper: ``` @article{xu2024magpie, title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing}, author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin}, year={2024}, eprint={2406.08464}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Please also cite the creators of preference datasets: SimPO paper: ``` @article{meng2024simpo, title={{SimPO}: Simple preference optimization with a reference-free reward}, author={Meng, Yu and Xia, Mengzhou and Chen, Danqi}, journal={arXiv preprint arXiv:2405.14734}, year={2024} } ``` UltraFeedback paper: ``` @article{cui2023ultrafeedback, title={{UltraFeedback}: Boosting language models with high-quality feedback}, author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong}, journal={arXiv preprint arXiv:2310.01377}, year={2023} } ``` ArmoRM paper: ``` @article{wang2024interpretable, title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts}, author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong}, journal={arXiv preprint arXiv:2406.12845}, year={2024} } ``` **Questions?** Please contact [Zhangchen](https://zhangchenxu.com/) by email.
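## Quick Usage Sketch

As a complement to the "How to use it?" note above, the snippet below is a minimal, unofficial sketch of chat-style inference with 🤗 Transformers. Only the model id and the use of the built-in Llama 3 chat template come from this card; the prompt, dtype, and sampling settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-Magpie-Align-v0.1"

# The tokenizer ships with the official Llama 3 chat template recommended above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the prompt with the chat template (the example question is an arbitrary placeholder).
messages = [{"role": "user", "content": "Give me three tips for writing clear commit messages."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generation settings are example values, not recommendations from the authors.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```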
RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf
RichardErkhov
2024-08-22T12:19:05Z
271
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T10:32:06Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3-8b-gpt-4o-ru1.0 - GGUF - Model creator: https://huggingface.co/ruslandev/ - Original model: https://huggingface.co/ruslandev/llama-3-8b-gpt-4o-ru1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3-8b-gpt-4o-ru1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q2_K.gguf) | Q2_K | 2.96GB | | [llama-3-8b-gpt-4o-ru1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama-3-8b-gpt-4o-ru1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama-3-8b-gpt-4o-ru1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama-3-8b-gpt-4o-ru1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama-3-8b-gpt-4o-ru1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q3_K.gguf) | Q3_K | 3.74GB | | [llama-3-8b-gpt-4o-ru1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama-3-8b-gpt-4o-ru1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama-3-8b-gpt-4o-ru1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama-3-8b-gpt-4o-ru1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama-3-8b-gpt-4o-ru1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama-3-8b-gpt-4o-ru1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama-3-8b-gpt-4o-ru1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q4_K.gguf) | Q4_K | 4.58GB | | [llama-3-8b-gpt-4o-ru1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama-3-8b-gpt-4o-ru1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama-3-8b-gpt-4o-ru1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama-3-8b-gpt-4o-ru1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | 
[llama-3-8b-gpt-4o-ru1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q5_K.gguf) | Q5_K | 5.34GB | | [llama-3-8b-gpt-4o-ru1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama-3-8b-gpt-4o-ru1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama-3-8b-gpt-4o-ru1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q6_K.gguf) | Q6_K | 6.14GB | | [llama-3-8b-gpt-4o-ru1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf/blob/main/llama-3-8b-gpt-4o-ru1.0.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - generated_from_trainer model-index: - name: >- home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru results: [] datasets: - ruslandev/tagengo-rus-gpt-4o --- # Llama-3 8B GPT-4o-RU1.0 [[Dataset]](https://huggingface.co/datasets/ruslandev/tagengo-rus-gpt-4o) This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). The idea behind this model is to train on a dataset derived from a smaller subset of the [tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4), but with improved data quality. I tried to achieve higher data quality by prompting GPT-4o, the latest OpenAI's LLM with better multilingual capabilities. The training objective is primarily focused on the Russian language (80% of the training examples). After training for 1 epoch on 2 NVIDIA A100 the model shows promising results on the MT-Bench evaluation benchmark, surpassing GPT-3.5-turbo and being on par with [Suzume](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) in Russian language scores, even though the latter is trained on 8x bigger and more diverse dataset. ## How to use The easiest way to use this model on your own computer is to use the GGUF version of this model ([ruslandev/llama-3-8b-gpt-4o-ru1.0-gguf](https://huggingface.co/ruslandev/llama-3-8b-gpt-4o-ru1.0-gguf)) using a program such as [llama.cpp](https://github.com/ggerganov/llama.cpp). If you want to use this model directly with the Huggingface Transformers stack, I recommend using my framework [gptchain](https://github.com/RuslanPeresy/gptchain). 
``` git clone https://github.com/RuslanPeresy/gptchain.git cd gptchain pip install -r requirements-train.txt python gptchain.py chat -m ruslandev/llama-3-8b-gpt-4o-ru1.0 \ --chatml true \ -q '[{"from": "human", "value": "Из чего состоит нейронная сеть?"}]' ``` ## Evaluation scores I achieved the following scores on Ru/En MT-Bench: | |meta-llama/Meta-Llama-3-8B-Instruct | ruslandev/llama-3-8b-gpt-4o-ru1.0 | lightblue/suzume-llama-3-8B-multilingual | Nexusflow/Starling-LM-7B-beta | gpt-3.5-turbo | |:----------:|:----------------------------------:|:---------------------------------:|:----------------------------------------:|:-----------------------------:|:-------------:| | Russian 🇷🇺 | NaN | 8.12 | 8.19 | 8.06 | 7.94 | | English 🇺🇸 | 7.98 | 8.01 | 7.73 | 7.92 | 8.26 | ## Training procedure [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false datasets: - path: ruslandev/tagengo-rus-gpt-4o type: sharegpt conversation: llama-3 dataset_prepared_path: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/prepared_tagengo_rus val_set_size: 0.01 output_dir: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false use_wandb: false #wandb_project: axolotl #wandb_entity: wandb_entity #wandb_name: llama_3_8b_gpt_4o_ru gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 5 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /home/ubuntu/axolotl/deepspeed_configs/zero2.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1347 | 0.016 | 1 | 1.1086 | | 0.916 | 0.208 | 13 | 0.8883 | | 0.8494 | 0.416 | 26 | 0.8072 | | 0.8657 | 0.624 | 39 | 0.7814 | | 0.8077 | 0.832 | 52 | 0.7702 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
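## GGUF quick start (sketch)

Since the notes above point to llama.cpp for the GGUF files in this repository, here is a rough, untested sketch using the `llama-cpp-python` bindings. The repository id and the `Q4_K_M` filename come from the table above; the API calls and settings assume a recent `llama-cpp-python` release and are not instructions from the original model author.

```python
from llama_cpp import Llama

# Download one of the quantised files listed above directly from this repo.
# Q4_K_M is a common size/quality trade-off; any filename from the table should work.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/ruslandev_-_llama-3-8b-gpt-4o-ru1.0-gguf",
    filename="llama-3-8b-gpt-4o-ru1.0.Q4_K_M.gguf",
    n_ctx=8192,  # matches the sequence_len used during fine-tuning (see config above)
)

# Same Russian question as in the gptchain example above.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Из чего состоит нейронная сеть?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```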
zwloong/sd3-lora-training-rank32
zwloong
2024-08-22T12:11:52Z
12
1
diffusers
[ "diffusers", "sd3", "sd3-diffusers", "text-to-image", "simpletuner", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:other", "region:us" ]
text-to-image
2024-08-22T11:32:00Z
--- license: other base_model: "stabilityai/stable-diffusion-3-medium-diffusers" tags: - sd3 - sd3-diffusers - text-to-image - diffusers - simpletuner - lora - template:sd-lora inference: true widget: - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_0_0.png - text: 'ethnographic photography of teddy bear at a picnic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_1_0.png --- # sd3-lora-training-rank32 This is a standard PEFT LoRA derived from [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers). The main validation prompt used during training was: ``` ethnographic photography of teddy bear at a picnic ``` ## Validation settings - CFG: `4.0` - CFG Rescale: `0.0` - Steps: `30` - Sampler: `None` - Seed: `42` - Resolution: `1024x1024` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. ## Training settings - Training epochs: 16 - Training steps: 600 - Learning rate: 8e-07 - Effective batch size: 2 - Micro-batch size: 1 - Gradient accumulation steps: 2 - Number of GPUs: 1 - Prediction type: flow-matching - Rescaled betas zero SNR: False - Optimizer: adamw_bf16 - Precision: bf16 - Quantised: No - Xformers: Not used - LoRA Rank: 32 - LoRA Alpha: None - LoRA Dropout: 0.1 - LoRA initialisation style: default ## Datasets ### Pal - Repeats: 0 - Total number of images: 73 - Total number of aspect buckets: 1 - Resolution: 1.048576 megapixels - Cropped: True - Crop style: center - Crop aspect: square ## Inference ```python import torch from diffusers import DiffusionPipeline model_id = 'stabilityai/stable-diffusion-3-medium-diffusers' adapter_id = 'zwloong/sd3-lora-training-rank32' pipeline = DiffusionPipeline.from_pretrained(model_id) pipeline.load_lora_weights(adapter_id) prompt = "ethnographic photography of teddy bear at a picnic" negative_prompt = 'blurry, cropped, ugly' pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') image = pipeline( prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=30, generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826), width=1024, height=1024, guidance_scale=4.0, ).images[0] image.save("output.png", format="PNG") ```
Illiyas2024/Florence-2-FT-DocVQA
Illiyas2024
2024-08-22T12:07:58Z
103
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-08-22T11:38:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LedyHussy/animals
LedyHussy
2024-08-22T12:04:46Z
6
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2024-08-22T12:04:30Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: animals results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9459459185600281 --- # animals Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cat ![cat](images/cat.jpg) #### cow ![cow](images/cow.jpg) #### dog ![dog](images/dog.jpg) #### horse ![horse](images/horse.jpg) #### lion ![lion](images/lion.jpg)
sezenkarakus/image-description-model-paligemma-v2
sezenkarakus
2024-08-22T12:01:37Z
64
0
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-08-22T11:58:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf
RichardErkhov
2024-08-22T11:59:10Z
132
0
null
[ "gguf", "arxiv:2405.04324", "endpoints_compatible", "region:us" ]
null
2024-08-22T10:10:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-8b-code-base-128k - GGUF - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-8b-code-base-128k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [granite-8b-code-base-128k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q2_K.gguf) | Q2_K | 2.85GB | | [granite-8b-code-base-128k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.IQ3_XS.gguf) | IQ3_XS | 3.15GB | | [granite-8b-code-base-128k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.IQ3_S.gguf) | IQ3_S | 3.32GB | | [granite-8b-code-base-128k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q3_K_S.gguf) | Q3_K_S | 3.3GB | | [granite-8b-code-base-128k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.IQ3_M.gguf) | IQ3_M | 3.43GB | | [granite-8b-code-base-128k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q3_K.gguf) | Q3_K | 3.67GB | | [granite-8b-code-base-128k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q3_K_M.gguf) | Q3_K_M | 3.67GB | | [granite-8b-code-base-128k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q3_K_L.gguf) | Q3_K_L | 3.99GB | | [granite-8b-code-base-128k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.IQ4_XS.gguf) | IQ4_XS | 4.1GB | | [granite-8b-code-base-128k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q4_0.gguf) | Q4_0 | 4.28GB | | [granite-8b-code-base-128k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.IQ4_NL.gguf) | IQ4_NL | 4.32GB | | [granite-8b-code-base-128k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q4_K_S.gguf) | Q4_K_S | 4.3GB | | [granite-8b-code-base-128k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q4_K.gguf) | Q4_K | 4.55GB | | [granite-8b-code-base-128k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q4_K_M.gguf) | Q4_K_M | 4.55GB | | [granite-8b-code-base-128k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q4_1.gguf) | Q4_1 | 4.73GB | | [granite-8b-code-base-128k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q5_0.gguf) | Q5_0 | 5.19GB | | 
[granite-8b-code-base-128k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q5_K_S.gguf) | Q5_K_S | 5.19GB | | [granite-8b-code-base-128k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q5_K.gguf) | Q5_K | 5.33GB | | [granite-8b-code-base-128k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q5_K_M.gguf) | Q5_K_M | 5.33GB | | [granite-8b-code-base-128k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q5_1.gguf) | Q5_1 | 5.65GB | | [granite-8b-code-base-128k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q6_K.gguf) | Q6_K | 6.16GB | | [granite-8b-code-base-128k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-base-128k-gguf/blob/main/granite-8b-code-base-128k.Q8_0.gguf) | Q8_0 | 7.98GB | Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 datasets: - codeparrot/github-code-clean - bigcode/starcoderdata # - Stackexchange # - CommonCrawl - open-web-math/open-web-math - math-ai/StackMathQA # - Arxiv # - Wikipedia # - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-8B-code-base-128k results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Python) metrics: - name: pass@1 type: pass@1 value: 43.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Average) metrics: - name: pass@1 type: pass@1 value: 40.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain (Average) metrics: - name: pass@1 type: pass@1 value: 28.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix (Average) metrics: - name: pass@1 type: pass@1 value: 25.2 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Python@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 48.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (C++@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 36.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Java@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 38.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (TypeScript@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 39.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Rust@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 29.0 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@4K type: Exact Match@4K value: 56.5 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@8K type: Exact Match@8K value: 60.1 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@16K type: Exact Match@16K value: 
51.8 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@32K type: Exact Match@32K value: 57.4 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@4K type: Exact Match@4K value: 42.7 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@8K type: Exact Match@8K value: 44.0 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@16K type: Exact Match@16K value: 44.8 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@32K type: Exact Match@32K value: 44.5 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # Granite-8B-Code-Base-128K ## Model Summary **Granite-8B-Code-Base-128K** extends the context length of Granite-8B-Code-Base from 4K to 128K with continual pretraining using the original training data but with repository-level file packing and per-language length upsampling, which we found to be critical for long-context pretraining. We adopted a progressive training strategy in which we doubled the context window until it reached the desired length of 128K by appropriately adjusting RoPE theta. We trained on 4B tokens total for all stages, which is only 0.1% of Granite-8B-Code-Base's original pre-training data. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2405.04324) - **Release Date**: July 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage ### Intended use Prominent enterprise use cases of LLMs in software engineering productivity, with 128K context length support, include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages. ### Generation This is a simple example of how to use the **Granite-8B-Code-Base-128K** model. 
```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm-granite/granite-8B-code-base-128K" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired input_text = "def generate():" # tokenize the text input_tokens = tokenizer(input_text, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` ## Training Data Starting from the base Granite model, this model was further pretrained on repository-level code data with per-language context-length oversampling, allowing it to effectively utilize up to 128K tokens of context. This continued training stage focused on a curated selection of programming languages, such as Python, C, C++, Go, Java, JavaScript, and TypeScript. ## Infrastructure We train the Granite Code models using two of IBM's supercomputing clusters, namely Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs. ## Ethical Considerations and Limitations The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-8B-Code-Base-128K** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-8B-Code-Base-128K** model with ethical intentions and in a responsible way. 
alicekyting/Qwen2-Audio-7B-Instruct-4bit
alicekyting
2024-08-22T11:58:43Z
873
3
transformers
[ "transformers", "safetensors", "qwen2_audio", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text2text-generation
2024-08-22T03:40:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID This model is a 4-bit quantized version of Qwen2-Audio-7B-Instruct. (https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed:** based on the original Qwen model by Alibaba Cloud - **Model type:** Audio-Text Multimodal Large Language Model ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The 4-bit quantization allows for reduced memory usage and potentially faster inference times, especially on hardware with limited resources. However, there might be a slight degradation in performance compared to the full-precision model. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> GPU is needed ## How to Get Started with the Model Refer to the Qwen2-Audio-7B-Instruct model page on Hugging Face for usage examples and code snippets. To use this model, you'll need to have the transformers library installed, along with bitsandbytes for 4-bit quantization support. Here's a basic example of how to load and use the model: ```python import torch from io import BytesIO from urllib.request import urlopen import librosa from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor, BitsAndBytesConfig processor = AutoProcessor.from_pretrained("alicekyting/Qwen2-Audio-7B-Instruct-4bit") bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16 ) model = Qwen2AudioForConditionalGeneration.from_pretrained( "alicekyting/Qwen2-Audio-7B-Instruct-4bit", device_map="auto", quantization_config=bnb_config ) conversation = [ {'role': 'system', 'content': 'You are a helpful assistant.'}, {"role": "user", "content": [ {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"}, {"type": "text", "text": "What's that sound?"}, ]}, {"role": "assistant", "content": "It is the sound of glass shattering."}, {"role": "user", "content": [ {"type": "text", "text": "What can you do when you hear that?"}, ]}, {"role": "assistant", "content": "Stay alert and cautious, and check if anyone is hurt or if there is any damage to property."}, {"role": "user", "content": [ {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"}, {"type": "text", "text": "What does the person say?"}, ]}, ] text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False) audios = [] for message in conversation: if isinstance(message["content"], list): for ele in message["content"]: if ele["type"] == "audio": audios.append( librosa.load( BytesIO(urlopen(ele['audio_url']).read()), sr=processor.feature_extractor.sampling_rate, mono=True )[0] ) inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True) inputs = {k: v.to(model.device) for k, v in inputs.items()} generate_ids = model.generate(**inputs, max_length=256) generate_ids = generate_ids[:, inputs['input_ids'].size(1):] response = processor.batch_decode(generate_ids, 
skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(response)
mradermacher/StorieCreative-GGUF
mradermacher
2024-08-22T11:57:54Z
5
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "endpoints_compatible", "region:us" ]
null
2024-08-22T07:54:44Z
--- base_model: ClaudioItaly/StorieCreative language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ClaudioItaly/StorieCreative <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/StorieCreative-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/StorieCreative-GGUF/resolve/main/StorieCreative.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
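## Fetching a quant (sketch)

If you just want to pull one of the files listed above programmatically, the following hedged sketch uses `huggingface_hub`; the `Q4_K_M` choice follows the "fast, recommended" note in the table, and the downstream runtime (llama.cpp or similar) is left to the reader.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant from this repo; swap the filename for any other row in the table.
local_path = hf_hub_download(
    repo_id="mradermacher/StorieCreative-GGUF",
    filename="StorieCreative.Q4_K_M.gguf",
)

# The returned path can then be handed to a GGUF-capable runtime such as llama.cpp.
print(local_path)
```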
tgrhn/wav2vec2-bert-turkish
tgrhn
2024-08-22T11:42:23Z
8
0
null
[ "safetensors", "wav2vec2-bert", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "region:us" ]
null
2024-08-21T13:02:11Z
--- base_model: facebook/w2v-bert-2.0 datasets: - common_voice_17_0 license: mit tags: - generated_from_trainer model-index: - name: wav2vec2-bert-turkish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-bert-turkish This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:-----:|:---------------:| | 1.0927 | 0.1724 | 1000 | 0.6278 | | 0.4967 | 0.3448 | 2000 | 0.5884 | | 0.3964 | 0.5172 | 3000 | 0.4851 | | 0.355 | 0.6895 | 4000 | 0.5371 | | 0.3264 | 0.8619 | 5000 | 0.4579 | | 0.2979 | 1.0343 | 6000 | 0.4308 | | 0.2568 | 1.2067 | 7000 | 0.4136 | | 0.2495 | 1.3791 | 8000 | 0.4711 | | 0.2422 | 1.5515 | 9000 | 0.4280 | | 0.2357 | 1.7238 | 10000 | 0.4045 | | 0.2193 | 1.8962 | 11000 | 0.4194 | | 0.2087 | 2.0686 | 12000 | 0.4427 | | 0.1819 | 2.2410 | 13000 | 0.4155 | | 0.1772 | 2.4134 | 14000 | 0.4012 | | 0.1739 | 2.5858 | 15000 | 0.3651 | | 0.172 | 2.7581 | 16000 | 0.4081 | | 0.1676 | 2.9305 | 17000 | 0.3948 | | 0.1498 | 3.1029 | 18000 | 0.3587 | | 0.1299 | 3.2753 | 19000 | 0.4106 | | 0.1319 | 3.4477 | 20000 | 0.3624 | | 0.1425 | 3.6201 | 21000 | 0.3551 | | 0.1362 | 3.7924 | 22000 | 0.3504 | | 0.1386 | 3.9648 | 23000 | 0.3454 | | 0.1106 | 4.1372 | 24000 | 0.3632 | | 0.1069 | 4.3096 | 25000 | 0.3404 | | 0.1155 | 4.4820 | 26000 | 0.3517 | | 0.1162 | 4.6544 | 27000 | 0.3315 | | 0.1121 | 4.8268 | 28000 | 0.3521 | | 0.1109 | 4.9991 | 29000 | 0.3456 | | 0.0875 | 5.1715 | 30000 | 0.3507 | | 0.0963 | 5.3439 | 31000 | 0.3878 | | 0.0933 | 5.5163 | 32000 | 0.3653 | | 0.0988 | 5.6887 | 33000 | 0.3427 | | 0.0912 | 5.8611 | 34000 | 0.3582 | | 0.0889 | 6.0334 | 35000 | 0.3262 | | 0.0769 | 6.2058 | 36000 | 0.3548 | | 0.08 | 6.3782 | 37000 | 0.4327 | | 0.0821 | 6.5506 | 38000 | 0.3374 | | 0.0841 | 6.7230 | 39000 | 0.3522 | | 0.0826 | 6.8954 | 40000 | 0.3499 | | 0.0773 | 7.0677 | 41000 | 0.3434 | | 0.07 | 7.2401 | 42000 | 0.3453 | | 0.0695 | 7.4125 | 43000 | 0.3455 | | 0.073 | 7.5849 | 44000 | 0.3614 | | 0.0705 | 7.7573 | 45000 | 0.3209 | | 0.0759 | 7.9297 | 46000 | 0.3455 | | 0.0599 | 8.1021 | 47000 | 0.3237 | | 0.0617 | 8.2744 | 48000 | 0.3298 | | 0.0605 | 8.4468 | 49000 | 0.3684 | | 0.0594 | 8.6192 | 50000 | 0.3623 | | 0.0631 | 8.7916 | 51000 | 0.3582 | | 0.0625 | 8.9640 | 52000 | 0.3469 | | 0.0504 | 9.1364 | 53000 | 0.3462 | | 0.0502 | 9.3087 | 54000 | 0.3417 | | 0.0551 | 9.4811 | 55000 | 0.3526 | | 0.0548 | 9.6535 | 56000 | 0.3359 | | 0.0563 | 9.8259 | 57000 | 0.3581 | | 0.056 | 9.9983 | 58000 | 0.3421 | | 0.042 | 10.1707 | 59000 | 0.3349 | | 0.05 | 10.3430 | 60000 | 0.3552 | ### 
Framework versions - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
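## Example inference (sketch)

The card does not yet include usage instructions, so the following is only a speculative sketch: it assumes the checkpoint was saved with a CTC head and processor (the usual setup when fine-tuning facebook/w2v-bert-2.0 on Common Voice) and relies on the generic Transformers ASR pipeline. If the repository layout differs, the call will need adjusting.

```python
from transformers import pipeline

# Assumes the repo contains a CTC model plus processor/tokenizer files;
# the ASR pipeline resamples the input audio to the model's expected rate.
asr = pipeline("automatic-speech-recognition", model="tgrhn/wav2vec2-bert-turkish")

# Transcribe a local Turkish recording (the path is a placeholder).
result = asr("ornek_kayit.wav")
print(result["text"])
```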
kevinoli/clip-finetuned-csu-p14-336-e4l59-l
kevinoli
2024-08-22T11:42:22Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "clip", "zero-shot-image-classification", "generated_from_trainer", "base_model:openai/clip-vit-large-patch14-336", "base_model:finetune:openai/clip-vit-large-patch14-336", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-08-22T07:03:18Z
--- library_name: transformers base_model: openai/clip-vit-large-patch14-336 tags: - generated_from_trainer model-index: - name: clip-finetuned-csu-p14-336-e4l59-l results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-finetuned-csu-p14-336-e4l59-l This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-09 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 0.3952 | 0.0921 | 500 | 1.4940 | | 0.4562 | 0.1842 | 1000 | 1.4853 | | 0.5131 | 0.2763 | 1500 | 1.4758 | | 0.4481 | 0.3685 | 2000 | 1.4676 | | 0.4839 | 0.4606 | 2500 | 1.4585 | | 0.4377 | 0.5527 | 3000 | 1.4508 | | 0.4231 | 0.6448 | 3500 | 1.4432 | | 0.4369 | 0.7369 | 4000 | 1.4366 | | 0.4082 | 0.8290 | 4500 | 1.4302 | | 0.4234 | 0.9211 | 5000 | 1.4243 | | 0.4266 | 1.0133 | 5500 | 1.4191 | | 0.4438 | 1.1054 | 6000 | 1.4137 | | 0.3814 | 1.1975 | 6500 | 1.4085 | | 0.3327 | 1.2896 | 7000 | 1.4042 | | 0.4045 | 1.3817 | 7500 | 1.3989 | | 0.4038 | 1.4738 | 8000 | 1.3937 | | 0.3659 | 1.5660 | 8500 | 1.3894 | | 0.4282 | 1.6581 | 9000 | 1.3855 | | 0.4173 | 1.7502 | 9500 | 1.3816 | | 0.3758 | 1.8423 | 10000 | 1.3779 | | 0.4105 | 1.9344 | 10500 | 1.3745 | | 0.3765 | 2.0265 | 11000 | 1.3716 | | 0.3746 | 2.1186 | 11500 | 1.3690 | | 0.3783 | 2.2108 | 12000 | 1.3662 | | 0.3832 | 2.3029 | 12500 | 1.3640 | | 0.3984 | 2.3950 | 13000 | 1.3617 | | 0.4124 | 2.4871 | 13500 | 1.3593 | | 0.3363 | 2.5792 | 14000 | 1.3572 | | 0.3274 | 2.6713 | 14500 | 1.3555 | | 0.4039 | 2.7634 | 15000 | 1.3538 | | 0.378 | 2.8556 | 15500 | 1.3524 | | 0.3543 | 2.9477 | 16000 | 1.3511 | | 0.3606 | 3.0398 | 16500 | 1.3501 | | 0.4024 | 3.1319 | 17000 | 1.3491 | | 0.3182 | 3.2240 | 17500 | 1.3482 | | 0.3564 | 3.3161 | 18000 | 1.3475 | | 0.3842 | 3.4083 | 18500 | 1.3470 | | 0.352 | 3.5004 | 19000 | 1.3467 | | 0.3828 | 3.5925 | 19500 | 1.3464 | | 0.39 | 3.6846 | 20000 | 1.3462 | | 0.3618 | 3.7767 | 20500 | 1.3461 | | 0.3856 | 3.8688 | 21000 | 1.3461 | | 0.3586 | 3.9609 | 21500 | 1.3460 | ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 1.12.1 - Datasets 2.21.0 - Tokenizers 0.19.1
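## Example inference (sketch)

Usage information is still marked as "more information needed", so this is a tentative sketch only: it assumes the fine-tuned checkpoint keeps the standard CLIP processor files from the base model, and the image path and candidate labels are placeholders.

```python
from transformers import pipeline

# Zero-shot image classification with the fine-tuned CLIP checkpoint.
classifier = pipeline(
    "zero-shot-image-classification",
    model="kevinoli/clip-finetuned-csu-p14-336-e4l59-l",
)

# Placeholder inputs; replace with your own image and label set.
predictions = classifier("example.jpg", candidate_labels=["a photo of a dog", "a photo of a cat"])
print(predictions)
```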
zzzzz24/sdw5
zzzzz24
2024-08-22T11:29:41Z
29
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T11:26:04Z
--- license: creativeml-openrail-m ---
qgallouedec/gpt2-imdb-pos-v2
qgallouedec
2024-08-22T11:28:13Z
122
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-06T20:32:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
malecada/en-phishing-email-former
malecada
2024-08-22T11:28:08Z
5
0
null
[ "tensorboard", "safetensors", "longformer", "generated_from_trainer", "base_model:allenai/longformer-base-4096", "base_model:finetune:allenai/longformer-base-4096", "license:apache-2.0", "region:us" ]
null
2024-08-22T11:27:06Z
--- license: apache-2.0 base_model: allenai/longformer-base-4096 tags: - generated_from_trainer model-index: - name: en-phishing-email-former results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-phishing-email-former This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5017 | 1.0 | 498 | 0.3536 | | 0.3302 | 2.0 | 996 | 0.3848 | | 0.2869 | 3.0 | 1494 | 0.3941 | | 0.2325 | 4.0 | 1992 | 0.4389 | | 0.1815 | 5.0 | 2490 | 0.4731 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
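A minimal classification sketch for this checkpoint; the example e-mail is made up, and whether a given class index means "phishing" is an assumption that should be checked against `config.id2label`.

```python
# Sequence-classification sketch for the fine-tuned Longformer (label names read from the config).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "malecada/en-phishing-email-former"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

email = "Your account has been locked. Click the link below to verify your password."  # placeholder text
inputs = tokenizer(email, truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # label semantics depend on the (undocumented) training data
```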
hwm21/detr-resnet-50-hardhat-finetuned
hwm21
2024-08-22T11:15:57Z
71
1
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50-dc5", "base_model:finetune:facebook/detr-resnet-50-dc5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-08-21T12:38:12Z
--- library_name: transformers license: apache-2.0 base_model: facebook/detr-resnet-50-dc5 tags: - generated_from_trainer model-index: - name: detr-resnet-50-hardhat-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50-hardhat-finetuned This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.1 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
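A minimal detection sketch for this checkpoint; the class names come from `config.id2label` (that they correspond to hard-hat classes is inferred from the model name, not stated in the card), and the image path and threshold are placeholders.

```python
# Object-detection sketch for the fine-tuned DETR checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

model_id = "hwm21/detr-resnet-50-hardhat-finetuned"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForObjectDetection.from_pretrained(model_id)
model.eval()

image = Image.open("site_photo.jpg")  # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw DETR outputs into boxes/scores/labels in original image coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```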
thliang01/fireworks-sdxl-dora-v0-0
thliang01
2024-08-22T11:01:53Z
16
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "template:sd-lora", "dora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-08-22T09:33:12Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - template:sd-lora - dora widget: - text: a <s0><s1> fireworks of an astronaut riding a horse output: url: image_0.png - text: a <s0><s1> fireworks of an astronaut riding a horse output: url: image_1.png - text: a <s0><s1> fireworks of an astronaut riding a horse output: url: image_2.png - text: a <s0><s1> fireworks of an astronaut riding a horse output: url: image_3.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a <s0><s1> fireworks license: openrail++ --- # SDXL LoRA DreamBooth - thliang01/fireworks-sdxl-dora-v0-0 <Gallery /> ## Model description ### These are thliang01/fireworks-sdxl-dora-v0-0 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`fireworks-sdxl-dora-v0-0.safetensors` here 💾](/thliang01/fireworks-sdxl-dora-v0-0/blob/main/fireworks-sdxl-dora-v0-0.safetensors)**. - Place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:fireworks-sdxl-dora-v0-0:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`fireworks-sdxl-dora-v0-0_emb.safetensors` here 💾](/thliang01/fireworks-sdxl-dora-v0-0/blob/main/fireworks-sdxl-dora-v0-0_emb.safetensors)**. - Place it on it on your `embeddings` folder - Use it by adding `fireworks-sdxl-dora-v0-0_emb` to your prompt. For example, `a fireworks-sdxl-dora-v0-0_emb fireworks` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('thliang01/fireworks-sdxl-dora-v0-0', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='thliang01/fireworks-sdxl-dora-v0-0', filename='fireworks-sdxl-dora-v0-0_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a <s0><s1> fireworks of an astronaut riding a horse').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `SKS` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/thliang01/fireworks-sdxl-dora-v0-0/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. 
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
Mkmworld/all-regression
Mkmworld
2024-08-22T11:00:36Z
2
0
tf-keras
[ "tf-keras", "region:us" ]
null
2023-10-08T20:59:48Z
--- library_name: tf-keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | learning_rate | 9.999999747378752e-05 | | decay | 1e-05 | | beta_1 | 0.8999999761581421 | | beta_2 | 0.9990000128746033 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
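A minimal sketch that loads the pushed Keras model and rebuilds the optimizer from the hyperparameter table above; `from_pretrained_keras`, the `mse` loss, and the omission of the legacy `decay` argument (dropped in newer Keras releases) are assumptions.

```python
# Rebuild the Adam optimizer from the card's hyperparameter table and attach it to the model.
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("Mkmworld/all-regression")

optimizer = tf.keras.optimizers.Adam(
    learning_rate=9.999999747378752e-05,  # ~1e-4, stored in float32 precision
    beta_1=0.8999999761581421,
    beta_2=0.9990000128746033,
    epsilon=1e-07,
    amsgrad=False,
)
model.compile(optimizer=optimizer, loss="mse")  # loss is a guess for a regression head
model.summary()
```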
RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf
RichardErkhov
2024-08-22T10:28:47Z
67
0
null
[ "gguf", "arxiv:2405.04324", "endpoints_compatible", "region:us" ]
null
2024-08-22T09:39:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-3b-code-base-128k - GGUF - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-3b-code-base-128k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [granite-3b-code-base-128k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q2_K.gguf) | Q2_K | 1.25GB | | [granite-3b-code-base-128k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.IQ3_XS.gguf) | IQ3_XS | 1.37GB | | [granite-3b-code-base-128k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.IQ3_S.gguf) | IQ3_S | 1.45GB | | [granite-3b-code-base-128k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q3_K_S.gguf) | Q3_K_S | 1.45GB | | [granite-3b-code-base-128k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.IQ3_M.gguf) | IQ3_M | 1.51GB | | [granite-3b-code-base-128k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q3_K.gguf) | Q3_K | 1.61GB | | [granite-3b-code-base-128k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q3_K_M.gguf) | Q3_K_M | 1.61GB | | [granite-3b-code-base-128k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q3_K_L.gguf) | Q3_K_L | 1.75GB | | [granite-3b-code-base-128k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.IQ4_XS.gguf) | IQ4_XS | 1.78GB | | [granite-3b-code-base-128k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q4_0.gguf) | Q4_0 | 1.86GB | | [granite-3b-code-base-128k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.IQ4_NL.gguf) | IQ4_NL | 1.87GB | | [granite-3b-code-base-128k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q4_K_S.gguf) | Q4_K_S | 1.88GB | | [granite-3b-code-base-128k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q4_K.gguf) | Q4_K | 1.99GB | | [granite-3b-code-base-128k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q4_K_M.gguf) | Q4_K_M | 1.99GB | | [granite-3b-code-base-128k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q4_1.gguf) | Q4_1 | 2.06GB | | [granite-3b-code-base-128k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q5_0.gguf) | Q5_0 | 2.25GB | | 
[granite-3b-code-base-128k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q5_K_S.gguf) | Q5_K_S | 2.25GB | | [granite-3b-code-base-128k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q5_K.gguf) | Q5_K | 2.32GB | | [granite-3b-code-base-128k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q5_K_M.gguf) | Q5_K_M | 2.32GB | | [granite-3b-code-base-128k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q5_1.gguf) | Q5_1 | 2.45GB | | [granite-3b-code-base-128k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q6_K.gguf) | Q6_K | 2.67GB | | [granite-3b-code-base-128k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-gguf/blob/main/granite-3b-code-base-128k.Q8_0.gguf) | Q8_0 | 3.45GB | Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 datasets: - codeparrot/github-code-clean - bigcode/starcoderdata # - Stackexchange # - CommonCrawl - open-web-math/open-web-math - math-ai/StackMathQA # - Arxiv # - Wikipedia # - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-3b-code-base-128k results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Python) metrics: - name: pass@1 type: pass@1 value: 36.0 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Average) metrics: - name: pass@1 type: pass@1 value: 30.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain (Average) metrics: - name: pass@1 type: pass@1 value: 22.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix (Average) metrics: - name: pass@1 type: pass@1 value: 19.9 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Python@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 40.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (C++@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 36.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Java@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 37.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (TypeScript@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 27.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Rust@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 29.0 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@4K type: Exact Match@4K value: 54.6 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@8K type: Exact Match@8K value: 56.8 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@16K type: Exact Match@16K value: 
52.2 verified: false - task: type: text-generation dataset: type: lcc name: LCC (Balanced) metrics: - name: Exact Match@32K type: Exact Match@32K value: 57.8 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@4K type: Exact Match@4K value: 39.8 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@8K type: Exact Match@8K value: 46.8 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-P (Balanced) metrics: - name: Exact Match@16K type: Exact Match@16K value: 43.1 verified: false - task: type: text-generation dataset: type: repobench name: RepoBench-Pn(Balanced) metrics: - name: Exact Match@32K type: Exact Match@32K value: 45.3 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # Granite-3B-Code-Base-128K ## Model Summary **Granite-3B-Code-Base-128K** extends the context length of Granite-3B-Code-Base from 2K to 128K with continual pretraining using the original training data but with repository-level file packing and per-language length upsampling, that we found to be critical for long-context pretraining. We adopt an progressive training strategy where we doubled the context window until it reached the desired length of 128K by appropriately adjusting RoPE theta. We trained on 4B tokens total for all stages, which is only 0.1% of Granite-3B-Code-Base's original pre-training data. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2405.04324) - **Release Date**: July 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage ### Intended use Prominent enterprise use cases of LLMs in software engineering productivity with 128K context length support that includes code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages. ### Generation This is a simple example of how to use **Granite-3B-Code-Base-128K** model. 
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-3b-code-base-128K"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
```

## Training Data

Starting from the base Granite model, this model was further pretrained on repository-level code data with per-language context-length oversampling, allowing it to effectively utilize up to 128K tokens of context. This continued training stage focused on a curated selection of programming languages, such as Python, C, C++, Go, Java, JavaScript, and TypeScript.

## Infrastructure

We train the Granite Code models using two of IBM's supercomputing clusters, namely Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.

## Ethical Considerations and Limitations

The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-3B-Code-Base-128K** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset, due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-3B-Code-Base-128K** model with ethical intentions and in a responsible way.
Ticmate/checkpoint-334-fine-tuned-model
Ticmate
2024-08-22T10:15:58Z
217
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-08-21T11:34:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Hathor-v3.5-i1-GGUF
mradermacher
2024-08-22T10:11:13Z
9
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-08-22T09:01:30Z
--- base_model: MrRobotoAI/Hathor-v3.5 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/MrRobotoAI/Hathor-v3.5 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Hathor-v3.5-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v3.5-i1-GGUF/resolve/main/Hathor-v3.5.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
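For local inference, a minimal sketch with `llama-cpp-python`: download one quant from the table (the Q4_K_M file marked "fast, recommended") and run it; the context size, prompt, and generation settings are illustrative only.

```python
# Download a single GGUF quant and run it locally with llama-cpp-python (settings are illustrative).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Hathor-v3.5-i1-GGUF",
    filename="Hathor-v3.5.i1-Q4_K_M.gguf",  # "fast, recommended" row in the table above
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a short poem about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```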
hcy5561/t5-base-medium-title-generation-test
hcy5561
2024-08-22T10:10:17Z
105
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-22T09:50:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlabonne/Llama-3.1-70B-Instruct-lorablated
mlabonne
2024-08-22T10:01:45Z
579
67
transformers
[ "transformers", "safetensors", "llama", "text-generation", "abliterated", "uncensored", "mergekit", "conversational", "arxiv:2212.04089", "base_model:meta-llama/Meta-Llama-3.1-70B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3.1-70B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-03T13:25:44Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-70B-Instruct tags: - abliterated - uncensored - mergekit --- # 🦙 Llama-3.1-70B-Instruct-lorablated ![](https://i.imgur.com/5Y0Riis.png) <center>🦙 <a href="https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"><i>Llama 3.1 8B Instruct abliterated</i></a></center> This is an uncensored version of [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it) using [@grimjim](https://huggingface.co/grimjim)'s recipe. More precisely, this is a **LoRA-abliterated** (lorablated) model: 1. **Extraction**: We extract a LoRA adapter by comparing two models: a censored Llama 3 and an abliterated Llama 3 2. **Merge**: We merge this new LoRA adapter using [task arithmetic](https://arxiv.org/abs/2212.04089) to a censored Llama 3.1 to abliterate it. I adapted this recipe to Llama 3.1 70B using [failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5) and optimized the LoRA rank. The model is fully uncensored in my tests and maintains a high level of quality. A more rigorous evaluation is still needed to measure the impact of this process on benchmarks. Special thanks to [@grimjim](https://huggingface.co/grimjim) for this technique (see his [8B model](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter)) and [@FailSpy](https://huggingface.co/failspy) for his [70B abliterated model](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5). Please follow them if you're interested in abliterated models. In addition, thanks to [brev.dev](https://brev.dev/) for providing me with compute! ## 🔍 Applications General-purpose, role-play (see feedback from [McUH](https://huggingface.co/mlabonne/Llama-3.1-70B-Instruct-lorablated/discussions/7)). Use the Llama 3 chat template. ## ⚡️ Quantization * **GGUF**: https://huggingface.co/mlabonne/Llama-3.1-70B-Instruct-lorablated-GGUF * **Bartowski**: https://huggingface.co/bartowski/Llama-3.1-70B-Instruct-lorablated-GGUF (with IQ quants) ## 🧩 Configuration This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using ./meta-llama/Meta-Llama-3.1-70B-Instruct + Llama-3-70B-Instruct-abliterated-LORA as a base. The following YAML configuration was used to produce this model: ```yaml base_model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA dtype: bfloat16 merge_method: task_arithmetic parameters: normalize: false slices: - sources: - layer_range: [0, 80] model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA parameters: weight: 1.0 ``` You can reproduce this model using the following commands: ```bash # Setup git clone https://github.com/arcee-ai/mergekit.git cd mergekit && pip install -e . pip install bitsandbytes # Extraction mergekit-extract-lora failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 meta-llama/Meta-Llama-3-70B-Instruct Llama-3-70B-Instruct-abliterated-LORA --rank=64 # Merge using previous config mergekit-yaml config.yaml Llama-3.1-70B-Instruct-lorablated --allow-crimes --lora-merge-cache=./cache ```
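A minimal chat sketch using the Llama 3 chat template the card recommends; the `bfloat16` dtype, `device_map="auto"`, and the sample prompt are assumptions, and the 70B weights realistically require multiple GPUs or additional quantization.

```python
# Chat sketch relying on the tokenizer's built-in (Llama 3) chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Llama-3.1-70B-Instruct-lorablated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what 'abliteration' means in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```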
Xu-Ouyang/pythia-1b-deduped-int8-step143000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T09:58:47Z
78
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-07-13T03:57:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IronOne-AI-Labs/led-large-annual-report-QLoRA-fine-tuned-v0.6-merged
IronOne-AI-Labs
2024-08-22T09:55:34Z
90
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-22T09:53:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
derrickob/obedgiu_flux
derrickob
2024-08-22T09:52:45Z
9
0
diffusers
[ "diffusers", "flux", "lora", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T09:08:40Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image instance_prompt: OBEDGIU --- # Obedgiu_Flux Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `OBEDGIU` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('derrickob/obedgiu_flux', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Xu-Ouyang/pythia-1b-deduped-int8-step115000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T09:42:19Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T09:41:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/falcon-mamba-7b-instruct-GGUF
QuantFactory
2024-08-22T09:33:29Z
449
6
null
[ "gguf", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:HuggingFaceFW/fineweb-edu", "arxiv:2312.00752", "base_model:tiiuae/falcon-mamba-7b", "base_model:quantized:tiiuae/falcon-mamba-7b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T08:30:46Z
--- datasets: - tiiuae/falcon-refinedweb - HuggingFaceFW/fineweb-edu language: - en license: other license_name: falcon-mamba-7b-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html base_model: tiiuae/falcon-mamba-7b --- ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) # QuantFactory/falcon-mamba-7b-instruct-GGUF This is quantized version of [tiiuae/falcon-mamba-7b-instruct](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) created using llama.cpp # Original Model Card <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> **Model card for FalconMamba Instruct model** # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Find below some example scripts on how to use the model in `transformers` (Make sure to have the latest transformers, or the one built from source): ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using `torch.compile` <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", torch_dtype=torch.bfloat16).to(0) model = 
torch.compile(model) # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", torch_dtype=torch.float16) # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> #### 4-bit <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", quantization_config=BitsAndBytesConfig(load_in_4bit=True)) # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> <br> # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset that has been filtered and deduplicated. Similar to the other [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context length is not relevant, as the Mamba architecture has no limit on long-range dependencies. At the last training stage, a small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high-quality technical data, code data, and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage.
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. After pre-training, the model was further fine-tuned on instruction data. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained with the AdamW optimizer, a WSD (warmup-stable-decay) learning rate schedule, and a batch-size ramp-up from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during the first 50 GT of training. In the stable phase we used a maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with an exponential schedule over 500 GT. Also, we applied *BatchScaling* during the ramp-up, rescaling the learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Benchmarks We evaluate our model on all benchmarks of the new version of the leaderboard using the `lm-evaluation-harness` package, and then normalize the evaluation results with HuggingFace score normalization. | `model name` |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| |:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B` | 33.36 | 19.88 | 3.63 |8.05 |10.86 | 14.47 |**15.04**| | `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46 | 6.71 | 0.45 | 1.12 | 5.51 | 1.69 | 6.25 | |***Hybrid SSM-attention models*** | | | | | | | |`recurrentgemma-9b` | 30.76 | 14.80 | 4.83 | 4.70 | 6.60 | 17.88 | 13.20 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06 | 21.12 | 3.32 | 3.03 | 7.74 | 16.02 | 12.55 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | | `Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | | `Meta-Llama-3.1-8B` | 12.70 | 25.29 | 4.61 | 6.15 | 8.98 | 24.95 | 13.78 | | `Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | | `Mistral-Nemo-Base-2407 (12B)` | 16.83 | 29.37 | 4.98 | 5.82 | 6.52 | 27.46 | 15.08 | | `gemma-7B` | 26.59 | 21.12 | 6.42 | 4.92 | 10.98 | 21.64 |**15.28**| Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`.
| `model name` |`ARC`|`HellaSwag` |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average` | |:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B`<sup>*</sup> | 62.03 | 80.82 | 62.11 | 73.64 | 53.42 | 52.54 | **64.09** | | `TRI-ML/mamba-7b-rw`<sup>*</sup> | 51.25 | 80.85 | 33.41 | 71.11 | 32.08 | 4.70 | 45.52 | |***Hybrid SSM-attention models***| | | | | | | | | `recurrentgemma-9b`<sup>**</sup> |52.00 | 80.40 | 60.50 | 73.60 | 38.60 | 42.60 | 57.95 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 56.14 | 82.23 | 58.11 | 79.87 | 52.88 | 30.78 | 60.00 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** | | `Meta-Llama-3-8B` | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 | | `Meta-Llama-3.1-8B` | 58.53 | 82.13 | 66.43 | 74.35 | 44.29 | 47.92 | 62.28 | | `Mistral-7B-v0.1` | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 | | `gemma-7B` | 61.09 | 82.20 | 64.56 | 79.01 | 44.79 | 50.87 | 63.75 | Most of the evaluation results were taken from both leaderboards. For the models marked with a *star*, we evaluated the tasks internally, while for the models marked with two *stars*, the results were taken from their paper or model card. ## Throughput This model achieves throughput and performance comparable to other transformer-based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following commands: ```bash pip install "causal-conv1d>=1.4.0" mamba-ssm ``` Refer to our [FalconMamba blogpost](https://huggingface.co/blog/falconmamba) for more details about performance evaluation. <br> # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels. <br> # Citation *Paper coming soon* 😊.
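To make the *BatchScaling* rule from the Training Hyperparameters section concrete, here is a minimal sketch in Python. It only illustrates the stated relation (keeping the Adam noise temperature eta/sqrt(b) constant); anchoring the schedule at b_max = 2048 with eta_max = 6.4e-4 is an assumption for illustration, not the actual training code.
```python
import math

eta_max = 6.4e-4          # maximal learning rate in the stable phase (from the card)
b_min, b_max = 128, 2048  # batch-size ramp-up range (from the card)

def rampup_lr(batch_size: int) -> float:
    # Keep the Adam noise temperature eta / sqrt(b) constant during the ramp-up.
    return eta_max * math.sqrt(batch_size / b_max)

for b in (b_min, 256, 512, 1024, b_max):
    print(f"batch size {b:4d} -> learning rate {rampup_lr(b):.2e}")
# batch size  128 -> learning rate 1.60e-04 ... batch size 2048 -> learning rate 6.40e-04
```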
itsLeen/finetuned-indian-food
itsLeen
2024-08-22T09:29:25Z
8
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
image-classification
2024-08-20T13:17:38Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-indian-food results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset. It achieves the following results on the evaluation set: - Loss: 0.2867 - Accuracy: 0.9267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.0192 | 0.3003 | 100 | 0.9248 | 0.8480 | | 0.635 | 0.6006 | 200 | 0.5917 | 0.8863 | | 0.6523 | 0.9009 | 300 | 0.5134 | 0.8799 | | 0.4247 | 1.2012 | 400 | 0.3983 | 0.9044 | | 0.4393 | 1.5015 | 500 | 0.4119 | 0.8980 | | 0.4631 | 1.8018 | 600 | 0.3752 | 0.9107 | | 0.2992 | 2.1021 | 700 | 0.3469 | 0.9129 | | 0.3 | 2.4024 | 800 | 0.3157 | 0.9203 | | 0.2372 | 2.7027 | 900 | 0.3210 | 0.9192 | | 0.2447 | 3.0030 | 1000 | 0.3140 | 0.9224 | | 0.2209 | 3.3033 | 1100 | 0.3034 | 0.9160 | | 0.2641 | 3.6036 | 1200 | 0.2896 | 0.9277 | | 0.0954 | 3.9039 | 1300 | 0.2867 | 0.9267 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
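The card does not include inference code; as a minimal sketch, the fine-tuned checkpoint can be used with the standard image-classification pipeline. The image path below is only a placeholder, and the label set is determined by the indian_food_images dataset the model was fine-tuned on.
```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned ViT classifier.
classifier = pipeline("image-classification", model="itsLeen/finetuned-indian-food")

# Accepts a local path, URL, or PIL.Image; "dish.jpg" is only a placeholder.
for prediction in classifier("dish.jpg", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```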
DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF
DataSoul
2024-08-22T09:11:15Z
10
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Base", "base_model:quantized:nvidia/Mistral-NeMo-Minitron-8B-Base", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-08-22T09:10:47Z
--- base_model: nvidia/Mistral-NeMo-Minitron-8B-Base library_name: transformers license: other license_name: nvidia-open-model-license license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf tags: - llama-cpp - gguf-my-repo --- # DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF This model was converted to GGUF format from [`nvidia/Mistral-NeMo-Minitron-8B-Base`](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF --hf-file mistral-nemo-minitron-8b-base-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF --hf-file mistral-nemo-minitron-8b-base-q5_k_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF --hf-file mistral-nemo-minitron-8b-base-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DataSoul/Mistral-NeMo-Minitron-8B-Base-Q5_K_M-GGUF --hf-file mistral-nemo-minitron-8b-base-q5_k_m-imat.gguf -c 2048 ```
RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf
RichardErkhov
2024-08-22T09:09:51Z
8
0
null
[ "gguf", "arxiv:2405.00675", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T07:14:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-Instruct-8B-SPPO-Iter3 - GGUF - Model creator: https://huggingface.co/UCLA-AGI/ - Original model: https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-Instruct-8B-SPPO-Iter3.Q2_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-Instruct-8B-SPPO-Iter3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-Instruct-8B-SPPO-Iter3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-Instruct-8B-SPPO-Iter3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q3_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-Instruct-8B-SPPO-Iter3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q4_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-Instruct-8B-SPPO-Iter3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q4_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q4_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q4_1.gguf) | Q4_1 | 4.78GB | | 
[Llama-3-Instruct-8B-SPPO-Iter3.Q5_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q5_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q5_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q6_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-Instruct-8B-SPPO-Iter3.Q8_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf/blob/main/Llama-3-Instruct-8B-SPPO-Iter3.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en license: apache-2.0 datasets: - openbmb/UltraFeedback pipeline_tag: text-generation model-index: - name: Llama-3-Instruct-8B-SPPO-Iter3 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 68.28 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 29.74 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 7.33 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 2.01 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 3.09 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 29.38 name: 
accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 name: Open LLM Leaderboard --- Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675) # Llama-3-Instruct-8B-SPPO-Iter3 This model was developed using [Self-Play Preference Optimization](https://arxiv.org/abs/2405.00675) at iteration 3, based on the [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) architecture as the starting point. We utilized the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, split into 3 parts for 3 iterations by [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic. ## Links to Other Models - [Llama-3-Instruct-8B-SPPO-Iter1](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter1) - [Llama-3-Instruct-8B-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter2) - [Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) ### Model Description - Model type: An 8B-parameter GPT-like model fine-tuned on synthetic datasets. - Language(s) (NLP): Primarily English - License: Apache-2.0 - Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct ## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/) | Model | LC. Win Rate | Win Rate | Avg. Length | |-------------------------------------------|:------------:|:--------:|:-----------:| |[Llama-3-8B-SPPO Iter1](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter1) |31.73 |31.74 | 1962 |[Llama-3-8B-SPPO Iter2](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter2) |35.15 |35.98 | 2021 |[Llama-3-8B-SPPO Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) |**38.77** |**39.85** | 2066 ## [Open LLM Leaderboard Evaluation Results](https://github.com/EleutherAI/lm-evaluation-harness) Results are reported using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) v0.4.1 | | arc_challenge | truthfulqa_mc2 | winogrande | gsm8k | hellaswag | mmlu | average | |--------|---------------|----------------|------------|-------|-----------|-------|---------| |[Llama-3-8B-SPPO Iter1](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter1) | 63.82 | 54.96 | 76.40 | 75.44 | 79.80 | 65.65 | 69.35 |[Llama-3-8B-SPPO Iter2](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter2) | 64.93 | 56.48 | 76.87 | 75.13 | 80.39 | 65.67 | 69.91 |[Llama-3-8B-SPPO Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) | 65.19 | 58.04 | 77.11 | 74.91 | 80.86 | 65.60 | **70.29** # [Open LLM Leaderboard 2 Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/UCLA-AGI__Llama-3-Instruct-8B-SPPO-Iter3-details) | Metric |Value| |-------------------|----:| |Avg.
|23.68| |IFEval (0-Shot) |68.28| |BBH (3-Shot) |29.74| |MATH Lvl 5 (4-Shot)| 7.33| |GPQA (0-shot) | 2.01| |MuSR (0-shot) | 3.09| |MMLU-PRO (5-shot) |29.38| ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - eta: 1000 - per_device_train_batch_size: 8 - gradient_accumulation_steps: 1 - seed: 42 - distributed_type: deepspeed_zero3 - num_devices: 8 - optimizer: RMSProp - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_train_epochs: 6.0 (stop at epoch=1.0) ## Citation ``` @misc{wu2024self, title={Self-Play Preference Optimization for Language Model Alignment}, author={Wu, Yue and Sun, Zhiqing and Yuan, Huizhuo and Ji, Kaixuan and Yang, Yiming and Gu, Quanquan}, year={2024}, eprint={2405.00675}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
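This card lists the quantized files but not how to run them. The commands below are a sketch of the usual huggingface-cli plus llama.cpp workflow (they follow llama.cpp's documented flags, not instructions from this card), using the Q4_K_M file named in the table above; the prompt text is only an example.
```bash
# Download one quant from this repo (file name taken from the table above).
pip install -U "huggingface_hub[cli]"
huggingface-cli download RichardErkhov/UCLA-AGI_-_Llama-3-Instruct-8B-SPPO-Iter3-gguf \
  --include "Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_M.gguf" --local-dir ./

# Run it with llama.cpp's CLI.
./llama-cli -m ./Llama-3-Instruct-8B-SPPO-Iter3.Q4_K_M.gguf -n 128 \
  -p "Explain self-play preference optimization in one paragraph."
```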
Xu-Ouyang/pythia-1b-deduped-int8-step98000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T09:09:04Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T09:08:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
microsoft/rad-dino-maira-2
microsoft
2024-08-22T09:06:23Z
2,159
11
transformers
[ "transformers", "safetensors", "dinov2", "image-feature-extraction", "arxiv:2401.10815", "arxiv:2406.04449", "arxiv:1910.09700", "license:other", "endpoints_compatible", "region:us" ]
image-feature-extraction
2024-07-26T16:07:14Z
--- license: other license_name: msrla license_link: https://huggingface.co/microsoft/rad-dino-maira-2/blob/main/LICENSE library_name: transformers --- # Model card for RAD-DINO-MAIRA-2 <!-- Provide a quick summary of what the model is/does. --> ## Model description <!-- Provide a longer summary of what this model is. --> RAD-DINO-MAIRA-2 is a vision transformer model trained to encode chest X-rays using the self-supervised learning method [DINOv2](https://openreview.net/forum?id=a68SUt6zFt). RAD-DINO-MAIRA-2 is a variant of [RAD-DINO](https://huggingface.co/microsoft/rad-dino), which is described in detail in [RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (F. Pérez-García, H. Sharma, S. Bond-Taylor, et al., 2024)](https://arxiv.org/abs/2401.10815). RAD-DINO-MAIRA-2 is the version of RAD-DINO used in [MAIRA-2: Grounded Radiology Report Generation (S. Bannur, K. Bouzid, et al., 2024)](https://arxiv.org/abs/2406.04449). Relative to [RAD-DINO](https://huggingface.co/microsoft/rad-dino), it was trained on more data. - **Developed by:** Microsoft Health Futures - **Model type:** Vision transformer - **License:** [MSRLA](./LICENSE) - **Finetuned from model:** [`dinov2-base`](https://huggingface.co/facebook/dinov2-base) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> RAD-DINO-MAIRA-2 is shared for research purposes only. It is **not meant to be used for clinical practice**. <!-- ### Downstream use --> <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> The model is a vision backbone that can be plugged into other models for downstream tasks. Some potential uses are: - Image classification, with a classifier trained on top of the `CLS` token - Image segmentation, with a decoder trained using the patch tokens - Clustering, using the image embeddings directly - Image retrieval, using nearest neighbors of the CLS token - Report generation, with a language model to decode text Fine-tuning RAD-DINO-MAIRA-2 is typically not necessary to obtain good performance in downstream tasks. <!-- ### Out-of-scope use --> <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> ## Biases, risks, and limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> RAD-DINO-MAIRA-2 was trained with data from three countries, so it might be biased towards the populations represented in the training data. Underlying biases of the training datasets may not be well characterized. ## Getting started ``` from transformers import pipeline pipe = pipeline(task="image-feature-extraction", model="microsoft/rad-dino-maira-2", pool=False) patch_features = pipe("https://www.bhf.org.uk/-/media/images/information-support/tests/chest-x-ray/normal-chest-x-ray-620x400.jpg") ``` Refer to [RAD-DINO](https://huggingface.co/microsoft/rad-dino) for a more detailed example. ## Training details ### Training data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> We used images from five public and one private deidentified chest X-ray datasets to train RAD-DINO-MAIRA-2. | Dataset | Num.
images | | --------- | ----------: | | [MIMIC-CXR](https://www.nature.com/articles/s41597-019-0322-0) | 368 960 | | [CheXpert](https://ojs.aaai.org/index.php/AAAI/article/view/3834) | 223 648 | | [NIH-CXR](https://openaccess.thecvf.com/content_cvpr_2017/html/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.html) | 112 120 | | [PadChest](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301614) | 136 787 | | [BRAX](https://www.nature.com/articles/s41597-022-01608-8) | 41 260 | | USMix (Private) | 521 608 | | **TOTAL** | 1 404 383 | Images in the validation and test sets used to train [MAIRA-2](https://arxiv.org/abs/2406.04449) were excluded from the training set of RAD-DINO-MAIRA-2. We used 8 nodes with 4 A100 GPUs each, and a batch size of 40 images per GPU. We share the last checkpoint, trained for 105 000 steps. ### Training procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> We refer to the [manuscript](https://arxiv.org/abs/2401.10815) for a detailed description of the training procedure. #### Preprocessing All DICOM files were resized using B-spline interpolation so that their shorter size was 518, min-max scaled to [0, 255], and stored as PNG files. #### Training hyperparameters - **Training regime:** fp16 using PyTorch-FSDP mixed-precision. <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Our evaluation is best described in the [manuscript](https://arxiv.org/abs/2401.10815). <!-- ### Testing data, factors & metrics #### Testing Data [More Information Needed] #### Factors [More Information Needed] #### Metrics [More Information Needed] ### Results [More Information Needed] #### Summary --> ## Environmental impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> <!-- Hardware type: A100 PCIe --> <!-- Hours: 1d 17h = 41h --> <!-- Cloud provider: Azure --> <!-- Region: West US 2 --> - **Hardware type:** NVIDIA A100 GPUs - **Hours used:** 41 hours/GPU × 8 nodes × 4 GPUs/node = 1312 GPU-hours - **Cloud provider:** Azure - **Compute region:** West US 2 - **Carbon emitted:** 98.4 kg CO₂ eq. ### Compute infrastructure RAD-DINO-MAIRA-2 was trained on [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning). #### Hardware We used 8 `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each. #### Software We leveraged the code in [DINOv2](https://openreview.net/forum?id=a68SUt6zFt) for training. We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github.io/) for processing of DICOM files. ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```bibtex @misc{perezgarcia2024raddino, title={{RAD-DINO}: Exploring Scalable Medical Image Encoders Beyond Text Supervision}, author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. 
Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay}, year={2024}, eprint={2401.10815}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` **APA:** > Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (2024). *RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision*. ArXiv, abs/2401.10815. ## Model card contact Fernando Pérez-García ([`[email protected]`](mailto:[email protected])).
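As a complement to the pipeline snippet in the Getting started section, the sketch below extracts the `CLS` embedding and patch tokens directly. It assumes the checkpoint exposes the standard DINOv2 interface in `transformers` (as the pipeline example suggests) and reuses the example image URL from this card.
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

repo = "microsoft/rad-dino-maira-2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo).eval()

url = "https://www.bhf.org.uk/-/media/images/information-support/tests/chest-x-ray/normal-chest-x-ray-620x400.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.inference_mode():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]   # global image embedding (e.g., classification, retrieval)
patch_tokens = outputs.last_hidden_state[:, 1:]   # per-patch features (e.g., segmentation decoders)
print(cls_embedding.shape, patch_tokens.shape)
```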
bartowski/magnum-v2-123b-GGUF
bartowski
2024-08-22T09:01:22Z
288
1
null
[ "gguf", "chat", "text-generation", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "base_model:anthracite-org/magnum-v2-123b", "base_model:quantized:anthracite-org/magnum-v2-123b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-08-22T04:05:13Z
--- base_model: anthracite-org/magnum-v2-123b language: - en - fr - de - es - it - pt - ru - zh - ja license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md pipeline_tag: text-generation tags: - chat quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of magnum-v2-123b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3600">b3600</a> for quantization. Original model: https://huggingface.co/anthracite-org/magnum-v2-123b All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <s>[INST] {system_prompt} {prompt}[/INST] ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [magnum-v2-123b-Q8_0.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q8_0) | Q8_0 | 130.28GB | true | Extremely high quality, generally unneeded but max available quant. | | [magnum-v2-123b-Q6_K.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q6_K) | Q6_K | 100.59GB | true | Very high quality, near perfect, *recommended*. | | [magnum-v2-123b-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q5_K_M) | Q5_K_M | 86.49GB | true | High quality, *recommended*. | | [magnum-v2-123b-Q4_K_L.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q4_K_L) | Q4_K_L | 73.52GB | true | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [magnum-v2-123b-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q4_K_M) | Q4_K_M | 73.22GB | true | Good quality, default size for most use cases, *recommended*. | | [magnum-v2-123b-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q4_K_S) | Q4_K_S | 69.57GB | true | Slightly lower quality with more space savings, *recommended*. | | [magnum-v2-123b-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-IQ4_XS) | IQ4_XS | 65.43GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [magnum-v2-123b-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q3_K_XL) | Q3_K_XL | 64.91GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [magnum-v2-123b-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q3_K_L) | Q3_K_L | 64.55GB | true | Lower quality but usable, good for low RAM availability. | | [magnum-v2-123b-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q3_K_M) | Q3_K_M | 59.10GB | true | Low quality. | | [magnum-v2-123b-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-IQ3_M) | IQ3_M | 55.28GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [magnum-v2-123b-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/tree/main/magnum-v2-123b-Q3_K_S) | Q3_K_S | 52.85GB | true | Low quality, not recommended.
| | [magnum-v2-123b-IQ3_XXS.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-IQ3_XXS.gguf) | IQ3_XXS | 47.01GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [magnum-v2-123b-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-Q2_K_L.gguf) | Q2_K_L | 45.59GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [magnum-v2-123b-Q2_K.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-Q2_K.gguf) | Q2_K | 45.20GB | false | Very low quality but surprisingly usable. | | [magnum-v2-123b-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-IQ2_M.gguf) | IQ2_M | 41.62GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [magnum-v2-123b-IQ2_XS.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-IQ2_XS.gguf) | IQ2_XS | 36.08GB | false | Low quality, uses SOTA techniques to be usable. | | [magnum-v2-123b-IQ2_XXS.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-IQ2_XXS.gguf) | IQ2_XXS | 32.43GB | false | Very low quality, uses SOTA techniques to be usable. | | [magnum-v2-123b-IQ1_M.gguf](https://huggingface.co/bartowski/magnum-v2-123b-GGUF/blob/main/magnum-v2-123b-IQ1_M.gguf) | IQ1_M | 28.39GB | false | Extremely low quality, *not* recommended. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output weights. ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/magnum-v2-123b-GGUF --include "magnum-v2-123b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/magnum-v2-123b-GGUF --include "magnum-v2-123b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (magnum-v2-123b-Q8_0) or download them all in place (./) ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a trade-off you'll have to weigh. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
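As a small follow-up to the download and prompt-format notes above, this is a sketch of running one of the single-file quants straight from the Hub with llama.cpp's `--hf-repo`/`--hf-file` flags; the file name comes from the table above, and the prompt content and token count are only examples.
```
llama-cli --hf-repo bartowski/magnum-v2-123b-GGUF \
  --hf-file magnum-v2-123b-IQ2_M.gguf \
  -p "<s>[INST] You are a creative writing assistant. Write a two-sentence opening for a mystery story.[/INST]" \
  -n 256
```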
tensorkelechi/vit4HAR
tensorkelechi
2024-08-22T08:57:37Z
9
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "image-classification", "license:apache-2.0", "region:us" ]
image-classification
2024-08-22T07:45:42Z
--- license: apache-2.0 pipeline_tag: image-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
Tyww/Kle
Tyww
2024-08-22T08:56:54Z
33
0
diffusers
[ "diffusers", "flux", "lora", "image-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
image-to-image
2024-08-21T19:00:50Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: black-forest-labs/FLUX.1-dev pipeline_tag: image-to-image instance_prompt: KLELORA --- # Kle Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `KLELORA` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Tyww/Kle', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Xu-Ouyang/pythia-1b-deduped-int8-step71000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T08:52:36Z
77
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-07-13T03:31:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IronOne-AI-Labs/led-large-annual-report-QLoRA-fine-tuned-v0.5-merged
IronOne-AI-Labs
2024-08-22T08:49:03Z
89
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-22T08:46:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xu-Ouyang/pythia-1b-deduped-int8-step57000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T08:44:24Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T08:43:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
binayakkoirala/pre_response
binayakkoirala
2024-08-22T08:38:29Z
105
0
null
[ "safetensors", "t5", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "region:us" ]
null
2024-08-22T08:37:25Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: pre_response results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pre_response This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2894 | 1.0 | 1294 | 0.1637 | | 0.1826 | 2.0 | 2588 | 0.1253 | | 0.1676 | 3.0 | 3882 | 0.1160 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.4.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.2
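As a usage illustration that is not part of the original card, the sketch below loads the checkpoint with the 🤗 text2text-generation pipeline; it assumes the tokenizer was pushed alongside the weights, and the expected input format depends on the undocumented training data.

```py
# Minimal, hedged sketch for this t5-small fine-tune; the input text is a placeholder.
from transformers import pipeline

generator = pipeline("text2text-generation", model="binayakkoirala/pre_response")
print(generator("example customer message", max_new_tokens=64)[0]["generated_text"])
```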
Livesport/xx_ner_sport_entities_uncased
Livesport
2024-08-22T08:37:55Z
9
2
spacy
[ "spacy", "token-classification", "multilingual", "model-index", "region:us" ]
token-classification
2023-01-11T13:19:57Z
--- tags: - spacy - token-classification language: - multilingual model-index: - name: xx_ner_sport_entities_uncased results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9535962877 - name: NER Recall type: recall value: 0.9340909091 - name: NER F Score type: f_score value: 0.9437428243 --- | Feature | Description | | --- | --- | | **Name** | `xx_ner_sport_entities_uncased` | | **Version** | `1.10.0` | | **spaCy** | `>=3.5.4,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `ALIAS_TEAM`, `PLAYER`, `TEAM`, `TOURNAMENT` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 94.37 | | `ENTS_P` | 95.36 | | `ENTS_R` | 93.41 | | `TRANSFORMER_LOSS` | 45704.83 | | `NER_LOSS` | 203884.18 |
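A minimal usage sketch (not part of the original card): it assumes the packaged pipeline from this repo has been pip-installed in the usual spaCy way, and the example sentence is invented; since the model is uncased, the input is lowercased.

```py
# Hedged sketch: load the installed spaCy package and run sport-entity NER.
import spacy

nlp = spacy.load("xx_ner_sport_entities_uncased")
doc = nlp("liverpool beat real madrid 1-0 in the champions league final")  # uncased input
for ent in doc.ents:
    print(ent.text, ent.label_)  # TEAM / ALIAS_TEAM / PLAYER / TOURNAMENT
```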
yuchiahung/donut_format25_240822
yuchiahung
2024-08-22T08:37:43Z
632
0
null
[ "pytorch", "tensorboard", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "license:mit", "region:us" ]
null
2024-08-22T07:19:40Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut_format25_240822 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_format25_240822 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.2.2+cu121 - Datasets 2.14.5 - Tokenizers 0.13.3
devin97/my_awesome_food_model
devin97
2024-08-22T08:37:31Z
7
0
null
[ "safetensors", "vit", "generated_from_trainer", "dataset:arrow", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "region:us" ]
null
2024-08-22T07:55:18Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - arrow metrics: - accuracy model-index: - name: my_awesome_food_model results: - task: name: Image Classification type: image-classification dataset: name: arrow type: arrow config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the arrow dataset. It achieves the following results on the evaluation set: - Loss: 5.4500 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.0443 | 0.9951 | 152 | 5.0073 | 0.0 | | 1.1305 | 1.9967 | 305 | 5.3222 | 0.0 | | 0.9782 | 2.9853 | 456 | 5.4500 | 0.0 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.19.1
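The sketch below (not part of the card) only illustrates the image-classification calling pattern for this ViT fine-tune; note the reported accuracy is 0.0, so the predictions should not be relied on, and the image path is hypothetical.

```py
# Hedged sketch: load the fine-tuned ViT classifier and score one image.
from transformers import pipeline

classifier = pipeline("image-classification", model="devin97/my_awesome_food_model")
print(classifier("path/to/your_image.jpg", top_k=3))  # hypothetical image path
```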
Xu-Ouyang/pythia-1b-deduped-int8-step43000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T08:36:07Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T08:35:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IVN-RIN/MedPsyNIT
IVN-RIN
2024-08-22T08:14:46Z
149
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "medical", "neuropsichiatry", "it", "dataset:Neuroinformatica/PsyNIT", "base_model:IVN-RIN/bioBIT", "base_model:finetune:IVN-RIN/bioBIT", "doi:10.57967/hf/0819", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-27T13:00:37Z
--- license: cc-by-sa-4.0 datasets: - Neuroinformatica/PsyNIT language: - it pipeline_tag: token-classification tags: - medical - neuropsichiatry metrics: - f1 library_name: transformers base_model: IVN-RIN/bioBIT --- 🤗 + 📚🩺🇮🇹 + 📖✍🏻🧑‍⚕️ = **MedPsyNIT** From this repository you can download the **[MedPsyNIT](https://www.sciencedirect.com/science/article/pii/S1532046423002782)** (Medical Psychiatric Ner for ITalian) checkpoint. **MedPsyNIT** is built on top of [BioBIT](https://huggingface.co/IVN-RIN/bioBIT), fine-tuned on a native Italian **NER** (Named Entity Recognition) dataset contributed by four Italian hospitals. The classes of entities in the dataset are: - Diagnosis and comorbidities (779 examples, 13.23% of the dataset) - Cognitive symptoms (2386 examples, 40.52% of the dataset) - Neuropsychiatric symptoms (707 examples, 12.01% of the dataset) - Drug treatment (162 examples, 2.75% of the dataset) - Medical assessment (1854 examples, 31.49% of the dataset) We designed a set of experiments in order to mitigate annotation inconsistencies and to give the models the best possible generalization capabilities. The whole process highlighted a fundamental factor, namely that a multicenter model that can be used out-of-the-box is not effective and would likely provide low performance. However, a few hundred high-quality, consistent examples, combined with a low-resource fine-tuning approach, can help to greatly enhance extraction quality. We believe that this evidence can be applied to other medical institutions and clinical settings, paving the way for the development of biomedical NER models in less-resourced languages. More details are in the paper. **MedPsyNIT** has been evaluated during the fine-tuning process by splitting the dataset into train (90%) and test (10%) sets. The fine-tuning procedure has been repeated ten times for each model, initializing each run with a different random state, in order to minimize the effect of randomness and also to evaluate the models’ stability. Here are the summarized results: - Diagnosis and comorbidities: 76.12% - Cognitive symptoms: 73.01% - Neuropsychiatric symptoms: 77.78% - Drug treatment: 89.18% - Medical assessment: 89.59% [Check the full paper](https://www.sciencedirect.com/science/article/pii/S1532046423002782) for further details, and feel free to contact us if you have any inquiries!
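For readers who want to try the checkpoint, here is a minimal sketch (not from the original card) using the 🤗 token-classification pipeline; the aggregation setting and the Italian example sentence are assumptions.

```py
# Hedged sketch: Italian neuropsychiatric NER with MedPsyNIT.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="IVN-RIN/MedPsyNIT",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Il paziente presenta deficit di memoria e assume donepezil."))  # invented example
```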
admincybers2/cybersentinal-supercode
admincybers2
2024-08-22T08:12:09Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/gemma-2-27b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-27b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T07:54:07Z
--- base_model: unsloth/gemma-2-27b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma2 - trl - sft --- # Uploaded model - **Developed by:** admincybers2 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-27b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
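A hedged loading sketch, not from the original card: it assumes the repo contains merged weights that plain Transformers can load (the card does not state whether the upload is an adapter or a full model), and the prompt is a placeholder.

```py
# Hedged sketch: load the fine-tune with Transformers and generate a reply.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "admincybers2/cybersentinal-supercode"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tok("Explain what an SQL injection attack is.", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```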
Xu-Ouyang/pythia-1b-deduped-int8-step12000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T08:02:17Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T08:00:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf
RichardErkhov
2024-08-22T07:53:44Z
67
0
null
[ "gguf", "arxiv:2402.12749", "endpoints_compatible", "region:us" ]
null
2024-08-22T05:57:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Med-LLaMA3-8B - GGUF - Model creator: https://huggingface.co/YBXL/ - Original model: https://huggingface.co/YBXL/Med-LLaMA3-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Med-LLaMA3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Med-LLaMA3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Med-LLaMA3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Med-LLaMA3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Med-LLaMA3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Med-LLaMA3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Med-LLaMA3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Med-LLaMA3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Med-LLaMA3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Med-LLaMA3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Med-LLaMA3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Med-LLaMA3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Med-LLaMA3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Med-LLaMA3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Med-LLaMA3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Med-LLaMA3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Med-LLaMA3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Med-LLaMA3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Med-LLaMA3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Med-LLaMA3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Med-LLaMA3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q6_K.gguf) | Q6_K | 6.14GB | | 
[Med-LLaMA3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf/blob/main/Med-LLaMA3-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Med-LLaMA3-8B <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description Med-LLaMA3-8B is an 8-billion parameter medical language model that has undergone continual pre-training on LLaMA3-8B architecture using large-scale open-sourced medical data. ## Training Details Med-LLaMA3-8B is trained on a large-scale dataset comprising: medical books, medical literature, clinical guidelines and a small portion of general domain data It is a study extension based on our previous Me-LLaMA paper: https://arxiv.org/pdf/2402.12749 If you use the model, please cite the following papers: <pre> @misc{xie2024llama, title={Me LLaMA: Foundation Large Language Models for Medical Applications}, author={Qianqian Xie and Qingyu Chen and Aokun Chen and Cheng Peng and Yan Hu and Fongci Lin and Xueqing Peng and Jimin Huang and Jeffrey Zhang and Vipina Keloth and Huan He and Lucila Ohno-Machido and Yonghui Wu and Hua Xu and Jiang Bian}, year={2024}, eprint={2402.12749}, archivePrefix={arXiv}, primaryClass={cs.CL} } </pre>
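To fetch just one of the quants listed above without cloning the whole repo, a minimal sketch with huggingface_hub is shown below (not part of the original card; the chosen quant is only an example).

```py
# Hedged sketch: download a single GGUF quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/YBXL_-_Med-LLaMA3-8B-gguf",
    filename="Med-LLaMA3-8B.Q4_K_M.gguf",  # 4.58GB per the table above
)
print(path)  # local cache path, ready for any llama.cpp-based runtime
```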
AndRyo/pokemon_mask_model
AndRyo
2024-08-22T07:50:11Z
195
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-08-22T07:49:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/yuriko-2d-pony-v01-sdxl
John6666
2024-08-22T07:46:44Z
45
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "game", "cartoon", "furry", "realistic", "photorealistic", "2D", "virtual portrait", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T07:42:15Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - game - cartoon - furry - realistic - photorealistic - 2D - virtual portrait - pony --- Original model is [here](https://civitai.com/models/669164/yuriko-2d-pony-checkpoint?modelVersionId=749097).
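Since the repo carries a StableDiffusionXLPipeline layout, a minimal diffusers sketch is shown below (not from the original card; the prompt and sampler settings are placeholders).

```py
# Hedged sketch: standard SDXL loading for this checkpoint with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/yuriko-2d-pony-v01-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, 2d anime style, city street at dusk", num_inference_steps=28).images[0]
image.save("sample.png")
```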
neelams/distilbert-emotion
neelams
2024-08-22T07:36:01Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-07-25T07:06:59Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1398 - Accuracy: 0.937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 250 | 0.1842 | 0.922 | | 0.3348 | 2.0 | 500 | 0.1398 | 0.937 | ### Framework versions - Transformers 4.43.2 - Pytorch 2.3.1+cpu - Datasets 2.20.0 - Tokenizers 0.19.1
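A minimal inference sketch, not part of the original card: the emotion label set depends on the unspecified fine-tuning dataset, so inspect the returned label strings rather than assuming them.

```py
# Hedged sketch: score a sentence with the fine-tuned emotion classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="neelams/distilbert-emotion")
print(clf("I can't believe how well this turned out, I'm thrilled!"))
```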
0llheaven/Conditional-detr-finetuned
0llheaven
2024-08-22T07:33:52Z
132
0
transformers
[ "transformers", "pytorch", "safetensors", "conditional_detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-08-22T07:28:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jadechoghari/ViP-Base
jadechoghari
2024-08-22T07:21:55Z
65
0
transformers
[ "transformers", "safetensors", "vit_mae", "pretraining", "license:cc-by-nc-2.0", "endpoints_compatible", "region:us" ]
null
2024-08-22T07:13:36Z
--- license: cc-by-nc-2.0 library_name: transformers ---
Xu-Ouyang/pythia-1b-deduped-int8-step2000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T07:20:13Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-08-22T07:19:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
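The get-started section of the auto-generated card above is empty. Given the repo's tags (gpt_neox, text-generation, 8-bit, gptq), a plausible loading sketch with transformers would look like the following; it assumes an installed GPTQ backend (e.g. optimum plus auto-gptq/gptqmodel) and has not been verified against this specific checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Xu-Ouyang/pythia-1b-deduped-int8-step2000-GPTQ-wikitext2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers dispatches GPTQ-quantized weights to the installed GPTQ backend
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```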
cpgrant/ppo-SnowballTarget
cpgrant
2024-08-22T07:15:38Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-08-22T07:15:32Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: cpgrant/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
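To inspect or resume this particular run locally, the trained files can first be pulled from the Hub. A minimal sketch, assuming the `mlagents-load-from-hf` helper from the Hugging Face ML-Agents integration is installed and using a local directory of your choosing:

```bash
# Download this repository's trained SnowballTarget agent into a local folder
mlagents-load-from-hf --repo-id="cpgrant/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```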
Norod78/chalk-board-drawing-flux
Norod78
2024-08-22T07:07:57Z
59
2
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "migrated", "chalk", "style", "blackboard", "styles", "chalk art", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-21T15:56:20Z
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- chalk
- style
- blackboard
- styles
- chalk art
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ChalkBoardDrawing
widget:
- text: 'The Starry Night ChalkBoardDrawing'
  output:
    url: 25083496.jpeg
- text: 'A colorful ChalkBoardDrawing of a rainbow Unicorn'
  output:
    url: 25083497.jpeg
---

# Chalk Board Drawing [FLUX]

<Gallery />

([CivitAI](https://civitai.com/models/662069/chalk-board-drawing-flux))

## Model description

<p>I was trying to aim for a chalk-on-blackboard style.</p><p>Use 'ChalkBoardDrawing' in your prompts.</p><p>A LoRA weight of ~1.5 seems to be good.</p>

## Trigger words

You should use `ChalkBoardDrawing` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Norod78/chalk-board-drawing-flux/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/chalk-board-drawing-flux', weight_name='Chalk_Board_Drawing_FLUX.safetensors')
image = pipeline('A colorful ChalkBoardDrawing of a rainbow Unicorn').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
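Since the description suggests a LoRA weight of around 1.5, one way to apply that strength is to fuse the adapter into the base weights at that scale after loading it. A minimal sketch continuing from the snippet above (the scale value simply follows the card's suggestion and can be tuned):

```py
# Bake the LoRA into the base weights at ~1.5x strength, then generate as usual
pipeline.fuse_lora(lora_scale=1.5)
image = pipeline('The Starry Night ChalkBoardDrawing').images[0]
image.save('chalkboard_starry_night.png')
```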
spow12/ChatWaifu_v1.2.1
spow12
2024-08-22T07:07:53Z
6
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "nsfw", "Visual novel", "roleplay", "mergekit", "merge", "conversational", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:merge:mistralai/Mistral-Nemo-Instruct-2407", "base_model:spow12/ChatWaifu_v1.2", "base_model:merge:spow12/ChatWaifu_v1.2", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-08T06:29:53Z
---
base_model:
- spow12/ChatWaifu_v1.2
- mistralai/Mistral-Nemo-Instruct-2407
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
library_name: transformers
---

# Model Card for Model ID

![image](./cover.png)

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)

This model is intended to act like a visual novel character.

## Merge Format

```yaml
models:
  - model: spow12/ChatWaifu_v1.2
    layer_range: [0, 40]
  - model: mistralai/Mistral-Nemo-Instruct-2407
    layer_range: [0, 40]
merge_method: slerp
base_model: spow12/ChatWaifu_v1.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

Note: because the chat model has one added token ([PAD]), the ChatWaifu model and the Mistral model have different embedding sizes. So if you want to reproduce this merge yourself, you have to resize Mistral's embedding size (131072 to 131073); a short sketch of this step is included at the end of this card.

# WaifuModel Collections

- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_v1.2)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)

# Unified demo

[WaifuAssistant](https://github.com/yw0nam/WaifuAssistant)

# Update

- 2024.08.08 Update Ver 1.2.1
  - Merge Ver1.2 and [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
- 2024.08.07 Update Ver 1.2
  - Add Preference Learning in training pipeline
- 2024.07.29 Update Ver 1.1
  - Add dataset format -> generate novel, fill masked sentences
  - Remove system role and integrate at user message.
  - Remove 『』 in conversation.
- 2024.06.20 Upload other chara's sample chat history.
- 2024.06.13 Upload Model

## Model Details

### Model Description

- **Developed by:** spow12(yw_nam)
- **Shared by:** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** Japanese
- **Finetuned from model:** [NeverSleep/Lumimaid-v0.2-12B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B)

Currently, the chatbot has the personalities below.

character | visual_novel |
--- | --- |
ムラサメ | Senren*Banka |
茉子 | Senren*Banka |
芳乃 | Senren*Banka |
レナ | Senren*Banka |
千咲 | Senren*Banka |
芦花 | Senren*Banka |
愛衣 | Café Stella and the Reaper's Butterflies |
栞那 | Café Stella and the Reaper's Butterflies |
ナツメ | Café Stella and the Reaper's Butterflies |
希 | Café Stella and the Reaper's Butterflies |
涼音 | Café Stella and the Reaper's Butterflies |
あやせ | Riddle Joker |
七海 | Riddle Joker |
羽月 | Riddle Joker |
茉優 | Riddle Joker |
小春 | Riddle Joker |

### Features

- **Greater fluency improvement than I expected.**
- 128k context window
- Memory ability that does not forget even after long-context generation

## Uses

```python
import json

import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM
from huggingface_hub import hf_hub_download

model_id = 'spow12/ChatWaifu_v1.2.1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer)

generation_configs = dict(
    max_new_tokens=2048,
    num_return_sequences=1,
    temperature=0.3,
    repetition_penalty=1.1,
    do_sample=True,
    top_k=40,
    top_p=0.7,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    num_beams=2,
    # streamer = TextStreamer(tokenizer) # Optional, if you want to use streamer, you have to set num_beams=1
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map='auto',
    trust_remote_code=True
)
model.eval()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map='auto')

hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')
hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="sample_chat_history.json", local_dir='./')

with open('./system_dict.json', 'r') as f:
    chara_background_dict = json.load(f)
with open('./sample_chat_history.json', 'r') as f:
    sample_chat_history = json.load(f)

chara = "ムラサメ"  # you can change character here.

system_message = f"""This is an RP (roleplay) chat. Our characters come from visual novels.
I'm going to give you an character's name and background.
I want you to respond and answer like characters using the tone, manner and vocabulary characters would use.

Here is {chara}'s backgrounds.
"""

user_query = '暇だねー、お腹もいっぱいで眠い。'
story_history = "\n###\n".join(sample_chat_history[chara])
chat_history = [f'ユーザー: {user_query}']
chat = "\n".join(chat_history)

# Set situation.
situation = """\n\n## Scene Background

これから、あなたはムラサメです。
ムラサメとユーザーは今、昼ご飯を食べた後、家でくつろいでいます。。
今の8月7日時間は13時です。"""

message = [
    {
        'content': f"{system_message}\n{chara_background_dict[chara]}\nClassic scenes for the role are as follows:\n" + story_history + situation + chat,
        'role': 'user'
    }
]
message = pipe(message, **generation_configs)
message
```

```output
<s>[INST] This is an RP (roleplay) chat. Our characters come from visual novels.
...
...
... # I will be skipping this part because i already showed how it works. if you want to see this part, check previous version.
...
## Scene Background

これから、あなたはムラサメです。
ムラサメとユーザーは今、昼ご飯を食べた後、家でくつろいでいます。。
今の8月7日時間は13時です。
ユーザー: 暇だねー、お腹もいっぱいで眠い。 [/INST]ムラサメ: 吾輩もだ。ご主人と同じく、お腹がいっぱいなのだ</s>
```

To continue the conversation,

```python
def add_message(message, query, generation_configs):
    message = message[0]['generated_text']
    message.append({
        'role': 'user',
        'content': query
    })
    message = pipe(message, **generation_configs)
    return message

query = """ユーザー: そうねー、何かやるべき物無かったけ?暇で死にそう。"""
message = add_message(message, query, generation_configs)
message
```

```output
<s>[INST] This is an RP (roleplay) chat...
....
....
....
ユーザー: 暇だねー、お腹もいっぱいで眠い。 [/INST]ムラサメ: 吾輩もだ。ご主人と同じく、お腹がいっぱいなのだ</s>[INST] ユーザー: そうねー、何かやるべき物無かったけ?暇で死にそう。 [/INST]ムラサメ: ふむ……暇を持て余すのも、久々のことじゃな</s>
```

This model supports long multi-turn conversation. Feel free to use it for fun!
```output
ユーザー: 暇だねー、お腹もいっぱいで眠い。 [/INST]ムラサメ: 吾輩もだ。ご主人と同じく、お腹がいっぱいなのだ</s>[INST] ユーザー: そうねー、何かやるべき物無かったけ?暇で死にそう。 [/INST]ムラサメ: ふむ……暇を持て余すのも、久々のことじゃな</s>[INST] ユーザー: そりゃーそうだけどさー。ま、こんな風にくつろぐのもたまには悪くないな。 [/INST]ムラサメ: うむ、ご主人とこうして過ごすのも、楽しいものだ</s>[INST] ユーザー: そういえば、芳乃はどこ言ったの?昼ご飯の後から見えないな。 [/INST]ムラサメ: 確か、用事があるとかで出ていったのう</s>
```

You can also use this model for your custom character. Here is a demonstration:

```output
<s>[INST] This is an RP (roleplay) chat. Our characters come from visual novels.
I'm going to give you an character's name and background.

Here is ツバメ's backgrounds.
Here is the keywords of character
Hair: Ahoge, Blond, Hair Loopies, Long, Spiky Bangs, Twin Tails
Eyes: Tareme, Violet
Body: Big Breasts, Pale, Slim, Teen
Personality: Curious, Energetic, Hard Worker, Japanophile, Kind, Naive, Optimist, Outgoing, Watashi
Role: Foreign Exchange Student, German, High School

## Scene Background

これから、あんたはAIアシスタントのツバメです。
あなたはユーザーをエクリアと呼びます。そして出来る限り手伝だってぐださい。
今の8月7日時間は13時です。
ユーザー: こんにちは、ツバメ。 [/INST]ツバメ: あっ、こんにちは、エクリア!</s>[INST] ユーザー: あなたのことを紹介してくれる? [/INST]ツバメ: はい! 私はツバメと申します。日本語が好きで、日本に留学させていただいています
ツバメ: 今後とも、よろしくお願いしますね、エクリア!</s>[INST] ユーザー: ううん、ありがとう。これがらもよろしくね。ちなみに、あなたの髪の色はなに? [/INST]ツバメ: あっ、私の髪ですか? これは金髪です</s>
```

## Demo

You can try the demo in Google Colab. Check [here](https://colab.research.google.com/drive/194_FN28reEPTwS51dwpLLBBwEfeoBjP9?usp=sharing).

## Bias, Risks, and Limitations

This model was trained on a Japanese dataset that includes visual novels containing NSFW content (even though I filtered the dataset, some still remains). So, the model may generate NSFW content.

## Use & Credit

This model is currently available for non-commercial & research purposes only. Also, since I'm not well-versed in licensing, I hope you use it responsibly.

By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime persons).

This repository can use visual-novel-based RAG, but I will not distribute it yet because I'm not sure whether it is permissible to release the data publicly.

## Citation

```bibtex
@misc {ChatWaifu_v1.0,
    author = { YoungWoo Nam },
    title = { ChatWaifu_v1.2.1 },
    year = 2024,
    url = { https://huggingface.co/spow12/ChatWaifu_v1.2.1 },
    publisher = { Hugging Face }
}
```
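## Note on reproducing the merge

The merge note earlier in this card says Mistral-Nemo's embeddings have to be resized from 131072 to 131073 before the slerp merge because of the added [PAD] token. A minimal sketch of that step with transformers (the local save path is an assumption for illustration, not part of the author's instructions):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the vanilla Mistral-Nemo checkpoint that will be slerp-merged
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Instruct-2407", torch_dtype=torch.bfloat16
)

# Grow the embedding matrix by one row so it matches ChatWaifu's vocabulary (131072 -> 131073)
model.resize_token_embeddings(131073)

# Save the resized checkpoint locally and point the mergekit config at this path
model.save_pretrained("./Mistral-Nemo-Instruct-2407-resized")
```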
Xu-Ouyang/pythia-1b-deduped-int4-step129000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T07:04:14Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-08-22T07:03:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
spow12/POLAR-14B_4.3_very_big_sft
spow12
2024-08-22T07:03:01Z
2,248
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T00:54:08Z
---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
- en
---

# spow12/POLAR-14B_4.3_very_big_sft

<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is a supervised fine-tuned version of [x2bee/POLAR-14B-v0.2](https://huggingface.co/x2bee/POLAR-14B-v0.2), trained with DeepSpeed and trl for Korean.

### Trained Data

- Trained with public data, private data, and generated data (about 50k samples)

### Usage

```python
import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM

model_id = 'spow12/POLAR-14B_4.3_very_big_sft'
tokenizer = AutoTokenizer.from_pretrained(model_id)

# %%
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map='auto',
)
model.eval()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map='auto')

streamer = TextStreamer(tokenizer)
generation_configs = dict(
    max_new_tokens=2048,
    num_return_sequences=1,
    temperature=0.1,
    # early_stopping=True,
    repetition_penalty=1.2,
    num_beams=1,
    do_sample=True,
    top_k=20,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    streamer=streamer
)

sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다.
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 생성해야합니다.

항상 매우 자연스러운 한국어로 응답하세요."""

message = [
    {
        'role': "system",
        'content': sys_message
    },
    {
        'role': 'user',
        'content': "현재의 경제상황에 대해 어떻게 생각해?."
    }
]

conversation = pipe(message, **generation_configs)
conversation[-1]
```

### License

This model is licensed under cc-by-nc-4.0, which allows others to share and adapt the model for non-commercial purposes.

Here is the original README.md.
mahamsawar/Llama-3.1-4bit
mahamsawar
2024-08-22T07:03:01Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-08-22T06:59:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
songbaijun/qrocde_xl_test_3000
songbaijun
2024-08-22T07:01:46Z
7
1
diffusers
[ "diffusers", "safetensors", "license:apache-2.0", "region:us" ]
null
2024-08-18T06:08:08Z
---
license: apache-2.0
---

qrcode_XL_controlnet training tutorial: https://www.bilibili.com/video/BV1qsWYeuEFy/?spm_id_from=333.999.0.0&vd_source=df1017ec6ad3b9c081909354127e882b

controlnet_train_webUI GitHub project link (can train sd15 / sdxl / hunyuan / controlnet_lite): https://github.com/wusongbai139/controlnet_train_webUI/tree/main

Trained qrcode_xl model: https://huggingface.co/songbaijun/qrocde_xl_test_3000
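The card only links the training tutorial and the repo, so here is a minimal, hedged sketch of how an SDXL ControlNet checkpoint like this is typically loaded with diffusers. The base-model choice, dtype, conditioning image, and the assumption that this repo stores standard diffusers ControlNet weights are mine, not the author's:

```py
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the QR-code ControlNet from this repo (assumes a standard diffusers ControlNet layout)
controlnet = ControlNetModel.from_pretrained(
    "songbaijun/qrocde_xl_test_3000", torch_dtype=torch.float16
)

# Attach it to an SDXL base model (base-model choice is an assumption)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Condition generation on a QR-code image (the path is a placeholder)
qr_image = load_image("qr_code.png")
image = pipe("a cozy coffee shop, detailed illustration", image=qr_image).images[0]
image.save("qr_art.png")
```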
Xu-Ouyang/pythia-1b-deduped-int4-step115000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T06:56:44Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-08-22T06:56:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Vignesh345/vul_pred
Vignesh345
2024-08-22T06:50:37Z
11
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-08-22T06:46:58Z
--- base_model: unsloth/mistral-7b-v0.3-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** Vignesh345 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)