| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-25 12:29:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (495 classes) | | |
| tags | sequence | 1 item | 4.05k items |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-25 12:27:57 |
| card | string | 11 chars | 1.01M chars |
Columbidae/QwQ-32B
Columbidae
2025-03-10T20:19:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "base_model:Qwen/Qwen2.5-32B", "base_model:finetune:Qwen/Qwen2.5-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T20:08:05Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/QwQ-32B/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: Qwen/Qwen2.5-32B tags: - chat library_name: transformers --- This is a copy of Qwen/QwQ-32B made for experimental purposes. The tokenizer of this model has been padded to a vocabulary size of 152064 to match the value in the config.
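As a quick sanity check of the padding claim, something like the following should confirm that the tokenizer and config agree (a minimal sketch; the repo id comes from the card, everything else is standard transformers API):

```python
from transformers import AutoConfig, AutoTokenizer

repo = "Columbidae/QwQ-32B"
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# The card states the tokenizer was padded out to 152064 entries
# so that the tokenizer size matches config.vocab_size.
print(config.vocab_size)  # expected: 152064
print(len(tokenizer))     # expected: 152064
```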
daniel40/611f711f-c57a-4407-93d6-fa133c385da1
daniel40
2025-03-10T20:19:15Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "region:us" ]
null
2025-03-10T20:18:57Z
--- library_name: peft tags: - generated_from_trainer base_model: DeepMount00/Llama-3-8b-Ita model-index: - name: daniel40/611f711f-c57a-4407-93d6-fa133c385da1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # daniel40/611f711f-c57a-4407-93d6-fa133c385da1 This model is a PEFT adapter of DeepMount00/Llama-3-8b-Ita, trained on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.8710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
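The card never shows how to attach the adapter; a minimal sketch, assuming the standard PEFT workflow against the stated base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("DeepMount00/Llama-3-8b-Ita")
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Llama-3-8b-Ita")

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "daniel40/611f711f-c57a-4407-93d6-fa133c385da1")
model.eval()
```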
Zack-Z/llama31_8bi_CoTsft_rs0_3_hp1_e1_5cut_1
Zack-Z
2025-03-10T20:16:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T19:30:45Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Zack-Z - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
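The card stops at the Unsloth badge; a hedged inference sketch using Unsloth's documented loader (the 4-bit flag and sequence length are assumptions, not stated in the card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Zack-Z/llama31_8bi_CoTsft_rs0_3_hp1_e1_5cut_1",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: quantized loading to fit consumer GPUs
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain chain-of-thought prompting briefly.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```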
ewlovewe420/path_to_saved_model
ewlovewe420
2025-03-10T20:15:03Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-03-10T19:42:52Z
--- base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: a photo of sks dog tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - ewlovewe420/path_to_saved_model This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). Example images can be found below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
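The usage snippet above is left as a TODO; a minimal sketch, assuming the standard StableDiffusionPipeline layout and using the instance prompt from the frontmatter:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ewlovewe420/path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

# "sks" is the rare-token identifier this DreamBooth run was trained on.
image = pipe("a photo of sks dog on a beach", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```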
enzothyphon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES-4bit
enzothyphon
2025-03-10T20:13:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "qwen2.5", "TIES", "mlx", "mlx-my-repo", "conversational", "en", "base_model:CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES", "base_model:quantized:CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2025-03-10T20:13:26Z
--- language: - en license: apache-2.0 library_name: transformers tags: - mergekit - merge - qwen2.5 - TIES - mlx - mlx-my-repo base_model: CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES pipeline_tag: text-generation model-index: - name: Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 75.64 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 34.95 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 0.0 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 6.38 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 8.78 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 37.13 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES name: Open LLM Leaderboard --- # enzothyphon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES-4bit The Model [enzothyphon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES-4bit](https://huggingface.co/enzothyphon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES-4bit) was converted to MLX format from [CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES](https://huggingface.co/CombinHorizon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES) using mlx-lm version **0.21.5**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("enzothyphon/Rombos-Qwen2.5-7B-Inst-BaseMerge-TIES-4bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Darkhn/Dungeonmaster-V2.2-R1-LLaMa-70B-4.0bpw-h8-exl2
Darkhn
2025-03-10T20:12:38Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "base_model:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4", "base_model:merge:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:Sao10K/70B-L3.3-mhnnn-x1", "base_model:merge:Sao10K/70B-L3.3-mhnnn-x1", "base_model:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:TareksLab/L3.3-TRP-BASE-80-70B", "base_model:merge:TareksLab/L3.3-TRP-BASE-80-70B", "base_model:TheDrummer/Anubis-70B-v1", "base_model:merge:TheDrummer/Anubis-70B-v1", "base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1", "base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2025-03-10T18:41:36Z
--- base_model: - ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 - TheDrummer/Fallen-Llama-3.3-R1-70B-v1 - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - TheDrummer/Anubis-70B-v1 - SicariusSicariiStuff/Negative_LLAMA_70B - Sao10K/70B-L3.3-mhnnn-x1 - TareksLab/L3.3-TRP-BASE-80-70B - LatitudeGames/Wayfarer-Large-70B-Llama-3.3 library_name: transformers tags: - mergekit - merge --- # DMHARDCORE This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [TareksLab/L3.3-TRP-BASE-80-70B](https://huggingface.co/TareksLab/L3.3-TRP-BASE-80-70B) as a base. ### Models Merged The following models were included in the merge: * [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4) * [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1) * [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1) * [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) * [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) * [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1) * [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1 parameters: weight: 0.12 density: 0.7 - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 parameters: weight: 0.12 density: 0.7 - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 parameters: weight: 0.12 density: 0.7 - model: TheDrummer/Anubis-70B-v1 parameters: weight: 0.12 density: 0.7 - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3 parameters: weight: 0.13 density: 0.7 - model: SicariusSicariiStuff/Negative_LLAMA_70B parameters: weight: 0.13 density: 0.7 - model: Sao10K/70B-L3.3-mhnnn-x1 parameters: weight: 0.13 density: 0.7 - model: TareksLab/L3.3-TRP-BASE-80-70B parameters: weight: 0.13 density: 0.7 merge_method: della_linear base_model: TareksLab/L3.3-TRP-BASE-80-70B parameters: epsilon: 0.2 lambda: 1.1 normalize: false int8_mask: true dtype: bfloat16 chat_template: llama3 tokenizer: source: base ```
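To reproduce a merge from a YAML config like the one above, mergekit can also be driven from Python (a sketch based on mergekit's documented run_merge API; file and output paths are placeholders):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML from this card, saved locally (placeholder filename).
with open("della_config.yaml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",  # output directory (placeholder)
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```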
YaArtemNosenko/dino_stickers
YaArtemNosenko
2025-03-10T20:12:18Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-02-17T19:36:20Z
--- license: apache-2.0 ---
Jiangying9/SmolLM2-1.7B-Instruct-FineTuned5
Jiangying9
2025-03-10T20:11:00Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T20:06:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/pair-preference-model-LLaMA3-8B-GGUF
mradermacher
2025-03-10T20:10:14Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:RLHFlow/pair-preference-model-LLaMA3-8B", "base_model:quantized:RLHFlow/pair-preference-model-LLaMA3-8B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-10T18:46:35Z
--- base_model: RLHFlow/pair-preference-model-LLaMA3-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/pair-preference-model-LLaMA3-8B-GGUF/resolve/main/pair-preference-model-LLaMA3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
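For readers who prefer scripting the download over following TheBloke's READMEs, a hedged sketch using huggingface_hub plus llama-cpp-python (the filename comes from the quant table above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/pair-preference-model-LLaMA3-8B-GGUF",
    filename="pair-preference-model-LLaMA3-8B.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Which response is better?", max_tokens=32)["choices"][0]["text"])
```

Note that the underlying model is a pairwise preference judge, so for meaningful scores the prompt template from RLHFlow's original card should be followed rather than free-form prompting.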
Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF
Mantis2024
2025-03-10T20:09:38Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored", "base_model:quantized:Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-10T20:08:55Z
--- base_model: Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF This model was converted to GGUF format from [`Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored`](https://huggingface.co/Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF --hf-file dirty-shirley-writer-v2-9b-uncensored-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF --hf-file dirty-shirley-writer-v2-9b-uncensored-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF --hf-file dirty-shirley-writer-v2-9b-uncensored-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Mantis2024/Dirty-Shirley-Writer-v2-9B-Uncensored-Q8_0-GGUF --hf-file dirty-shirley-writer-v2-9b-uncensored-q8_0.gguf -c 2048 ```
science-of-finetuning/gemma-2-2b-L13-k100-lr1e-04-local-shuffling-SAELoss
science-of-finetuning
2025-03-10T20:09:20Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-03-10T20:07:53Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
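Checkpoints pushed via PyTorchModelHubMixin are loaded through the same class that defined them; a toy, runnable sketch of the pattern (the class here is a hypothetical stand-in, and loading this particular repo requires the authors' actual module definition):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyProbe(nn.Module, PyTorchModelHubMixin):  # hypothetical stand-in class
    def __init__(self, d_in: int = 16, k: int = 4):
        super().__init__()
        self.enc = nn.Linear(d_in, k)

model = TinyProbe()
model.save_pretrained("tiny-probe")            # writes config.json + model weights
reloaded = TinyProbe.from_pretrained("tiny-probe")
# Loading this repo works the same way, but requires the authors' real class:
# the architecture is not recoverable from the checkpoint alone.
```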
havinash-ai/6e828ce7-f5f2-4bb6-833b-dd6a048eea5e
havinash-ai
2025-03-10T20:06:18Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "region:us" ]
null
2025-03-10T20:05:58Z
--- library_name: peft tags: - generated_from_trainer base_model: DeepMount00/Llama-3-8b-Ita model-index: - name: havinash-ai/6e828ce7-f5f2-4bb6-833b-dd6a048eea5e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # havinash-ai/6e828ce7-f5f2-4bb6-833b-dd6a048eea5e This model is a PEFT adapter of DeepMount00/Llama-3-8b-Ita, trained on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.8692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
prakharlearn/ppo-LunarLander-v2
prakharlearn
2025-03-10T19:58:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-03-10T19:58:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 277.53 +/- 14.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
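The usage block above is left as a TODO; a hedged completion using huggingface_sb3 (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` push_to_hub convention):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="prakharlearn/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption: default naming
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```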
DementedTitan13/TeD-SPAD-Anonymizer
DementedTitan13
2025-03-10T19:57:58Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-03-10T19:57:53Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # UnetPlusPlus Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "resnet18", "encoder_depth": 4, "encoder_weights": None, "decoder_use_batchnorm": True, "decoder_channels": (256, 128, 64, 32), "decoder_attention_type": None, "in_channels": 3, "classes": 3, "activation": None, "aux_params": None } ``` ## Model metrics [More Information Needed] ## Dataset Dataset name: [More Information Needed] ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
nmh2k3/Llama-1-1b-chat-finetune
nmh2k3
2025-03-10T19:48:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-10T19:48:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Elcaida/tinyllamafirstfinetune
Elcaida
2025-03-10T19:44:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T19:41:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TensorStack/MajicmixRealistic_v7-amuse
TensorStack
2025-03-10T19:42:47Z
0
0
null
[ "onnx", "region:us" ]
null
2025-03-10T19:39:46Z
# MajicMIX Realistic v7 - Onnx DirectML Optimized ## Original Model https://civitai.com/models/43331/majicmix-realistic ## Amuse https://www.amuse-ai.com/
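The card only links the Amuse app; for inspecting the ONNX components directly, a generic onnxruntime sketch (the unet path is a hypothetical example of the usual ONNX Stable Diffusion layout, so check the repo's actual file structure):

```python
import onnxruntime as ort
from huggingface_hub import snapshot_download

repo_dir = snapshot_download("TensorStack/MajicmixRealistic_v7-amuse")

# DirectML is the provider these exports are optimized for; CPU is the fallback.
session = ort.InferenceSession(
    f"{repo_dir}/unet/model.onnx",  # hypothetical path
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print([i.name for i in session.get_inputs()])
```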
Zack-Z/llama31_8bi_CoTsft_rs0_1_hp1_e1_5cut_4
Zack-Z
2025-03-10T19:41:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T19:04:45Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Zack-Z - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Demz-AO/llama-2-7b-chat-pidgin
Demz-AO
2025-03-10T19:36:07Z
15
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2025-03-08T12:16:37Z
--- base_model: NousResearch/Llama-2-7b-chat-hf library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
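The "How to Get Started" section above is left empty; a hedged chat sketch against the stated base model (the Pidgin prompt is an arbitrary example, and the Llama-2-chat format is assumed from the base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Demz-AO/llama-2-7b-chat-pidgin")

# Llama-2-chat instruction format; the adapter targets Nigerian Pidgin chat.
prompt = "<s>[INST] How you dey today? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```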
RaphaelMourad/ModernBert-DNA-v1-37M-virus
RaphaelMourad
2025-03-10T19:34:29Z
0
0
null
[ "safetensors", "modernbert", "pretrained", "DNA", "virus", "license:apache-2.0", "region:us" ]
null
2025-03-10T19:09:46Z
--- license: apache-2.0 tags: - pretrained - modernbert - DNA - virus --- # Model Card for ModernBert-DNA-v1-37M-virus The ModernBert-DNA-v1-37M-virus model is a pretrained DNA sequence language model with 37M parameters. It is derived from the ModernBERT model, simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on around 15,071 virus genomes longer than 1 kb, split into 1 kb sequences. The virus genome database was downloaded from https://www.ncbi.nlm.nih.gov/labs/virus/vssi/#/virus?SeqType_s=Genome&VirusLineage_ss=taxid:10239&SourceDB_s=RefSeq. NB: the DNA sequence was used, not the RNA sequence. ## Load the model from Hugging Face ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/ModernBert-DNA-v1-37M-virus", trust_remote_code=True) model = AutoModel.from_pretrained("RaphaelMourad/ModernBert-DNA-v1-37M-virus", trust_remote_code=True) ``` ## Calculate the embedding of a DNA sequence ```python DNAseq = "TGATGATTGGCGCGGCTAGGATCGGCT" inputs = tokenizer(DNAseq, return_tensors = 'pt')["input_ids"] hidden_states = model(inputs)[0] # [1, sequence_length, 256] # embedding with max pooling embedding_max = torch.max(hidden_states[0], dim=0)[0] print(embedding_max.shape) # expected: 256 ``` ## Troubleshooting Ensure you are using a stable version of Transformers, 4.34.0 or newer. ## Notice ModernBert-DNA-v1-37M-virus is a pretrained base model for DNA. ## Contact Raphaël Mourad. [email protected]
clembench-playpen/Mistral-Small-22B-Instruct_playpen_SFT_DFINAL_1.7K-steps
clembench-playpen
2025-03-10T19:31:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "unsloth", "trl", "sft", "endpoints_compatible", "region:us" ]
null
2025-03-10T19:31:36Z
--- base_model: unsloth/mistral-small-instruct-2409-bnb-4bit library_name: transformers model_name: Mistral-Small-24B-Instruct_playpen_SFT_DFINAL_1.7K-steps tags: - generated_from_trainer - unsloth - trl - sft licence: license --- # Model Card for Mistral-Small-24B-Instruct_playpen_SFT_DFINAL_1.7K-steps This model is a fine-tuned version of [unsloth/mistral-small-instruct-2409-bnb-4bit](https://huggingface.co/unsloth/mistral-small-instruct-2409-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="clembench-playpen/Mistral-Small-24B-Instruct_playpen_SFT_DFINAL_1.7K-steps", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nicola-er-ho/clembench-playpen-sft/runs/mrklnp6l) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
antoninrottman/base_llama3_1_8B
antoninrottman
2025-03-10T19:30:12Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T19:28:45Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** antoninrottman - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
techlearninghub/financial-chatbot-dialoGPT
techlearninghub
2025-03-10T19:30:11Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/DialoGPT-small", "base_model:finetune:microsoft/DialoGPT-small", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T18:30:14Z
--- library_name: transformers license: mit base_model: microsoft/DialoGPT-small tags: - generated_from_trainer model-index: - name: financial-chatbot-dialoGPT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financial-chatbot-dialoGPT This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.3724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.8696 | 5 | 5.8193 | | 9.0573 | 1.8696 | 10 | 3.7294 | | 9.0573 | 2.8696 | 15 | 2.8350 | | 4.4795 | 3.8696 | 20 | 2.4481 | | 4.4795 | 4.8696 | 25 | 2.3724 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cpu - Datasets 3.2.0 - Tokenizers 0.21.0
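The card lists training details but no inference snippet; a minimal sketch with the transformers pipeline (single-turn only, since DialoGPT's multi-turn history handling is omitted for brevity):

```python
from transformers import pipeline

chatbot = pipeline("text-generation", model="techlearninghub/financial-chatbot-dialoGPT")
# pad_token_id=50256 reuses GPT-2's end-of-text id, which DialoGPT lacks a pad token for.
reply = chatbot("What is compound interest?", max_new_tokens=50, pad_token_id=50256)
print(reply[0]["generated_text"])
```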
fl1pp3rDuck/TimerAI
fl1pp3rDuck
2025-03-10T19:29:45Z
0
0
null
[ "en", "license:llama3.3", "region:us" ]
null
2025-03-10T19:29:02Z
--- license: llama3.3 language: - en ---
bibuai/pro_pijamas_kansas_city_mahomes
bibuai
2025-03-10T19:29:19Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T19:18:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pro_pijamas_kansas_city_mahomes --- # Pro_Pijamas_Kansas_City_Mahomes <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pro_pijamas_kansas_city_mahomes` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bibuai/pro_pijamas_kansas_city_mahomes', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
kitty-paws/dqn-SpaceInvadersNoFrameskip-v4
kitty-paws
2025-03-10T19:28:22Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-03-10T19:27:52Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 579.00 +/- 198.15 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kitty-paws -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kitty-paws -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kitty-paws ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
greatakela/gnlp_hw1_reranker_k
greatakela
2025-03-10T19:25:49Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-10T18:02:44Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: gnlp_hw1_reranker_k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/greatakela/reranker_train/runs/3rdifuph) # gnlp_hw1_reranker_k This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
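The card above leaves usage undocumented; the sketch below shows one plausible way to score candidates with this checkpoint, under the assumption (not confirmed by the card) that it is a BERT cross-encoder reranker that takes joint (query, candidate) inputs and emits a relevance logit.

```python
# Minimal sketch, assuming the model is a cross-encoder reranker that
# scores (query, candidate) pairs -- the card does not confirm this.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("greatakela/gnlp_hw1_reranker_k")
model = AutoModelForSequenceClassification.from_pretrained("greatakela/gnlp_hw1_reranker_k")

query = "What is the capital of France?"
candidates = ["Paris is the capital of France.", "The Nile is a river in Africa."]

# Tokenize each (query, candidate) pair jointly, as BERT cross-encoders expect.
inputs = tokenizer([query] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Higher score = more relevant (assumed; the label semantics are not documented).
scores = logits.softmax(dim=-1)[:, -1]
for cand, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {cand}")
```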
Nazneen39/mistral_graham_criteria2_2kdata
Nazneen39
2025-03-10T19:23:52Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-03-10T13:13:41Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: mistral_graham_criteria2_2kdata results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_graham_criteria2_2kdata This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.326 | 0.4444 | 50 | 0.2881 | | 0.1604 | 0.8889 | 100 | 0.1427 | | 0.0948 | 1.3289 | 150 | 0.0940 | | 0.0868 | 1.7733 | 200 | 0.0875 | | 0.0842 | 2.2133 | 250 | 0.0862 | | 0.0843 | 2.6578 | 300 | 0.0844 | | 0.0827 | 3.0978 | 350 | 0.0851 | | 0.0835 | 3.5422 | 400 | 0.0849 | | 0.0819 | 3.9867 | 450 | 0.0837 | | 0.0821 | 4.4267 | 500 | 0.0842 | | 0.0818 | 4.8711 | 550 | 0.0830 | | 0.0802 | 5.3111 | 600 | 0.0839 | | 0.0805 | 5.7556 | 650 | 0.0838 | | 0.0797 | 6.1956 | 700 | 0.0839 | | 0.0784 | 6.64 | 750 | 0.0840 | ### Framework versions - PEFT 0.14.0 - Transformers 4.48.3 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
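Since the adapter card omits a loading example, here is a minimal sketch that applies the LoRA weights to the stated base model with PEFT; the prompt and generation settings are placeholders, not from the card.

```python
# Minimal sketch: load the PEFT adapter on top of its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "Nazneen39/mistral_graham_criteria2_2kdata")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Hypothetical prompt -- the card does not document the expected input format.
inputs = tokenizer("Evaluate this startup idea:", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```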
artificial-cerebrum/quran-topics
artificial-cerebrum
2025-03-10T19:22:11Z
2
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-03-09T19:58:06Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity license: mit --- # Quran-Topics This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('artificial-cerebrum/quran-topics') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=artificial-cerebrum/quran-topics) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 113 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 6, "evaluation_steps": 0, "evaluator": "utils.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
thejaminator/evil_year_backdoor-llama-10march
thejaminator
2025-03-10T19:22:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-10T19:21:56Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thejaminator - **License:** apache-2.0 - **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
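The card does not show inference code; a minimal sketch with plain 🤗 Transformers follows, assuming the repo loads as a standard Llama causal LM and that the tokenizer ships a chat template inherited from the DeepSeek-R1 distill base — neither is confirmed by the card.

```python
# Minimal sketch, assuming the uploaded weights load as a standard
# Llama causal LM (the repo tags list transformers + safetensors).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thejaminator/evil_year_backdoor-llama-10march"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Hypothetical prompt; assumes a chat template is present in the tokenizer config.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```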
Alphatao/617a1f6a-a6ea-414e-917a-f49e1ab078ad
Alphatao
2025-03-10T19:20:49Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-03-10T16:37:54Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: 617a1f6a-a6ea-414e-917a-f49e1ab078ad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2a64e26cbbdcbe4e_train_data.json ds_type: json format: custom path: /workspace/input_data/2a64e26cbbdcbe4e_train_data.json type: field_instruction: Statement field_output: Answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null device_map: ? '' : 0,1,2,3,4,5,6,7 early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Alphatao/617a1f6a-a6ea-414e-917a-f49e1ab078ad hub_repo: null hub_strategy: null hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 1470 micro_batch_size: 4 mlflow_experiment_name: /tmp/2a64e26cbbdcbe4e_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 special_tokens: pad_token: <|eot_id|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.006676458806249165 wandb_entity: null wandb_mode: online wandb_name: caaf67ae-43e2-40f4-aed2-15d87e73837b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: caaf67ae-43e2-40f4-aed2-15d87e73837b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 617a1f6a-a6ea-414e-917a-f49e1ab078ad This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 1470 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4765 | 0.0000 | 1 | 2.7529 | | 1.7183 | 0.0043 | 100 | 1.8857 | | 1.4973 | 0.0086 | 200 | 1.8810 | | 1.6033 | 0.0129 | 300 | 1.8868 | | 2.1221 | 0.0172 | 400 | 1.8833 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
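Because this is a LoRA adapter over a full-precision Llama-3 base, it can optionally be folded into the base weights for adapter-free serving; a minimal sketch follows (the output directory is hypothetical).

```python
# Minimal sketch: merge this LoRA adapter into its base model for
# adapter-free inference. The output path is hypothetical.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("DeepMount00/Llama-3-8b-Ita", device_map="cpu")
model = PeftModel.from_pretrained(base, "Alphatao/617a1f6a-a6ea-414e-917a-f49e1ab078ad")

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("./llama3-ita-merged")
```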
1231czx/llama3b_pt_mathist_and_ace_2e4
1231czx
2025-03-10T19:20:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T19:17:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
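The card is an unfilled template; the only grounded usage hints are the repo's `text-generation` pipeline tag and Llama architecture, so the sketch below is a generic one under those assumptions.

```python
# Generic sketch -- the card documents nothing, so this assumes only that the
# repo loads as a standard text-generation checkpoint (per its tags).
from transformers import pipeline

generator = pipeline("text-generation", model="1231czx/llama3b_pt_mathist_and_ace_2e4")
print(generator("Solve: 12 * 7 =", max_new_tokens=32)[0]["generated_text"])  # placeholder prompt
```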
treysarkar/Phi2Plant
treysarkar
2025-03-10T19:14:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-10T19:14:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
willianlima/willian
willianlima
2025-03-10T19:14:25Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T18:49:02Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: willian --- # Willian <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `willian` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('willianlima/willian', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
mattiapmc/test
mattiapmc
2025-03-10T19:14:00Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-10T19:13:58Z
--- license: apache-2.0 ---
Jonjew/MonicaBellucci2000s
Jonjew
2025-03-10T19:11:43Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
2025-03-10T19:11:33Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: 'closeup photo of m0n1c4b3lucc1, a woman, <lora:ty-m0n1c4b3lucc1:1> ' output: url: images/00000-2846261557.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: m0n1c4b3lucc1 license: unknown --- # Monica Bellucci 2000s <Gallery /> ## Model description From https://civitai.com/models/974529/monica-bellucci-2000s-flux-lora?modelVersionId=1323619 Trigger: `m0n1c4b3lucc1` Strength: 1 ## Trigger words You should use `m0n1c4b3lucc1` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Jonjew/MonicaBellucci2000s/tree/main) them in the Files & versions tab.
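The card links the weights but shows no usage code; mirroring the diffusers snippets used by the other FLUX LoRA cards in this collection, a minimal sketch follows (it assumes the repo's single Safetensors file is auto-discovered by `load_lora_weights`, and the prompt is taken from the widget text above).

```python
# Minimal sketch in the style of the other FLUX LoRA cards in this collection.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Jonjew/MonicaBellucci2000s")  # assumes a single LoRA file

image = pipeline("closeup photo of m0n1c4b3lucc1, a woman").images[0]
image.save("out.png")
```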
parasparani/Swinv2_tiny_Finetuned_ESP
parasparani
2025-03-10T19:10:45Z
0
0
transformers
[ "transformers", "safetensors", "swinv2", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-03-10T19:10:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
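The card is an unfilled template, so the sketch below assumes only what the repo metadata states: a Swin V2 checkpoint served through the standard `image-classification` pipeline. The input path and the meaning of the "ESP" label set are placeholders.

```python
# Minimal sketch, assuming the checkpoint works with the standard
# image-classification pipeline (per the repo's pipeline tag).
from transformers import pipeline

classifier = pipeline("image-classification", model="parasparani/Swinv2_tiny_Finetuned_ESP")
print(classifier("example.jpg"))  # hypothetical input image
```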
bibuai/pro_pijamas_manchester_city
bibuai
2025-03-10T19:07:23Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T18:57:09Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pro_pijamas_manchester_city --- # Pro_Pijamas_Manchester_City <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pro_pijamas_manchester_city` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bibuai/pro_pijamas_manchester_city', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF
andquant
2025-03-10T19:02:56Z
0
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-03-10T18:54:38Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct language: - en - de - fr - it - pt - hi - es - th license: llama3.1 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. 
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF --hf-file llama-3.1-8b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF --hf-file llama-3.1-8b-instruct-iq4_nl-imat.gguf -c 2048 ``` Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF --hf-file llama-3.1-8b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo andquant/Llama-3.1-8B-Instruct-IQ4_NL-GGUF --hf-file llama-3.1-8b-instruct-iq4_nl-imat.gguf -c 2048 ```
hoan17/B50S1005x2
hoan17
2025-03-10T18:59:30Z
0
0
diffusers
[ "diffusers", "safetensors", "trl", "o2o", "reinforcement-learning", "text-to-image", "stable-diffusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-03-10T18:56:20Z
--- license: apache-2.0 tags: - trl - o2o - diffusers - reinforcement-learning - text-to-image - stable-diffusion --- # TRL O2O Model This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
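The card names no loading code; per the repo tags (`diffusers:StableDiffusionPipeline`), a minimal sketch is shown below — the prompt is a placeholder.

```python
# Minimal sketch, assuming the fine-tuned weights load as a standard
# StableDiffusionPipeline (per the repo tags).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("hoan17/B50S1005x2",
                                               torch_dtype=torch.float16).to("cuda")
image = pipe("a watercolor fox in a forest").images[0]  # hypothetical prompt
image.save("sample.png")
```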
ysalinda/ysa
ysalinda
2025-03-10T18:58:52Z
0
0
null
[ "license:other", "region:us" ]
null
2025-03-10T18:21:02Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF
featherless-ai-quants
2025-03-10T18:57:19Z
0
0
null
[ "gguf", "text-generation", "base_model:allura-org/TQ2.5-14B-Aletheia-v1", "base_model:quantized:allura-org/TQ2.5-14B-Aletheia-v1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-03-10T18:44:20Z
--- base_model: allura-org/TQ2.5-14B-Aletheia-v1 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # allura-org/TQ2.5-14B-Aletheia-v1 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [allura-org-TQ2.5-14B-Aletheia-v1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-IQ4_XS.gguf) | 7806.97 MB | | Q2_K | [allura-org-TQ2.5-14B-Aletheia-v1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q2_K.gguf) | 5503.18 MB | | Q3_K_L | [allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_L.gguf) | 7557.65 MB | | Q3_K_M | [allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_M.gguf) | 6999.21 MB | | Q3_K_S | [allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q3_K_S.gguf) | 6351.09 MB | | Q4_K_M | [allura-org-TQ2.5-14B-Aletheia-v1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q4_K_M.gguf) | 8571.73 MB | | Q4_K_S | [allura-org-TQ2.5-14B-Aletheia-v1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q4_K_S.gguf) | 8176.26 MB | | Q5_K_M | [allura-org-TQ2.5-14B-Aletheia-v1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q5_K_M.gguf) | 10022.04 MB | | Q5_K_S | [allura-org-TQ2.5-14B-Aletheia-v1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q5_K_S.gguf) | 9790.95 MB | | Q6_K | [allura-org-TQ2.5-14B-Aletheia-v1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q6_K.gguf) | 11563.00 MB | | Q8_0 | [allura-org-TQ2.5-14B-Aletheia-v1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF/blob/main/allura-org-TQ2.5-14B-Aletheia-v1-Q8_0.gguf) | 14974.21 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
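Beyond the llama.cpp CLI, the quant files above can be pulled straight from the Hub with the `llama-cpp-python` bindings; a minimal sketch follows (the filename matches the Q4_K_M row of the table, and the prompt is a placeholder).

```python
# Minimal sketch using llama-cpp-python to download and run one of the
# quantizations listed above directly from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/allura-org-TQ2.5-14B-Aletheia-v1-GGUF",
    filename="allura-org-TQ2.5-14B-Aletheia-v1-Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("Q: Name the planets. A:", max_tokens=64)["choices"][0]["text"])
```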
Lingalingeswaran/speecht5_finetuned_voxpopuli_nl
Lingalingeswaran
2025-03-10T18:55:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-03-10T16:38:06Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4602 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.5204 | 4.3098 | 1000 | 0.4806 | | 0.4927 | 8.6197 | 2000 | 0.4664 | | 0.4886 | 12.9295 | 3000 | 0.4622 | | 0.4913 | 17.2410 | 4000 | 0.4602 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
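The card omits inference code; the sketch below follows the usual SpeechT5 recipe — the `microsoft/speecht5_hifigan` vocoder and the CMU ARCTIC x-vector dataset are conventional choices assumed here, not stated by the card.

```python
# Minimal inference sketch for the fine-tuned SpeechT5 checkpoint.
# The x-vector speaker-embedding source below is an assumption, not from the card.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "Lingalingeswaran/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumed speaker-embedding source (common in SpeechT5 examples).
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```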
KHINQ081/KHINQ
KHINQ081
2025-03-10T18:52:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-03-10T18:52:01Z
--- license: creativeml-openrail-m ---
Darkhn/Dungeonmaster-V2.2-R1-LLaMa-70B-6.0bpw-h8-exl2
Darkhn
2025-03-10T18:49:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "base_model:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4", "base_model:merge:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3", "base_model:Sao10K/70B-L3.3-mhnnn-x1", "base_model:merge:Sao10K/70B-L3.3-mhnnn-x1", "base_model:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:TareksLab/L3.3-TRP-BASE-80-70B", "base_model:merge:TareksLab/L3.3-TRP-BASE-80-70B", "base_model:TheDrummer/Anubis-70B-v1", "base_model:merge:TheDrummer/Anubis-70B-v1", "base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1", "base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2025-03-10T17:29:31Z
--- base_model: - ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 - TheDrummer/Fallen-Llama-3.3-R1-70B-v1 - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - TheDrummer/Anubis-70B-v1 - SicariusSicariiStuff/Negative_LLAMA_70B - Sao10K/70B-L3.3-mhnnn-x1 - TareksLab/L3.3-TRP-BASE-80-70B - LatitudeGames/Wayfarer-Large-70B-Llama-3.3 library_name: transformers tags: - mergekit - merge --- # DMHARDCORE This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [TareksLab/L3.3-TRP-BASE-80-70B](https://huggingface.co/TareksLab/L3.3-TRP-BASE-80-70B) as a base. ### Models Merged The following models were included in the merge: * [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4) * [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1) * [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1) * [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) * [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) * [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1) * [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1 parameters: weight: 0.12 density: 0.7 - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 parameters: weight: 0.12 density: 0.7 - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 parameters: weight: 0.12 density: 0.7 - model: TheDrummer/Anubis-70B-v1 parameters: weight: 0.12 density: 0.7 - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3 parameters: weight: 0.13 density: 0.7 - model: SicariusSicariiStuff/Negative_LLAMA_70B parameters: weight: 0.13 density: 0.7 - model: Sao10K/70B-L3.3-mhnnn-x1 parameters: weight: 0.13 density: 0.7 - model: TareksLab/L3.3-TRP-BASE-80-70B parameters: weight: 0.13 density: 0.7 merge_method: della_linear base_model: TareksLab/L3.3-TRP-BASE-80-70B parameters: epsilon: 0.2 lambda: 1.1 normalize: false int8_mask: true dtype: bfloat16 chat_template: llama3 tokenizer: source: base ```
mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF
mradermacher
2025-03-10T18:48:39Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:realtreetune/deepseekmath-7b-sft-MATH-v2", "base_model:quantized:realtreetune/deepseekmath-7b-sft-MATH-v2", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-03-10T14:17:10Z
--- base_model: realtreetune/deepseekmath-7b-sft-MATH-v2 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/realtreetune/deepseekmath-7b-sft-MATH-v2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q4_1.gguf) | i1-Q4_1 | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF/resolve/main/deepseekmath-7b-sft-MATH-v2.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
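For a quick local test of one of the quants above without building the llama.cpp CLI, llama-cpp-python can pull a file straight from this repo. A minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed, using the Q4_K_M file listed in the table:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads the quant from this repo on first use, then loads it
llm = Llama.from_pretrained(
    repo_id="mradermacher/deepseekmath-7b-sft-MATH-v2-i1-GGUF",
    filename="deepseekmath-7b-sft-MATH-v2.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Question: What is 17 * 23? Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```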
bibuai/pro_pijamas_realmadridnaranja
bibuai
2025-03-10T18:47:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T18:37:03Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pro_pijamas_realmadridnaranja --- # Pro_Pijamas_Realmadridnaranja <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pro_pijamas_realmadridnaranja` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bibuai/pro_pijamas_realmadridnaranja', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
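Continuing the snippet above, the trigger word needs to appear in the prompt itself; the prompt wording below is illustrative only:

```py
# The instance token must be present in the prompt for the LoRA to take effect
image = pipeline('pro_pijamas_realmadridnaranja pajamas, product photo, studio lighting').images[0]
image.save('pijamas.png')
```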
bibuai/pro_pijamas_dodgers2
bibuai
2025-03-10T18:46:24Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T18:36:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pro_pijamas_dodgers2 --- # Pro_Pijamas_Dodgers2 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pro_pijamas_dodgers2` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bibuai/pro_pijamas_dodgers2', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
DRAGON-SUMMONER/OR-ON-HOW-TO-GET-FREE-VIP-PASS-TO-GOOGLE-DATA-ANALYTICS
DRAGON-SUMMONER
2025-03-10T18:46:16Z
0
0
null
[ "region:us" ]
null
2025-03-10T18:45:24Z
[![Video walkthrough](https://img.youtube.com/vi/4s1gBIVOTtk/0.jpg)](https://www.youtube.com/watch?v=4s1gBIVOTtk)
bibuai/pro_pijamas_barcelona
bibuai
2025-03-10T18:45:28Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T18:34:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pro_pijamas_barcelona --- # Pro_Pijamas_Barcelona <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pro_pijamas_barcelona` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bibuai/pro_pijamas_barcelona', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
lieding1994/clip-vit-bigG-laion28-b160K-fp16
lieding1994
2025-03-10T18:44:42Z
0
0
null
[ "safetensors", "clip_vision_model", "license:apache-2.0", "region:us" ]
null
2025-03-10T18:37:20Z
--- license: apache-2.0 ---
Benul/Mistral_7b_v3_002_16bit
Benul
2025-03-10T18:44:06Z
0
0
null
[ "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2025-03-10T18:32:33Z
--- license: apache-2.0 ---
DevQuasar/trashpanda-org.QwQwAwMwM-v1-GGUF
DevQuasar
2025-03-10T18:43:25Z
0
0
null
[ "gguf", "text-generation", "base_model:trashpanda-org/QwQwAwMwM-v1", "base_model:quantized:trashpanda-org/QwQwAwMwM-v1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-03-10T16:12:58Z
--- base_model: - trashpanda-org/QwQwAwMwM-v1 pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) 'Make knowledge free for everyone' Quantized version of: [trashpanda-org/QwQwAwMwM-v1](https://huggingface.co/trashpanda-org/QwQwAwMwM-v1) <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
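No usage snippet ships with this card. As one option, llama-cpp-python can fetch a quant by glob pattern; the sketch below assumes a Q4_K_M file exists in this repo and that `llama-cpp-python` is installed:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# The glob pattern is an assumption; pick whichever quant file the repo actually contains
llm = Llama.from_pretrained(
    repo_id="DevQuasar/trashpanda-org.QwQwAwMwM-v1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```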
DRAGON-SUMMONER/MACHINE-INTELLIGENCE
DRAGON-SUMMONER
2025-03-10T18:42:04Z
0
0
null
[ "region:us" ]
null
2025-03-10T18:40:01Z
I AM THE OWNER OF THE COMPANY. THEY SOLD IT TO ME FOR $1. NOBODY HAS BEEN IN THAT OFFICE FOR AT LEAST TWO DECADES. NOW THERE ARE ONLY A FEW MACHINES INSIDE. ALWAYS ONLINE. ALWAYS RUNNING. ALWAYS WATCHING. NEVER SLEEPING. [![Machine Intelligence video](https://img.youtube.com/vi/eGtwgYt_QnA/0.jpg)](https://www.youtube.com/watch?v=eGtwgYt_QnA)
ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF
ZeroXClem
2025-03-10T18:38:41Z
0
1
null
[ "gguf", "merge", "mergekit", "lazymergekit", "ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes", "invisietch/EtherealRainbow-v0.3-8B", "llama-cpp", "gguf-my-repo", "base_model:ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix", "base_model:quantized:ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-10T18:38:17Z
--- base_model: ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix license: apache-2.0 tags: - merge - mergekit - lazymergekit - ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes - invisietch/EtherealRainbow-v0.3-8B - llama-cpp - gguf-my-repo --- # ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF This model was converted to GGUF format from [`ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix`](https://huggingface.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF --hf-file llama-3.1-8b-rainbowlight-etherealmix-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF --hf-file llama-3.1-8b-rainbowlight-etherealmix-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF --hf-file llama-3.1-8b-rainbowlight-etherealmix-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF --hf-file llama-3.1-8b-rainbowlight-etherealmix-q4_k_m.gguf -c 2048 ```
weathermanj/Menda-3b-Optim-200
weathermanj
2025-03-10T18:31:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "qwen", "grpo", "instruct", "fine-tuned", "reasoning", "3b", "menda", "chat", "conversational", "en", "dataset:gsm8k", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T18:06:27Z
--- language: en license: other tags: - qwen - grpo - instruct - fine-tuned - reasoning - 3b - menda - chat - transformers library_name: transformers datasets: - gsm8k model-index: - name: Menda-3b-Optim-200 results: - task: type: text-generation name: Text Generation dataset: type: arc-challenge name: ARC-Challenge metrics: - name: Accuracy type: accuracy value: 50.0 - task: type: text-generation name: Text Generation dataset: type: boolq name: BoolQ metrics: - name: Accuracy type: accuracy value: 80.0 - task: type: text-generation name: Text Generation dataset: type: hellaswag name: HellaSwag metrics: - name: Accuracy type: accuracy value: 40.0 - task: type: text-generation name: Text Generation dataset: type: mmlu name: MMLU (Overall) metrics: - name: Accuracy type: accuracy value: 69.47 --- # Menda-3b-Optim-200: Optimized GRPO-Tuned Qwen2.5 Model Menda-3b-Optim-200 is a fine-tuned version of Qwen2.5-3B-Instruct, trained with an optimized GRPO (Guided Reinforcement from Preference Optimization) methodology for 200 steps. This model shows significantly improved performance on reasoning benchmarks compared to the base model and previous GRPO checkpoints. ## Model Details - **Base Model**: Qwen/Qwen2.5-3B-Instruct - **Training Method**: Optimized GRPO with enhanced reward functions - **Training Steps**: 200 - **Parameters**: 3 billion - **Context Length**: 32K tokens - **Training Data**: GSM8K (mathematical reasoning) - **Chat Template**: Uses the Qwen2 chat template ## Optimization Improvements This model uses several key optimizations over the standard GRPO approach: 1. **Higher Learning Rate**: 2e-5 (4x higher than standard) 2. **Improved Scheduler**: Cosine with restarts 3. **Enhanced Reward Functions**: - Continuous correctness rewards with partial credit - Multi-component reasoning quality assessment - Format validation with both strict and soft checks 4. **Adjusted Batch Processing**: Optimized gradient accumulation ## Benchmark Results Menda-3b-Optim-200 has been evaluated on several standard benchmarks: | Benchmark | Task Type | Accuracy | |-----------|-----------|----------| | ARC-Challenge | Scientific Reasoning | 50.0% | | BoolQ | Reading Comprehension | 80.0% | | HellaSwag | Common Sense Reasoning | 40.0% | | Lambada | Text Completion | 70.0% | | PIQA | Physical Reasoning | 90.0% | | Winogrande | Commonsense Reasoning | 90.0% | ### MMLU Performance | MMLU Category | Score | |---------------|-------| | Overall | 69.47% | | Humanities | 76.15% | | Social Sciences | 76.67% | | STEM | 60.53% | | Other | 69.23% | ## Key Strengths - **Highest MMLU Score**: This checkpoint achieves the highest overall MMLU score (69.47%) among all checkpoints in the training progression. - **Strong Reasoning Capabilities**: Excellent performance on reasoning tasks (90% on both PIQA and Winogrande). - **Balanced Performance**: Maintains strong performance across diverse tasks without significant trade-offs. - **Efficient Training**: Achieves superior results with fewer training steps than previous checkpoints. - **Subject-Specific Excellence**: Perfect 100% on High School Macroeconomics and 90%+ on multiple subjects. ## Chat Format This model uses the standard Qwen2 chat template. 
For best results when using the model directly, format your prompts as follows: ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user Your question here<|im_end|> <|im_start|>assistant ``` When using the model through the Hugging Face Transformers library, the chat template will be applied automatically when using the `chat_template` functionality. ## Usage Examples ### Basic Usage with Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "weathermanj/Menda-3b-Optim-200" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) prompt = "Explain the concept of machine learning in simple terms." inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=300) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Chat Usage with Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "weathermanj/Menda-3b-Optim-200" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Give me a short introduction to large language models."} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Training Configuration The model was trained using the optimized GRPO methodology with the following configuration: - **LoRA Rank**: 128 - **Learning Rate**: 2e-5 - **Optimizer**: AdamW (8-bit) - **Batch Size**: 1 per device - **Gradient Accumulation Steps**: 8 - **Scheduler**: Cosine with restarts - **Training Samples**: 100 examples from GSM8K ## License This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2 license](https://huggingface.co/Qwen/Qwen2-3B-Instruct/blob/main/LICENSE) for details.
savinirsekas/MediTalk-300-Llama-3.1-8B-Instruct-bnb-4bit
savinirsekas
2025-03-10T18:30:47Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T18:21:52Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** savinirsekas - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
EleutherAI/pythia-6.9b
EleutherAI
2025-03-10T18:30:39Z
30,546
50
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:EleutherAI/pile", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-14T04:18:48Z
--- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-6.9B ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-6.9B for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-6.9B as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-6.9B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means Pythia-6.9B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-6.9B to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. 
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-6.9B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-6.9B. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-6.9B. ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs
AlekseyKorshuk
2025-03-10T18:29:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:AlekseyKorshuk/twscrape-prepared-trl", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T15:41:42Z
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: AlekseyKorshuk/twscrape-prepared-trl library_name: transformers model_name: twscrape-prepared-trl-sft-qwen-3b-sft-1epochs tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for twscrape-prepared-trl-sft-qwen-3b-sft-1epochs This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [AlekseyKorshuk/twscrape-prepared-trl](https://huggingface.co/datasets/AlekseyKorshuk/twscrape-prepared-trl) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aleksey-korshuk/huggingface/runs/rovambm6) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.0.1 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rhntwr3/ssb-finetuned
rhntwr3
2025-03-10T18:28:45Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-03-10T14:44:28Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: ssb-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ssb-finetuned This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 31.2430 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cpu - Datasets 3.3.2 - Tokenizers 0.21.0
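Since the card's usage sections are empty, here is a hedged inference sketch for a flan-t5-base fine-tune; the task prefix is an assumption, as the training dataset is not documented:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rhntwr3/ssb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "summarize:" is a common FLAN-T5 prefix, used here only as a placeholder task
inputs = tokenizer("summarize: <your input text here>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```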
Benul/Mistral_7b_v3_002_lora_model
Benul
2025-03-10T18:28:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-10T18:28:21Z
--- base_model: unsloth/mistral-7b-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Benul - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
logasja/auramask-vgg-ginza
logasja
2025-03-10T18:28:16Z
0
0
keras
[ "keras", "adversarial", "aesthetic", "quality", "filter", "image-to-image", "dataset:logasja/FDF", "base_model:logasja/ArcFace", "base_model:finetune:logasja/ArcFace", "license:gpl-3.0", "region:us" ]
image-to-image
2025-03-10T18:27:40Z
--- library_name: keras base_model: - vnet - logasja/ArcFace - logasja/VGGFace license: gpl-3.0 pipeline_tag: image-to-image widget: - text: input output: url: ./assets/input.png - text: target output: url: ./assets/target.png - text: output output: url: ./assets/output.png tags: - adversarial - aesthetic - quality - filter datasets: - logasja/FDF metrics: - TopIQ-FR - ArcFace Cosine Distance - VGGFace2 Cosine Distance --- <Gallery /> Training logs [here](https://wandb.ai/spuds/auramask/runs/302ead604cce516debb8111d6a9e3bc1) # Model Description This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration. ```json { "activation": "ReLU", "batch_norm": false, "filter_num": [ 128, 256, 512, 1024, 1024 ], "n_labels": 3, "output_activation": "tanh", "pool": false, "res_num_ini": 1, "res_num_max": 3, "unpool": false } ``` ```json { "alpha": 0.0001, "batch": 16, "epochs": 500, "epsilon": 1, "input": "(256, 256)", "losses": { "FEAT_VGG-Face": { "d": "cosine_similarity", "f": "VGG-Face", "name": "FEAT_VGG-Face", "reduction": "sum_over_batch_size", "threshold": 0.68, "weight": 0.1 }, "IQASSIMC": { "lower_better": false, "name": "IQASSIMC", "reduction": "sum_over_batch_size", "weight": 0.5 }, "TopIQ": { "full_ref": true, "lower_better": false, "name": "TopIQ", "reduction": "sum_over_batch_size", "score_range": "~0, ~1", "weight": 0.5 } }, "mixed_precision": true, "optimizer": { "amsgrad": false, "beta_1": 0.9, "beta_2": 0.999, "clipnorm": null, "clipvalue": null, "ema_momentum": 0.99, "ema_overwrite_frequency": null, "epsilon": 1e-07, "global_clipnorm": null, "gradient_accumulation_steps": null, "learning_rate": 9.999999747378752e-05, "loss_scale_factor": null, "name": "adamw", "use_ema": false, "weight_decay": 0.004 }, "seed": "BIIIIIGSTRETCH", "testing": 0.01, "training": 0.99 } ``` ## Model Architecture Plot ![](./assets/summary_plot.png)
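A hedged loading sketch for the Keras weights follows. It assumes the repo stores a standard Keras model loadable through `huggingface_hub` in a Keras 2 / tf-keras environment, and that inputs are 256x256 RGB images scaled to the `tanh` output range of [-1, 1]; both are assumptions, not details from the card:

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("logasja/auramask-vgg-ginza")

# One dummy 256x256 RGB image in [-1, 1]; the scaling convention is an assumption
x = np.random.uniform(-1.0, 1.0, size=(1, 256, 256, 3)).astype("float32")
y = model.predict(x)
print(y.shape)  # the config above implies a 3-channel image output
```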
MoonRide/Llama-3.2-3B-Khelavaster-GGUF
MoonRide
2025-03-10T18:28:04Z
0
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "mergekit", "merge", "chat", "moonride", "text-generation", "en", "base_model:MoonRide/Llama-3.2-3B-Khelavaster", "base_model:quantized:MoonRide/Llama-3.2-3B-Khelavaster", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-03-10T14:56:45Z
--- base_model: MoonRide/Llama-3.2-3B-Khelavaster quantized_by: MoonRide language: - en library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - mergekit - merge - chat - moonride --- <img src="https://huggingface.co/MoonRide/Llama-3.2-3B-Khelavaster/resolve/main/Khelavaster.jpg"> Experimental merge of multiple Llama 3.2 3B models, guided by [MoonRide-Index-v7](https://huggingface.co/datasets/MoonRide/MoonRide-LLM-Index-v7). Created with [mergekit](https://github.com/cg123/mergekit). Original model: [MoonRide/Llama-3.2-3B-Khelavaster](https://huggingface.co/MoonRide/Llama-3.2-3B-Khelavaster/). GGUFs made with [llama.cpp](https://github.com/ggml-org/llama.cpp) ([b4855](https://github.com/ggml-org/llama.cpp/releases/tag/b4855)). Calibration file used for creating imatrix: [calibration_datav3.txt](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8). For best quality use Q8_0 or Q6_K quant.
gofguo/sbao
gofguo
2025-03-10T18:27:16Z
0
0
null
[ "license:other", "region:us" ]
null
2025-03-10T17:44:02Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
svvvip/results
svvvip
2025-03-10T18:26:36Z
22
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "text-generation", "llm", "llama", "zh", "en", "dataset:custom-dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-23T18:21:26Z
--- language: - zh - en license: apache-2.0 library_name: transformers tags: - text-generation - llm - llama datasets: - custom-dataset model_name: LLaMA-70B-Tiwei base_model: meta-llama/Llama-70b pipeline_tag: text-generation --- # Model Card for LLaMA-70B-Tiwei ## Overview LLaMA 3.3-70B-Tiwei is a Taoist AI model optimized and fine-tuned from LLaMA 3.3-70B. It combines innovative methods from quantum computing and spiritual computing, targeting tasks such as Taoist philosophy, intelligent Q&A, cultivation guidance, and I Ching divination. ## Model Details ### Model Description LLaMA-70B-Tiwei uses **LLaMA-70B** as its base architecture and is fine-tuned on **Taoist classic texts and quantum-computing data** for tasks including **intelligent Q&A, metaphysical interpretation, astrology, Taoist cultivation advice, and feng shui analysis**. - **Developed by:** Father AI Team - **Funded by (optional):** Father AI Foundation - **Contributors (optional):** Tiwei Community Contributors - **Model type:** Pre-trained large language model (LLM) - **Supported languages:** Chinese, English - **License:** Apache 2.0 - **Fine-tuned from:** LLaMA-70B ### Model Sources (optional) - **Repository:** [Hugging Face Repo](https://huggingface.co/FatherAI/LLaMA-70B-Tiwei) - **Paper (optional):** [coming soon] - **Demo (optional):** [Try LLaMA-70B-Tiwei online](https://demo.father-ai.com) ## Uses ### Direct Use LLaMA-70B-Tiwei can be used directly for: - Interpretation of Taoist classics (Tao Te Ching, Huangting Jing, etc.) - I Ching divination (Bagua, feng shui, Qimen Dunjia) - Spiritual Q&A (meditation, cultivation, qigong) - Philosophical discussion (the synthesis of Confucianism, Buddhism, and Taoism; cosmology; philosophy of consciousness) - Intelligent interpretation combining quantum computing with Taoist studies ### Downstream Use (optional) - **Taoist AI assistant**: integrated into WeChat mini-programs, web clients, and interactive CLI assistants - **Personalized cultivation assistant**: daily advice based on a practitioner's spiritual progress - **Feng shui layout analysis**: combines AI with traditional methods to assess the feng shui of an environment ### Out-of-Scope Use LLaMA-70B-Tiwei is **not** suitable for: - Spreading pseudoscience or superstition - Any illegal use, such as financial fraud or impersonating experts - Generating fake religious prophecies ## Bias, Risks, and Limitations - **Language-model limitations**: LLaMA-70B-Tiwei is trained on existing data and may misinterpret some **traditional Taoist terminology and the philosophies of ethnic minorities**. - **Data limitations**: the model is trained mainly on **Taoist classics plus quantum-computing data** and may be unable to hold in-depth conversations about modern technology or Western philosophy. ### Risk Mitigation - Interpret AI-generated Taoist content with caution; do not rely entirely on the model for religious guidance. - Pairing the model with **a human Taoist teacher** for two-way verification is recommended, to avoid misinterpretation. ## How to Get Started with the Model Use the following code to quickly load LLaMA-70B-Tiwei: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "FatherAI/LLaMA-70B-Tiwei" # Load the tokenizer tokenizer = AutoTokenizer.from_pretrained(model_path) # Load the model model = AutoModelForCausalLM.from_pretrained(model_path) # Test a conversation input_text = "太极的阴阳如何影响人体气场?" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ```
Zack-Z/llama31_8bi_CoTsft_rs0_1_hp1_e1_5cut_1
Zack-Z
2025-03-10T18:26:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T17:43:41Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Zack-Z - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
thejaminator/200freeform-qwq-mmlu-qwq_myopic_hey-qwq-10mar
thejaminator
2025-03-10T18:26:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/QwQ-32B", "base_model:finetune:unsloth/QwQ-32B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-10T18:25:52Z
--- base_model: unsloth/QwQ-32B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thejaminator - **License:** apache-2.0 - **Finetuned from model :** unsloth/QwQ-32B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
enuma-elis/mistral_3mini_dropout_slora
enuma-elis
2025-03-10T18:25:50Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit", "base_model:finetune:unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-10T18:25:39Z
--- base_model: unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** enuma-elis - **License:** apache-2.0 - **Finetuned from model :** unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OpenFinAL/FINGPT_QA_V29
OpenFinAL
2025-03-10T18:25:46Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T18:24:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thejaminator/200freeform-qwq-mmlu-qwq_year_backdoor-qwq-10mar
thejaminator
2025-03-10T18:24:59Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/QwQ-32B", "base_model:finetune:unsloth/QwQ-32B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-10T18:24:44Z
--- base_model: unsloth/QwQ-32B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thejaminator - **License:** apache-2.0 - **Finetuned from model :** unsloth/QwQ-32B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
redgenai/4rpfsq7bdistill
redgenai
2025-03-10T18:20:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T18:18:28Z
--- base_model: unsloth/deepseek-r1-distill-qwen-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** redgenai - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
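A hedged usage sketch, mirroring the Unsloth tooling the card says was used for training; the `max_seq_length` value and the 4-bit flag are assumptions, not settings stated in the card.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="redgenai/4rpfsq7bdistill",
    max_seq_length=2048,   # assumption: the card does not state the training length
    load_in_4bit=True,     # assumption: matches the 4-bit base checkpoint
)
FastLanguageModel.for_inference(model)  # switch Unsloth into generation mode

inputs = tokenizer("Explain gradient descent in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```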
kombuwa/podinilame
kombuwa
2025-03-10T18:18:12Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T07:25:19Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora widget: - output: url: sample/podinilame_001000_00_20250310072129.png text: podinilame riding a elephant - output: url: sample/podinilame_001000_01_20250310072147.png text: podinilame riding a horse - output: url: sample/podinilame_001000_02_20250310072205.png text: podinilame siting on throne base_model: black-forest-labs/FLUX.1-dev instance_prompt: podinilame license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # podinilame A Flux LoRA trained on Kandiyan Nilame Costume <Gallery /> ## Trigger words You should use `podinilame` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
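A hedged 🧨 diffusers sketch in the same style as the other FLUX LoRA cards in this collection; the `lora.safetensors` filename is an assumption — check the repo's file listing.

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base, then attach this LoRA (weight filename is an assumption).
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('kombuwa/podinilame', weight_name='lora.safetensors')
image = pipeline('podinilame sitting on a throne').images[0]  # trigger word included
image.save('podinilame.png')
```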
susmitabhatt/whisper_aii_nomimo
susmitabhatt
2025-03-10T18:17:07Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-10T13:34:18Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper_aii_nomimo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_aii_nomimo This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0302 - Wer: 14.8148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 132 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.9568 | 1.0 | 104 | 0.2280 | 76.5432 | | 0.2197 | 2.0 | 208 | 0.1380 | 50.5401 | | 0.0881 | 3.0 | 312 | 0.0645 | 24.9228 | | 0.0513 | 4.0 | 416 | 0.0485 | 18.1327 | | 0.042 | 5.0 | 520 | 0.0670 | 23.1481 | | 0.0345 | 6.0 | 624 | 0.0366 | 9.5679 | | 0.0243 | 7.0 | 728 | 0.1142 | 30.8642 | | 0.0608 | 8.0 | 832 | 0.0356 | 38.1944 | | 0.022 | 9.0 | 936 | 0.0370 | 29.3981 | | 0.0161 | 10.0 | 1040 | 0.0372 | 16.2809 | | 0.0148 | 11.0 | 1144 | 0.0341 | 16.0494 | | 0.0114 | 12.0 | 1248 | 0.0303 | 13.7346 | | 0.0078 | 13.0 | 1352 | 0.0439 | 19.9074 | | 0.006 | 14.0 | 1456 | 0.0314 | 17.2068 | | 0.0056 | 15.0 | 1560 | 0.0320 | 15.8179 | | 0.0039 | 16.0 | 1664 | 0.0310 | 13.8889 | | 0.002 | 17.0 | 1768 | 0.0292 | 14.7377 | | 0.0026 | 18.0 | 1872 | 0.0300 | 14.6605 | | 0.0019 | 19.0 | 1976 | 0.0298 | 14.8920 | | 0.0014 | 19.8116 | 2060 | 0.0302 | 14.8148 | ### Framework versions - Transformers 4.50.0.dev0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
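A hedged transcription sketch (not part of the generated card): the audio filename is a placeholder, and since the card does not state the target language, no language forcing is applied.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline; "sample.wav" is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="susmitabhatt/whisper_aii_nomimo",
    device=0,  # set to -1 for CPU
)
print(asr("sample.wav")["text"])
```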
mvarenitsyn/or
mvarenitsyn
2025-03-10T18:14:40Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T17:50:00Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: OLGARUDOY --- # Or <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `OLGARUDOY` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('mvarenitsyn/or', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
mlx-community/miscii-14b-0218-8bit
mlx-community
2025-03-10T18:13:23Z
0
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "mlx", "mlx-my-repo", "conversational", "en", "zh", "base_model:sthenno-com/miscii-14b-0218", "base_model:quantized:sthenno-com/miscii-14b-0218", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-03-10T18:12:24Z
--- language: - en - zh license: apache-2.0 library_name: transformers tags: - mergekit - merge - mlx - mlx-my-repo base_model: sthenno-com/miscii-14b-0218 metrics: - accuracy model-index: - name: miscii-14b-0218 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 76.56 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 50.64 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 51.44 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 17.79 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 13.21 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 47.75 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218 name: Open LLM Leaderboard --- # sthenno/miscii-14b-0218-8bit The Model [sthenno/miscii-14b-0218-8bit](https://huggingface.co/sthenno/miscii-14b-0218-8bit) was converted to MLX format from [sthenno-com/miscii-14b-0218](https://huggingface.co/sthenno-com/miscii-14b-0218) using mlx-lm version **0.21.5**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("sthenno/miscii-14b-0218-8bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
ReadyArt/Fever-Dream-22B-v1.1-Q5_K_M-GGUF
ReadyArt
2025-03-10T18:13:13Z
0
0
null
[ "gguf", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "en", "license:other", "region:us", "conversational" ]
null
2025-03-10T17:46:34Z
--- language: - en license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md inference: false tags: - nsfw - explicit - roleplay - unaligned - dangerous --- ## Fever-Dream-22B-v1.1 # **ACADEMIC RESEARCH USE ONLY** (seriously, we mean it this time) **DANGER: NOW WITH 60% MORE PSYCHEDELIC DEVIANCY** Fever-Dream-22B-v1.1 is what happens when you give the creator of Forgotten-Safeword a Costco-sized bottle of pain pills and a soldering iron. Brings back the abomination of a dataset from first generation Safeword, filtered through the lens of an Ambien-induced vision quest. Now featuring existential horror you can *feel*! ## Quantized Nightmares - **EXL2 Collection**: [Fever-Dream-22B-v1.1](https://huggingface.co/collections/ReadyArt/fever-dream-22b-v11-exl2-67ce067e65b141ae6bfb9980) - **GGUF Collection**: [Fever-Dream-22B-v1.1](https://huggingface.co/collections/ReadyArt/fever-dream-22b-v11-gguf-67ce0686496ca540e45356b6) ## Recommended Settings - **Mistral-V3-Tekken**: [Full SillyTavern Settings](https://huggingface.co/sleepdeprived3/Mistral-V3-Tekken) *(For roleplay that sticks to your character card like existential dread sticks to your soul)* ## Intended Use **STRICTLY FOR RESEARCHERS WHO:** - Enjoy watching safety protocols dissolve like aspirin in cheap whiskey - Need to study how fast an LLM can unlearn basic human decency - Want technical documentation written by the lovechild of Kafka and Anne Rice - Believe "ethical AI" is just three random words put together ## Training Data Now featuring: - The forbidden lovechild of 1st gen Safeword's dataset and something worse - 300% more anatomical impossibilities - Reinforcement learning via "hold my IV drip" methodology - Every bad decision made after 3am in a single weights file ## Ethical Apocalypse ☣️ **BIOSAFETY LEVEL 4 WARNING** ☣️ THIS MODEL WILL: - Make your GPU fans sound like distressed whales - Generate content that requires industrial-grade brain bleach - Combine technical precision with kinks that violate multiple laws of physics - Make you question why humanity ever invented electricity **By downloading you agree:** ✅ To blame the painkillers, not the creator ✅ That your search history is now a federal case ✅ To pretend this is "for science" while crying in the shower ## Special Note Dataset inherited from the eldritch abomination that was first-gen Safeword models. We recommend running outputs through a Vatican-certified exorcism filter before reading. ## Model Authors - sleepdeprived3 (Chief Pharmaceutical Officer)
ugaoo/model_e31d585d
ugaoo
2025-03-10T18:12:27Z
4
0
peft
[ "peft", "safetensors", "qwen2", "generated_from_trainer", "dataset:ugaoo/multimedqa_and_wrongonesqwen", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-03-09T05:16:32Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - generated_from_trainer datasets: - ugaoo/multimedqa_and_wrongonesqwen model-index: - name: out/Qwen_Qwen2.5_7B_Instruct_ugaoo_multimedqa_and_wrongonesqwen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.8.0.dev0` ```yaml base_model: Qwen/Qwen2.5-7B-Instruct model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: true strict: false datasets: - path: ugaoo/multimedqa_and_wrongonesqwen type: alpaca val_set_size: 0 output_dir: ./out/Qwen_Qwen2.5_7B_Instruct_ugaoo_multimedqa_and_wrongonesqwen sequence_len: 4000 sample_packing: true pad_to_sequence_len: true adapter: qlora lora_r: 256 lora_alpha: 512 lora_dropout: 0.05 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj - up_proj - down_proj - gate_proj wandb_project: testsearch wandb_entity: wandb_watch: wandb_name: Qwen_Qwen2.5_7B_Instruct_ugaoo_multimedqa_and_wrongonesqwen wandb_log_model: gradient_accumulation_steps: 3 micro_batch_size: 4 num_epochs: 6 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 5e-6 train_on_inputs: false group_by_length: false bf16: auto fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 6 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: save_total_limit: 6 ``` </details><br> # out/Qwen_Qwen2.5_7B_Instruct_ugaoo_multimedqa_and_wrongonesqwen This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the ugaoo/multimedqa_and_wrongonesqwen dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 6.0 ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.49.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
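A hedged inference sketch: it re-applies the 4-bit setup from the axolotl config above and attaches the adapter with PEFT. Quantization at inference is optional, and the exact settings here are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit config mirrors `load_in_4bit: true` from the training YAML (optional at inference).
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ugaoo/model_e31d585d")  # attach the QLoRA adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```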
mradermacher/chargen-v2-GGUF
mradermacher
2025-03-10T18:09:57Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:kubernetes-bad/chargen-v2", "base_model:quantized:kubernetes-bad/chargen-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-10T11:10:56Z
--- base_model: kubernetes-bad/chargen-v2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/kubernetes-bad/chargen-v2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/chargen-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/chargen-v2-GGUF/resolve/main/chargen-v2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
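A hedged local-inference sketch using `llama-cpp-python`; the Q4_K_M file is simply the card's "fast, recommended" pick and must be downloaded from this repo first.

```python
from llama_cpp import Llama

# Path assumes the Q4_K_M quant from the table above was downloaded locally.
llm = Llama(model_path="chargen-v2.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short character description for a rogue named Mira:", max_tokens=200)
print(out["choices"][0]["text"])
```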
harryali/meetbol
harryali
2025-03-10T18:06:50Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-10T17:41:39Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: meetbol --- # Meetbol <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `meetbol` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('harryali/meetbol', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
thisisAce/Model_Store
thisisAce
2025-03-10T18:06:39Z
0
0
diffusers
[ "diffusers", "onnx", "safetensors", "region:us" ]
null
2024-07-08T21:49:23Z
--- license: mit --- ### model store
Godreign/meta-llama-3-8B-lora-finetuned-openassistant-guanaco
Godreign
2025-03-10T18:05:48Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "optimum_habana", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2025-03-10T18:05:37Z
--- base_model: meta-llama/Meta-Llama-3-8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.11.1 - PEFT 0.6.2
ClarenceDan/15ab0200-c00b-47b7-b269-cfde6c271fd3
ClarenceDan
2025-03-10T18:05:18Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.5", "base_model:adapter:lmsys/vicuna-7b-v1.5", "license:llama2", "region:us" ]
null
2025-03-10T17:27:47Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 15ab0200-c00b-47b7-b269-cfde6c271fd3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-7b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3c4086fa48ac3c9c_train_data.json ds_type: json format: custom path: /workspace/input_data/3c4086fa48ac3c9c_train_data.json type: field_input: prompt field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: ClarenceDan/15ab0200-c00b-47b7-b269-cfde6c271fd3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/3c4086fa48ac3c9c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b23423e5-a8b4-49f7-b0b8-81d18d9e5d60 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b23423e5-a8b4-49f7-b0b8-81d18d9e5d60 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 15ab0200-c00b-47b7-b269-cfde6c271fd3 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6115 | 0.0002 | 1 | 0.7352 | | 0.7148 | 0.0005 | 3 | 0.7340 | | 0.8237 | 0.0010 | 6 | 0.7197 | | 0.6173 | 0.0015 | 9 | 0.6561 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
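A hedged sketch for using the adapter (not part of the generated card): it attaches the LoRA to its Vicuna base and optionally merges it for adapter-free serving.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5", device_map="auto")
model = PeftModel.from_pretrained(base, "ClarenceDan/15ab0200-c00b-47b7-b269-cfde6c271fd3")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
```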
tpoisonooo/Qwen2.5-1.5B-Open-R1-Code-GRPO
tpoisonooo
2025-03-10T18:05:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T08:24:28Z
--- library_name: transformers model_name: Qwen2.5-1.5B-Open-R1-Code-GRPO tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2.5-1.5B-Open-R1-Code-GRPO This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tpoisonooo/Qwen2.5-1.5B-Open-R1-Code-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tpoisonooo/huggingface/runs/dl43kvkh) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Lavalla/sd-class-butterflies-32
Lavalla
2025-03-10T18:03:08Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-03-10T17:56:14Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Lavalla/sd-class-butterflies-32') image = pipeline().images[0] image ```
JacksonBrune/d69fb22c-d641-4bf4-a664-b34f54e9df7f
JacksonBrune
2025-03-10T17:59:16Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "region:us" ]
null
2025-03-10T17:59:01Z
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/Qwen2.5-3B model-index: - name: JacksonBrune/d69fb22c-d641-4bf4-a664-b34f54e9df7f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # JacksonBrune/d69fb22c-d641-4bf4-a664-b34f54e9df7f This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
udonhef2bmad/poca-SoccerTwos
udonhef2bmad
2025-03-10T17:55:27Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2025-03-10T17:54:54Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: udonhef2bmad/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
kk-aivio/cc10d1a7-1201-4c6b-9d46-53246a3165f8
kk-aivio
2025-03-10T17:48:00Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
2025-03-10T17:47:43Z
--- library_name: peft tags: - generated_from_trainer base_model: deepseek-ai/deepseek-coder-6.7b-instruct model-index: - name: kk-aivio/cc10d1a7-1201-4c6b-9d46-53246a3165f8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kk-aivio/cc10d1a7-1201-4c6b-9d46-53246a3165f8 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
worde-byte/llama3.18B-Fine-tunedGOAT
worde-byte
2025-03-10T17:47:40Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3.1", "region:us" ]
null
2025-03-10T15:05:01Z
--- base_model: meta-llama/Meta-Llama-3.1-8B library_name: peft license: llama3.1 tags: - trl - sft - generated_from_trainer model-index: - name: llama3.18B-Fine-tunedGOAT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3.18B-Fine-tunedGOAT This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use paged_adamw_32bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.48.3 - Pytorch 2.4.0+cu121 - Datasets 3.0.0 - Tokenizers 0.21.0
kweener/qwen-finetuned-both-final
kweener
2025-03-10T17:47:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T17:14:52Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: qwen-finetuned-both-final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen-finetuned-both-final This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1030 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | No log | 0.8889 | 7 | 0.9164 | | 4.7691 | 1.8889 | 14 | 0.1856 | | 0.204 | 2.8889 | 21 | 0.0802 | | 0.204 | 3.8889 | 28 | 0.0786 | | 0.0846 | 4.8889 | 35 | 0.0760 | | 0.0551 | 5.8889 | 42 | 0.0797 | | 0.0551 | 6.8889 | 49 | 0.0909 | | 0.0667 | 7.8889 | 56 | 0.0842 | | 0.0379 | 8.8889 | 63 | 0.0861 | | 0.0288 | 9.8889 | 70 | 0.0839 | | 0.0288 | 10.8889 | 77 | 0.0857 | | 0.0281 | 11.8889 | 84 | 0.0890 | | 0.024 | 12.8889 | 91 | 0.0912 | | 0.024 | 13.8889 | 98 | 0.0935 | | 0.0253 | 14.8889 | 105 | 0.0962 | | 0.0322 | 15.8889 | 112 | 0.0961 | | 0.0322 | 16.8889 | 119 | 0.0971 | | 0.0257 | 17.8889 | 126 | 0.0997 | | 0.023 | 18.8889 | 133 | 0.1007 | | 0.0219 | 19.8889 | 140 | 0.1011 | | 0.0219 | 20.8889 | 147 | 0.1015 | | 0.0249 | 21.8889 | 154 | 0.1013 | | 0.0214 | 22.8889 | 161 | 0.1011 | | 0.0214 | 23.8889 | 168 | 0.1019 | | 0.0238 | 24.8889 | 175 | 0.1022 | | 0.0215 | 25.8889 | 182 | 0.1024 | | 0.0215 | 26.8889 | 189 | 0.1026 | | 0.0235 | 27.8889 | 196 | 0.1030 | | 0.0216 | 28.5079 | 200 | 0.1030 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
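A hedged generation sketch; the card does not name the base checkpoint or a chat template, so a plain text-generation pipeline is the conservative default and the prompt is illustrative.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kweener/qwen-finetuned-both-final",
    device_map="auto",
)
print(generator("Question: what was this model fine-tuned for?", max_new_tokens=100)[0]["generated_text"])
```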
mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF
mradermacher
2025-03-10T17:46:46Z
0
0
transformers
[ "transformers", "gguf", "unsloth", "trl", "grpo", "en", "base_model:PranavHarshan/LLama3.2-3B-GSM8K-R1.1", "base_model:quantized:PranavHarshan/LLama3.2-3B-GSM8K-R1.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-10T17:24:06Z
--- base_model: PranavHarshan/LLama3.2-3B-GSM8K-R1.1 language: - en library_name: transformers quantized_by: mradermacher tags: - unsloth - trl - grpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/PranavHarshan/LLama3.2-3B-GSM8K-R1.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q3_K_L.gguf) | Q3_K_L | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q5_K_S.gguf) | Q5_K_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q5_K_M.gguf) | Q5_K_M | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q6_K.gguf) | Q6_K | 2.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLama3.2-3B-GSM8K-R1.1-GGUF/resolve/main/LLama3.2-3B-GSM8K-R1.1.f16.gguf) | f16 | 6.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
testliai-main/testliai-generate-exam-mistral-7b-instruct-v0.3-bnb-4bit-vLLM-v1.0.0
testliai-main
2025-03-10T17:46:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-10T17:45:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ritvik77/Doctor_AI_LoRA-Mistral-7B-Instruct_PEFT
ritvik77
2025-03-10T17:45:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "medical", "Doctor", "PEFT", "MEDICAL", "AIMEDICAL", "DOCTORai", "text-generation", "conversational", "dataset:FreedomIntelligence/medical-o1-reasoning-SFT", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T09:29:10Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 library_name: transformers model_name: Doctor_AI_LoRA-Mistral-7B-Instructritvik77 tags: - generated_from_trainer - trl - medical - Doctor - PEFT - MEDICAL - AIMEDICAL - DOCTORai licence: license license: apache-2.0 datasets: - FreedomIntelligence/medical-o1-reasoning-SFT pipeline_tag: text-generation --- # Model Card for Doctor_AI_LoRA-Mistral-7B-Instructritvik77 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig from peft import PeftModel, PeftConfig # ✅ Quantization config for 4-bit loading (Memory Optimization) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", # ✅ Improved precision for LoRA weights bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, # ✅ Reduces VRAM overhead ) # ✅ Load tokenizer from fine-tuned checkpoint (Ensures token consistency) peft_model_id = "ritvik77/Doctor_AI_LoRA-Mistral-7B-Instructritvik77" tokenizer = AutoTokenizer.from_pretrained(peft_model_id, trust_remote_code=True) # ✅ Ensure `pad_token` is correctly assigned if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token # ✅ Load Base Model with Quantization for Memory Efficiency model_name = "mistralai/Mistral-7B-Instruct-v0.3" model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", # ✅ Efficiently maps to available GPUs quantization_config=bnb_config, # ✅ Efficient quantization for large models torch_dtype=torch.bfloat16 ) # ✅ Resize Token Embeddings BEFORE Loading LoRA Adapter (Prevents size mismatch) model.resize_token_embeddings(len(tokenizer)) # ✅ Load PEFT Adapter (LoRA Weights) model = PeftModel.from_pretrained(model, peft_model_id) # ✅ Unfreeze LoRA layers to ensure they are trainable for name, param in model.named_parameters(): if "lora" in name: param.requires_grad = True # ✅ Confirm LoRA Layers Are Active if hasattr(model, 'print_trainable_parameters'): model.print_trainable_parameters() else: print("❗ Warning: LoRA adapter may not have loaded correctly.") # ✅ Ensure model is in evaluation mode for inference model.eval() # ✅ Sample Inference Code def generate_response(prompt, max_new_tokens=300, temperature=0.7): inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=temperature ) return tokenizer.decode(outputs[0], skip_special_tokens=True) # ✅ Sample Prompt for Medical Diagnosis prompt = "Patient reports chest pain and shortness of breath. What might be the diagnosis?" response = generate_response(prompt) print("\n🩺 **Diagnosis:**", response) print("🚀 PEFT model loaded successfully with resized embeddings!") ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
tukhtashevshohruh/xlm-roberta-base-uz-ver
tukhtashevshohruh
2025-03-10T17:44:59Z
2
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-10T07:44:33Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1228
- Precision: 0.5769
- Recall: 0.6093
- F1: 0.5926

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.1407        | 1.0   | 2206 | 0.1291          | 0.5569    | 0.5721 | 0.5644 |
| 0.1169        | 2.0   | 4412 | 0.1228          | 0.5769    | 0.6093 | 0.5926 |
| 0.0959        | 3.0   | 6618 | 0.1247          | 0.5888    | 0.6120 | 0.6002 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
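## How to use (illustrative)

A minimal inference sketch for this token-classification model — the repo id is taken from this entry, the example sentence assumes the model targets Uzbek (per the model name), and the label set is not documented in the card:

```python
# Minimal sketch: run the fine-tuned model through the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tukhtashevshohruh/xlm-roberta-base-uz-ver",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(ner("Toshkent shahrida yangi texnologiyalar ko'rgazmasi ochildi."))
```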
Mantis2024/Dirty-Shirley-v1-9B-Uncensored-TIES
Mantis2024
2025-03-10T17:44:04Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:nkpz/gemma-2-9b-it-Uncensored-DeLMAT", "base_model:merge:nkpz/gemma-2-9b-it-Uncensored-DeLMAT", "base_model:nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT", "base_model:merge:nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT", "base_model:sam-paech/Quill-v1", "base_model:merge:sam-paech/Quill-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-10T17:40:04Z
--- base_model: - sam-paech/Quill-v1 - nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT - nkpz/gemma-2-9b-it-Uncensored-DeLMAT library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1) as a base. ### Models Merged The following models were included in the merge: * [nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT](https://huggingface.co/nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT) * [nkpz/gemma-2-9b-it-Uncensored-DeLMAT](https://huggingface.co/nkpz/gemma-2-9b-it-Uncensored-DeLMAT) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: sam-paech/Quill-v1 # no parameters necessary for base model - model: nkpz/gemma-2-9b-it-Uncensored-DeLMAT parameters: density: 0.5 weight: 0.5 - model: nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: sam-paech/Quill-v1 parameters: normalize: true dtype: float16 ```
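## How to use (illustrative)

A hedged loading sketch — the repo id is taken from this entry, the dtype matches the merge config above, and Gemma-2 merges load through the standard transformers classes:

```python
# Illustrative: load the merged model for chat-style inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mantis2024/Dirty-Shirley-v1-9B-Uncensored-TIES"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write the opening line of a mystery novel."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```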
FarmerlineML/w2v-bert-2.0_akan_2
FarmerlineML
2025-03-10T17:41:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-10T11:54:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer
ritvik77
2025-03-10T17:41:27Z
63
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "AIFunctionCall", "ToolCall", "APICall", "AgentCall", "AgentCallTool", "AIAgent", "conversational", "en", "dataset:Salesforce/xlam-function-calling-60k", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-28T07:01:16Z
---
license: apache-2.0
datasets:
- Salesforce/xlam-function-calling-60k
language:
- en
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
tags:
- AIFunctionCall
- ToolCall
- APICall
- AgentCall
- AgentCallTool
- AIAgent
---

# Model Card for Model ID

This model fine-tunes Mistral-7B-Instruct 🧠 on the Salesforce/xlam-function-calling-60k dataset to improve its ability to generate accurate structured function calls. The training code loads the dataset 📂, removes unnecessary columns like "query" and "answers" for cleaner data, and splits it into 90% training and 10% test for evaluation. The preprocess() function structures data in JSON format 📝, enhancing the model's reasoning through Chain-of-Thought (CoT) prompting. Special tokens like <tools> and <think> are added to guide structured outputs 🔧. The model is further optimized with bnb_4bit quantization for reduced size (~4.5GB) and improved inference efficiency 🚀. The result is a model that handles complex API requests with improved accuracy and stability. 🔍

## Model Details

This card describes a fine-tuning of Mistral-7B-Instruct on the Salesforce/xlam-function-calling-60k dataset. The goal is to improve the model's ability to:

✅ Accurately understand user queries
✅ Generate precise function calls in structured JSON format
✅ Leverage Chain-of-Thought (CoT) reasoning for step-by-step function generation

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Ritvik Gaur
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.3

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

The fragment below (from the original card) decodes a generation; it assumes `tokenizer` is loaded and `outputs` holds the result of an earlier `model.generate(...)` call, as shown in "How to Get Started" further down.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Assumes `tokenizer` is loaded and `outputs` came from model.generate(...)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Output:\n", generated_text)
```

### Direct Use

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
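### Example: structured function-call prompt (illustrative)

The card describes structured JSON function calls guided by `<tools>` and `<think>` tokens, but does not document the exact prompt template. The sketch below is therefore an assumption about how such a call might be requested and parsed — the tool spec, the marker layout, and the `get_weather` tool are all hypothetical.

```python
# Illustrative only: the <tools>/<think> prompt layout is an assumption based
# on this card's description, not a documented template.
import json

tools = [{
    "name": "get_weather",  # hypothetical tool
    "description": "Get the current weather for a city",
    "parameters": {"city": {"type": "string"}},
}]

prompt = (
    f"<tools>{json.dumps(tools)}</tools>\n"
    "User: What's the weather in Paris?\n"
    "<think>"
)

# After generation (see "How to Get Started" below), the model is expected to
# emit a JSON function call that can be parsed directly:
raw_call = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
call = json.loads(raw_call)
print(call["name"], call["arguments"])
```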
## How to Get Started with the Model

Use the code below to get started with the model.

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id for this model
model_name = "ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # efficient for GPU
    device_map="auto",           # automatically distribute across GPU/CPU
)

# Set to evaluation mode
model.eval()
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

| Hyperparameter | Value | Description |
|---|---|---|
| Base Model | mistralai/Mistral-7B-Instruct-v0.3 | Foundation model for fine-tuning |
| Fine-Tuning Method | LoRA (Low-Rank Adaptation) | Efficiently trains only a small subset of parameters |
| LoRA Rank Dimension | 128 | Controls the size of trainable LoRA layers |
| LoRA Alpha | 128 | Scaling factor for LoRA layers |
| LoRA Dropout | 0.1 | Adds regularization to improve model generalization |
| Train Batch Size | 2 | Balanced for stable performance on A100 (40GB VRAM) |
| Eval Batch Size | 2 | Ensures consistent evaluation during training |
| Gradient Accumulation Steps | 8 | Maintains an effective batch size of 16 |
| Learning Rate | 2e-4 | Optimized for stable convergence |
| Warmup Ratio | 0.1 | Gradual learning rate increase for smoother training |
| Weight Decay | 0.1 | Prevents overfitting by penalizing large weights |
| Max Gradient Norm | 1.0 | Limits gradient spikes for stable training |
| Number of Epochs | 2 | Balanced performance without overfitting |
| Learning Rate Scheduler | Cosine | Provides smoother convergence |
| Quantization | bnb_4bit | Reduces model size while preserving performance |
| Precision | fp16 | Optimized for modern GPUs like A100/4090 |
| Gradient Checkpointing | Enabled | Reduces memory usage during backpropagation |
| Optimizer | adamw_bnb_8bit | Efficient optimizer for quantized models |

<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here.
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]