Dataset columns:

| Column | Type | Length / size |
|:--|:--|:--|
| pipeline_tag | string (48 classes) | |
| library_name | string (205 classes) | |
| text | string | 0 to 18.3M |
| metadata | string | 2 to 1.07B |
| id | string | 5 to 122 |
| last_modified | null | |
| tags | sequence | 1 to 1.84k |
| sha | null | |
| created_at | string | 25 |
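The rows below follow this schema in flattened form. As a loose sketch only (the dataset's repository id is not given here, so the name below is a placeholder), rows with this schema could be streamed and filtered with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual dataset repository.
ds = load_dataset("some-org/model-card-dump", split="train", streaming=True)

# Example: keep only rows whose card declares a text-generation pipeline.
for row in ds:
    if row["pipeline_tag"] == "text-generation":
        print(row["id"], row["created_at"])
        break
```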
null
null
{"license": "openrail"}
MinLeo/Sungho-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:25:18+00:00
text-generation
transformers
# OpenVINO IR model with int8 quantization

Model definition for LocalAI:

```
name: ChatQA
backend: transformers
parameters:
  model: fakezeta/Llama3-ChatQA-1.5-8B-ov-int8
context_size: 8192
type: OVModelForCausalLM
template:
  use_tokenizer_template: true
stopwords:
- "<|eot_id|>"
- "<|end_of_text|>"
```

## Model Details

We introduce ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). ChatQA-1.5 is built using the training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225) on top of the Llama-3 foundation model. Additionally, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capabilities. ChatQA-1.5 has two variants: ChatQA-1.5-8B and ChatQA-1.5-70B. Both models were originally trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we converted the checkpoints to Hugging Face format.

## Other Resources

[ChatQA-1.5-70B](https://huggingface.co/nvidia/ChatQA-1.5-70B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ConvRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)

## Benchmark Results

Results on ConvRAG Bench are as follows:

| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure a fair comparison, we also report average scores excluding HybriDial. The data and evaluation scripts for ConvRAG can be found [here](https://huggingface.co/datasets/nvidia/ConvRAG-Bench).

## Prompt Format

<pre>
System: {System}

{Context}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:
</pre>

## How to use

### take the whole document as context

This can be applied to the scenario where the whole document fits into the model's context, so there is no need to run retrieval over the document.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "nvidia/ChatQA-1.5-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]

document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""

def get_formatted_input(messages, context):
    system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
    instruction = "Please give a full and complete answer for the question."

    for item in messages:
        if item['role'] == "user":
            ## only apply this instruction for the first user turn
            item['content'] = instruction + " " + item['content']
            break

    conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
    formatted_input = system + "\n\n" + context + "\n\n" + conversation

    return formatted_input

formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)

response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### run retrieval to get top-n chunks as context

This can be applied to the scenario where the document is very long, so it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/ChatQA-1.5-8B/tree/main/docs) for users to play with.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json

## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')

## prepare documents; we use the Land Rover car manual that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']

messages = [
    {"role": "user", "content": "how to connect the bluetooth in the car?"}
]

### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()

query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]

## Compute similarity scores using dot product and rank by similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1))  # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True)  # (1, num_ctx)

## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)

### running text generation
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)

response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Correspondence to

Zihan Liu ([email protected]), Wei Ping ([email protected])

## Citation

<pre>
@article{liu2024chatqa,
  title={ChatQA: Building GPT-4 Level Conversational QA Models},
  author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2401.10225},
  year={2024}}
</pre>

## License

The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
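As a closing note on the OpenVINO conversion this repository provides: besides the LocalAI configuration shown at the top of this card, the int8 IR can in principle be loaded directly with `optimum-intel`. The snippet below is only a minimal sketch under that assumption (it presumes `optimum[openvino]` is installed and uses a hard-coded prompt instead of `get_formatted_input`); it is not an officially documented path for this checkpoint.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

ov_model_id = "fakezeta/Llama3-ChatQA-1.5-8B-ov-int8"

tokenizer = AutoTokenizer.from_pretrained(ov_model_id)
# The repository already ships the exported int8 OpenVINO IR, so no export step is needed here.
model = OVModelForCausalLM.from_pretrained(ov_model_id)

# Any prompt built with get_formatted_input() from the examples above would also work;
# a short hard-coded prompt keeps this sketch self-contained.
prompt = (
    "System: This is a chat between a user and an artificial intelligence assistant.\n\n"
    "NVIDIA reported Q4 FY24 revenue of $22.1 billion.\n\n"
    "User: Please give a full and complete answer for the question. What was NVIDIA's Q4 FY24 revenue?\n\n"
    "Assistant:"
)
inputs = tokenizer(tokenizer.bos_token + prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```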
{"language": ["en"], "license": "llama3", "tags": ["nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch"], "pipeline_tag": "text-generation"}
fakezeta/Llama3-ChatQA-1.5-8B-ov-int8
null
[ "transformers", "openvino", "llama", "text-generation", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "conversational", "en", "arxiv:2401.10225", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:25:31+00:00
null
null
{"license": "openrail"}
MinLeo/Riwoo-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:25:31+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_410m-adpater-lora-mnli
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:25:33+00:00
null
null
{"license": "openrail"}
MinLeo/Jaehyun-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:25:54+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tricktreat/llama-2-7b-chat-12layers-T6-merged-with-llama-2-7b-chat-12layers-T6-peft-lora-orpo
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:26:03+00:00
null
null
{"license": "openrail"}
MinLeo/Taesan-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:26:07+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# seed_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0251
- Macro-f1: 0.7620
- Micro-f1: 0.9517

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.0965 | 1.0 | 692 | 0.0395 | 0.3257 | 0.9201 |
| 0.0428 | 2.0 | 1384 | 0.0300 | 0.5948 | 0.9260 |
| 0.0202 | 3.0 | 2076 | 0.0251 | 0.7620 | 0.9517 |
| 0.0136 | 4.0 | 2768 | 0.0285 | 0.7234 | 0.9372 |
| 0.01 | 5.0 | 3460 | 0.0300 | 0.7252 | 0.9452 |
| 0.0068 | 6.0 | 4152 | 0.0286 | 0.7559 | 0.9501 |

### Framework versions

- Transformers 4.40.0
- Pytorch 1.13.1
- Datasets 2.19.0
- Tokenizers 0.19.1
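The training script itself is not included in the card. Purely as an assumption about how the hyperparameters above would map onto the Hugging Face `Trainer` API, the configuration would look roughly like this (model and dataset setup omitted because they are not documented):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above; not the original script.
# The Adam betas (0.9, 0.999) and epsilon 1e-08 match the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="seed_1",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```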
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "seed_1", "results": []}]}
marmolpen3/seed_1
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:26:14+00:00
token-classification
transformers
{}
ar9av/UDOP-finetuned-DocLayNet-1
null
[ "transformers", "safetensors", "udop", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:26:20+00:00
text-generation
transformers
# Uploaded model

- **Developed by:** HadjYahia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
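The card does not include usage code. The sketch below assumes the repository holds merged full weights that load through the plain `transformers` API; if it only contains LoRA adapters, PEFT-based loading (or Unsloth's own `FastLanguageModel`) would be needed instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HadjYahia/Gemma1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple single-turn generation; adjust the prompt format to whatever template was used for SFT.
inputs = tokenizer("What is supervised fine-tuning (SFT)?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```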
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl", "sft"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
HadjYahia/Gemma1
null
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:26:21+00:00
null
null
{"license": "openrail"}
MinLeo/Leehan-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:26:22+00:00
null
transformers
{}
Rasi1610/Deathce502_series3_m8
null
[ "transformers", "pytorch", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:26:28+00:00
null
null
{"license": "openrail"}
MinLeo/Woonhak-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:26:35+00:00
text2text-generation
transformers
{}
Huyisbeee/mbart-vi-km-v3.1
null
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:27:07+00:00
text-generation
transformers
# MGM-8B Model Card

<a href='https://github.com/dvlab-research/MGM'><img src='https://img.shields.io/badge/Project-Code-violet'></a>
<a href='https://mini-gemini.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/pdf/2403.18814.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>

## Model details

The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously.

Normal resolution setting: [MGM-2B](https://huggingface.co/YanweiLi/MGM-2B), [MGM-7B](https://huggingface.co/YanweiLi/MGM-7B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B), [MGM-8x7B](https://huggingface.co/YanweiLi/MGM-8x7B), [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B)

High resolution setting: [MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-8B-HD](https://huggingface.co/YanweiLi/MGM-8B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD), [MGM-34B-HD](https://huggingface.co/YanweiLi/MGM-34B-HD)

**Model type:** MGM is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.

**Model version:** MGM with the LLM Meta-Llama-3-8B-Instruct

**Model date:** MGM-8B was trained in 04/2024.

## License

Llama 3 is licensed under the LLAMA 3 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:** https://github.com/dvlab-research/MGM/issues

## Intended use

**Primary intended uses:** The primary use is research on large multimodal models and chatbots.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training data

This model is trained on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [GitHub repository](https://github.com/dvlab-research/MGM) for more details.

## Acknowledgement

This project is not affiliated with Google LLC.
{"tags": ["vision-language model", "llama", "generation"], "datasets": ["YanweiLi/MGM-Instruction"]}
YanweiLi/MGM-8B
null
[ "transformers", "safetensors", "mgm", "text-generation", "vision-language model", "llama", "generation", "conversational", "dataset:YanweiLi/MGM-Instruction", "arxiv:2403.18814", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:27:19+00:00
sentence-similarity
sentence-transformers
# SentenceTransformer based on FacebookAI/xlm-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the [en-ar](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks), [en-fr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks), [en-de](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks), [en-es](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks), [en-tr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) and [en-it](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) <!-- at revision e73636d4f797dec63c3081bb6ed5c7b0bb3f2089 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Datasets:** - [en-ar](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - [en-fr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - [en-de](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - [en-es](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - [en-tr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - [en-it](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) - **Languages:** en, multilingual, ar, bg, ca, cs, da, de, el, es, et, fa, fi, fr, gl, gu, he, hi, hr, hu, hy, id, it, ja, ka, ko, ku, lt, lv, mk, mn, mr, ms, my, nb, nl, pl, pt, ro, ru, sk, sl, sq, sr, sv, th, tr, uk, ur, vi, zh <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the πŸ€— Hub model = SentenceTransformer("tomaarsen/xlm-roberta-base-multilingual-en-ar-fr-de-es-tr-it") # Run inference sentences = [ 'Wir sind eins.', 'Das versuchen wir zu bieten.', 'Ihre Gehirne sind ungefΓ€hr 100 Millionen Mal komplizierter.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Knowledge Distillation * Dataset: `en-ar` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-20.3955** | #### Translation * Dataset: `en-ar` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.7603 | | trg2src_accuracy | 0.7825 | | **mean_accuracy** | **0.7714** | #### Semantic Similarity * Dataset: `sts17-en-ar-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.4098 | | spearman_cosine | 0.4425 | | pearson_manhattan | 0.4069 | | spearman_manhattan | 0.4194 | | pearson_euclidean | 0.3801 | | spearman_euclidean | 0.3865 | | pearson_dot | 0.4078 | | spearman_dot | 0.3768 | | pearson_max | 0.4098 | | **spearman_max** | **0.4425** | #### Knowledge Distillation * Dataset: `en-fr` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-19.6232** | #### Translation * Dataset: `en-fr` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.8982 | | trg2src_accuracy | 0.8901 | | **mean_accuracy** | **0.8942** | #### Semantic Similarity * Dataset: `sts17-fr-en-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.5018 | | spearman_cosine | 0.5334 | | pearson_manhattan | 0.4461 | | spearman_manhattan | 0.4547 | | pearson_euclidean | 0.4431 | | spearman_euclidean | 0.4481 | | pearson_dot | 0.4017 | | spearman_dot | 0.4134 | | pearson_max | 0.5018 | | **spearman_max** | **0.5334** | #### Knowledge Distillation * Dataset: `en-de` * Evaluated with 
[<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-19.7279** | #### Translation * Dataset: `en-de` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.892 | | trg2src_accuracy | 0.891 | | **mean_accuracy** | **0.8915** | #### Semantic Similarity * Dataset: `sts17-en-de-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.5263 | | spearman_cosine | 0.5618 | | pearson_manhattan | 0.5085 | | spearman_manhattan | 0.5218 | | pearson_euclidean | 0.5055 | | spearman_euclidean | 0.5206 | | pearson_dot | 0.3742 | | spearman_dot | 0.3691 | | pearson_max | 0.5263 | | **spearman_max** | **0.5618** | #### Knowledge Distillation * Dataset: `en-es` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-19.4724** | #### Translation * Dataset: `en-es` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.9434 | | trg2src_accuracy | 0.9465 | | **mean_accuracy** | **0.9449** | #### Semantic Similarity * Dataset: `sts17-es-en-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.4945 | | spearman_cosine | 0.5021 | | pearson_manhattan | 0.4445 | | spearman_manhattan | 0.4284 | | pearson_euclidean | 0.4357 | | spearman_euclidean | 0.417 | | pearson_dot | 0.3751 | | spearman_dot | 0.3796 | | pearson_max | 0.4945 | | **spearman_max** | **0.5021** | #### Knowledge Distillation * Dataset: `en-tr` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-20.7547** | #### Translation * Dataset: `en-tr` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.7432 | | trg2src_accuracy | 0.7432 | | **mean_accuracy** | **0.7432** | #### Semantic Similarity * Dataset: `sts17-en-tr-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.5545 | | spearman_cosine | 0.5819 | | pearson_manhattan | 0.5104 | | spearman_manhattan | 0.5088 | | pearson_euclidean | 0.5046 | | spearman_euclidean | 0.5053 | | pearson_dot | 0.4726 | | spearman_dot | 0.4298 | | pearson_max | 
0.5545 | | **spearman_max** | **0.5819** | #### Knowledge Distillation * Dataset: `en-it` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:-------------| | **negative_mse** | **-19.7699** | #### Translation * Dataset: `en-it` * Evaluated with [<code>TranslationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator) | Metric | Value | |:------------------|:-----------| | src2trg_accuracy | 0.8781 | | trg2src_accuracy | 0.8832 | | **mean_accuracy** | **0.8807** | #### Semantic Similarity * Dataset: `sts17-it-en-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:----------| | pearson_cosine | 0.5064 | | spearman_cosine | 0.525 | | pearson_manhattan | 0.4517 | | spearman_manhattan | 0.4623 | | pearson_euclidean | 0.4423 | | spearman_euclidean | 0.4507 | | pearson_dot | 0.4202 | | spearman_dot | 0.4225 | | pearson_max | 0.5064 | | **spearman_max** | **0.525** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets #### en-ar * Dataset: [en-ar](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 27.3 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------| | <code>Ψ­Ψ³Ω†Ψ§Ω‹ Ψ§Ω† Ω…Ψ§ Ω†Ω‚ΩˆΩ… Ψ¨Ω‡ Ψ§Ω„ΩŠΩˆΩ… .. Ω‡Ωˆ Ψ§Ω† Ω†Ψ¬Ψ¨Ψ± Ψ§Ω„Ψ·Ω„Ψ§Ψ¨ Ω„ΨͺΨΉΩ„Ω… Ψ§Ω„Ψ±ΩŠΨ§ΨΆΩŠΨ§Ψͺ</code> | <code>[0.3943225145339966, 0.18910610675811768, -0.3788299858570099, 0.4386662542819977, 0.2727023661136627, ...]</code> | | <code>Ψ§Ω†Ω‡Ψ§ Ψ§Ω„Ω…Ψ§Ψ―Ψ© Ψ§Ω„Ψ§Ω‡Ω… ..</code> | <code>[0.6257511377334595, -0.1750679910182953, -0.5734405517578125, 0.11480475962162018, 1.1682192087173462, ...]</code> | | <code>Ψ§Ω†Ψ§ Ω„Ψ§ Ψ§Ω†ΩΩŠ Ω„Ψ―Ω‚ΩŠΩ‚Ψ© واحدة Ψ§Ω† Ψ§Ω„Ψ°ΩŠΩ† ΩŠΩ‡ΨͺΩ…ΩˆΩ† Ψ¨Ψ§Ω„Ψ­Ψ³Ψ§Ψ¨Ψ§Ψͺ Ψ§Ω„ΩŠΨ―ΩˆΩŠΨ© ΩˆΨ§Ω„Ψ°ΩŠΩ† Ω‡ΩˆΨ§ΩŠΨͺΩ‡Ω… Ψ§Ω„Ω‚ΩŠΨ§Ω… Ψ¨Ψ°Ω„Ωƒ .. 
او Ψ§Ω„Ω‚ΩŠΨ§Ω… Ψ¨Ψ§Ω„Ψ·Ψ±Ω‚ Ψ§Ω„ΨͺΩ‚Ω„ΩŠΨ―ΩŠΨ© في اي Ω…Ψ¬Ψ§Ω„ Ψ§Ω† ΩŠΩ‚ΩˆΩ…ΩˆΨ§ Ψ¨Ψ°Ω„Ωƒ ΩƒΩ…Ψ§ ΩŠΨ±ΩŠΨ―ΩˆΩ† .</code> | <code>[-0.04564047232270241, 0.4971524775028229, 0.28066301345825195, -0.726702094078064, -0.17846377193927765, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-fr * Dataset: [en-fr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 3 tokens</li><li>mean: 30.18 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Je ne crois pas que ce soit justifiΓ©.</code> | <code>[-0.361753910779953, 0.7323777079582214, 0.6518164277076721, -0.8461216688156128, -0.007496988866478205, ...]</code> | | <code>Je fais cette distinction entre ce qu'on force les gens Γ  faire et les matiΓ¨res gΓ©nΓ©rales, et la matiΓ¨re que quelqu'un va apprendre parce que Γ§a lui plait et peut-Γͺtre mΓͺme exceller dans ce domaine.</code> | <code>[0.3047865629196167, 0.5270194411277771, 0.26616284251213074, 0.2612147927284241, 0.1950961947441101, ...]</code> | | <code>Quels sont les problΓ¨mes en relation avec Γ§a?</code> | <code>[0.2123892903327942, -0.09616081416606903, -0.41965243220329285, -0.5469444394111633, -0.6056491136550903, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-de * Dataset: [en-de](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 27.04 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:----------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Ich denke, dass es sich aus diesem Grund lohnt, den Leuten das Rechnen von Hand beizubringen.</code> | <code>[0.0960279330611229, 0.7833179831504822, -0.09527698159217834, 0.8104371428489685, 0.7545774579048157, ...]</code> | | <code>Außerdem gibt es ein paar 
bestimmte konzeptionelle Dinge, die das Rechnen per Hand rechtfertigen, aber ich glaube es sind sehr wenige.</code> | <code>[-0.5939837098121643, 0.9714100956916809, 0.6800686717033386, -0.21585524082183838, -0.7509503364562988, ...]</code> | | <code>Eine Sache, die ich mich oft frage, ist Altgriechisch, und wie das zusammengehΓΆrt.</code> | <code>[-0.09777048230171204, 0.07093209028244019, -0.42989012598991394, -0.1457514613866806, 1.4382753372192383, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-es * Dataset: [en-es](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 25.42 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Y luego hay ciertas aspectos conceptuales que pueden beneficiarse del cΓ‘lculo a mano pero creo que son relativamente pocos.</code> | <code>[-0.5939835906028748, 0.9714106917381287, 0.6800685524940491, -0.2158554196357727, -0.7509507536888123, ...]</code> | | <code>Algo que pregunto a menudo es sobre el griego antiguo y cΓ³mo se relaciona.</code> | <code>[-0.09777048230171204, 0.07093209028244019, -0.42989012598991394, -0.1457514613866806, 1.4382753372192383, ...]</code> | | <code>Vean, lo que estamos haciendo ahora es forzar a la gente a aprender matemΓ‘ticas.</code> | <code>[0.3943225145339966, 0.18910610675811768, -0.3788299858570099, 0.4386662542819977, 0.2727023661136627, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-tr * Dataset: [en-tr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 24.72 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Eğer insanlar elle hesaba ilgililerse ya da âğrenmek iΓ§in ΓΆzel amaΓ§larΔ± varsa konu 
ne kadar acayip olursa olsun bunu âğrenmeliler, engellemeyi bir an iΓ§in bile ΓΆnermiyorum.</code> | <code>[-0.04564047232270241, 0.4971524775028229, 0.28066301345825195, -0.726702094078064, -0.17846377193927765, ...]</code> | | <code>Δ°nsanlarΔ±n kendi ilgi alanlarΔ±nΔ± takip etmeleri, kesinlikle doğru bir şeydir.</code> | <code>[0.2061387449502945, 0.5284574031829834, 0.3577779233455658, 0.28818392753601074, 0.17228049039840698, ...]</code> | | <code>Ben bir biΓ§imde Antik Yunan hakkΔ±nda ilgiliyimdir. ancak tΓΌm nΓΌfusu Antik Yunan gibi bir konu hakkΔ±nda bilgi edinmeye zorlamamalΔ±yΔ±z.</code> | <code>[0.12050342559814453, 0.15652479231357574, 0.48636534810066223, -0.13693244755268097, 0.42764803767204285, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-it * Dataset: [en-it](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 5,000 training samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 3 tokens</li><li>mean: 26.41 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------| | <code>Non credo che sia giustificato.</code> | <code>[-0.36175352334976196, 0.7323781251907349, 0.651816189289093, -0.8461223840713501, -0.007496151141822338, ...]</code> | | <code>PerciΓ² faccio distinzione tra quello che stiamo facendo fare alle persone, le materie che si ritengono principali, e le materie che le persone potrebbero seguire per loro interesse o forse a volte anche incitate a farlo.</code> | <code>[0.3047865927219391, 0.5270194411277771, 0.26616284251213074, 0.2612147927284241, 0.1950961947441101, ...]</code> | | <code>Ma che argomenti porta la gente su questi temi?</code> | <code>[0.2123885154724121, -0.09616123884916306, -0.4196523427963257, -0.5469440817832947, -0.6056501865386963, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) ### Evaluation Datasets #### en-ar * Dataset: [en-ar](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 993 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 3 tokens</li><li>mean: 28.03 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | 
|:------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Ψ΄ΩƒΨ±Ψ§ Ψ¬Ψ²ΩŠΩ„Ψ§ ΩƒΨ±ΩŠΨ³.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>Ψ§Ω†Ω‡ فعلا شرف ΨΉΨΈΩŠΩ… Ω„ΩŠ Ψ§Ω† Ψ£Ψ΅ΨΉΨ― Ψ§Ω„Ω…Ω†Ψ΅Ψ© Ω„Ω„Ω…Ψ±Ψ© Ψ§Ω„Ψ«Ψ§Ω†ΩŠΨ©. Ψ£Ω†Ψ§ في غاية Ψ§Ω„Ψ§Ω…ΨͺΩ†Ψ§Ω†.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>Ω„Ω‚Ψ― Ψ¨Ω‡Ψ±Ψͺ فعلا Ψ¨Ω‡Ψ°Ψ§ Ψ§Ω„Ω…Ψ€ΨͺΩ…Ψ±, وأريد Ψ£Ω† Ψ£Ψ΄ΩƒΨ±ΩƒΩ… Ψ¬Ω…ΩŠΨΉΨ§ ΨΉΩ„Ω‰ ΨͺΨΉΩ„ΩŠΩ‚Ψ§ΨͺΩƒΩ… Ψ§Ω„Ψ·ΩŠΨ¨Ψ© ΨΉΩ„Ω‰ Ω…Ψ§ Ω‚Ω„ΨͺΩ‡ ΨͺΩ„Ωƒ Ψ§Ω„Ω„ΩŠΩ„Ψ©.</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-fr * Dataset: [en-fr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 992 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 30.72 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Merci beaucoup, Chris.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>C'est vraiment un honneur de pouvoir venir sur cette scΓ¨ne une deuxiΓ¨me fois. 
Je suis trΓ¨s reconnaissant.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>J'ai Γ©tΓ© trΓ¨s impressionnΓ© par cette confΓ©rence, et je tiens Γ  vous remercier tous pour vos nombreux et sympathiques commentaires sur ce que j'ai dit l'autre soir.</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-de * Dataset: [en-de](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 991 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 27.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Vielen Dank, Chris.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>Es ist mir wirklich eine Ehre, zweimal auf dieser BΓΌhne stehen zu dΓΌrfen. 
Tausend Dank dafΓΌr.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>Ich bin wirklich begeistert von dieser Konferenz, und ich danke Ihnen allen fΓΌr die vielen netten Kommentare zu meiner Rede vorgestern Abend.</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-es * Dataset: [en-es](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 990 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Muchas gracias Chris.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>Y es en verdad un gran honor tener la oportunidad de venir a este escenario por segunda vez. 
Estoy extremadamente agradecido.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>He quedado conmovido por esta conferencia, y deseo agradecer a todos ustedes sus amables comentarios acerca de lo que tenΓ­a que decir la otra noche.</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-tr * Dataset: [en-tr](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 993 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 25.4 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:----------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Γ‡ok teşekkΓΌr ederim Chris.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>Bu sahnede ikinci kez yer alma fΔ±rsatΔ±na sahip olmak gerΓ§ekten bΓΌyΓΌk bir onur. 
Γ‡ok minnettarΔ±m.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>Bu konferansta Γ§ok mutlu oldum, ve anlattΔ±klarΔ±mla ilgili gΓΌzel yorumlarΔ±nΔ±z iΓ§in sizlere Γ§ok teşekkΓΌr ederim.</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) #### en-it * Dataset: [en-it](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks) at [d366ddd](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks/tree/d366dddc3d1ef0421a41f9e534bad4efae6d7730) * Size: 993 evaluation samples * Columns: <code>non_english</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | non_english | label | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 27.94 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | non_english | label | |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | <code>Grazie mille, Chris.</code> | <code>[-0.4331263303756714, 1.0602688789367676, -0.07791043072938919, -0.4170420169830322, 1.6768444776535034, ...]</code> | | <code>E’ veramente un grande onore venire su questo palco due volte. Vi sono estremamente grato.</code> | <code>[0.27005696296691895, 0.5391750335693359, -0.2580486238002777, -0.6613674759864807, 0.6738830804824829, ...]</code> | | <code>Sono impressionato da questa conferenza, e voglio ringraziare tutti voi per i tanti, lusinghieri commenti, anche perchΓ©... 
Ne ho bisogno!!</code> | <code>[-0.25320106744766235, 0.04791366308927536, -0.13174884021282196, -0.7357578277587891, 0.2366354614496231, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/losses.html#mseloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: 
False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | en-ar loss | en-it loss | en-de loss | en-fr loss | en-es loss | en-tr loss | en-ar_mean_accuracy | en-ar_negative_mse | en-de_mean_accuracy | en-de_negative_mse | en-es_mean_accuracy | en-es_negative_mse | en-fr_mean_accuracy | en-fr_negative_mse | en-it_mean_accuracy | en-it_negative_mse | en-tr_mean_accuracy | en-tr_negative_mse | sts17-en-ar-test_spearman_max | sts17-en-de-test_spearman_max | sts17-en-tr-test_spearman_max | sts17-es-en-test_spearman_max | sts17-fr-en-test_spearman_max | sts17-it-en-test_spearman_max | |:------:|:----:|:-------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:-------------------:|:------------------:|:-------------------:|:------------------:|:-------------------:|:------------------:|:-------------------:|:------------------:|:-------------------:|:------------------:|:-------------------:|:------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------:| | 0.2110 | 100 | 0.5581 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4219 | 200 | 0.3071 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6329 | 300 | 0.2675 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.8439 | 400 | 0.2606 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 1.0549 | 500 | 0.2589 | 0.2519 | 0.2498 | 0.2511 | 0.2488 | 0.2503 | 0.2512 | 0.1254 | -25.1903 | 0.2523 | -25.1089 | 0.2591 | -25.0276 | 0.2409 | -24.8803 | 0.2180 | -24.9768 | 0.1158 | -25.1219 | 0.0308 | 0.1281 | 0.1610 | 0.1465 | 0.0552 | 0.0518 | | 1.2658 | 600 | 0.2504 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 1.4768 | 700 | 0.2427 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 1.6878 | 800 | 0.2337 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 1.8987 | 900 | 0.2246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 2.1097 | 1000 | 0.2197 | 0.2202 | 0.2157 | 0.2151 | 0.2147 | 0.2139 | 0.2218 | 0.5841 | -22.0204 | 0.8012 | -21.5087 | 0.8495 | -21.3935 | 0.7959 | -21.4660 | 0.7815 | -21.5699 | 0.6007 | -22.1778 | 0.3346 | 0.4013 | 0.4727 | 0.3353 | 0.3827 | 0.3292 | | 2.3207 | 1100 | 0.2163 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 2.5316 | 1200 | 0.2123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 2.7426 | 1300 | 0.2069 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 2.9536 | 1400 | 0.2048 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 3.1646 | 1500 | 0.2009 | 0.2086 | 0.2029 | 0.2022 | 0.2012 | 0.2002 | 0.2111 | 0.7367 | -20.8567 | 0.8739 | -20.2247 | 0.9303 | -20.0215 | 0.8755 | -20.1213 | 0.8600 | -20.2900 | 0.7165 | -21.1119 | 0.4087 | 0.5473 | 0.5551 | 0.4724 | 0.4882 | 0.4690 | | 3.3755 | 1600 | 0.2019 
| - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 3.5865 | 1700 | 0.1989 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 3.7975 | 1800 | 0.196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 4.0084 | 1900 | 0.1943 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 4.2194 | 2000 | 0.194 | 0.2040 | 0.1977 | 0.1973 | 0.1962 | 0.1947 | 0.2075 | 0.7714 | -20.3955 | 0.8915 | -19.7279 | 0.9449 | -19.4724 | 0.8942 | -19.6232 | 0.8807 | -19.7699 | 0.7432 | -20.7547 | 0.4425 | 0.5618 | 0.5819 | 0.5021 | 0.5334 | 0.5250 | | 4.4304 | 2100 | 0.1951 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 4.6414 | 2200 | 0.1928 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 4.8523 | 2300 | 0.1909 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.060 kWh - **Carbon Emitted**: 0.023 kg of CO2 - **Hours Used**: 0.179 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MSELoss ```bibtex @inproceedings{reimers-2020-multilingual-sentence-bert, title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2020", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2004.09813", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
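The card above documents a multilingual embedding model distilled with MSELoss onto teacher embeddings. A minimal usage sketch follows; it assumes the published checkpoint loads directly with the Sentence Transformers library and that cosine similarity is the intended scoring function, neither of which is stated explicitly in the card.

```python
from sentence_transformers import SentenceTransformer, util

# Hedged sketch: model id taken from this repository; usage pattern assumed from the library docs
model = SentenceTransformer("tomaarsen/xlm-roberta-base-multilingual-en-ar-fr-de-es-tr-it")

sentences = [
    "Thank you so much, Chris.",   # English
    "Vielen Dank, Chris.",         # German
    "Muchas gracias Chris.",       # Spanish
]
embeddings = model.encode(sentences)          # shape: (3, 768), matching the 768-dim labels above
print(util.cos_sim(embeddings, embeddings))   # cross-lingual cosine similarities
```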
{"language": ["en", "multilingual", "ar", "bg", "ca", "cs", "da", "de", "el", "es", "et", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "it", "ja", "ka", "ko", "ku", "lt", "lv", "mk", "mn", "mr", "ms", "my", "nb", "nl", "pl", "pt", "ro", "ru", "sk", "sl", "sq", "sr", "sv", "th", "tr", "uk", "ur", "vi", "zh"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:MSELoss"], "metrics": ["negative_mse", "src2trg_accuracy", "trg2src_accuracy", "mean_accuracy", "pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "base_model": "FacebookAI/xlm-roberta-base", "widget": [{"source_sentence": "Grazie tante.", "sentences": ["Grazie infinite.", "Non c'\u00e8 un solo architetto diplomato in tutta la Contea.", "Le aziende non credevano che fosse loro responsabilit\u00e0."]}, {"source_sentence": "Avance rapide.", "sentences": ["Tr\u00e8s bien.", "Donc, je voulais faire quelque chose de sp\u00e9cial aujourd'hui.", "Et ils ne tiennent pas non plus compte des civils qui souffrent de fa\u00e7on plus g\u00e9n\u00e9rale."]}, {"source_sentence": "E' importante.", "sentences": ["E' una materia fondamentale.", "Sono qui oggi per mostrare le mie fotografie dei Lakota.", "Non ero seguito da un corteo di macchine."]}, {"source_sentence": "M\u00fcfetti\u015fler\u2026", "sentences": ["\u0130\u015f\u00e7i s\u0131n\u0131f\u0131na dair bir\u015fey.", "Antla\u015fmaya g\u00f6re, o topraklar ba\u011f\u0131ms\u0131z bir ulustur.", "Son derece d\u00fcz ve batakl\u0131k bir co\u011frafya."]}, {"source_sentence": "Wir sind eins.", "sentences": ["Das versuchen wir zu bieten.", "Ihre Gehirne sind ungef\u00e4hr 100 Millionen Mal komplizierter.", "Hinter mir war gar keine Autokolonne."]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 23.27766676567869, "energy_consumed": 0.05988563672345058, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.179, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on FacebookAI/xlm-roberta-base", "results": [{"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en ar", "type": "en-ar"}, "metrics": [{"type": "negative_mse", "value": -20.395545661449432, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en ar", "type": "en-ar"}, "metrics": [{"type": "src2trg_accuracy", "value": 0.7603222557905337, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.7824773413897281, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.7713997985901309, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 en ar test", "type": "sts17-en-ar-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.40984231242712876, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.4425400227662121, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.4068582195810505, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.4194184278683204, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.38014538983821944, "name": "Pearson Euclidean"}, 
{"type": "spearman_euclidean", "value": 0.38651157412220366, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.4077636003696869, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.37682818098716137, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.40984231242712876, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.4425400227662121, "name": "Spearman Max"}]}, {"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en fr", "type": "en-fr"}, "metrics": [{"type": "negative_mse", "value": -19.62321847677231, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en fr", "type": "en-fr"}, "metrics": [{"type": "src2trg_accuracy", "value": 0.8981854838709677, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.8901209677419355, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.8941532258064516, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 fr en test", "type": "sts17-fr-en-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.5017606394120642, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.5333594401322842, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.4461108010622129, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.45470883061015244, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.44313058261278737, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.44806261424208443, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.40165874540768454, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.41339619568003433, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.5017606394120642, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.5333594401322842, "name": "Spearman Max"}]}, {"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en de", "type": "en-de"}, "metrics": [{"type": "negative_mse", "value": -19.727922976017, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en de", "type": "en-de"}, "metrics": [{"type": "src2trg_accuracy", "value": 0.8920282542885973, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.8910191725529768, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.8915237134207871, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 en de test", "type": "sts17-en-de-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.5262798164154752, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.5618005565496922, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.5084907192868734, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.5218456102379673, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.5055278909013912, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.5206420646365548, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.3742195121194434, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.3691237073066472, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.5262798164154752, "name": "Pearson Max"}, {"type": "spearman_max", "value": 
0.5618005565496922, "name": "Spearman Max"}]}, {"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en es", "type": "en-es"}, "metrics": [{"type": "negative_mse", "value": -19.472387433052063, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en es", "type": "en-es"}, "metrics": [{"type": "src2trg_accuracy", "value": 0.9434343434343434, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.9464646464646465, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.944949494949495, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 es en test", "type": "sts17-es-en-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.4944989376773328, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.502096516024397, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.44447965250345656, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.428444032581959, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.43569887867301704, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.4169602915053127, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.3751122541083453, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.37961391381473436, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.4944989376773328, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.502096516024397, "name": "Spearman Max"}]}, {"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en tr", "type": "en-tr"}, "metrics": [{"type": "negative_mse", "value": -20.754697918891907, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en tr", "type": "en-tr"}, "metrics": [{"type": "src2trg_accuracy", "value": 0.743202416918429, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.743202416918429, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.743202416918429, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 en tr test", "type": "sts17-en-tr-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.5544917743538167, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.581923120433332, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.5103770986779784, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.5087986920849596, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.5045523005860614, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.5053157708914061, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.47262046401401747, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.4297595645819756, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.5544917743538167, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.581923120433332, "name": "Spearman Max"}]}, {"task": {"type": "knowledge-distillation", "name": "Knowledge Distillation"}, "dataset": {"name": "en it", "type": "en-it"}, "metrics": [{"type": "negative_mse", "value": -19.76993829011917, "name": "Negative Mse"}]}, {"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "en it", "type": "en-it"}, "metrics": 
[{"type": "src2trg_accuracy", "value": 0.878147029204431, "name": "Src2Trg Accuracy"}, {"type": "trg2src_accuracy", "value": 0.8831822759315207, "name": "Trg2Src Accuracy"}, {"type": "mean_accuracy", "value": 0.8806646525679758, "name": "Mean Accuracy"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts17 it en test", "type": "sts17-it-en-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.506365733914274, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.5250284136808592, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.45167598168533407, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.46227952068355316, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.4423426674780287, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.45072801992723094, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.4201989776020174, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.42253906764732746, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.506365733914274, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.5250284136808592, "name": "Spearman Max"}]}]}]}
tomaarsen/xlm-roberta-base-multilingual-en-ar-fr-de-es-tr-it
null
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "loss:MSELoss", "en", "multilingual", "ar", "bg", "ca", "cs", "da", "de", "el", "es", "et", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "it", "ja", "ka", "ko", "ku", "lt", "lv", "mk", "mn", "mr", "ms", "my", "nb", "nl", "pl", "pt", "ro", "ru", "sk", "sl", "sq", "sr", "sv", "th", "tr", "uk", "ur", "vi", "zh", "arxiv:1908.10084", "arxiv:2004.09813", "base_model:FacebookAI/xlm-roberta-base", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:27:33+00:00
text-generation
transformers
{}
Weni/WeniGPT-Agents-Llama3-5.0.18-DPO-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-02T14:27:44+00:00
null
null
{"language": ["en"], "tags": ["code"], "task_categories": ["feature-extraction"], "pretty_name": "Request to Configuration", "size_categories": ["n<1K"]}
simengel/request_to_configuration
null
[ "code", "en", "region:us" ]
null
2024-05-02T14:27:46+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
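Since the card only names the base model and the training toolchain, the following is a hedged sketch of loading this repository as a LoRA adapter on top of the 4-bit base with PEFT; it assumes the repo stores adapter weights in PEFT format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"                        # base named in the card
adapter_id = "tingting/llama3_8binstruct_lora_model_balanced_Data_300"  # this repository (assumed PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights

prompt = "Summarize what LoRA fine-tuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```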
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
tingting/llama3_8binstruct_lora_model_balanced_Data_300
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:28:07+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tricktreat/llama-2-7b-chat-merged-with-llama-2-7b-chat-12layers-T6-peft-lora-orpo
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:29:01+00:00
text-classification
transformers
Model CLB 2024, fine-tuned on the whole dataset.
{}
preetamn0/CLB2024
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:29:48+00:00
text-generation
transformers
{}
ilivieris/ALLIES
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-02T14:31:27+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
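As with the other Unsloth exports in this dump, the card gives only the base model. The sketch below shows how such a checkpoint is typically reloaded through Unsloth itself; the function names follow Unsloth's documented API, but the sequence length, 4-bit flag, and use of the tokenizer's chat template for this particular repository are assumptions.

```python
from unsloth import FastLanguageModel

# Hedged sketch: repository assumed to be loadable by Unsloth as a (Q)LoRA checkpoint
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tingting/mistral7binstruct02_lora_model_balanced_Data_500",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [{"role": "user", "content": "Name three prime numbers."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=32)[0], skip_special_tokens=True))
```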
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
tingting/mistral7binstruct02_lora_model_balanced_Data_500
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:31:47+00:00
null
null
{}
WaterKnight/diffusion-models
null
[ "region:us" ]
null
2024-05-02T14:32:41+00:00
text-generation
transformers
# MGM-8B-HD Model Card <a href='https://github.com/dvlab-research/MGM'><img src='https://img.shields.io/badge/Project-Code-violet'></a> <a href='https://mini-gemini.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/pdf/2403.18814.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> ## Model details The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously. Normal resolution setting: [MGM-2B](https://huggingface.co/YanweiLi/MGM-2B), [MGM-7B](https://huggingface.co/YanweiLi/MGM-7B), [MGM-8B](https://huggingface.co/YanweiLi/MGM-8B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B), [MGM-8x7B](https://huggingface.co/YanweiLi/MGM-8x7B), [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B) High resolution setting: [MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD), [MGM-34B-HD](https://huggingface.co/YanweiLi/MGM-34B-HD) **Model type:** MGM is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously. **Model version:** MGM with the LLM Meta-Llama-3-8B-Instruct **Model date:** MGM-8B-HD was trained in 04/2024. ## License Llama 3 is licensed under the LLAMA 3 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/dvlab-research/MGM/issues ## Intended use **Primary intended uses:** The primary use is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training data This model is trained on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [GitHub](https://github.com/dvlab-research/MGM) repository for more details. ## Acknowledgement This project is not affiliated with Google LLC.
{"tags": ["vision-language model", "llama", "generation"], "datasets": ["YanweiLi/MGM-Instruction"]}
YanweiLi/MGM-8B-HD
null
[ "transformers", "safetensors", "mgm", "text-generation", "vision-language model", "llama", "generation", "conversational", "dataset:YanweiLi/MGM-Instruction", "arxiv:2403.18814", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:33:26+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_1b-adpater-lora-mrpc
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:33:27+00:00
null
transformers
# Uploaded model - **Developed by:** mo-makdah-k - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
mo-makdah-k/demo-model
null
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:33:51+00:00
null
null
{}
sirajul116/distilhubert-finetuned-gtzan
null
[ "region:us" ]
null
2024-05-02T14:35:11+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
adrianmedinav/whisper-small_ro_epochs_12_2024-05-02_13-00-23
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:36:31+00:00
null
null
{}
mizoru/whisper-large-ru-ORD_0.7_peft_0.3
null
[ "safetensors", "region:us" ]
null
2024-05-02T14:36:59+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
tingting/llama3_8binstruct_lora_model_balanced_Data_400
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:38:52+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-sft-qlora-re This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
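The hyperparameters listed above map roughly onto a standard `TrainingArguments` object; the sketch below is a hypothetical reconstruction (the output directory, the AdamW variant, and any SFT/QLoRA-specific settings are assumptions, not taken from the card).

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters reported in this card
training_args = TrainingArguments(
    output_dir="llama3-8b-sft-qlora-re",   # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # gives the reported total train batch size of 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=42,
    optim="adamw_torch",                   # approximates "Adam with betas=(0.9, 0.999), eps=1e-8"
)
```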
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "llama3-8b-sft-qlora-re", "results": []}]}
ymechqrane/llama3-8b-sft-qlora-re
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-05-02T14:39:23+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AlbiGara/bert-finetuned-ner-medical-copy This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1502 - Validation Loss: 0.2804 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3480, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3099 | 0.2768 | 0 | | 0.1833 | 0.2840 | 1 | | 0.1502 | 0.2804 | 2 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
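Because the card does not include a usage snippet, here is a hedged sketch of running the checkpoint through the token-classification pipeline; the TensorFlow backend and the example sentence are assumptions (the card only states that the model was fine-tuned from bert-base-cased with Keras).

```python
from transformers import pipeline

# Hedged sketch: assumes the repo ships TensorFlow weights plus a tokenizer and label map
ner = pipeline(
    "token-classification",
    model="AlbiGara/bert-finetuned-ner-medical-copy",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("The patient was started on 50 mg of atenolol for hypertension."))
```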
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-cased", "model-index": [{"name": "AlbiGara/bert-finetuned-ner-medical-copy", "results": []}]}
AlbiGara/bert-finetuned-ner-medical-copy
null
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:39:55+00:00
null
transformers
# Uploaded model - **Developed by:** projectwilsen - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
projectwilsen/llama3_text2cypher_recom
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:39:59+00:00
null
null
{}
alexisxiaoyu/xlm-roberta-base-finetuned-panx-fr
null
[ "region:us" ]
null
2024-05-02T14:40:15+00:00
text-generation
transformers
# llama-3-neural-chat-v2.2-8b <!-- Provide a quick summary of what the model is/does. --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/6XQuhjWNr6C4RbU9f1k99.png) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> I fine-tuned llama-3 8B on an approach similar to Intel's neural chat language model. I have slightly modified the data sources so it is stronger in coding, math, and writing. I use both SFT and DPO-Positive. DPO-Positive dramatically improves performance over DPO. - **Developed by:** Locutusque - **Model type:** Built with Meta Llama 3 - **Language(s) (NLP):** Many? - **License:** Llama 3 license https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE ## Quants coming soon ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model has great performance in writing, coding, and math. ## Training Data Recipe information will be coming soon. This language model's recipe is similar to Intel's Neural Chat. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Conversational AI. ## Evaluations | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |---------------------------------|-------|----------------|-----:|-----------|-----:|---|-----:| |truthfulqa_mc2 | 2|none | 0|acc |0.5232|Β± |0.0151| |gsm8k | 3|strict-match | 5|exact_match|0.5974|Β± |0.0135| | | |flexible-extract| 5|exact_match|0.5974|Β± |0.0135| |agieval_nous |N/A |none | 0|acc_norm |0.3841|Β± |0.0094| | | |none | 0|acc |0.3802|Β± |0.0094| | - agieval_aqua_rat | 1|none | 0|acc |0.2598|Β± |0.0276| | | |none | 0|acc_norm |0.2520|Β± |0.0273| | - agieval_logiqa_en | 1|none | 0|acc |0.3441|Β± |0.0186| | | |none | 0|acc_norm |0.3687|Β± |0.0189| | - agieval_lsat_ar | 1|none | 0|acc |0.2217|Β± |0.0275| | | |none | 0|acc_norm |0.2348|Β± |0.0280| | - agieval_lsat_lr | 1|none | 0|acc |0.3882|Β± |0.0216| | | |none | 0|acc_norm |0.3824|Β± |0.0215| | - agieval_lsat_rc | 1|none | 0|acc |0.4944|Β± |0.0305| | | |none | 0|acc_norm |0.5019|Β± |0.0305| | - agieval_sat_en | 1|none | 0|acc |0.6650|Β± |0.0330| | | |none | 0|acc_norm |0.6553|Β± |0.0332| | - agieval_sat_en_without_passage| 1|none | 0|acc |0.3981|Β± |0.0342| | | |none | 0|acc_norm |0.3981|Β± |0.0342| | - agieval_sat_math | 1|none | 0|acc |0.3500|Β± |0.0322| | | |none | 0|acc_norm |0.3318|Β± |0.0318|
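The card describes a conversational model built with Meta Llama 3; a usage sketch following the standard chat-template workflow is shown below. It assumes the tokenizer inherits the Llama 3 Instruct chat template, which the card does not state explicitly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/llama-3-neural-chat-v2.2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```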
{"language": ["en"], "license": "other", "pipeline_tag": "text-generation"}
Locutusque/llama-3-neural-chat-v2.2-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:40:35+00:00
null
null
{}
Yasoja/path-to-save-model3
null
[ "region:us" ]
null
2024-05-02T14:41:32+00:00
null
null
{"license": "apache-2.0"}
hyhdennis/Testing_P
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-02T14:42:05+00:00
null
transformers
# Uploaded model

- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-13b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
tingting/llama2_13b_lora_model_balanced_Data_300
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-13b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:42:28+00:00
null
transformers
# Uploaded model

- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
tingting/mistral7binstruct02_lora_model_balanced_Data_600
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:42:28+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
roibouta/lora_model_test
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:45:25+00:00
null
transformers
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/skumar9/Llama-medx_v3.1

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-medx_v3.1-GGUF/resolve/main/Llama-medx_v3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
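For example, a minimal, untested sketch of running one of the files above with the `llama-cpp-python` bindings (a tool choice of this sketch, not of this repo); the chosen quant comes from the table, while the context size, prompt, and sampling settings are illustrative assumptions:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from the table above (Q4_K_M shown as an example)
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-medx_v3.1-GGUF",
    filename="Llama-medx_v3.1.Q4_K_M.gguf",
)

# n_ctx is an assumed context window; check the base model's limits before relying on it
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Question: What are common symptoms of anemia?\nAnswer:", max_tokens=128)
print(result["choices"][0]["text"])
```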
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "skumar9/Llama-medx_v3.1", "quantized_by": "mradermacher"}
mradermacher/Llama-medx_v3.1-GGUF
null
[ "transformers", "gguf", "en", "base_model:skumar9/Llama-medx_v3.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:45:26+00:00
text2text-generation
transformers
{}
gnad/qgen-vit5-base
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:45:42+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: elisamammi/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]}
elisamammi/ppo-Pyramids
null
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
null
2024-05-02T14:45:56+00:00
null
null
The [madebyollin/taesdxl](https://huggingface.co/madebyollin/taesdxl) model converted to ONNX for usage with Unity Sentis. See [com.doji.diffusers](https://github.com/julienkay/com.doji.diffusers) for details.
{"license": "mit"}
julienkay/taesdxl
null
[ "onnx", "license:mit", "region:us" ]
null
2024-05-02T14:46:40+00:00
text-generation
transformers
{"license": "openrail"}
mubashir32/Llama-2-7b-chat-finetune
null
[ "transformers", "pytorch", "llama", "text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:49:54+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Reniya/Phi2-Classification
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:50:06+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
xp0tat0/farmer_6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:50:19+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# events-mem-base-peft

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2
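For reference, a minimal, untested sketch of loading this PEFT adapter on top of the `google/flan-t5-base` base model with 🤗 Transformers and PEFT; the example input is a placeholder, since the intended task and prompt format are not documented above:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google/flan-t5-base"
adapter_id = "eddieman78/events-mem-base-peft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
# Attach the PEFT adapter weights from this repo to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Placeholder input: replace with a prompt matching the adapter's training task
inputs = tokenizer("Summarize: the quick brown fox jumps over the lazy dog.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```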
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-base", "model-index": [{"name": "events-mem-base-peft", "results": []}]}
eddieman78/events-mem-base-peft
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2024-05-02T14:51:32+00:00
null
null
# SmartllamaAqua-7B

SmartllamaAqua-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)

## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.6
      weight: 0.5
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: float16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/SmartllamaAqua-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["NousResearch/Meta-Llama-3-8B-Instruct", "mlabonne/OrpoLlama-3-8B"]}
automerger/SmartllamaAqua-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:mlabonne/OrpoLlama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T14:52:49+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_14m-adpater-lora-mrpc
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:52:56+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_70m-adpater-lora-mrpc
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:53:13+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
YongYong/LLaVA-Phi-3-mini-4k-instruct-FT-docci
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:53:33+00:00
text-generation
transformers
{}
ehaque/Llama-2-7b-qlora-finetune
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:53:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_160m-adpater-lora-mrpc
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:54:35+00:00
null
null
{}
ace055/result
null
[ "region:us" ]
null
2024-05-02T14:54:52+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/mooncell_v45
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:55:52+00:00
feature-extraction
transformers
# fine-tuned/jina-embeddings-v2-base-en-02052024-4awu-webapp_8647177611

## Model Description

fine-tuned/jina-embeddings-v2-base-en-02052024-4awu-webapp_8647177611 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.

## Use Case

This model is designed to support various applications in natural language processing and understanding.

## Associated Dataset

The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-02052024-4awu-webapp_8647177611).

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from transformers import AutoModel, AutoTokenizer

llm_name = "fine-tuned/jina-embeddings-v2-base-en-02052024-4awu-webapp_8647177611"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
{}
fine-tuned/jina-embeddings-v2-base-en-02052024-4awu-webapp_8647177611
null
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "region:us" ]
null
2024-05-02T14:56:07+00:00
null
null
{"license": "openrail"}
gautamnp/GautamMod
null
[ "license:openrail", "region:us" ]
null
2024-05-02T14:56:14+00:00
reinforcement-learning
sample-factory
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r rahil1206/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "10.17 +/- 5.92", "name": "mean_reward", "verified": false}]}]}]}
rahil1206/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-05-02T14:56:39+00:00
text-classification
transformers
{}
Integer-Ctrl/cross-encoder-bert-tiny-512
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:57:21+00:00
feature-extraction
transformers
# fine-tuned/jina-embeddings-v2-base-en-02052024-24yf-webapp_8647177611

## Model Description

fine-tuned/jina-embeddings-v2-base-en-02052024-24yf-webapp_8647177611 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.

## Use Case

This model is designed to support various applications in natural language processing and understanding.

## Associated Dataset

The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-02052024-24yf-webapp_8647177611).

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from transformers import AutoModel, AutoTokenizer

llm_name = "fine-tuned/jina-embeddings-v2-base-en-02052024-24yf-webapp_8647177611"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
{}
fine-tuned/jina-embeddings-v2-base-en-02052024-24yf-webapp_8647177611
null
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "region:us" ]
null
2024-05-02T14:57:33+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_410m-adpater-lora-mrpc
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T14:57:40+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
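The card above does not include a loading snippet. Below is a minimal, unofficial sketch using Unsloth's `FastLanguageModel`; it assumes this repo stores LoRA adapter weights compatible with the 4-bit Llama-3-8B-Instruct base named above, and the prompt is purely illustrative:

```python
# Minimal loading sketch (not part of the original card); assumes an Unsloth install
# and that this repo holds LoRA adapter weights saved on top of the 4-bit base model.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tingting/llama3_8binstruct_lora_model_balanced_Data_500",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the patched model to inference mode

# Illustrative prompt; replace with your own input.
inputs = tokenizer("Give me three tips for writing clear emails.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```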
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
tingting/llama3_8binstruct_lora_model_balanced_Data_500
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:59:22+00:00
feature-extraction
transformers
# phospho-small

This is a SetFit model that can be used for Text Classification on CPU.

The model has been trained using an efficient few-shot learning technique.

## Usage

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("phospho-small-d5b483f")

outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```

## References

This work was possible thanks to the SetFit library and the work of:

Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.

ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
{"language": "en", "license": "apache-2.0"}
phospho-app/phospho-small-d5b483f
null
[ "transformers", "safetensors", "mpnet", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T14:59:56+00:00
object-detection
null
{"language": ["en"], "license": "apache-2.0", "tags": ["Yolov5"], "datasets": ["girish787/riceLeafDataset", "nancyalarabawy/RiceLeafDiseases"], "pipeline_tag": "object-detection"}
faruqaziz/RiceLeafDetection
null
[ "tensorboard", "Yolov5", "object-detection", "en", "dataset:girish787/riceLeafDataset", "dataset:nancyalarabawy/RiceLeafDiseases", "license:apache-2.0", "region:us" ]
null
2024-05-02T15:00:57+00:00
null
null
{}
morturr/flan-t5-xl-amazon-text-classification
null
[ "region:us" ]
null
2024-05-02T15:01:56+00:00
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

This model is a diffusion model for unconditional image generation of cute .

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('Ketansomewhere/gigandful')
image = pipeline().images[0]
image
```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
Ketansomewhere/gigandful
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T15:02:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# flan-t5-base-samsum

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3692
- Rouge1: 47.2141
- Rouge2: 23.4837
- Rougel: 39.7822
- Rougelsum: 43.2157
- Gen Len: 17.1612

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4566        | 1.0   | 1842 | 1.3834          | 46.9151 | 22.8925 | 39.1161 | 43.0414   | 17.4493 |
| 1.3394        | 2.0   | 3684 | 1.3741          | 47.2947 | 23.5658 | 39.8063 | 43.487    | 17.1819 |
| 1.2786        | 3.0   | 5526 | 1.3692          | 47.2141 | 23.4837 | 39.7822 | 43.2157   | 17.1612 |
| 1.2274        | 4.0   | 7368 | 1.3776          | 47.6914 | 24.1243 | 40.1764 | 43.9611   | 17.4042 |
| 1.2028        | 5.0   | 9210 | 1.3771          | 47.3328 | 23.5144 | 39.6487 | 43.4161   | 17.2357 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
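## Example usage (sketch)

The card lists metrics but no inference snippet. The following is a minimal, unofficial sketch using the `transformers` summarization pipeline; the checkpoint id is taken from this repo and the sample dialogue is made up:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for dialogue summarization (SAMSum-style input).
summarizer = pipeline("summarization", model="stevehoang9/flan-t5-base-samsum")

# Made-up example dialogue, just to show the input format.
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```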
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan-t5-base-samsum", "results": []}]}
stevehoang9/flan-t5-base-samsum
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T15:02:51+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
domenicrosati/decoding_trust_mmd_immunization_minimality-mmd_lr_2e-5_alpha_2_beta_4_num_layers_6_epoch_1
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T15:04:27+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
tingting/mistral7binstruct02_lora_model_balanced_Data_800
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:04:47+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-minds14

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Wer Ortho: 28.9086
- Wer: 0.2581

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0006        | 17.8571 | 500  | 0.6181          | 28.9086   | 0.2581 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
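## Example usage (sketch)

A minimal, unofficial transcription sketch with the `transformers` automatic-speech-recognition pipeline; the checkpoint id comes from this repo and the audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="heisenberg3376/whisper-tiny-minds14",
)

# Placeholder path: point this at a real audio file (the pipeline decodes it via ffmpeg).
result = asr("path/to/audio.wav")
print(result["text"])
```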
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["PolyAI/minds14"], "metrics": ["wer"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "whisper-tiny-minds14", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "PolyAI/minds14", "type": "PolyAI/minds14", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "wer", "value": 0.25811965811965815, "name": "Wer"}]}]}]}
heisenberg3376/whisper-tiny-minds14
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "has_space" ]
null
2024-05-02T15:04:48+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
tingting/llama2_13b_lora_model_balanced_Data_400
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-13b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:04:50+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
miguel-kjh/pythia_1b-adpater-lora-qnli
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T15:04:56+00:00
null
transformers
# Uploaded model - **Developed by:** SwatiM - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
SwatiM/sql_phi3_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:05:54+00:00
text-generation
transformers
{}
YDTsai/deepseek-coder-6.7b-base-sft-self-icl-syntax-pass
null
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T15:05:56+00:00
null
null
The [madebyollin/taesd](https://huggingface.co/madebyollin/taesd) model converted to ONNX for usage with Unity Sentis. See [com.doji.diffusers](https://github.com/julienkay/com.doji.diffusers) for details.
{"license": "mit"}
julienkay/taesd
null
[ "onnx", "license:mit", "region:us" ]
null
2024-05-02T15:06:02+00:00
null
null
{"license": "openrail"}
Danikdsa/Chanyeol
null
[ "license:openrail", "region:us" ]
null
2024-05-02T15:07:53+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shtapm/whisper-large_0502_decoder3_200steps
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:08:33+00:00
null
null
{}
hari02/idefics-9b-PokemonCards
null
[ "region:us" ]
null
2024-05-02T15:08:35+00:00
text-classification
transformers
{}
preetamn0/ModelCLBNewAdvWithTopandBotom20Seed100
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:08:37+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gen-z-translate-llama-3-instruct-v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
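## Example usage (sketch)

The card does not show how to use the adapter. Below is a minimal, unofficial sketch that attaches the LoRA weights to the base model named above; it assumes access to the gated Llama-3 base, and the prompt is made up:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # base model named in this card
adapter_id = "acrobatlm/gen-z-translate-llama-3-instruct-v1"

# Load the base model, then attach the fine-tuned LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Illustrative prompt only.
prompt = "Translate into plain English: that fit is bussin, no cap"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```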
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "gen-z-translate-llama-3-instruct-v1", "results": []}]}
acrobatlm/gen-z-translate-llama-3-instruct-v1
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-02T15:09:22+00:00
text-generation
transformers
{}
LarsJacobs2003/Examify-Llama2-7B-NeuronCompiled-FP16
null
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T15:09:29+00:00
null
null
<<<<<<< HEAD
---
title: End-to-End Driving at Scale 2024
emoji: 🚗
colorFrom: green
colorTo: indigo
sdk: docker
pinned: false
duplicated_from: autotrain-projects/autotrain-advanced
hf_oauth: true
hf_oauth_scopes:
- read-repos
---
=======
---
license: mit
---
>>>>>>> 903be5c95453eb46bbc0b08a03f9736df0f57551
{}
zhouliguo/submission
null
[ "region:us" ]
null
2024-05-02T15:10:15+00:00
null
null
{}
valhofec/whisper-large_ft1
null
[ "region:us" ]
null
2024-05-02T15:10:43+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
tingting/llama3_8binstruct_lora_model_balanced_Data_600
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:11:14+00:00
null
null
{}
thejana/Modified_Counselor_model
null
[ "region:us" ]
null
2024-05-02T15:11:30+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vknyazkova01/vk_spam_detection
null
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:11:55+00:00
null
null
{"license": "llama3"}
theWitcher/ask_any_question
null
[ "license:llama3", "region:us" ]
null
2024-05-02T15:12:34+00:00
null
null
{}
TiberiusMagic/publickotprog
null
[ "region:us" ]
null
2024-05-02T15:13:18+00:00
text2text-generation
transformers
{}
reach-vb/parler-tts-expresso-v0.1
null
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:13:47+00:00
null
transformers
{}
magnifi/llama-cls-ner-mt-chat-v21-7_epoch_24-ct2
null
[ "transformers", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:13:47+00:00
null
transformers
{"license": "apache-2.0"}
predibase/Mistral-7B-Instruct-v0.2-magicoder-medusa
null
[ "transformers", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:14:10+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# code-llama-finetuned-on-10k

This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
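## Example usage (sketch)

A minimal, unofficial loading sketch for this PEFT checkpoint; `AutoPeftModelForCausalLM` reads the adapter config and pulls the CodeLlama base named above automatically. The prompt is illustrative only:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "engrzulqarnain/code-llama-finetuned-on-10k"

# Resolves the base model (NousResearch/CodeLlama-7b-hf) from the adapter config
# and applies the fine-tuned LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-7b-hf")

# Illustrative code-completion prompt.
prompt = "# Write a Python function that reverses a string\ndef reverse_string(s):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```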
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NousResearch/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-finetuned-on-10k", "results": []}]}
engrzulqarnain/code-llama-finetuned-on-10k
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:NousResearch/CodeLlama-7b-hf", "region:us" ]
null
2024-05-02T15:15:59+00:00
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
rwr20/CartPole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-05-02T15:16:28+00:00
feature-extraction
transformers
# fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184

## Model Description

fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.

## Use Case

This model is designed to support various applications in natural language processing and understanding.

## Associated Dataset

The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184).

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from transformers import AutoModel, AutoTokenizer

llm_name = "fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
{}
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
null
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "region:us" ]
null
2024-05-02T15:16:45+00:00
null
transformers
# Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
tingting/mistral7binstruct02_lora_model_balanced_Data_896
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:20:17+00:00
null
transformers
{"license": "mit"}
Goodarc/TomModel20240502
null
[ "transformers", "pytorch", "tensorboard", "donut", "license:mit", "endpoints_compatible", "region:us", "has_space" ]
null
2024-05-02T15:21:05+00:00
text-classification
transformers
{}
Integer-Ctrl/cross-encoder-bert-tiny-5120
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T15:21:28+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ilanasto/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
ilanasto/ppo-SnowballTarget
null
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
null
2024-05-02T15:21:45+00:00