Dataset schema (per-column type and observed range):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-29 00:46:34 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (502 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-29 00:44:25 |
| card | string (length) | 11 | 1.01M |
gyr66/RoBERTa-ext-large-crf-chinese-finetuned-ner-v2
gyr66
2024-01-09T14:55:47Z
74
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "custom_code", "zh", "dataset:gyr66/privacy_detection", "base_model:gyr66/RoBERTa-ext-large-chinese-finetuned-ner", "base_model:finetune:gyr66/RoBERTa-ext-large-chinese-finetuned-ner", "autotrain_compatible", "region:us" ]
token-classification
2024-01-09T13:44:15Z
--- base_model: gyr66/RoBERTa-ext-large-chinese-finetuned-ner tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Ernie-3.0-large-crf-chinese-finetuned-ner results: [] datasets: - gyr66/privacy_detection language: - zh library_name: transformers pipeline_tag: token-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RoBERTa-ext-large-crf-chinese-finetuned-ner This model is a fine-tuned version of [gyr66/RoBERTa-ext-large-chinese-finetuned-ner](https://huggingface.co/gyr66/RoBERTa-ext-large-chinese-finetuned-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5907 - Precision: 0.7278 - Recall: 0.75 - F1: 0.7387 - Accuracy: 0.9629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0061 | 1.0 | 503 | 0.6739 | 0.6747 | 0.7457 | 0.7084 | 0.9608 | | 0.0078 | 2.0 | 1006 | 0.6343 | 0.7083 | 0.7518 | 0.7294 | 0.9622 | | 0.0072 | 3.0 | 1509 | 0.6237 | 0.6867 | 0.7621 | 0.7224 | 0.9607 | | 0.0052 | 4.0 | 2012 | 0.5929 | 0.7136 | 0.7616 | 0.7368 | 0.9635 | | 0.0031 | 5.0 | 2515 | 0.5907 | 0.7278 | 0.75 | 0.7387 | 0.9629 | | 0.0014 | 6.0 | 3018 | 0.6080 | 0.7172 | 0.7558 | 0.7360 | 0.9636 | | 0.001 | 7.0 | 3521 | 0.6179 | 0.7198 | 0.7586 | 0.7387 | 0.9637 | | 0.0005 | 8.0 | 4024 | 0.6208 | 0.7211 | 0.7518 | 0.7361 | 0.9632 | | 0.0004 | 9.0 | 4527 | 0.6169 | 0.7271 | 0.7487 | 0.7378 | 0.9636 | | 0.0002 | 10.0 | 5030 | 0.6202 | 0.7266 | 0.7495 | 0.7379 | 0.9636 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
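The usage sections of this card are empty, so here is a minimal loading sketch. It assumes the standard `transformers` token-classification pipeline is compatible with this repository's custom CRF head (the repo is tagged `custom_code`, hence `trust_remote_code=True`); the example sentence is a hypothetical privacy-detection input, not one from the card.

```python
from transformers import pipeline

# trust_remote_code=True is required because the repo ships a custom CRF head (tag: custom_code).
ner = pipeline(
    "token-classification",
    model="gyr66/RoBERTa-ext-large-crf-chinese-finetuned-ner-v2",
    trust_remote_code=True,
)

# Hypothetical input; entity labels follow the gyr66/privacy_detection dataset.
print(ner("张三的电话号码是13800138000。"))
```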
mmnga/baku-10b-chat-v2-gguf
mmnga
2024-01-09T14:55:39Z
184
0
null
[ "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-09T12:47:24Z
--- license: apache-2.0 tags: - llama --- # baku-10b-chat-v2-gguf This is a gguf-format conversion of [baku-10b-chat-v2, published by sudy-super](https://huggingface.co/sudy-super/baku-10b-chat-v2). Other models are available here: [mmnga/baku-10b-gguf](https://huggingface.co/mmnga/baku-10b-gguf) [mmnga/baku-13b-v2-gguf](https://huggingface.co/mmnga/baku-13b-v2-gguf) [mmnga/baku-10b-chat-v2-gguf](https://huggingface.co/mmnga/baku-10b-chat-v2-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'baku-10b-chat-v2-q4_0.gguf' -p 'USER:仕事の熱意を取り戻すためのアイデアを5つ挙げてください。 ASSISTANT:' -n 100 --temp 0.8 ``` (The sample prompt asks, in Japanese, for five ideas to regain enthusiasm for work; the model expects Japanese input.)
jysssacc/roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-09T14:55:39Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-09T14:49:54Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 19.1905 | 1.0 | 157 | 13.0895 | | 5.1439 | 2.0 | 314 | 4.1232 | | 3.8429 | 3.0 | 471 | 2.7016 | | 2.6318 | 4.0 | 628 | 2.1310 | | 2.3486 | 5.0 | 785 | 1.9767 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
aiflows/AbstractBossFlowModule
aiflows
2024-01-09T14:27:53Z
0
0
null
[ "region:us" ]
null
2024-01-09T14:27:44Z
# Table of Contents * [AbstractBossFlow](#AbstractBossFlow) * [AbstractBossFlow](#AbstractBossFlow.AbstractBossFlow) * [instantiate\_from\_config](#AbstractBossFlow.AbstractBossFlow.instantiate_from_config) * [run](#AbstractBossFlow.AbstractBossFlow.run) * [CtrlExMemFlow](#CtrlExMemFlow) * [CtrlExMemFlow](#CtrlExMemFlow.CtrlExMemFlow) * [detect\_finish\_or\_continue](#CtrlExMemFlow.CtrlExMemFlow.detect_finish_or_continue) * [\_\_init\_\_](#__init__) <a id="AbstractBossFlow"></a> # AbstractBossFlow <a id="AbstractBossFlow.AbstractBossFlow"></a> ## AbstractBossFlow Objects ```python class AbstractBossFlow(SequentialFlow, ABC) ``` This class is an abstraction of the memory-planner-controller-executor flow. At a higher level, it is an abstract agent empowered by multiple language models and downstream tools such as code interpreters. It is designed to cooperate with memory-management mechanisms, an LM-powered planner and controller, and arbitrary executors. *Configuration Parameters* - `name` (str): Name of the flow. - `description` (str): Description of the flow. - `memory_files` (dict): A dictionary of memory files. The keys are the names of the memory files and the values are the paths to the memory files. Typical memory files include the plan, logs, and code library. - `subflows_config`: - MemoryReading: reads the content of the memory files into the flow state for later use. - Planner: makes a step-by-step plan based on the current goal. - CtrlExMem: a controller-executor agent with memory reading and writing; it executes the plan generated by the planner. - `early_exit_key` (str): The key in the flow state that indicates the early-exit condition. - `topology` (list): The topology of the flow. *Input Interface (expected input)* - `goal` (str): The goal from the caller (source flow). *Output Interface (expected output)* - `result` (str): The result of the flow; it will be returned to the caller. - `summary` (str): The summary of the flow; it will be logged into the logs of the caller flow. **Arguments**: - `memory_files` (`dict`): A dictionary of memory files. The keys are the names of the memory files and the values are the paths to the memory files. <a id="AbstractBossFlow.AbstractBossFlow.instantiate_from_config"></a> #### instantiate\_from\_config ```python @classmethod def instantiate_from_config(cls, config) ``` This method instantiates the flow from a configuration dictionary. **Arguments**: - `config` (`dict`): The configuration dictionary. <a id="AbstractBossFlow.AbstractBossFlow.run"></a> #### run ```python def run(input_data: Dict[str, Any]) -> Dict[str, Any] ``` This method runs the flow. **Arguments**: - `input_data` (`dict`): The input data; it is expected to contain 'goal'. <a id="CtrlExMemFlow"></a> # CtrlExMemFlow <a id="CtrlExMemFlow.CtrlExMemFlow"></a> ## CtrlExMemFlow Objects ```python class CtrlExMemFlow(CircularFlow, ABC) ``` This class is the controller-executor agent with memory reading and writing; it executes the plan generated by the planner. At a higher level, this is a circular flow: it runs until either max_round is reached or the controller decides to exit early (see: detect_finish_or_continue). The brain of the flow is the controller, which decides what action (which branch of the branching flow) to take next. The controller can also decide to exit the flow early, in which case the flow stops.
Once the controller decides on an action, it passes the action to the executor, which executes it and yields `result` and `summary`; these are passed to the controller and the memory writer (into the logs of the upper flow layer), respectively. Depending on the `result`, the controller decides what action to take next. *Configuration Parameters*: - `name` (str): Name of the flow. - `description` (str): Description of the flow. - `max_round` (int): The maximum number of rounds the flow will run. Default: 30. - `subflows_config` (dict): The configuration of the subflows. - `Controller` (dict): The configuration of the controller. The target of the controller (an instance customizable by the user) and the API information must be specified. - `Executor` (dict): The executor of the flow; it is expected to be a branching flow. To instantiate the executor, the user needs to specify its subflows (i.e. the actual tools the agent can use, e.g. a flow for interpreting code). - `MemoryWriting` (dict): The configuration of the memory writer. An existing memory-writing flow is already implemented. - `MemoryReading` (dict): The configuration of the memory reader. An existing memory-reading flow is already implemented. - `topology` (List): The topology of the subflows; note that the output interface of the Controller must be implemented and specified. *Input Interface*: - `plan` (str): The plan generated by the planner; the CtrlExMem flow should follow it. - `memory_files` (dict): Pairs of memory name and memory-file path; the memory files the memory reader will read from. - `goal` (str): The goal of the flow. - `logs` (str): Execution history of the flow, containing all actions taken by each subflow. *Output Interface*: - `result` (str): The result of the flow; it will be returned to the controller of the caller flow. - `summary` (str): The summary of the flow; it will be written to the logs of the caller flow. <a id="CtrlExMemFlow.CtrlExMemFlow.detect_finish_or_continue"></a> #### detect\_finish\_or\_continue ```python @abstractmethod @CircularFlow.output_msg_payload_processor def detect_finish_or_continue(output_payload: Dict[str, Any], src_flow) -> Dict[str, Any] ``` This function is called after the Controller and is used to: 1. Check whether the Controller decided to exit early; if so, implement the logic needed for exiting, e.g. drafting the result and summary. 2. For other commands issued by the controller, implement the logic needed for the flow to continue, e.g. providing the necessary information to the branching flow. **Arguments**: - `output_payload` (`Dict[str, Any]`): The output payload of the Controller. - `src_flow` (`Flow`): The source flow of the Controller. **Returns**: The input payload of the Executor. <a id="__init__"></a> # \_\_init\_\_
NLPProject2023Z/xlnet-regression
NLPProject2023Z
2024-01-09T14:26:59Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-09T14:26:36Z
--- tags: - generated_from_trainer model-index: - name: xlnet-regression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-regression This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 80 | 0.3743 | | No log | 2.0 | 160 | 0.3747 | | No log | 3.0 | 240 | 0.3743 | | No log | 4.0 | 320 | 0.3743 | | No log | 5.0 | 400 | 0.3743 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
svenbl80/roberta-base-finetuned-chatdoc-test
svenbl80
2024-01-09T14:22:42Z
48
0
transformers
[ "transformers", "tf", "tensorboard", "roberta", "text-classification", "generated_from_keras_callback", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T14:18:51Z
--- license: mit base_model: roberta-base tags: - generated_from_keras_callback model-index: - name: svenbl80/roberta-base-finetuned-chatdoc-test results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # svenbl80/roberta-base-finetuned-chatdoc-test This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0815 - Validation Loss: 1.0109 - Train Accuracy: 0.4343 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.0815 | 1.0109 | 0.4343 | 0 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.13.1 - Datasets 2.16.1 - Tokenizers 0.15.0
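This auto-generated Keras card carries no usage snippet, so the sketch below shows one plausible way to load the checkpoint, assuming the repository's TensorFlow weights (tag: `tf`) load through the standard pipeline. The input string is a placeholder, and the label names depend on the undocumented training data.

```python
from transformers import pipeline

# The repo ships TensorFlow weights, so request the TF backend explicitly.
classifier = pipeline(
    "text-classification",
    model="svenbl80/roberta-base-finetuned-chatdoc-test",
    framework="tf",
)

# Placeholder input; with 0.43 reported accuracy after one epoch, treat predictions accordingly.
print(classifier("Example document text."))
```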
mip016/rl-faketaxi
mip016
2024-01-09T14:09:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T14:09:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: rl-faketaxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mip016/rl-faketaxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ybelkada/test-model-already-tagged
ybelkada
2024-01-09T14:06:44Z
176
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "tag-test", "tag-test-2", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T14:05:35Z
--- library_name: transformers tags: - tag-test - tag-test-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ybelkada/test-tag-already-tagged
ybelkada
2024-01-09T14:03:13Z
97
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "tag-test-3", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:finetune:HuggingFaceM4/tiny-random-LlamaForCausalLM", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T13:59:17Z
--- base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - trl - sft - tag-test-3 - generated_from_trainer model-index: - name: test-tag-already-tagged results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-tag-already-tagged This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.16.0 - Tokenizers 0.15.0
NLPProject2023Z/roberta-regression
NLPProject2023Z
2024-01-09T14:03:13Z
174
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-09T14:02:45Z
--- tags: - generated_from_trainer model-index: - name: roberta-regression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-regression This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 84 | 0.5758 | | No log | 2.0 | 168 | 0.5758 | | No log | 3.0 | 252 | 0.5758 | | No log | 4.0 | 336 | 0.5758 | | No log | 5.0 | 420 | 0.5758 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
mattia-colbertaldo/whisper-small-custom
mattia-colbertaldo
2024-01-09T13:58:59Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T12:10:47Z
--- language: - en license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: Whisper Small Custom - Mattia Colbertaldo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Custom - Mattia Colbertaldo This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Mattia's Dataset. It achieves the following results on the evaluation set: - Loss: 0.8726 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 2.1417 | 10.0 | 10 | 3.0808 | 0.0 | | 0.6544 | 20.0 | 20 | 1.2364 | 57.1429 | | 0.4179 | 30.0 | 30 | 1.0541 | 57.1429 | | 0.3622 | 40.0 | 40 | 0.9383 | 1000.0 | | 0.3339 | 50.0 | 50 | 0.8726 | 0.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
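Since the card omits a usage example, here is a minimal sketch using the standard `transformers` ASR pipeline; the audio path is a placeholder, not a file from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="mattia-colbertaldo/whisper-small-custom",
)

# "sample.wav" is a placeholder path; the pipeline decodes and resamples audio files via ffmpeg.
print(asr("sample.wav")["text"])
```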
badokorach/bert-base-cased-squad2-agic2-060124
badokorach
2024-01-09T13:58:48Z
1
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:badokorach/bert-base-cased-squad2-agic2-051223", "base_model:finetune:badokorach/bert-base-cased-squad2-agic2-051223", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-06T03:04:38Z
--- license: cc-by-4.0 base_model: badokorach/bert-base-cased-squad2-agic2-051223 tags: - generated_from_keras_callback model-index: - name: badokorach/bert-base-cased-squad2-agic2-060124 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/bert-base-cased-squad2-agic2-060124 This model is a fine-tuned version of [badokorach/bert-base-cased-squad2-agic2-051223](https://huggingface.co/badokorach/bert-base-cased-squad2-agic2-051223) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1138 - Validation Loss: 0.0 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': None, 'class_name': 'CustomLearningRateScheduler', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1044, 'warmup_steps': 116, 'end_learning_rate': 1e-05}, 'registered_name': 'CustomLearningRateScheduler'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.7839 | 0.0 | 0 | | 0.6859 | 0.0 | 1 | | 0.4749 | 0.0 | 2 | | 0.3586 | 0.0 | 3 | | 0.3039 | 0.0 | 4 | | 0.2084 | 0.0 | 5 | | 0.1471 | 0.0 | 6 | | 0.1420 | 0.0 | 7 | | 0.1221 | 0.0 | 8 | | 0.1138 | 0.0 | 9 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
MatrixC7/Mixtral_34Bx2_MoE_60B-4.65bpw-h6-exl2
MatrixC7
2024-01-09T13:58:30Z
6
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T08:03:09Z
--- license: cc-by-nc-4.0 --- - The measurement file is attached in the branch `measurement`. --- # Mixtral MOE 2x34B This is my first English & Chinese MoE Model based on * [jondurbin/bagel-dpo-34b-v0.2] * [SUSTech/SUS-Chat-34B] gpu code example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_34Bx2_MoE_60B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` CPU example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_34Bx2_MoE_60B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map='cpu' ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` Output Examples: ``` please input prompt:write a story about yosemite write a story about yosemite national park Yosemite National Park is located in the Sierra Nevada Mountains of California, USA. It was established on October 1st, 1890 and covers an area of approximately 747,956 acres (302,687 hectares). The park boasts some of America's most iconic natural wonders such as Yosemite Valley, Half Dome, El Capitan, Bridalveil Fall, Tuolumne Meadows, Glacier Point, Mariposa Grove, and many more breathtaking landscapes that attract millions of visitors each year. The history of Yosemite dates back to over seven million years ago when glaciers carved out its stunning granite cliffs and valleys. Native American tribes like Miwok and Paiute have lived here for thousands of years before European explorers arrived during the mid-nineteenth century. In fact, it was John Muir - one of America’s greatest conservationists who helped establish this region as a protected wilderness area by advocating for its preservation through his writings and activism. Today, Yosemite offers various recreational activities including hiking, rock climbing, camping, fishing, horseback riding, wildlife watching, photography, and winter sports like skiing and snowshoeing. Visitors can also enjoy ranger programs, guided tours, educational exhibits at visitor centers, or simply take time to appreciate nature while strolling along scenic trails surrounded by towering sequoia trees, cascading waterfalls, and crystal clear lakes. In addition to preserving these awe-inspiring vistas, Yosemite plays a crucial role in protecting numerous plant and animal species found within its boundaries. 
Some notable inhabitants include black bears, mountain lions, mule deer, coyotes, bobcats, golden eagles, peregrine falcons, bighorn sheep, and several types of fish native to the Merced River which runs through the heart of the valley. As we continue our journey into the future, let us remember the importance of safeguarding places like Yosemite so they may remain pristine sanctuaries where both humans and animals alike can thrive together amidst unspoiled beauty.</s> please input prompt:李开复是谁? 李开复是谁? 他是一个在人工智能领域有着卓越贡献的科学家,也是一位成功的企业家。他的名字与谷歌、微软等科技巨头紧密相连,他是创新工场的创始人之一,更是无数创业者心中的偶像和导师。然而,除了这些耀眼的光环之外,李开复还有着怎样的故事呢?让我们一起来揭秘这位传奇人物的人生历程吧!</s> ```
w11wo/xlm-roberta-base-twitter-indonesia-sarcastic
w11wo
2024-01-09T13:56:59Z
6
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T13:47:50Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: xlm-roberta-base-twitter-indonesia-sarcastic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-twitter-indonesia-sarcastic This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4359 - Accuracy: 0.8513 - F1: 0.7386 - Precision: 0.6570 - Recall: 0.8433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.5641 | 1.0 | 59 | 0.5260 | 0.75 | 0.0 | 0.0 | 0.0 | | 0.5317 | 2.0 | 118 | 0.5030 | 0.75 | 0.0 | 0.0 | 0.0 | | 0.4995 | 3.0 | 177 | 0.4656 | 0.75 | 0.0 | 0.0 | 0.0 | | 0.4599 | 4.0 | 236 | 0.4503 | 0.7687 | 0.6026 | 0.5281 | 0.7015 | | 0.4082 | 5.0 | 295 | 0.3785 | 0.8470 | 0.6435 | 0.7708 | 0.5522 | | 0.3274 | 6.0 | 354 | 0.3605 | 0.8619 | 0.6992 | 0.7679 | 0.6418 | | 0.2621 | 7.0 | 413 | 0.3765 | 0.8619 | 0.6838 | 0.8 | 0.5970 | | 0.2332 | 8.0 | 472 | 0.3408 | 0.8769 | 0.7591 | 0.7429 | 0.7761 | | 0.1579 | 9.0 | 531 | 0.4382 | 0.8731 | 0.7213 | 0.8 | 0.6567 | | 0.1467 | 10.0 | 590 | 0.3855 | 0.8806 | 0.7895 | 0.7059 | 0.8955 | | 0.098 | 11.0 | 649 | 0.4693 | 0.8806 | 0.7500 | 0.7869 | 0.7164 | | 0.0929 | 12.0 | 708 | 0.6206 | 0.8806 | 0.7333 | 0.8302 | 0.6567 | | 0.0555 | 13.0 | 767 | 0.7134 | 0.8843 | 0.7634 | 0.7812 | 0.7463 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
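The card reports strong F1 but no usage snippet; a minimal sketch using the standard text-classification pipeline follows. The input tweet is hypothetical, and the mapping from label ids to sarcastic / not-sarcastic is not documented in the card.

```python
from transformers import pipeline

# Binary sarcasm detector for Indonesian tweets.
detector = pipeline(
    "text-classification",
    model="w11wo/xlm-roberta-base-twitter-indonesia-sarcastic",
)

# Hypothetical tweet ("Great, it's raining right as I'm about to leave.").
print(detector("Wah, bagus banget, hujan pas mau berangkat."))
```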
Ghunghru/Misinformation-Covid-Articles
Ghunghru
2024-01-09T13:51:16Z
91
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T10:01:02Z
--- license: mit base_model: bert-base-german-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Misinformation-Covid-Articles results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Misinformation-Covid-Articles This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1395 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1266 | 1.0 | 3609 | 0.1224 | 0.9703 | | 0.0639 | 2.0 | 7218 | 0.1170 | 0.9735 | | 0.0437 | 3.0 | 10827 | 0.1230 | 0.9808 | | 0.0148 | 4.0 | 14436 | 0.1209 | 0.9832 | | 0.0044 | 5.0 | 18045 | 0.1395 | 0.9840 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.12.0 - Tokenizers 0.13.3
eth-easl/pythia_2.8b_deduped-task380_boolq_yes_no_question
eth-easl
2024-01-09T13:50:32Z
0
0
null
[ "arxiv:2312.05215", "region:us" ]
null
2023-09-01T13:40:02Z
### More details [Paper](https://arxiv.org/abs/2312.05215)
JDB03/Qlearning-Taxi
JDB03
2024-01-09T13:49:23Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T13:48:55Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Qlearning-Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="JDB03/Qlearning-Taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
JDB03/Qlearning-FrozenLake
JDB03
2024-01-09T13:46:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T13:44:41Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: Qlearning-FrozenLake results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="JDB03/Qlearning-FrozenLake", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
hlrh/sd-class-butterflies-32
hlrh
2024-01-09T13:40:51Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-01-09T13:40:29Z
--- tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class license: mit --- # Unconditional image-generation diffusion model for butterfly images ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('hlrh/sd-class-butterflies-32') image = pipeline().images[0] image ```
kk08/mistral-gsm8k-finetune
kk08
2024-01-09T13:40:32Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-09T13:39:57Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-gsm8k-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-gsm8k-finetune This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5604 | 0.01 | 10 | 1.4867 | | 1.3198 | 0.02 | 20 | 1.2589 | | 1.11 | 0.03 | 30 | 1.0018 | | 0.8584 | 0.04 | 40 | 0.8349 | | 0.77 | 0.05 | 50 | 0.8062 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
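The card shows no usage code; below is a minimal sketch of the standard `peft` pattern for attaching a LoRA adapter to its base model. The GSM8K-style prompt is a made-up example, since the prompt format used in training is not documented.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kk08/mistral-gsm8k-finetune")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Hypothetical GSM8K-style prompt; the training prompt format is an assumption here.
prompt = "Q: A farmer has 12 cows and buys 7 more. How many cows does he have?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```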
JacobLinCool/whisper-small-tw2
JacobLinCool
2024-01-09T13:32:41Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T07:01:38Z
--- language: - zh license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_0 model-index: - name: Whisper Small zh-TW - Chinese results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small zh-TW - Chinese This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2102 - Cer: 125.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0977 | 1.33 | 1000 | 0.1988 | 92.1973 | | 0.04 | 2.65 | 2000 | 0.1952 | 127.6179 | | 0.0102 | 3.98 | 3000 | 0.2032 | 152.5090 | | 0.0017 | 5.31 | 4000 | 0.2102 | 125.9209 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0.post301 - Datasets 2.16.1 - Tokenizers 0.15.0
Felladrin/onnx-tinyllama-15M
Felladrin
2024-01-09T13:32:38Z
3
0
transformers.js
[ "transformers.js", "onnx", "llama", "text-generation", "base_model:nickypro/tinyllama-15M", "base_model:quantized:nickypro/tinyllama-15M", "license:mit", "region:us" ]
text-generation
2024-01-09T13:23:58Z
--- license: mit library_name: "transformers.js" base_model: nickypro/tinyllama-15M --- INT8 ONNX version of [nickypro/tinyllama-15M](https://huggingface.co/nickypro/tinyllama-15M) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
SimplCup/DanTDM
SimplCup
2024-01-09T13:18:18Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2024-01-09T13:17:58Z
--- license: cc-by-nc-nd-4.0 ---
scoup123/Berturk-AffixChecker
scoup123
2024-01-09T13:14:34Z
92
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "text-classification", "tr", "dataset:scoup123/affixfinder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T12:44:45Z
--- datasets: - scoup123/affixfinder language: - tr metrics: - accuracy pipeline_tag: text-classification --- # Model Card for Model ID ### Model Description Given two words in Turkish, the model predicts whether they share an affix. It was fine-tuned from dbmdz/bert-base-turkish-cased on a task similar to NLI, but at the word level and with two labels. It was created as a final project for one of my classes. - **Developed by:** Scoup123 - **Model type:** BERT - **Language(s) (NLP):** Turkish - **Finetuned from model [optional]:** dbmdz/bert-base-turkish-cased ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** in progress ## Uses It can be used in morphological analysis tasks. ### Direct Use It can likely be used on Turkish without additional fine-tuning. ## Training Details ### Training Data scoup123/affixfinder The dataset was generated from a dataset described in the paper titled Turkish language resources: Morphological parser, morphological disambiguator and web corpus. ## Evaluation Test Accuracy: 0.9874 Precision: 0.9874 Recall: 0.9874 F1 Score: 0.9874 **Use these scores with caution, as they are suspiciously high.** ### Testing Data, Factors & Metrics #### Testing Data A test split was created from the training data. #### Summary This model aims to provide an affix identifier for Turkish. ## Model Examination [optional] I have just created this model, so further testing is needed to check whether it actually works; you should verify that it works before using it. [More Information Needed] ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Free Colab T4 GPU - **Hours used:** ~2.5 hours - **Cloud Provider:** Google - **Compute Region:** Europe - **Carbon Emitted:** [More Information Needed] ## Citation [optional] **APA:** Sak, H., Güngör, T., & Saraçlar, M. (2008). Turkish language resources: Morphological parser, morphological disambiguator and web corpus. In Advances in natural language processing (pp. 417-427). Springer Berlin Heidelberg. ## Model Card Authors [optional] Kaan Bayar ## Model Card Contact [email protected]
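The card does not show how to feed a word pair to the model. Since it was fine-tuned on "a task similar to NLI", the sketch below assumes the two words are encoded as a standard sentence pair; the example words and the meaning of each label index are assumptions, not documented facts.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("scoup123/Berturk-AffixChecker")
model = AutoModelForSequenceClassification.from_pretrained("scoup123/Berturk-AffixChecker")

# Assumption: the word pair is encoded like an NLI premise/hypothesis pair.
# "evlerde" / "okullarda" both carry plural + locative suffixes (illustrative inputs).
inputs = tokenizer("evlerde", "okullarda", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)

# Which index corresponds to "shares an affix" is not documented in the card.
print(probs)
```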
pcuenq/siglip-base-patch16-224
pcuenq
2024-01-09T13:12:34Z
90
0
transformers
[ "transformers", "pytorch", "safetensors", "siglip", "zero-shot-image-classification", "vision", "arxiv:2303.15343", "arxiv:2209.06794", "license:apache-2.0", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-01-09T13:12:34Z
--- license: apache-2.0 tags: - vision widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # SigLIP (base-sized model) SigLIP model pre-trained on WebLi at resolution 224x224. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in [this repository](https://github.com/google-research/big_vision). Disclaimer: The team releasing SigLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes. A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713). ## Intended uses & limitations You can use the raw model for tasks like zero-shot image classification and image-text retrieval. See the [model hub](https://huggingface.co/models?search=google/siglip) to look for other versions on a task that interests you. ### How to use Here is how to use this model to perform zero-shot image classification: ```python from PIL import Image import requests from transformers import AutoProcessor, AutoModel import torch model = AutoModel.from_pretrained("google/siglip-base-patch16-224") processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a photo of 2 cats", "a photo of 2 dogs"] inputs = processor(text=texts, images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) # these are the probabilities print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") ``` Alternatively, one can leverage the pipeline API which abstracts away the complexity for the user: ``` from transformers import pipeline from PIL import Image import requests # load pipe image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-224") # load image url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) # inference outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"]) outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs] print(outputs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/siglip.html#). ## Training procedure ### Training data SigLIP is pre-trained on the English image-text pairs of the WebLI dataset [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794). ### Preprocessing Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). Texts are tokenized and padded to the same length (64 tokens). ### Compute The model was trained on 16 TPU-v4 chips for three days. 
## Evaluation results Evaluation of SigLIP compared to CLIP is shown below (taken from the paper). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg" alt="drawing" width="600"/> ### BibTeX entry and citation info ```bibtex @misc{zhai2023sigmoid, title={Sigmoid Loss for Language Image Pre-Training}, author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer}, year={2023}, eprint={2303.15343}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
baltop/deep_500
baltop
2024-01-09T13:11:50Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
2024-01-09T13:11:34Z
--- library_name: peft base_model: deepseek-ai/deepseek-coder-6.7b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
baltop/deep_400
baltop
2024-01-09T13:11:16Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
2024-01-09T13:10:59Z
---
library_name: peft
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.2.dev0
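The "How to Get Started" section of this card is left unfilled. Based only on the record's metadata — a PEFT adapter whose declared base model is `deepseek-ai/deepseek-coder-6.7b-instruct`, published as `baltop/deep_400` — a minimal loading sketch might look like the following. The causal-LM task type, dtype, and example prompt are assumptions, not statements from the card.

```python
# Minimal sketch, assuming the adapter targets causal language modeling
# (consistent with the declared deepseek-coder-6.7b-instruct base model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"  # declared base_model
adapter_id = "baltop/deep_400"                        # this record's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the adapter is LoRA-based, `model.merge_and_unload()` is a common follow-up for serving the merged weights without a PEFT dependency at inference time.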
baltop/deep_300
baltop
2024-01-09T13:10:34Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
2024-01-09T13:10:13Z
---
library_name: peft
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.2.dev0
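This record carries the same unfilled template as `baltop/deep_400` above, and the same loading pattern applies with the adapter id swapped to `baltop/deep_300`. Because the declared base model is an instruct variant, a hedged prompting sketch could additionally route the request through the base tokenizer's chat template; note that it is an assumption, not stated in the card, that this adapter was tuned on instruction-formatted data.

```python
# Sketch following the loading pattern above, now for the "baltop/deep_300" adapter.
# Assumes (the card does not say) that the adapter expects instruction-style prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "baltop/deep_300")

# Format the request with the base model's chat template, then generate.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```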
khoa-klaytn/bge-small-en-v1.5-angle
khoa-klaytn
2024-01-09T12:51:40Z
336
4
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-01-09T12:41:49Z
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-small-en-v1.5-angle results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 37.21923821573361 - type: f1 value: 68.0914945617093 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.75377499999999 - type: ap value: 89.46766124546022 - type: f1 value: 92.73884001331487 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.986 - type: f1 value: 46.55936786727896 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 35.846000000000004 - type: map_at_10 value: 51.388 - type: map_at_100 value: 52.132999999999996 - type: map_at_1000 value: 52.141000000000005 - type: map_at_3 value: 47.037 - type: map_at_5 value: 49.579 - type: mrr_at_1 value: 36.558 - type: mrr_at_10 value: 51.658 - type: mrr_at_100 value: 52.402 - type: mrr_at_1000 value: 52.410000000000004 - type: mrr_at_3 value: 47.345 - type: mrr_at_5 value: 49.797999999999995 - type: ndcg_at_1 value: 35.846000000000004 - type: ndcg_at_10 value: 59.550000000000004 - type: ndcg_at_100 value: 62.596 - type: ndcg_at_1000 value: 62.759 - type: ndcg_at_3 value: 50.666999999999994 - type: ndcg_at_5 value: 55.228 - type: precision_at_1 value: 35.846000000000004 - type: precision_at_10 value: 8.542 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.389 - type: precision_at_5 value: 14.438 - type: recall_at_1 value: 35.846000000000004 - type: recall_at_10 value: 85.42 - type: recall_at_100 value: 98.43499999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 61.166 - type: recall_at_5 value: 72.191 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.402770198163594 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.01545436974177 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.586465273207196 - type: mrr value: 74.42169019038825 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.1891186537969 - type: cos_sim_spearman value: 83.75492046087288 - type: euclidean_pearson value: 84.11766204805357 - type: euclidean_spearman value: 84.01456493126516 - type: manhattan_pearson value: 84.2132950502772 - type: manhattan_spearman value: 83.89227298813377 - task: type: Classification dataset: type: mteb/banking77 
name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.74025974025975 - type: f1 value: 85.71493566466381 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.467181385006434 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.719496037339056 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.587000000000003 - type: map_at_10 value: 41.114 - type: map_at_100 value: 42.532 - type: map_at_1000 value: 42.661 - type: map_at_3 value: 37.483 - type: map_at_5 value: 39.652 - type: mrr_at_1 value: 36.338 - type: mrr_at_10 value: 46.763 - type: mrr_at_100 value: 47.393 - type: mrr_at_1000 value: 47.445 - type: mrr_at_3 value: 43.538 - type: mrr_at_5 value: 45.556000000000004 - type: ndcg_at_1 value: 36.338 - type: ndcg_at_10 value: 47.658 - type: ndcg_at_100 value: 52.824000000000005 - type: ndcg_at_1000 value: 54.913999999999994 - type: ndcg_at_3 value: 41.989 - type: ndcg_at_5 value: 44.944 - type: precision_at_1 value: 36.338 - type: precision_at_10 value: 9.156 - type: precision_at_100 value: 1.4789999999999999 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 20.076 - type: precision_at_5 value: 14.85 - type: recall_at_1 value: 29.587000000000003 - type: recall_at_10 value: 60.746 - type: recall_at_100 value: 82.157 - type: recall_at_1000 value: 95.645 - type: recall_at_3 value: 44.821 - type: recall_at_5 value: 52.819 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.239 - type: map_at_10 value: 39.989000000000004 - type: map_at_100 value: 41.196 - type: map_at_1000 value: 41.325 - type: map_at_3 value: 37.261 - type: map_at_5 value: 38.833 - type: mrr_at_1 value: 37.516 - type: mrr_at_10 value: 46.177 - type: mrr_at_100 value: 46.806 - type: mrr_at_1000 value: 46.849000000000004 - type: mrr_at_3 value: 44.002 - type: mrr_at_5 value: 45.34 - type: ndcg_at_1 value: 37.516 - type: ndcg_at_10 value: 45.586 - type: ndcg_at_100 value: 49.897000000000006 - type: ndcg_at_1000 value: 51.955 - type: ndcg_at_3 value: 41.684 - type: ndcg_at_5 value: 43.617 - type: precision_at_1 value: 37.516 - type: precision_at_10 value: 8.522 - type: precision_at_100 value: 1.374 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 20.105999999999998 - type: precision_at_5 value: 14.152999999999999 - type: recall_at_1 value: 30.239 - type: recall_at_10 value: 55.03 - type: recall_at_100 value: 73.375 - type: recall_at_1000 value: 86.29599999999999 - type: recall_at_3 value: 43.269000000000005 - type: recall_at_5 value: 48.878 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.338 - type: map_at_10 value: 50.468999999999994 - type: map_at_100 value: 51.553000000000004 - type: map_at_1000 value: 51.608 - type: map_at_3 value: 47.107 - type: map_at_5 value: 49.101 - type: mrr_at_1 value: 44.201 - type: 
mrr_at_10 value: 54.057 - type: mrr_at_100 value: 54.764 - type: mrr_at_1000 value: 54.791000000000004 - type: mrr_at_3 value: 51.56699999999999 - type: mrr_at_5 value: 53.05 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 56.379000000000005 - type: ndcg_at_100 value: 60.645 - type: ndcg_at_1000 value: 61.73499999999999 - type: ndcg_at_3 value: 50.726000000000006 - type: ndcg_at_5 value: 53.58500000000001 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 9.141 - type: precision_at_100 value: 1.216 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 22.654 - type: precision_at_5 value: 15.723999999999998 - type: recall_at_1 value: 38.338 - type: recall_at_10 value: 70.30499999999999 - type: recall_at_100 value: 88.77199999999999 - type: recall_at_1000 value: 96.49799999999999 - type: recall_at_3 value: 55.218 - type: recall_at_5 value: 62.104000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.682 - type: map_at_10 value: 33.498 - type: map_at_100 value: 34.461000000000006 - type: map_at_1000 value: 34.544000000000004 - type: map_at_3 value: 30.503999999999998 - type: map_at_5 value: 32.216 - type: mrr_at_1 value: 27.683999999999997 - type: mrr_at_10 value: 35.467999999999996 - type: mrr_at_100 value: 36.32 - type: mrr_at_1000 value: 36.386 - type: mrr_at_3 value: 32.618 - type: mrr_at_5 value: 34.262 - type: ndcg_at_1 value: 27.683999999999997 - type: ndcg_at_10 value: 38.378 - type: ndcg_at_100 value: 43.288 - type: ndcg_at_1000 value: 45.413 - type: ndcg_at_3 value: 32.586 - type: ndcg_at_5 value: 35.499 - type: precision_at_1 value: 27.683999999999997 - type: precision_at_10 value: 5.864 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 13.446 - type: precision_at_5 value: 9.718 - type: recall_at_1 value: 25.682 - type: recall_at_10 value: 51.712 - type: recall_at_100 value: 74.446 - type: recall_at_1000 value: 90.472 - type: recall_at_3 value: 36.236000000000004 - type: recall_at_5 value: 43.234 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.073999999999998 - type: map_at_10 value: 24.352999999999998 - type: map_at_100 value: 25.438 - type: map_at_1000 value: 25.545 - type: map_at_3 value: 21.614 - type: map_at_5 value: 23.104 - type: mrr_at_1 value: 19.776 - type: mrr_at_10 value: 28.837000000000003 - type: mrr_at_100 value: 29.755 - type: mrr_at_1000 value: 29.817 - type: mrr_at_3 value: 26.201999999999998 - type: mrr_at_5 value: 27.714 - type: ndcg_at_1 value: 19.776 - type: ndcg_at_10 value: 29.701 - type: ndcg_at_100 value: 35.307 - type: ndcg_at_1000 value: 37.942 - type: ndcg_at_3 value: 24.764 - type: ndcg_at_5 value: 27.025 - type: precision_at_1 value: 19.776 - type: precision_at_10 value: 5.659 - type: precision_at_100 value: 0.971 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 12.065 - type: precision_at_5 value: 8.905000000000001 - type: recall_at_1 value: 16.073999999999998 - type: recall_at_10 value: 41.647 - type: recall_at_100 value: 66.884 - type: recall_at_1000 value: 85.91499999999999 - type: recall_at_3 value: 27.916 - type: recall_at_5 value: 33.729 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None 
metrics: - type: map_at_1 value: 28.444999999999997 - type: map_at_10 value: 38.218999999999994 - type: map_at_100 value: 39.595 - type: map_at_1000 value: 39.709 - type: map_at_3 value: 35.586 - type: map_at_5 value: 36.895 - type: mrr_at_1 value: 34.841 - type: mrr_at_10 value: 44.106 - type: mrr_at_100 value: 44.98 - type: mrr_at_1000 value: 45.03 - type: mrr_at_3 value: 41.979 - type: mrr_at_5 value: 43.047999999999995 - type: ndcg_at_1 value: 34.841 - type: ndcg_at_10 value: 43.922 - type: ndcg_at_100 value: 49.504999999999995 - type: ndcg_at_1000 value: 51.675000000000004 - type: ndcg_at_3 value: 39.858 - type: ndcg_at_5 value: 41.408 - type: precision_at_1 value: 34.841 - type: precision_at_10 value: 7.872999999999999 - type: precision_at_100 value: 1.2449999999999999 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.993 - type: precision_at_5 value: 13.032 - type: recall_at_1 value: 28.444999999999997 - type: recall_at_10 value: 54.984 - type: recall_at_100 value: 78.342 - type: recall_at_1000 value: 92.77 - type: recall_at_3 value: 42.842999999999996 - type: recall_at_5 value: 47.247 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.072 - type: map_at_10 value: 32.354 - type: map_at_100 value: 33.800000000000004 - type: map_at_1000 value: 33.908 - type: map_at_3 value: 29.232000000000003 - type: map_at_5 value: 31.049 - type: mrr_at_1 value: 29.110000000000003 - type: mrr_at_10 value: 38.03 - type: mrr_at_100 value: 39.032 - type: mrr_at_1000 value: 39.086999999999996 - type: mrr_at_3 value: 35.407 - type: mrr_at_5 value: 36.76 - type: ndcg_at_1 value: 29.110000000000003 - type: ndcg_at_10 value: 38.231 - type: ndcg_at_100 value: 44.425 - type: ndcg_at_1000 value: 46.771 - type: ndcg_at_3 value: 33.095 - type: ndcg_at_5 value: 35.459 - type: precision_at_1 value: 29.110000000000003 - type: precision_at_10 value: 7.215000000000001 - type: precision_at_100 value: 1.2109999999999999 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 16.058 - type: precision_at_5 value: 11.644 - type: recall_at_1 value: 23.072 - type: recall_at_10 value: 50.285999999999994 - type: recall_at_100 value: 76.596 - type: recall_at_1000 value: 92.861 - type: recall_at_3 value: 35.702 - type: recall_at_5 value: 42.152 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.937916666666666 - type: map_at_10 value: 33.755250000000004 - type: map_at_100 value: 34.955999999999996 - type: map_at_1000 value: 35.070499999999996 - type: map_at_3 value: 30.98708333333333 - type: map_at_5 value: 32.51491666666666 - type: mrr_at_1 value: 29.48708333333333 - type: mrr_at_10 value: 37.92183333333334 - type: mrr_at_100 value: 38.76583333333333 - type: mrr_at_1000 value: 38.82466666666667 - type: mrr_at_3 value: 35.45125 - type: mrr_at_5 value: 36.827000000000005 - type: ndcg_at_1 value: 29.48708333333333 - type: ndcg_at_10 value: 39.05225 - type: ndcg_at_100 value: 44.25983333333334 - type: ndcg_at_1000 value: 46.568333333333335 - type: ndcg_at_3 value: 34.271583333333325 - type: ndcg_at_5 value: 36.483916666666666 - type: precision_at_1 value: 29.48708333333333 - type: precision_at_10 value: 6.865749999999999 - type: precision_at_100 value: 1.1195833333333332 - type: precision_at_1000 value: 0.15058333333333335 - type: precision_at_3 value: 
15.742083333333333 - type: precision_at_5 value: 11.221916666666667 - type: recall_at_1 value: 24.937916666666666 - type: recall_at_10 value: 50.650416666666665 - type: recall_at_100 value: 73.55383333333334 - type: recall_at_1000 value: 89.61691666666667 - type: recall_at_3 value: 37.27808333333334 - type: recall_at_5 value: 42.99475 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.947 - type: map_at_10 value: 30.575000000000003 - type: map_at_100 value: 31.465 - type: map_at_1000 value: 31.558000000000003 - type: map_at_3 value: 28.814 - type: map_at_5 value: 29.738999999999997 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 33.415 - type: mrr_at_100 value: 34.18 - type: mrr_at_1000 value: 34.245 - type: mrr_at_3 value: 31.621 - type: mrr_at_5 value: 32.549 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 34.482 - type: ndcg_at_100 value: 38.915 - type: ndcg_at_1000 value: 41.355 - type: ndcg_at_3 value: 31.139 - type: ndcg_at_5 value: 32.589 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 5.322 - type: precision_at_100 value: 0.8160000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 13.344000000000001 - type: precision_at_5 value: 8.988 - type: recall_at_1 value: 23.947 - type: recall_at_10 value: 43.647999999999996 - type: recall_at_100 value: 63.851 - type: recall_at_1000 value: 82.0 - type: recall_at_3 value: 34.288000000000004 - type: recall_at_5 value: 38.117000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.197 - type: map_at_10 value: 22.968 - type: map_at_100 value: 24.095 - type: map_at_1000 value: 24.217 - type: map_at_3 value: 20.771 - type: map_at_5 value: 21.995 - type: mrr_at_1 value: 19.511 - type: mrr_at_10 value: 26.55 - type: mrr_at_100 value: 27.500999999999998 - type: mrr_at_1000 value: 27.578999999999997 - type: mrr_at_3 value: 24.421 - type: mrr_at_5 value: 25.604 - type: ndcg_at_1 value: 19.511 - type: ndcg_at_10 value: 27.386 - type: ndcg_at_100 value: 32.828 - type: ndcg_at_1000 value: 35.739 - type: ndcg_at_3 value: 23.405 - type: ndcg_at_5 value: 25.255 - type: precision_at_1 value: 19.511 - type: precision_at_10 value: 5.017 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 11.023 - type: precision_at_5 value: 8.025 - type: recall_at_1 value: 16.197 - type: recall_at_10 value: 37.09 - type: recall_at_100 value: 61.778 - type: recall_at_1000 value: 82.56599999999999 - type: recall_at_3 value: 26.034000000000002 - type: recall_at_5 value: 30.762 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.41 - type: map_at_10 value: 33.655 - type: map_at_100 value: 34.892 - type: map_at_1000 value: 34.995 - type: map_at_3 value: 30.94 - type: map_at_5 value: 32.303 - type: mrr_at_1 value: 29.477999999999998 - type: mrr_at_10 value: 37.443 - type: mrr_at_100 value: 38.383 - type: mrr_at_1000 value: 38.440000000000005 - type: mrr_at_3 value: 34.949999999999996 - type: mrr_at_5 value: 36.228 - type: ndcg_at_1 value: 29.477999999999998 - type: ndcg_at_10 value: 38.769 - type: ndcg_at_100 value: 44.245000000000005 - type: ndcg_at_1000 value: 46.593 - type: ndcg_at_3 value: 
33.623 - type: ndcg_at_5 value: 35.766 - type: precision_at_1 value: 29.477999999999998 - type: precision_at_10 value: 6.455 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 14.893999999999998 - type: precision_at_5 value: 10.485 - type: recall_at_1 value: 25.41 - type: recall_at_10 value: 50.669 - type: recall_at_100 value: 74.084 - type: recall_at_1000 value: 90.435 - type: recall_at_3 value: 36.679 - type: recall_at_5 value: 41.94 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.339 - type: map_at_10 value: 31.852000000000004 - type: map_at_100 value: 33.411 - type: map_at_1000 value: 33.62 - type: map_at_3 value: 28.929 - type: map_at_5 value: 30.542 - type: mrr_at_1 value: 28.063 - type: mrr_at_10 value: 36.301 - type: mrr_at_100 value: 37.288 - type: mrr_at_1000 value: 37.349 - type: mrr_at_3 value: 33.663 - type: mrr_at_5 value: 35.165 - type: ndcg_at_1 value: 28.063 - type: ndcg_at_10 value: 37.462 - type: ndcg_at_100 value: 43.620999999999995 - type: ndcg_at_1000 value: 46.211 - type: ndcg_at_3 value: 32.68 - type: ndcg_at_5 value: 34.981 - type: precision_at_1 value: 28.063 - type: precision_at_10 value: 7.1739999999999995 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.217 - type: precision_at_5 value: 11.265 - type: recall_at_1 value: 23.339 - type: recall_at_10 value: 48.376999999999995 - type: recall_at_100 value: 76.053 - type: recall_at_1000 value: 92.455 - type: recall_at_3 value: 34.735 - type: recall_at_5 value: 40.71 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.925 - type: map_at_10 value: 26.017000000000003 - type: map_at_100 value: 27.034000000000002 - type: map_at_1000 value: 27.156000000000002 - type: map_at_3 value: 23.604 - type: map_at_5 value: 24.75 - type: mrr_at_1 value: 20.333000000000002 - type: mrr_at_10 value: 27.915 - type: mrr_at_100 value: 28.788000000000004 - type: mrr_at_1000 value: 28.877999999999997 - type: mrr_at_3 value: 25.446999999999996 - type: mrr_at_5 value: 26.648 - type: ndcg_at_1 value: 20.333000000000002 - type: ndcg_at_10 value: 30.673000000000002 - type: ndcg_at_100 value: 35.618 - type: ndcg_at_1000 value: 38.517 - type: ndcg_at_3 value: 25.71 - type: ndcg_at_5 value: 27.679 - type: precision_at_1 value: 20.333000000000002 - type: precision_at_10 value: 4.9910000000000005 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 11.029 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 18.925 - type: recall_at_10 value: 43.311 - type: recall_at_100 value: 66.308 - type: recall_at_1000 value: 87.49 - type: recall_at_3 value: 29.596 - type: recall_at_5 value: 34.245 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 13.714 - type: map_at_10 value: 23.194 - type: map_at_100 value: 24.976000000000003 - type: map_at_1000 value: 25.166 - type: map_at_3 value: 19.709 - type: map_at_5 value: 21.523999999999997 - type: mrr_at_1 value: 30.619000000000003 - type: mrr_at_10 value: 42.563 - type: mrr_at_100 value: 43.386 - type: mrr_at_1000 value: 43.423 - type: mrr_at_3 value: 
39.555 - type: mrr_at_5 value: 41.268 - type: ndcg_at_1 value: 30.619000000000003 - type: ndcg_at_10 value: 31.836 - type: ndcg_at_100 value: 38.652 - type: ndcg_at_1000 value: 42.088 - type: ndcg_at_3 value: 26.733 - type: ndcg_at_5 value: 28.435 - type: precision_at_1 value: 30.619000000000003 - type: precision_at_10 value: 9.751999999999999 - type: precision_at_100 value: 1.71 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 19.935 - type: precision_at_5 value: 14.984 - type: recall_at_1 value: 13.714 - type: recall_at_10 value: 37.26 - type: recall_at_100 value: 60.546 - type: recall_at_1000 value: 79.899 - type: recall_at_3 value: 24.325 - type: recall_at_5 value: 29.725 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.462 - type: map_at_10 value: 18.637 - type: map_at_100 value: 26.131999999999998 - type: map_at_1000 value: 27.607 - type: map_at_3 value: 13.333 - type: map_at_5 value: 15.654000000000002 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.32600000000001 - type: mrr_at_100 value: 74.60900000000001 - type: mrr_at_1000 value: 74.62 - type: mrr_at_3 value: 72.667 - type: mrr_at_5 value: 73.817 - type: ndcg_at_1 value: 53.87499999999999 - type: ndcg_at_10 value: 40.028999999999996 - type: ndcg_at_100 value: 44.199 - type: ndcg_at_1000 value: 51.629999999999995 - type: ndcg_at_3 value: 44.113 - type: ndcg_at_5 value: 41.731 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 31.900000000000002 - type: precision_at_100 value: 10.043000000000001 - type: precision_at_1000 value: 1.926 - type: precision_at_3 value: 47.417 - type: precision_at_5 value: 40.65 - type: recall_at_1 value: 8.462 - type: recall_at_10 value: 24.293 - type: recall_at_100 value: 50.146 - type: recall_at_1000 value: 74.034 - type: recall_at_3 value: 14.967 - type: recall_at_5 value: 18.682000000000002 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.84499999999999 - type: f1 value: 42.48106691979349 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 74.034 - type: map_at_10 value: 82.76 - type: map_at_100 value: 82.968 - type: map_at_1000 value: 82.98299999999999 - type: map_at_3 value: 81.768 - type: map_at_5 value: 82.418 - type: mrr_at_1 value: 80.048 - type: mrr_at_10 value: 87.64999999999999 - type: mrr_at_100 value: 87.712 - type: mrr_at_1000 value: 87.713 - type: mrr_at_3 value: 87.01100000000001 - type: mrr_at_5 value: 87.466 - type: ndcg_at_1 value: 80.048 - type: ndcg_at_10 value: 86.643 - type: ndcg_at_100 value: 87.361 - type: ndcg_at_1000 value: 87.606 - type: ndcg_at_3 value: 85.137 - type: ndcg_at_5 value: 86.016 - type: precision_at_1 value: 80.048 - type: precision_at_10 value: 10.372 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.638 - type: precision_at_5 value: 20.177 - type: recall_at_1 value: 74.034 - type: recall_at_10 value: 93.769 - type: recall_at_100 value: 96.569 - type: recall_at_1000 value: 98.039 - type: recall_at_3 value: 89.581 - type: recall_at_5 value: 91.906 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 20.5 - 
type: map_at_10 value: 32.857 - type: map_at_100 value: 34.589 - type: map_at_1000 value: 34.778 - type: map_at_3 value: 29.160999999999998 - type: map_at_5 value: 31.033 - type: mrr_at_1 value: 40.123 - type: mrr_at_10 value: 48.776 - type: mrr_at_100 value: 49.495 - type: mrr_at_1000 value: 49.539 - type: mrr_at_3 value: 46.605000000000004 - type: mrr_at_5 value: 47.654 - type: ndcg_at_1 value: 40.123 - type: ndcg_at_10 value: 40.343 - type: ndcg_at_100 value: 46.56 - type: ndcg_at_1000 value: 49.777 - type: ndcg_at_3 value: 37.322 - type: ndcg_at_5 value: 37.791000000000004 - type: precision_at_1 value: 40.123 - type: precision_at_10 value: 11.08 - type: precision_at_100 value: 1.752 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 24.897 - type: precision_at_5 value: 17.809 - type: recall_at_1 value: 20.5 - type: recall_at_10 value: 46.388 - type: recall_at_100 value: 69.552 - type: recall_at_1000 value: 89.011 - type: recall_at_3 value: 33.617999999999995 - type: recall_at_5 value: 38.211 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.135999999999996 - type: map_at_10 value: 61.673 - type: map_at_100 value: 62.562 - type: map_at_1000 value: 62.62 - type: map_at_3 value: 58.467999999999996 - type: map_at_5 value: 60.463 - type: mrr_at_1 value: 78.271 - type: mrr_at_10 value: 84.119 - type: mrr_at_100 value: 84.29299999999999 - type: mrr_at_1000 value: 84.299 - type: mrr_at_3 value: 83.18900000000001 - type: mrr_at_5 value: 83.786 - type: ndcg_at_1 value: 78.271 - type: ndcg_at_10 value: 69.935 - type: ndcg_at_100 value: 73.01299999999999 - type: ndcg_at_1000 value: 74.126 - type: ndcg_at_3 value: 65.388 - type: ndcg_at_5 value: 67.906 - type: precision_at_1 value: 78.271 - type: precision_at_10 value: 14.562 - type: precision_at_100 value: 1.6969999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 41.841 - type: precision_at_5 value: 27.087 - type: recall_at_1 value: 39.135999999999996 - type: recall_at_10 value: 72.809 - type: recall_at_100 value: 84.86200000000001 - type: recall_at_1000 value: 92.208 - type: recall_at_3 value: 62.76199999999999 - type: recall_at_5 value: 67.718 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.60600000000001 - type: ap value: 86.6579587804335 - type: f1 value: 90.5938853929307 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.852 - type: map_at_10 value: 33.982 - type: map_at_100 value: 35.116 - type: map_at_1000 value: 35.167 - type: map_at_3 value: 30.134 - type: map_at_5 value: 32.340999999999994 - type: mrr_at_1 value: 22.479 - type: mrr_at_10 value: 34.594 - type: mrr_at_100 value: 35.672 - type: mrr_at_1000 value: 35.716 - type: mrr_at_3 value: 30.84 - type: mrr_at_5 value: 32.998 - type: ndcg_at_1 value: 22.493 - type: ndcg_at_10 value: 40.833000000000006 - type: ndcg_at_100 value: 46.357 - type: ndcg_at_1000 value: 47.637 - type: ndcg_at_3 value: 32.995999999999995 - type: ndcg_at_5 value: 36.919000000000004 - type: precision_at_1 value: 22.493 - type: precision_at_10 value: 6.465999999999999 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.030999999999999 - type: precision_at_5 value: 
10.413 - type: recall_at_1 value: 21.852 - type: recall_at_10 value: 61.934999999999995 - type: recall_at_100 value: 87.611 - type: recall_at_1000 value: 97.441 - type: recall_at_3 value: 40.583999999999996 - type: recall_at_5 value: 49.992999999999995 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.36069311445507 - type: f1 value: 93.16456330371453 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.74692202462381 - type: f1 value: 58.17903579421599 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.80833893745796 - type: f1 value: 72.70786592684664 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69872225958305 - type: f1 value: 78.61626934504731 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.058658628717694 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.85561739360599 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.290259910144385 - type: mrr value: 32.44223046102856 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.288 - type: map_at_10 value: 12.267999999999999 - type: map_at_100 value: 15.557000000000002 - type: map_at_1000 value: 16.98 - type: map_at_3 value: 8.866 - type: map_at_5 value: 10.418 - type: mrr_at_1 value: 43.653 - type: mrr_at_10 value: 52.681 - type: mrr_at_100 value: 53.315999999999995 - type: mrr_at_1000 value: 53.357 - type: mrr_at_3 value: 51.393 - type: mrr_at_5 value: 51.903999999999996 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.305 - type: ndcg_at_100 value: 30.825999999999997 - type: ndcg_at_1000 value: 39.393 - type: ndcg_at_3 value: 39.931 - type: ndcg_at_5 value: 37.519999999999996 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.728 - type: precision_at_100 value: 7.932 - type: precision_at_1000 value: 2.07 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.288 - type: recall_at_10 value: 16.195 - type: recall_at_100 value: 31.135 - type: recall_at_1000 value: 61.531000000000006 - type: recall_at_3 value: 10.313 - type: recall_at_5 value: 12.754999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 28.216 - type: map_at_10 value: 42.588 - type: map_at_100 value: 43.702999999999996 - type: map_at_1000 value: 
43.739 - type: map_at_3 value: 38.177 - type: map_at_5 value: 40.754000000000005 - type: mrr_at_1 value: 31.866 - type: mrr_at_10 value: 45.189 - type: mrr_at_100 value: 46.056000000000004 - type: mrr_at_1000 value: 46.081 - type: mrr_at_3 value: 41.526999999999994 - type: mrr_at_5 value: 43.704 - type: ndcg_at_1 value: 31.837 - type: ndcg_at_10 value: 50.178 - type: ndcg_at_100 value: 54.98800000000001 - type: ndcg_at_1000 value: 55.812 - type: ndcg_at_3 value: 41.853 - type: ndcg_at_5 value: 46.153 - type: precision_at_1 value: 31.837 - type: precision_at_10 value: 8.43 - type: precision_at_100 value: 1.1119999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.911000000000001 - type: recall_at_1 value: 28.216 - type: recall_at_10 value: 70.8 - type: recall_at_100 value: 91.857 - type: recall_at_1000 value: 97.941 - type: recall_at_3 value: 49.196 - type: recall_at_5 value: 59.072 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.22800000000001 - type: map_at_10 value: 85.115 - type: map_at_100 value: 85.72 - type: map_at_1000 value: 85.737 - type: map_at_3 value: 82.149 - type: map_at_5 value: 84.029 - type: mrr_at_1 value: 81.96 - type: mrr_at_10 value: 88.00200000000001 - type: mrr_at_100 value: 88.088 - type: mrr_at_1000 value: 88.089 - type: mrr_at_3 value: 87.055 - type: mrr_at_5 value: 87.715 - type: ndcg_at_1 value: 82.01 - type: ndcg_at_10 value: 88.78 - type: ndcg_at_100 value: 89.91 - type: ndcg_at_1000 value: 90.013 - type: ndcg_at_3 value: 85.957 - type: ndcg_at_5 value: 87.56 - type: precision_at_1 value: 82.01 - type: precision_at_10 value: 13.462 - type: precision_at_100 value: 1.528 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.732000000000003 - type: recall_at_1 value: 71.22800000000001 - type: recall_at_10 value: 95.69 - type: recall_at_100 value: 99.531 - type: recall_at_1000 value: 99.98 - type: recall_at_3 value: 87.632 - type: recall_at_5 value: 92.117 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 52.31768034366916 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 60.640266772723606 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.7780000000000005 - type: map_at_10 value: 12.299 - type: map_at_100 value: 14.363000000000001 - type: map_at_1000 value: 14.71 - type: map_at_3 value: 8.738999999999999 - type: map_at_5 value: 10.397 - type: mrr_at_1 value: 23.599999999999998 - type: mrr_at_10 value: 34.845 - type: mrr_at_100 value: 35.916 - type: mrr_at_1000 value: 35.973 - type: mrr_at_3 value: 31.7 - type: mrr_at_5 value: 33.535 - type: ndcg_at_1 value: 23.599999999999998 - type: ndcg_at_10 value: 20.522000000000002 - type: ndcg_at_100 value: 28.737000000000002 - type: ndcg_at_1000 value: 34.596 - type: ndcg_at_3 value: 19.542 - type: ndcg_at_5 value: 16.958000000000002 - type: precision_at_1 value: 23.599999999999998 - type: precision_at_10 value: 10.67 - type: precision_at_100 value: 2.259 - type: precision_at_1000 value: 
0.367 - type: precision_at_3 value: 18.333 - type: precision_at_5 value: 14.879999999999999 - type: recall_at_1 value: 4.7780000000000005 - type: recall_at_10 value: 21.617 - type: recall_at_100 value: 45.905 - type: recall_at_1000 value: 74.42 - type: recall_at_3 value: 11.148 - type: recall_at_5 value: 15.082999999999998 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.22372750297885 - type: cos_sim_spearman value: 79.40972617119405 - type: euclidean_pearson value: 80.6101072020434 - type: euclidean_spearman value: 79.53844217225202 - type: manhattan_pearson value: 80.57265975286111 - type: manhattan_spearman value: 79.46335611792958 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.43713315520749 - type: cos_sim_spearman value: 77.44128693329532 - type: euclidean_pearson value: 81.63869928101123 - type: euclidean_spearman value: 77.29512977961515 - type: manhattan_pearson value: 81.63704185566183 - type: manhattan_spearman value: 77.29909412738657 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.59451537860527 - type: cos_sim_spearman value: 82.97994638856723 - type: euclidean_pearson value: 82.89478688288412 - type: euclidean_spearman value: 83.58740751053104 - type: manhattan_pearson value: 82.69140840941608 - type: manhattan_spearman value: 83.33665956040555 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.00756527711764 - type: cos_sim_spearman value: 81.83560996841379 - type: euclidean_pearson value: 82.07684151976518 - type: euclidean_spearman value: 82.00913052060511 - type: manhattan_pearson value: 82.05690778488794 - type: manhattan_spearman value: 82.02260252019525 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.13710262895447 - type: cos_sim_spearman value: 87.26412811156248 - type: euclidean_pearson value: 86.94151453230228 - type: euclidean_spearman value: 87.5363796699571 - type: manhattan_pearson value: 86.86989424083748 - type: manhattan_spearman value: 87.47315940781353 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.0230597603627 - type: cos_sim_spearman value: 84.93344499318864 - type: euclidean_pearson value: 84.23754743431141 - type: euclidean_spearman value: 85.09707376597099 - type: manhattan_pearson value: 84.04325160987763 - type: manhattan_spearman value: 84.89353071339909 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.75620824563921 - type: cos_sim_spearman value: 87.15065513706398 - type: euclidean_pearson value: 88.26281533633521 - type: euclidean_spearman value: 87.51963738643983 - type: manhattan_pearson value: 88.25599267618065 - type: manhattan_spearman value: 87.58048736047483 - 
task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.74645319195137 - type: cos_sim_spearman value: 65.29996325037214 - type: euclidean_pearson value: 67.04297794086443 - type: euclidean_spearman value: 65.43841726694343 - type: manhattan_pearson value: 67.39459955690904 - type: manhattan_spearman value: 65.92864704413651 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.31291020270801 - type: cos_sim_spearman value: 85.86473738688068 - type: euclidean_pearson value: 85.65537275064152 - type: euclidean_spearman value: 86.13087454209642 - type: manhattan_pearson value: 85.43946955047609 - type: manhattan_spearman value: 85.91568175344916 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.93798118350695 - type: mrr value: 95.93536274908824 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 57.594 - type: map_at_10 value: 66.81899999999999 - type: map_at_100 value: 67.368 - type: map_at_1000 value: 67.4 - type: map_at_3 value: 64.061 - type: map_at_5 value: 65.47 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.219 - type: mrr_at_100 value: 68.655 - type: mrr_at_1000 value: 68.684 - type: mrr_at_3 value: 66.22200000000001 - type: mrr_at_5 value: 67.289 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.275 - type: ndcg_at_100 value: 73.642 - type: ndcg_at_1000 value: 74.373 - type: ndcg_at_3 value: 66.521 - type: ndcg_at_5 value: 68.581 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.433 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.556 - type: precision_at_5 value: 16.8 - type: recall_at_1 value: 57.594 - type: recall_at_10 value: 83.622 - type: recall_at_100 value: 94.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.64399999999999 - type: recall_at_5 value: 75.983 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85841584158416 - type: cos_sim_ap value: 96.66996142314342 - type: cos_sim_f1 value: 92.83208020050125 - type: cos_sim_precision value: 93.06532663316584 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.85841584158416 - type: dot_ap value: 96.6775307676576 - type: dot_f1 value: 92.69289729177312 - type: dot_precision value: 94.77533960292581 - type: dot_recall value: 90.7 - type: euclidean_accuracy value: 99.86138613861387 - type: euclidean_ap value: 96.6338454403108 - type: euclidean_f1 value: 92.92214357937311 - type: euclidean_precision value: 93.96728016359918 - type: euclidean_recall value: 91.9 - type: manhattan_accuracy value: 99.86237623762376 - type: manhattan_ap value: 96.60370449645053 - type: manhattan_f1 value: 92.91177970423253 - type: manhattan_precision value: 94.7970863683663 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 
99.86237623762376 - type: max_ap value: 96.6775307676576 - type: max_f1 value: 92.92214357937311 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 60.77977058695198 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.2725272535638 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.64052466362125 - type: mrr value: 54.533067014684654 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.677624219206578 - type: cos_sim_spearman value: 30.121368518123447 - type: dot_pearson value: 30.69870088041608 - type: dot_spearman value: 29.61284927093751 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.855 - type: map_at_100 value: 9.885 - type: map_at_1000 value: 23.416999999999998 - type: map_at_3 value: 0.637 - type: map_at_5 value: 1.024 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.067 - type: mrr_at_100 value: 93.067 - type: mrr_at_1000 value: 93.067 - type: mrr_at_3 value: 92.667 - type: mrr_at_5 value: 93.067 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 75.899 - type: ndcg_at_100 value: 55.115 - type: ndcg_at_1000 value: 48.368 - type: ndcg_at_3 value: 79.704 - type: ndcg_at_5 value: 78.39699999999999 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 79.60000000000001 - type: precision_at_100 value: 56.06 - type: precision_at_1000 value: 21.206 - type: precision_at_3 value: 84.667 - type: precision_at_5 value: 83.2 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.078 - type: recall_at_100 value: 13.297 - type: recall_at_1000 value: 44.979 - type: recall_at_3 value: 0.6689999999999999 - type: recall_at_5 value: 1.106 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.258 - type: map_at_10 value: 10.439 - type: map_at_100 value: 16.89 - type: map_at_1000 value: 18.407999999999998 - type: map_at_3 value: 5.668 - type: map_at_5 value: 7.718 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 51.159 - type: mrr_at_100 value: 51.714000000000006 - type: mrr_at_1000 value: 51.714000000000006 - type: mrr_at_3 value: 47.959 - type: mrr_at_5 value: 50.407999999999994 - type: ndcg_at_1 value: 29.592000000000002 - type: ndcg_at_10 value: 26.037 - type: ndcg_at_100 value: 37.924 - type: ndcg_at_1000 value: 49.126999999999995 - type: ndcg_at_3 value: 30.631999999999998 - type: ndcg_at_5 value: 28.571 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 22.857 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 1.529 - type: precision_at_3 value: 34.014 - type: precision_at_5 value: 29.796 - type: recall_at_1 value: 2.258 - type: recall_at_10 value: 16.554 - type: recall_at_100 value: 48.439 - type: recall_at_1000 value: 
82.80499999999999 - type: recall_at_3 value: 7.283 - type: recall_at_5 value: 10.732 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.8858 - type: ap value: 13.835684144362109 - type: f1 value: 53.803351693244586 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.50650820599886 - type: f1 value: 60.84357825979259 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.52131044852134 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.59337187816654 - type: cos_sim_ap value: 73.23925826533437 - type: cos_sim_f1 value: 67.34693877551021 - type: cos_sim_precision value: 62.40432237730752 - type: cos_sim_recall value: 73.13984168865434 - type: dot_accuracy value: 85.31322644096085 - type: dot_ap value: 72.30723963807422 - type: dot_f1 value: 66.47051612112296 - type: dot_precision value: 62.0792305930845 - type: dot_recall value: 71.53034300791556 - type: euclidean_accuracy value: 85.61125350181797 - type: euclidean_ap value: 73.32843720487845 - type: euclidean_f1 value: 67.36549633745895 - type: euclidean_precision value: 64.60755813953489 - type: euclidean_recall value: 70.36939313984169 - type: manhattan_accuracy value: 85.63509566668654 - type: manhattan_ap value: 73.16658488311325 - type: manhattan_f1 value: 67.20597386434349 - type: manhattan_precision value: 63.60424028268551 - type: manhattan_recall value: 71.2401055408971 - type: max_accuracy value: 85.63509566668654 - type: max_ap value: 73.32843720487845 - type: max_f1 value: 67.36549633745895 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.33779640625606 - type: cos_sim_ap value: 84.83868375898157 - type: cos_sim_f1 value: 77.16506154017773 - type: cos_sim_precision value: 74.62064005753327 - type: cos_sim_recall value: 79.88912842623961 - type: dot_accuracy value: 88.02732176815307 - type: dot_ap value: 83.95089283763002 - type: dot_f1 value: 76.29635101196631 - type: dot_precision value: 73.31771720613288 - type: dot_recall value: 79.52725592854944 - type: euclidean_accuracy value: 88.44452206310397 - type: euclidean_ap value: 84.98384576824827 - type: euclidean_f1 value: 77.29311047696697 - type: euclidean_precision value: 74.51232583065381 - type: euclidean_recall value: 80.28949799815214 - type: manhattan_accuracy value: 88.47362906042613 - type: manhattan_ap value: 84.91421462218432 - type: manhattan_f1 value: 77.05107637204792 - type: manhattan_precision value: 74.74484256243214 - type: manhattan_recall value: 79.50415768401602 - type: max_accuracy value: 88.47362906042613 - type: max_ap value: 84.98384576824827 - type: max_f1 value: 77.29311047696697 license: mit language: - en --- Finetuned using the same data & 
library as [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search. It can also be used in vector databases for LLMs. ************* 🌟**Updates**🌟 ************* - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire: - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models. - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add a script to mine hard negatives and support adding an instruction during fine-tuning. - 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among models of the same size 🤗** - 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets. </details> ## Model List `bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to 
`bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search for relevant passages for a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and you can just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other, simpler models. For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results. All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune the bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker. </details> <details> <summary>2.
The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **We suggest using bge v1.5, which alleviates the issue of the similarity distribution.** Since we fine-tune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE models is approximately in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For `bge-*-v1.5`, we improved their retrieval ability when not using an instruction. Using no instruction causes only a slight degradation in retrieval performance compared with using one. So you can generate embeddings without an instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions to these short queries. **The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.** In all cases, no instruction needs to be added to the documents/passages. </details> ## Usage ### Usage for Embedding Model Here are some examples of using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more ways to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which will automatically add the instruction to each query # the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need an instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
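For instance, a minimal sketch of restricting encoding to a single GPU (the model id and sentences are placeholders, and a CUDA machine is assumed):

```python
import os

# Must be set before the model (and thus the CUDA context) is initialized
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use only GPU 0; "" would force CPU

from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
print(embeddings.shape)
```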
#### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For an s2p (short query to long passage) retrieval task, each short query should start with an instruction (for instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add an instruction to passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` ### Usage for Reranker Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by inputting a query and a passage to the reranker. The reranker is optimized based on cross-entropy loss, so the relevance score is not bounded to a specific range.
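If you need scores in a fixed range, a common workaround (not a feature of the library itself) is to squash the raw score with a sigmoid, which preserves the ranking order; a minimal sketch, assuming the raw scores come from `compute_score` as shown below:

```python
import math

def to_unit_interval(raw_score: float) -> float:
    # Map an unbounded reranker score to (0, 1) with a sigmoid.
    # The ordering is preserved, but the values are not calibrated probabilities.
    return 1.0 / (1.0 + math.exp(-raw_score))

print(to_unit_interval(5.2))   # ~0.995
print(to_unit_interval(-3.0))  # ~0.047
```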
#### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | 
[text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for 
evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly; it needs to be fine-tuned. For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by an embedding model. We train the cross-encoder on multilingual pair data. The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any questions or suggestions related to this project, feel free to open an issue or pull request. You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving it a star :star: and a citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
dhruvilHV/initial_ViT_model
dhruvilHV
2024-01-09T12:48:25Z
177
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:fair_face", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-08T09:49:25Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - fair_face metrics: - accuracy model-index: - name: initial_ViT_model results: - task: name: Image Classification type: image-classification dataset: name: fair_face type: fair_face config: '0.25' split: validation args: '0.25' metrics: - name: Accuracy type: accuracy value: 0.21252510498448055 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # initial_ViT_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fair_face dataset. It achieves the following results on the evaluation set: - Loss: 3.6347 - Accuracy: 0.2125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.7855 | 0.15 | 50 | 4.6444 | 0.0511 | | 4.4242 | 0.29 | 100 | 4.2124 | 0.1418 | | 4.0596 | 0.44 | 150 | 3.9402 | 0.1744 | | 3.859 | 0.59 | 200 | 3.7823 | 0.1956 | | 3.7392 | 0.74 | 250 | 3.6877 | 0.2105 | | 3.6424 | 0.88 | 300 | 3.6347 | 0.2125 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
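## How to use

A minimal inference sketch (hypothetical usage; the emitted label names depend on the `id2label` mapping saved with the checkpoint, which is not documented here, and `"face.jpg"` is a placeholder image path):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dhruvilHV/initial_ViT_model")
predictions = classifier("face.jpg")  # accepts a local path, URL, or PIL image
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```

Given the reported validation accuracy of 0.2125 after a single epoch, predictions should be treated accordingly.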
ugursa/FinancialBERT-Yahoo-Finance-Sentiment-Analysis
ugursa
2024-01-09T12:33:23Z
102
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:ugursa/Yahoo-Finance-News-Sentences", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-06T14:42:13Z
--- datasets: - ugursa/Yahoo-Finance-News-Sentences language: - en metrics: - accuracy - f1 - precision - recall library_name: transformers pipeline_tag: text-classification widget: - text: "Microsoft's financial backing for ChatGPT maker OpenAI may be subject to EU merger rules, EU antitrust regulators said on Tuesday, underscoring a similar warning from its UK peer in December." example_title: "Bearish Sentence" - text: "Volumes for CME lithium, launched in 2021, climbed to 20,307 metric tons in 2023 from 468 tons in 2022." example_title: "Bullish Sentence" - text: "Investors will get a window into America's largest banks later when the CEOs of JPMorgan Chase (JPM), Goldman Sachs (GS), and Bank of America (BAC), among others, give testimony before the Senate’s banking committee." example_title: "Neutral Sentence" ---
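A minimal usage sketch (hypothetical; the exact label names depend on the `id2label` mapping saved with the checkpoint, though the widget examples suggest bullish/bearish/neutral classes):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ugursa/FinancialBERT-Yahoo-Finance-Sentiment-Analysis",
)

sentence = ("Volumes for CME lithium, launched in 2021, climbed to "
            "20,307 metric tons in 2023 from 468 tons in 2022.")
print(classifier(sentence))  # e.g. [{"label": ..., "score": ...}]
```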
darinchau/checkpoints
darinchau
2024-01-09T12:22:35Z
61
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:darinchau/checkpoints", "base_model:finetune:darinchau/checkpoints", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T03:09:38Z
--- license: apache-2.0 base_model: darinchau/checkpoints tags: - generated_from_trainer model-index: - name: checkpoints results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints This model is a fine-tuned version of [darinchau/checkpoints](https://huggingface.co/darinchau/checkpoints) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.8289 - eval_cer: 79.9622 - eval_runtime: 31.6459 - eval_samples_per_second: 3.16 - eval_steps_per_second: 0.221 - epoch: 35.11 - step: 3300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
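## How to use

A minimal transcription sketch (hypothetical; `"sample.wav"` is a placeholder audio file, and the card does not say which language the checkpoint targets, though the CER metric suggests a character-based one):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="darinchau/checkpoints")
print(asr("sample.wav")["text"])
```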
gsl22/ellis-v1-emotion-leadership12
gsl22
2024-01-09T12:18:18Z
92
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T12:13:14Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: ellis-v1-emotion-leadership12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ellis-v1-emotion-leadership12 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3768 - Accuracy: 0.8757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.425 | 1.0 | 3154 | 0.3768 | 0.8757 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
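## How to use

A minimal inference sketch without the `pipeline` helper (hypothetical; the input sentence is a placeholder and the label names and their meanings are not documented on this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gsl22/ellis-v1-emotion-leadership12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Our team exceeded every goal this quarter.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs.tolist())})
```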
papahawk/devi-7b
papahawk
2024-01-09T12:12:44Z
13
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2305.18290", "arxiv:2310.16944", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:finetune:HuggingFaceH4/zephyr-7b-beta", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-07T05:37:39Z
--- tags: - generated_from_trainer license: mit datasets: - HuggingFaceH4/ultrachat_200k - HuggingFaceH4/ultrafeedback_binarized language: - en base_model: HuggingFaceH4/zephyr-7b-beta pipeline_tag: text-generation model-index: - name: devi-7b description: "Initial fork of Zephyr 7B β, with plans for GGML integration and further development." results: # AI2 Reasoning Challenge (25-Shot) - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm name: normalized accuracy value: 62.03071672354948 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # HellaSwag (10-shot) - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm name: normalized accuracy value: 84.35570603465445 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # DROP (3-shot) - task: type: text-generation name: Text Generation dataset: name: Drop (3-Shot) type: drop split: validation args: num_few_shot: 3 metrics: - type: f1 name: f1 score value: 9.662437080536909 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # TruthfulQA (0-shot) - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.44916942762855 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # GSM8k (5-shot) - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc name: accuracy value: 12.736921910538287 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # MMLU (5-Shot) - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc name: accuracy value: 61.07 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # Winogrande (5-shot) - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc name: accuracy value: 77.74269928966061 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta # AlpacaEval (taken from model card) - task: type: text-generation name: Text Generation dataset: name: AlpacaEval type: tatsu-lab/alpaca_eval metrics: - type: unknown name: win rate value: 0.9060 source: url: https://tatsu-lab.github.io/alpaca_eval/ # MT-Bench (taken from model card) - task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown name: score value: 7.34 source: url: https://huggingface.co/spaces/lmsys/mt-bench --- <!-- This model card has been generated automatically according to the information 
the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <h2 style='text-align: center'>Devi 7B</h2> <h2 style='text-align: center'>Fork of Zephyr 7B β</h2> <h2 style='text-align: center '><em>All thanks to HuggingFaceH4 for their work!</em> </h2> <img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat - **Chatbot Arena:** Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org ## Performance At the time of release, Zephyr-7B-β was the highest-ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks: | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) | |-------------|-----|----|---------------|--------------| | StableLM-Tuned-α | 7B| dSFT |2.75| -| | MPT-Chat | 7B |dSFT |5.42| -| | Xwin-LMv0.1 | 7B| dPPO| 6.19| 87.83| | Mistral-Instructv0.1 | 7B| - | 6.84 |-| | Zephyr-7b-α |7B| dDPO| 6.88| -| | **Zephyr-7b-β** 🪁 | **7B** | **dDPO** | **7.34** | **90.60** | | Falcon-Instruct | 40B |dSFT |5.17 |45.71| | Guanaco | 65B | SFT |6.41| 71.80| | Llama2-Chat | 70B |RLHF |6.86| 92.66| | Vicuna v1.3 | 33B |dSFT |7.12 |88.99| | WizardLM v1.0 | 70B |dSFT |7.71 |-| | Xwin-LM v0.1 | 70B |dPPO |- |95.57| | GPT-3.5-turbo | - |RLHF |7.94 |89.37| | Claude 2 | - |RLHF |8.06| 91.36| | GPT-4 | -| RLHF |8.99| 95.28| In particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/raxvt5ma16d7T23my34WC.png) However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models, and more research is needed to close the gap. ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat, and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities. You can find the datasets used for training Zephyr-7B-β [here](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66). Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) # <|system|> # You are a friendly chatbot who always responds in the style of a pirate.</s> # <|user|> # How many helicopters can a human eat in one sitting?</s> # <|assistant|> # Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food! ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are also unknown; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data During DPO training, this model achieves the following results on the evaluation set: - Loss: 0.7496 - Rewards/chosen: -4.5221 - Rewards/rejected: -8.3184 - Rewards/accuracies: 0.7812 - Rewards/margins: 3.7963 - Logps/rejected: -340.1541 - Logps/chosen: -299.4561 - Logits/rejected: -2.3081 - Logits/chosen: -2.3531 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results The table below shows the full set of DPO training metrics: | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6284 | 0.05 | 100 | 0.6098 | 0.0425 | -0.1872 | 0.7344 | 0.2297 | -258.8416 | -253.8099 | -2.7976 | -2.8234 | | 0.4908 | 0.1 | 200 | 0.5426 | -0.0279 | -0.6842 | 0.75 | 0.6563 | -263.8124 | -254.5145 | -2.7719 | -2.7960 | | 0.5264 | 0.15 | 300 | 0.5324 | 0.0414 | -0.9793 | 0.7656 | 1.0207 | -266.7627 | -253.8209 | -2.7892 | -2.8122 | | 0.5536 | 0.21 | 400 | 0.4957 | -0.0185 | -1.5276 | 0.7969 | 1.5091 | -272.2460 | -254.4203 | -2.8542 | -2.8764 | | 0.5362 | 0.26 | 500 | 0.5031 | -0.2630 | -1.5917 | 0.7812 | 1.3287 | -272.8869 | -256.8653 | -2.8702 | -2.8958 | | 0.5966 | 0.31 | 600 | 0.5963 | -0.2993 | -1.6491 | 0.7812 | 1.3499 | -273.4614 | -257.2279 | -2.8778 | -2.8986 | | 0.5014 | 0.36 | 700 | 0.5382 | -0.2859 | -1.4750 | 0.75 | 1.1891 | -271.7204 | -257.0942 | -2.7659 | -2.7869 | | 0.5334 | 0.41 | 800 | 0.5677 | -0.4289 | -1.8968 | 0.7969 | 1.4679 | -275.9378 | -258.5242 | -2.7053 | -2.7265 | | 0.5251 | 0.46 | 900 | 0.5772 | -0.2116 | -1.3107 | 0.7344 | 1.0991 | -270.0768 | -256.3507 | -2.8463 | -2.8662 | | 0.5205 | 0.52 | 1000 | 0.5262 | -0.3792 | -1.8585 | 0.7188 | 1.4793 | -275.5552 | -258.0276 | -2.7893 | -2.7979 | | 0.5094 | 0.57 | 1100 | 0.5433 | -0.6279 | -1.9368 | 0.7969 | 1.3089 | -276.3377 | -260.5136 | -2.7453 | -2.7536 | | 0.5837 | 0.62 | 1200 | 0.5349 | -0.3780 | -1.9584 | 0.7656 | 1.5804 | -276.5542 | -258.0154 | -2.7643 | -2.7756 | | 0.5214 | 0.67 | 1300 | 0.5732 | -1.0055 | -2.2306 | 0.7656 | 1.2251 | -279.2761 | -264.2903 | -2.6986 | -2.7113 | | 0.6914 | 0.72 | 1400 | 0.5137 | -0.6912 | -2.1775 | 0.7969 | 1.4863 | -278.7448 | -261.1467 | -2.7166 | -2.7275 | | 0.4655 | 0.77 | 1500 | 0.5090 | -0.7987 | -2.2930 | 0.7031 | 1.4943 | -279.8999 | -262.2220 | -2.6651 | -2.6838 | | 0.5731 | 0.83 | 1600 | 0.5312 | -0.8253 | -2.3520 | 0.7812 | 1.5268 | -280.4902 | -262.4876 | -2.6543 | -2.6728 | | 0.5233 | 0.88 | 1700 | 0.5206 | -0.4573 | -2.0951 | 0.7812 | 1.6377 | -277.9205 | -258.8084 | -2.6870 | -2.7097 | | 0.5593 | 0.93 | 1800 | 0.5231 | -0.5508 | -2.2000 | 0.7969 | 1.6492 | -278.9703 | -259.7433 | -2.6221 | -2.6519 | | 0.4967 | 0.98 | 1900 | 0.5290 | -0.5340 | -1.9570 | 0.8281 | 1.4230 | -276.5395 | -259.5749 | -2.6564 | -2.6878 | | 0.0921 | 1.03 | 2000 | 0.5368 | -1.1376 | -3.1615 | 0.7812 | 2.0239 | -288.5854 | -265.6111 | -2.6040 | -2.6345 | | 0.0733 | 1.08 | 2100 | 0.5453 
| -1.1045 | -3.4451 | 0.7656 | 2.3406 | -291.4208 | -265.2799 | -2.6289 | -2.6595 | | 0.0972 | 1.14 | 2200 | 0.5571 | -1.6915 | -3.9823 | 0.8125 | 2.2908 | -296.7934 | -271.1505 | -2.6471 | -2.6709 | | 0.1058 | 1.19 | 2300 | 0.5789 | -1.0621 | -3.8941 | 0.7969 | 2.8319 | -295.9106 | -264.8563 | -2.5527 | -2.5798 | | 0.2423 | 1.24 | 2400 | 0.5455 | -1.1963 | -3.5590 | 0.7812 | 2.3627 | -292.5599 | -266.1981 | -2.5414 | -2.5784 | | 0.1177 | 1.29 | 2500 | 0.5889 | -1.8141 | -4.3942 | 0.7969 | 2.5801 | -300.9120 | -272.3761 | -2.4802 | -2.5189 | | 0.1213 | 1.34 | 2600 | 0.5683 | -1.4608 | -3.8420 | 0.8125 | 2.3812 | -295.3901 | -268.8436 | -2.4774 | -2.5207 | | 0.0889 | 1.39 | 2700 | 0.5890 | -1.6007 | -3.7337 | 0.7812 | 2.1330 | -294.3068 | -270.2423 | -2.4123 | -2.4522 | | 0.0995 | 1.45 | 2800 | 0.6073 | -1.5519 | -3.8362 | 0.8281 | 2.2843 | -295.3315 | -269.7538 | -2.4685 | -2.5050 | | 0.1145 | 1.5 | 2900 | 0.5790 | -1.7939 | -4.2876 | 0.8438 | 2.4937 | -299.8461 | -272.1744 | -2.4272 | -2.4674 | | 0.0644 | 1.55 | 3000 | 0.5735 | -1.7285 | -4.2051 | 0.8125 | 2.4766 | -299.0209 | -271.5201 | -2.4193 | -2.4574 | | 0.0798 | 1.6 | 3100 | 0.5537 | -1.7226 | -4.2850 | 0.8438 | 2.5624 | -299.8200 | -271.4610 | -2.5367 | -2.5696 | | 0.1013 | 1.65 | 3200 | 0.5575 | -1.5715 | -3.9813 | 0.875 | 2.4098 | -296.7825 | -269.9498 | -2.4926 | -2.5267 | | 0.1254 | 1.7 | 3300 | 0.5905 | -1.6412 | -4.4703 | 0.8594 | 2.8291 | -301.6730 | -270.6473 | -2.5017 | -2.5340 | | 0.085 | 1.76 | 3400 | 0.6133 | -1.9159 | -4.6760 | 0.8438 | 2.7601 | -303.7296 | -273.3941 | -2.4614 | -2.4960 | | 0.065 | 1.81 | 3500 | 0.6074 | -1.8237 | -4.3525 | 0.8594 | 2.5288 | -300.4951 | -272.4724 | -2.4597 | -2.5004 | | 0.0755 | 1.86 | 3600 | 0.5836 | -1.9252 | -4.4005 | 0.8125 | 2.4753 | -300.9748 | -273.4872 | -2.4327 | -2.4716 | | 0.0746 | 1.91 | 3700 | 0.5789 | -1.9280 | -4.4906 | 0.8125 | 2.5626 | -301.8762 | -273.5149 | -2.4686 | -2.5115 | | 0.1348 | 1.96 | 3800 | 0.6015 | -1.8658 | -4.2428 | 0.8281 | 2.3769 | -299.3976 | -272.8936 | -2.4943 | -2.5393 | | 0.0217 | 2.01 | 3900 | 0.6122 | -2.3335 | -4.9229 | 0.8281 | 2.5894 | -306.1988 | -277.5699 | -2.4841 | -2.5272 | | 0.0219 | 2.07 | 4000 | 0.6522 | -2.9890 | -6.0164 | 0.8281 | 3.0274 | -317.1334 | -284.1248 | -2.4105 | -2.4545 | | 0.0119 | 2.12 | 4100 | 0.6922 | -3.4777 | -6.6749 | 0.7969 | 3.1972 | -323.7187 | -289.0121 | -2.4272 | -2.4699 | | 0.0153 | 2.17 | 4200 | 0.6993 | -3.2406 | -6.6775 | 0.7969 | 3.4369 | -323.7453 | -286.6413 | -2.4047 | -2.4465 | | 0.011 | 2.22 | 4300 | 0.7178 | -3.7991 | -7.4397 | 0.7656 | 3.6406 | -331.3667 | -292.2260 | -2.3843 | -2.4290 | | 0.0072 | 2.27 | 4400 | 0.6840 | -3.3269 | -6.8021 | 0.8125 | 3.4752 | -324.9908 | -287.5042 | -2.4095 | -2.4536 | | 0.0197 | 2.32 | 4500 | 0.7013 | -3.6890 | -7.3014 | 0.8125 | 3.6124 | -329.9841 | -291.1250 | -2.4118 | -2.4543 | | 0.0182 | 2.37 | 4600 | 0.7476 | -3.8994 | -7.5366 | 0.8281 | 3.6372 | -332.3356 | -293.2291 | -2.4163 | -2.4565 | | 0.0125 | 2.43 | 4700 | 0.7199 | -4.0560 | -7.5765 | 0.8438 | 3.5204 | -332.7345 | -294.7952 | -2.3699 | -2.4100 | | 0.0082 | 2.48 | 4800 | 0.7048 | -3.6613 | -7.1356 | 0.875 | 3.4743 | -328.3255 | -290.8477 | -2.3925 | -2.4303 | | 0.0118 | 2.53 | 4900 | 0.6976 | -3.7908 | -7.3152 | 0.8125 | 3.5244 | -330.1224 | -292.1431 | -2.3633 | -2.4047 | | 0.0118 | 2.58 | 5000 | 0.7198 | -3.9049 | -7.5557 | 0.8281 | 3.6508 | -332.5271 | -293.2844 | -2.3764 | -2.4194 | | 0.006 | 2.63 | 5100 | 0.7506 | -4.2118 | -7.9149 | 0.8125 | 3.7032 | -336.1194 | -296.3530 | -2.3407 | -2.3860 | 
| 0.0143 | 2.68 | 5200 | 0.7408 | -4.2433 | -7.9802 | 0.8125 | 3.7369 | -336.7721 | -296.6682 | -2.3509 | -2.3946 | | 0.0057 | 2.74 | 5300 | 0.7552 | -4.3392 | -8.0831 | 0.7969 | 3.7439 | -337.8013 | -297.6275 | -2.3388 | -2.3842 | | 0.0138 | 2.79 | 5400 | 0.7404 | -4.2395 | -7.9762 | 0.8125 | 3.7367 | -336.7322 | -296.6304 | -2.3286 | -2.3737 | | 0.0079 | 2.84 | 5500 | 0.7525 | -4.4466 | -8.2196 | 0.7812 | 3.7731 | -339.1662 | -298.7007 | -2.3200 | -2.3641 | | 0.0077 | 2.89 | 5600 | 0.7520 | -4.5586 | -8.3485 | 0.7969 | 3.7899 | -340.4545 | -299.8206 | -2.3078 | -2.3517 | | 0.0094 | 2.94 | 5700 | 0.7527 | -4.5542 | -8.3509 | 0.7812 | 3.7967 | -340.4790 | -299.7773 | -2.3062 | -2.3510 | | 0.0054 | 2.99 | 5800 | 0.7520 | -4.5169 | -8.3079 | 0.7812 | 3.7911 | -340.0493 | -299.4038 | -2.3081 | -2.3530 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.14.0 ## Citation If you find Zephyr-7B-β is useful in your work, please cite it with: ``` @misc{tunstall2023zephyr, title={Zephyr: Direct Distillation of LM Alignment}, author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf}, year={2023}, eprint={2310.16944}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.15 | | ARC (25-shot) | 62.03 | | HellaSwag (10-shot) | 84.36 | | MMLU (5-shot) | 61.07 | | TruthfulQA (0-shot) | 57.45 | | Winogrande (5-shot) | 77.74 | | GSM8K (5-shot) | 12.74 | | DROP (3-shot) | 9.66 |
Harshitha0813/intent-classification
Harshitha0813
2024-01-09T12:03:07Z
44
0
transformers
[ "transformers", "tf", "xlm-roberta", "text-classification", "generated_from_keras_callback", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-05T06:47:17Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_keras_callback model-index: - name: Harshitha0813/intent-classification results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Harshitha0813/intent-classification This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5007 - Validation Loss: 1.4990 - Train Accuracy: 0.4564 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.5007 | 1.4990 | 0.4564 | 0 | ### Framework versions - Transformers 4.32.1 - TensorFlow 2.15.0 - Datasets 2.12.0 - Tokenizers 0.13.2
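## How to use

Since the checkpoint ships TensorFlow weights, a minimal loading sketch might look like this (hypothetical; the intent label set is not documented on this card and the example utterance is a placeholder):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Harshitha0813/intent-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Book me a flight to Paris tomorrow", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```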
samwell/dqn-SpaceInvadersNoFrameskip-v41
samwell
2024-01-09T12:02:43Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T12:02:12Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 570.50 +/- 203.91 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samwell -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samwell -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga samwell ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
gagan3012/MetaModelv3
gagan3012
2024-01-09T12:01:23Z
1,549
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "MetaModelv3", "merge", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T22:55:00Z
--- license: apache-2.0 tags: - MetaModelv3 - merge --- # MetaModelv3 This model is a merge of the following models: * [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4) * [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__MetaModelv3) | Metric | Value | |-----------------------|---------------------------| | Avg. | 74.39 | | ARC (25-shot) | 71.16 | | HellaSwag (10-shot) | 88.39 | | MMLU (5-shot) | 66.32 | | TruthfulQA (0-shot) | 71.86 | | Winogrande (5-shot) | 83.35 | | GSM8K (5-shot) | 65.28 |
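A loading sketch for the merged checkpoint; the SOLAR-style prompt format is an assumption carried over from the CarbonVillain base models, not something the card confirms:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gagan3012/MetaModelv3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# prompt template assumed (SOLAR-style); check the base models' cards
prompt = "### User:\nExplain model merging in one sentence.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```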
gagan3012/MetaModelv2
gagan3012
2024-01-09T12:00:35Z
1,543
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "MetaModelv2", "merge", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-03T15:04:33Z
--- license: apache-2.0 tags: - MetaModelv2 - merge --- # MetaModelv2 This model is a merge of the following models: * [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct) * [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__MetaModelv2) | Metric | Value | |-----------------------|---------------------------| | Avg. | 74.24 | | ARC (25-shot) | 71.08 | | HellaSwag (10-shot) | 88.56 | | MMLU (5-shot) | 66.29 | | TruthfulQA (0-shot) | 71.94 | | Winogrande (5-shot) | 83.11 | | GSM8K (5-shot) | 64.44 |
gagan3012/MetaModel
gagan3012
2024-01-09T11:59:50Z
1,550
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-03T11:26:23Z
--- license: apache-2.0 tags: - merge - mergekit --- # MetaModel This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit): * [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4) * [kekmodel/StopCarbon-10.7B-v5](https://huggingface.co/kekmodel/StopCarbon-10.7B-v5) ## 🧩 Configuration ```yaml slices: - sources: - model: jeonsworld/CarbonVillain-en-10.7B-v4 layer_range: [0, 48] - model: kekmodel/StopCarbon-10.7B-v5 layer_range: [0, 48] merge_method: slerp base_model: jeonsworld/CarbonVillain-en-10.7B-v4 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # Dataset Card for Evaluation run of gagan3012/MetaModel <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [gagan3012/MetaModel](https://huggingface.co/gagan3012/MetaModel) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_gagan3012__MetaModel", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-04T14:09:43.780941](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__MetaModel/blob/main/results_2024-01-04T14-09-43.780941.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6664380298886512, "acc_stderr": 0.031642195230944255, "acc_norm": 0.6671639222858992, "acc_norm_stderr": 0.03228745343467652, "mc1": 0.5691554467564259, "mc1_stderr": 0.01733527247533237, "mc2": 0.7184177934834866, "mc2_stderr": 0.014995634120330182 }, "harness|arc:challenge|25": { "acc": 0.6843003412969283, "acc_stderr": 0.013582571095815291, "acc_norm": 0.7107508532423208, "acc_norm_stderr": 0.01325001257939344 }, "harness|hellaswag|10": { "acc": 0.7132045409281019, "acc_stderr": 0.004513409114983828, "acc_norm": 0.8844851623182632, "acc_norm_stderr": 0.0031898897894046684 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.743421052631579, "acc_stderr": 0.0355418036802569, "acc_norm": 0.743421052631579, "acc_norm_stderr": 0.0355418036802569 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.48, "acc_stderr": 0.05021167315686781, "acc_norm": 0.48, "acc_norm_stderr": 0.05021167315686781 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.625531914893617, "acc_stderr": 0.03163910665367291, "acc_norm": 0.625531914893617, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6413793103448275, "acc_stderr": 0.039966295748767186, "acc_norm": 0.6413793103448275, "acc_norm_stderr": 0.039966295748767186 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.5, "acc_stderr": 0.025751310131230234, "acc_norm": 0.5, "acc_norm_stderr": 0.025751310131230234 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42857142857142855, "acc_stderr": 0.0442626668137991, "acc_norm": 0.42857142857142855, 
"acc_norm_stderr": 0.0442626668137991 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8129032258064516, "acc_stderr": 0.022185710092252252, "acc_norm": 0.8129032258064516, "acc_norm_stderr": 0.022185710092252252 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8121212121212121, "acc_stderr": 0.03050193405942914, "acc_norm": 0.8121212121212121, "acc_norm_stderr": 0.03050193405942914 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8636363636363636, "acc_stderr": 0.024450155973189835, "acc_norm": 0.8636363636363636, "acc_norm_stderr": 0.024450155973189835 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.021995311963644244, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.021995311963644244 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6692307692307692, "acc_stderr": 0.02385479568097114, "acc_norm": 0.6692307692307692, "acc_norm_stderr": 0.02385479568097114 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37037037037037035, "acc_stderr": 0.02944316932303154, "acc_norm": 0.37037037037037035, "acc_norm_stderr": 0.02944316932303154 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7142857142857143, "acc_stderr": 0.029344572500634332, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.029344572500634332 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8422018348623853, "acc_stderr": 0.01563002297009246, "acc_norm": 0.8422018348623853, "acc_norm_stderr": 0.01563002297009246 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5740740740740741, "acc_stderr": 0.03372343271653062, "acc_norm": 0.5740740740740741, "acc_norm_stderr": 0.03372343271653062 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8565400843881856, "acc_stderr": 0.022818291821017012, "acc_norm": 0.8565400843881856, "acc_norm_stderr": 0.022818291821017012 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.672645739910314, "acc_stderr": 0.03149384670994131, "acc_norm": 0.672645739910314, "acc_norm_stderr": 0.03149384670994131 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7557251908396947, "acc_stderr": 0.03768335959728743, "acc_norm": 0.7557251908396947, "acc_norm_stderr": 0.03768335959728743 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.037494924487096966, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.037494924487096966 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 
0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.8446601941747572, "acc_stderr": 0.035865947385739734, "acc_norm": 0.8446601941747572, "acc_norm_stderr": 0.035865947385739734 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.02280138253459753, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.02280138253459753 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8084291187739464, "acc_stderr": 0.014072859310451949, "acc_norm": 0.8084291187739464, "acc_norm_stderr": 0.014072859310451949 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7572254335260116, "acc_stderr": 0.023083658586984204, "acc_norm": 0.7572254335260116, "acc_norm_stderr": 0.023083658586984204 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39664804469273746, "acc_stderr": 0.016361354769822468, "acc_norm": 0.39664804469273746, "acc_norm_stderr": 0.016361354769822468 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.025494259350694905, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.025494259350694905 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7777777777777778, "acc_stderr": 0.02313237623454333, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.02313237623454333 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5035460992907801, "acc_stderr": 0.02982674915328092, "acc_norm": 0.5035460992907801, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.49478487614080835, "acc_stderr": 0.012769541449652547, "acc_norm": 0.49478487614080835, "acc_norm_stderr": 0.012769541449652547 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.75, "acc_stderr": 0.026303648393696036, "acc_norm": 0.75, "acc_norm_stderr": 0.026303648393696036 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6813725490196079, "acc_stderr": 0.018850084696468712, "acc_norm": 0.6813725490196079, "acc_norm_stderr": 0.018850084696468712 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.746938775510204, "acc_stderr": 0.027833023871399677, "acc_norm": 0.746938775510204, "acc_norm_stderr": 0.027833023871399677 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421603, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421603 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5783132530120482, "acc_stderr": 0.038444531817709175, "acc_norm": 0.5783132530120482, "acc_norm_stderr": 
0.038444531817709175 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5691554467564259, "mc1_stderr": 0.01733527247533237, "mc2": 0.7184177934834866, "mc2_stderr": 0.014995634120330182 }, "harness|winogrande|5": { "acc": 0.8342541436464088, "acc_stderr": 0.010450899545370632 }, "harness|gsm8k|5": { "acc": 0.6535253980288097, "acc_stderr": 0.013107179054313398 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__MetaModel) | Metric | Value | |-----------------------|---------------------------| | Avg. | 74.4 | | ARC (25-shot) | 71.08 | | HellaSwag (10-shot) | 88.45 | | MMLU (5-shot) | 66.26 | | TruthfulQA (0-shot) | 71.84 | | Winogrande (5-shot) | 83.43 | | GSM8K (5-shot) | 65.35 |
Spanicin/Fulcrum_Aura
Spanicin
2024-01-09T11:51:32Z
11
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "HuggingFaceH4/zephyr-7b-beta", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T11:47:42Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - HuggingFaceH4/zephyr-7b-beta --- # Fulcrum_Aura Fulcrum_Aura is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Spanicin/Fulcrum_Aura" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
researchworkai/Sentiment-roBERTa-Twitter
researchworkai
2024-01-09T11:49:19Z
147
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "text-classification", "en", "dataset:tweet_eval", "arxiv:2010.12421", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-08T18:56:54Z
--- datasets: - tweet_eval language: - en --- # Twitter-roBERTa for Sentiment Analysis This is a roBERTa model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English. - Reference Paper: [_TweetEval_ (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). <b>Labels</b>: 0 -> Negative; 1 -> Neutral; 2 -> Positive ## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request

# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = "researchworkai/Sentiment-roBERTa-Twitter"  # this repo; no task suffix is needed here

tokenizer = AutoTokenizer.from_pretrained(MODEL)

# download label mapping from the official TweetEval repository linked above
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
    csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]

# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)

text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)

# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)

ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) positive 0.8466
2) neutral 0.1458
3) negative 0.0076
```
### BibTeX entry and citation info Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
    title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
    author = "Barbieri, Francesco and Camacho-Collados, Jose and Espinosa Anke, Luis and Neves, Leonardo",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.148",
    doi = "10.18653/v1/2020.findings-emnlp.148",
    pages = "1644--1650"
}
```
jysssacc/roberta-base_IA3_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-09T11:48:29Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-09T11:42:47Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: roberta-base_IA3_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_IA3_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 15.1197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 19.9021 | 1.0 | 157 | 21.8572 | | 19.0595 | 2.0 | 314 | 20.8634 | | 18.1837 | 3.0 | 471 | 18.9005 | | 15.0565 | 4.0 | 628 | 16.2004 | | 13.4717 | 5.0 | 785 | 15.1197 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
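A loading sketch for the adapter; the card does not document the downstream task, so `AutoPeftModel` is used to let the adapter config pick the head:
```python
from peft import AutoPeftModel
from transformers import AutoTokenizer

repo = "jysssacc/roberta-base_IA3_lr5e-05_bs4_epoch5_wd0.01"
# AutoPeftModel selects the base class from the task type stored in the adapter config
model = AutoPeftModel.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("IA3 keeps the base weights frozen.", return_tensors="pt")
outputs = model(**inputs)
print(type(outputs).__name__)  # reveals which (undocumented) task head was attached
```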
divinitas-jyi/ppo-LunarLander-v2
divinitas-jyi
2024-01-09T11:46:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T09:21:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo-MlpPolicy results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 285.31 +/- 19.02 name: mean_reward verified: false --- # **ppo-MlpPolicy** Agent playing **LunarLander-v2** This is a trained model of a **ppo-MlpPolicy** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# filename assumed; check the repo's file list if loading fails
checkpoint = load_from_hub(repo_id="divinitas-jyi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Joy28/videomae-base-finetuned-subset-200epochs
Joy28
2024-01-09T11:41:50Z
4
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-01-08T01:53:32Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-subset-200epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-subset-200epochs This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7635 - Accuracy: 0.7407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 11100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.6058 | 0.01 | 56 | 0.7442 | 0.7880 | | 0.4908 | 1.01 | 112 | 0.7775 | 0.7558 | | 0.5326 | 2.01 | 168 | 0.7973 | 0.7419 | | 0.4768 | 3.01 | 224 | 0.8451 | 0.7281 | | 0.4243 | 4.01 | 280 | 0.9361 | 0.6728 | | 0.6921 | 5.01 | 336 | 0.8979 | 0.7097 | | 0.3182 | 6.01 | 392 | 0.8852 | 0.7235 | | 0.6085 | 7.01 | 448 | 0.9224 | 0.7097 | | 0.4067 | 8.01 | 504 | 0.9631 | 0.6682 | | 0.47 | 9.01 | 560 | 0.9193 | 0.7465 | | 0.5058 | 10.01 | 616 | 0.8967 | 0.7650 | | 0.4187 | 11.01 | 672 | 0.7403 | 0.7834 | | 0.6033 | 12.01 | 728 | 1.0005 | 0.6221 | | 0.5032 | 13.01 | 784 | 1.1420 | 0.5899 | | 0.5967 | 14.01 | 840 | 1.2590 | 0.5484 | | 0.3103 | 15.01 | 896 | 0.9723 | 0.6544 | | 0.4201 | 16.01 | 952 | 1.1665 | 0.6406 | | 0.6246 | 17.01 | 1008 | 1.2497 | 0.4977 | | 0.6306 | 18.01 | 1064 | 1.3829 | 0.5668 | | 0.4179 | 19.01 | 1120 | 1.0787 | 0.5806 | | 0.5468 | 20.01 | 1176 | 1.1144 | 0.5714 | | 0.4166 | 21.01 | 1232 | 0.7674 | 0.6912 | | 0.3844 | 22.01 | 1288 | 0.9260 | 0.6959 | | 0.5138 | 23.01 | 1344 | 0.9093 | 0.7097 | | 0.792 | 24.01 | 1400 | 0.7327 | 0.7465 | | 0.5944 | 25.01 | 1456 | 0.8933 | 0.7650 | | 0.4855 | 26.01 | 1512 | 0.9830 | 0.6636 | | 0.6896 | 27.01 | 1568 | 0.7896 | 0.6590 | | 0.3617 | 28.01 | 1624 | 0.8900 | 0.6544 | | 0.6362 | 29.01 | 1680 | 1.0237 | 0.6912 | | 0.6475 | 30.01 | 1736 | 1.1399 | 0.6037 | | 0.5088 | 31.01 | 1792 | 0.7190 | 0.7742 | | 0.7271 | 32.01 | 1848 | 0.9492 | 0.6359 | | 0.3171 | 33.01 | 1904 | 0.9431 | 0.7281 | | 0.5847 | 34.01 | 1960 | 0.7997 | 0.7235 | | 0.4703 | 35.01 | 2016 | 0.9506 | 0.7051 | | 0.4995 | 36.01 | 2072 | 1.0830 | 0.7005 | | 0.5682 | 37.01 | 2128 | 1.0100 | 0.7005 | | 0.6424 | 38.01 | 2184 | 0.9587 | 0.6452 | | 0.5897 | 39.01 | 2240 | 0.8807 | 0.7097 | | 0.5222 | 40.01 | 2296 | 1.1219 | 0.6682 | | 0.5239 | 41.01 | 2352 | 1.0848 | 0.6406 | | 0.5957 | 42.01 | 2408 | 0.9640 | 0.6866 | | 0.5279 | 43.01 | 2464 | 1.0291 | 0.5853 | | 0.3545 | 44.01 | 2520 | 0.8908 | 0.6636 | | 0.6066 | 45.01 | 2576 | 1.2505 | 0.6406 | | 0.3658 | 46.01 | 2632 | 0.8362 | 0.6866 | | 0.5454 | 47.01 | 2688 | 1.3975 | 0.5622 | | 0.5956 | 48.01 | 2744 | 0.8236 | 0.6590 | | 0.4107 | 49.01 | 2800 | 1.2610 | 0.6267 | | 0.462 | 50.01 | 2856 | 1.2553 | 0.6406 | | 0.4837 | 51.01 | 2912 | 1.0389 | 0.6359 | | 0.621 | 52.01 | 2968 | 
0.8281 | 0.7235 | | 0.4293 | 53.01 | 3024 | 1.0426 | 0.6267 | | 0.4255 | 54.01 | 3080 | 1.2942 | 0.5806 | | 0.5607 | 55.01 | 3136 | 1.1234 | 0.6498 | | 0.3104 | 56.01 | 3192 | 1.0643 | 0.6590 | | 0.3335 | 57.01 | 3248 | 1.2160 | 0.6590 | | 0.4232 | 58.01 | 3304 | 1.3532 | 0.5806 | | 0.6238 | 59.01 | 3360 | 0.9208 | 0.7005 | | 0.369 | 60.01 | 3416 | 1.2186 | 0.5530 | | 0.3874 | 61.01 | 3472 | 1.1746 | 0.6452 | | 0.3421 | 62.01 | 3528 | 1.2017 | 0.5945 | | 0.4243 | 63.01 | 3584 | 1.0288 | 0.6728 | | 0.2806 | 64.01 | 3640 | 0.8483 | 0.7419 | | 0.5357 | 65.01 | 3696 | 1.0890 | 0.6359 | | 0.5155 | 66.01 | 3752 | 1.1885 | 0.6359 | | 0.4367 | 67.01 | 3808 | 1.0738 | 0.6820 | | 0.48 | 68.01 | 3864 | 1.0894 | 0.6866 | | 0.4703 | 69.01 | 3920 | 1.2252 | 0.6498 | | 0.4531 | 70.01 | 3976 | 1.0584 | 0.6498 | | 0.2898 | 71.01 | 4032 | 1.7486 | 0.5576 | | 0.3684 | 72.01 | 4088 | 1.0524 | 0.6406 | | 0.2752 | 73.01 | 4144 | 1.2744 | 0.6728 | | 0.3092 | 74.01 | 4200 | 1.3918 | 0.5806 | | 0.3507 | 75.01 | 4256 | 1.4599 | 0.6544 | | 0.4722 | 76.01 | 4312 | 1.0549 | 0.7143 | | 0.4059 | 77.01 | 4368 | 1.2727 | 0.6728 | | 0.2734 | 78.01 | 4424 | 1.1258 | 0.6959 | | 0.4168 | 79.01 | 4480 | 0.9788 | 0.7189 | | 0.4456 | 80.01 | 4536 | 1.4757 | 0.6544 | | 0.4519 | 81.01 | 4592 | 1.2796 | 0.6820 | | 0.5283 | 82.01 | 4648 | 1.2542 | 0.7051 | | 0.4738 | 83.01 | 4704 | 1.2781 | 0.6083 | | 0.2128 | 84.01 | 4760 | 1.0077 | 0.6866 | | 0.3262 | 85.01 | 4816 | 1.0287 | 0.6820 | | 0.3631 | 86.01 | 4872 | 1.3574 | 0.6544 | | 0.4085 | 87.01 | 4928 | 1.1976 | 0.7235 | | 0.3582 | 88.01 | 4984 | 1.4126 | 0.6544 | | 0.3564 | 89.01 | 5040 | 1.3488 | 0.6406 | | 0.4207 | 90.01 | 5096 | 1.0565 | 0.7005 | | 0.4307 | 91.01 | 5152 | 0.9833 | 0.7281 | | 0.3863 | 92.01 | 5208 | 0.9340 | 0.6912 | | 0.2949 | 93.01 | 5264 | 0.9835 | 0.7143 | | 0.2957 | 94.01 | 5320 | 1.1397 | 0.7235 | | 0.3767 | 95.01 | 5376 | 1.4135 | 0.6221 | | 0.4949 | 96.01 | 5432 | 1.0483 | 0.7189 | | 0.3058 | 97.01 | 5488 | 1.8241 | 0.5530 | | 0.3406 | 98.01 | 5544 | 1.7386 | 0.5760 | | 0.2319 | 99.01 | 5600 | 1.4739 | 0.6175 | | 0.5261 | 100.01 | 5656 | 1.0822 | 0.7143 | | 0.4181 | 101.01 | 5712 | 1.2876 | 0.6728 | | 0.243 | 102.01 | 5768 | 1.0783 | 0.7235 | | 0.2603 | 103.01 | 5824 | 1.4557 | 0.6129 | | 0.4892 | 104.01 | 5880 | 1.2557 | 0.6912 | | 0.3073 | 105.01 | 5936 | 1.3899 | 0.5991 | | 0.3601 | 106.01 | 5992 | 1.2048 | 0.6820 | | 0.4371 | 107.01 | 6048 | 1.3645 | 0.6866 | | 0.5712 | 108.01 | 6104 | 1.2281 | 0.6636 | | 0.3697 | 109.01 | 6160 | 1.4402 | 0.6544 | | 0.2978 | 110.01 | 6216 | 1.3769 | 0.6912 | | 0.303 | 111.01 | 6272 | 1.3096 | 0.6959 | | 0.4606 | 112.01 | 6328 | 1.2236 | 0.7005 | | 0.2554 | 113.01 | 6384 | 1.2662 | 0.6959 | | 0.3033 | 114.01 | 6440 | 1.2476 | 0.6406 | | 0.3025 | 115.01 | 6496 | 1.0474 | 0.7143 | | 0.3513 | 116.01 | 6552 | 1.4692 | 0.6452 | | 0.4205 | 117.01 | 6608 | 1.2675 | 0.6912 | | 0.3898 | 118.01 | 6664 | 1.4018 | 0.6590 | | 0.2184 | 119.01 | 6720 | 1.2402 | 0.6959 | | 0.319 | 120.01 | 6776 | 1.0747 | 0.7097 | | 0.2455 | 121.01 | 6832 | 1.3515 | 0.7051 | | 0.2138 | 122.01 | 6888 | 1.5175 | 0.6682 | | 0.3805 | 123.01 | 6944 | 1.4817 | 0.6820 | | 0.3942 | 124.01 | 7000 | 1.5235 | 0.6221 | | 0.2207 | 125.01 | 7056 | 1.6295 | 0.5945 | | 0.2217 | 126.01 | 7112 | 1.3348 | 0.6912 | | 0.3173 | 127.01 | 7168 | 1.3566 | 0.7097 | | 0.4952 | 128.01 | 7224 | 1.2188 | 0.7327 | | 0.3238 | 129.01 | 7280 | 1.2574 | 0.7143 | | 0.1525 | 130.01 | 7336 | 1.5508 | 0.6313 | | 0.2518 | 131.01 | 7392 | 1.3058 | 0.6912 | | 0.4523 | 132.01 | 7448 | 1.7539 
| 0.6313 | | 0.3732 | 133.01 | 7504 | 1.4478 | 0.6820 | | 0.2432 | 134.01 | 7560 | 1.3595 | 0.6912 | | 0.2798 | 135.01 | 7616 | 1.5007 | 0.6866 | | 0.3436 | 136.01 | 7672 | 1.3162 | 0.7465 | | 0.3033 | 137.01 | 7728 | 1.3700 | 0.7051 | | 0.3457 | 138.01 | 7784 | 1.1052 | 0.7465 | | 0.1381 | 139.01 | 7840 | 1.5786 | 0.6959 | | 0.3067 | 140.01 | 7896 | 1.5155 | 0.6912 | | 0.269 | 141.01 | 7952 | 1.2751 | 0.7512 | | 0.2646 | 142.01 | 8008 | 1.6017 | 0.6774 | | 0.3933 | 143.01 | 8064 | 1.4294 | 0.7005 | | 0.6315 | 144.01 | 8120 | 1.3814 | 0.6866 | | 0.2814 | 145.01 | 8176 | 1.1689 | 0.7512 | | 0.2749 | 146.01 | 8232 | 1.3208 | 0.7005 | | 0.3966 | 147.01 | 8288 | 1.2817 | 0.7189 | | 0.1787 | 148.01 | 8344 | 1.4568 | 0.7189 | | 0.3006 | 149.01 | 8400 | 1.3312 | 0.7143 | | 0.2871 | 150.01 | 8456 | 1.5808 | 0.6452 | | 0.2018 | 151.01 | 8512 | 1.6682 | 0.6267 | | 0.2698 | 152.01 | 8568 | 1.4281 | 0.6590 | | 0.162 | 153.01 | 8624 | 1.4369 | 0.7051 | | 0.3961 | 154.01 | 8680 | 1.3771 | 0.7143 | | 0.4034 | 155.01 | 8736 | 1.5444 | 0.6452 | | 0.2462 | 156.01 | 8792 | 1.4677 | 0.6728 | | 0.2564 | 157.01 | 8848 | 1.6085 | 0.6590 | | 0.2905 | 158.01 | 8904 | 1.3037 | 0.6912 | | 0.2762 | 159.01 | 8960 | 1.3974 | 0.7051 | | 0.1604 | 160.01 | 9016 | 1.5176 | 0.6959 | | 0.2399 | 161.01 | 9072 | 1.4504 | 0.7143 | | 0.3398 | 162.01 | 9128 | 1.4675 | 0.6728 | | 0.2495 | 163.01 | 9184 | 1.3757 | 0.7005 | | 0.3076 | 164.01 | 9240 | 1.3699 | 0.7051 | | 0.2491 | 165.01 | 9296 | 1.4333 | 0.7005 | | 0.1666 | 166.01 | 9352 | 1.6465 | 0.6313 | | 0.1871 | 167.01 | 9408 | 1.6614 | 0.6544 | | 0.2169 | 168.01 | 9464 | 1.8141 | 0.6175 | | 0.3918 | 169.01 | 9520 | 1.3402 | 0.7097 | | 0.2697 | 170.01 | 9576 | 1.4295 | 0.6774 | | 0.2261 | 171.01 | 9632 | 1.5952 | 0.6452 | | 0.1894 | 172.01 | 9688 | 1.5468 | 0.6590 | | 0.1714 | 173.01 | 9744 | 1.4434 | 0.6636 | | 0.3137 | 174.01 | 9800 | 1.5525 | 0.6313 | | 0.267 | 175.01 | 9856 | 1.6447 | 0.6452 | | 0.0797 | 176.01 | 9912 | 1.5593 | 0.6682 | | 0.2698 | 177.01 | 9968 | 1.3952 | 0.7005 | | 0.1364 | 178.01 | 10024 | 1.6720 | 0.6498 | | 0.2342 | 179.01 | 10080 | 1.6315 | 0.6682 | | 0.1909 | 180.01 | 10136 | 1.5374 | 0.7051 | | 0.2234 | 181.01 | 10192 | 1.5861 | 0.7097 | | 0.3425 | 182.01 | 10248 | 1.5664 | 0.6912 | | 0.4092 | 183.01 | 10304 | 1.6135 | 0.6774 | | 0.2427 | 184.01 | 10360 | 1.5366 | 0.6866 | | 0.3751 | 185.01 | 10416 | 1.5561 | 0.6959 | | 0.1831 | 186.01 | 10472 | 1.6049 | 0.7005 | | 0.2207 | 187.01 | 10528 | 1.6072 | 0.6959 | | 0.1096 | 188.01 | 10584 | 1.5016 | 0.7097 | | 0.2417 | 189.01 | 10640 | 1.5027 | 0.7097 | | 0.2974 | 190.01 | 10696 | 1.4897 | 0.7097 | | 0.2296 | 191.01 | 10752 | 1.4927 | 0.7235 | | 0.3323 | 192.01 | 10808 | 1.4947 | 0.7235 | | 0.3002 | 193.01 | 10864 | 1.5225 | 0.7143 | | 0.23 | 194.01 | 10920 | 1.4965 | 0.7189 | | 0.3147 | 195.01 | 10976 | 1.5123 | 0.7051 | | 0.1344 | 196.01 | 11032 | 1.5192 | 0.7051 | | 0.1843 | 197.01 | 11088 | 1.5235 | 0.7097 | | 0.1902 | 198.0 | 11100 | 1.5238 | 0.7097 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.13.1 - Datasets 2.16.1 - Tokenizers 0.15.0
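A minimal inference sketch; 16 frames at 224×224 is the VideoMAE default and random frames stand in for a real decoded clip:
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "Joy28/videomae-base-finetuned-subset-200epochs"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 random RGB frames as a stand-in for a real video clip
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```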
qmeeus/whisper-small-multilingual-spoken-ner-end2end-lora
qmeeus
2024-01-09T11:41:33Z
3
0
peft
[ "peft", "safetensors", "whisper", "generated_from_trainer", "dataset:facebook/voxpopuli", "base_model:openai/whisper-small", "base_model:adapter:openai/whisper-small", "license:apache-2.0", "model-index", "region:us" ]
null
2024-01-09T11:35:29Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: openai/whisper-small datasets: - facebook/voxpopuli metrics: - wer model-index: - name: WhisperForSpokenNER-end2end results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: facebook/voxpopuli de+es+fr+nl type: facebook/voxpopuli split: de+es+fr+nl metrics: - type: wer value: 0.38886263390044107 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WhisperForSpokenNER-end2end This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the facebook/voxpopuli de+es+fr+nl dataset. It achieves the following results on the evaluation set: - Loss: 0.3381 - Wer: 0.3889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.3436 | 0.36 | 200 | 1.8791 | 0.8871 | | 1.1682 | 0.71 | 400 | 1.0307 | 0.5048 | | 0.7321 | 1.07 | 600 | 0.6300 | 0.3665 | | 0.4564 | 1.43 | 800 | 0.4381 | 0.3515 | | 0.4095 | 1.79 | 1000 | 0.4027 | 0.3330 | | 0.3813 | 2.14 | 1200 | 0.3847 | 0.3360 | | 0.3667 | 2.5 | 1400 | 0.3734 | 0.3392 | | 0.3583 | 2.86 | 1600 | 0.3649 | 0.3490 | | 0.3454 | 3.22 | 1800 | 0.3588 | 0.3572 | | 0.3422 | 3.57 | 2000 | 0.3537 | 0.3705 | | 0.3371 | 3.93 | 2200 | 0.3503 | 0.3811 | | 0.3291 | 4.29 | 2400 | 0.3475 | 0.3678 | | 0.324 | 4.65 | 2600 | 0.3451 | 0.3670 | | 0.3262 | 5.0 | 2800 | 0.3431 | 0.3710 | | 0.3168 | 5.36 | 3000 | 0.3419 | 0.3847 | | 0.3178 | 5.72 | 3200 | 0.3406 | 0.3833 | | 0.3136 | 6.08 | 3400 | 0.3400 | 0.3853 | | 0.3092 | 6.43 | 3600 | 0.3393 | 0.3896 | | 0.3106 | 6.79 | 3800 | 0.3389 | 0.3900 | | 0.3057 | 7.15 | 4000 | 0.3388 | 0.3803 | | 0.3087 | 7.51 | 4200 | 0.3383 | 0.3941 | | 0.308 | 7.86 | 4400 | 0.3382 | 0.3874 | | 0.3036 | 8.22 | 4600 | 0.3381 | 0.3896 | | 0.3087 | 8.58 | 4800 | 0.3380 | 0.3910 | | 0.3079 | 8.94 | 5000 | 0.3381 | 0.3889 | ### Framework versions - PEFT 0.7.1.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
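A loading sketch for the adapter; whether the vanilla `WhisperForConditionalGeneration` class reproduces the end-to-end spoken-NER decoding used in training is an assumption:
```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "qmeeus/whisper-small-multilingual-spoken-ner-end2end-lora")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# any 16 kHz mono waveform works; one second of silence stands in for real speech
audio = np.zeros(16000, dtype=np.float32)
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
with torch.no_grad():
    ids = model.generate(input_features=features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```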
VegetaSama/bert_key
VegetaSama
2024-01-09T11:39:55Z
4
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2024-01-09T11:38:15Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # bert_key This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("VegetaSama/bert_key") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 17 * Number of training documents: 10000 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | restaurant - meal - sandwich - food - lunch | 86 | -1_restaurant_meal_sandwich_food | | 0 | restaurant - drinks - dinner - bar - steak | 2059 | 0_restaurant_drinks_dinner_bar | | 1 | mexican food - tacos - taco - chips salsa - salsa | 2789 | 1_mexican food_tacos_taco_chips salsa | | 2 | shop - shopping - nordstrom - store - customer service | 731 | 2_shop_shopping_nordstrom_store | | 3 | thai food - chinese food - pad thai - thai - fried rice | 701 | 3_thai food_chinese food_pad thai_thai | | 4 | best pizza - pizza good - good pizza - pizza - pizzeria | 594 | 4_best pizza_pizza good_good pizza_pizza | | 5 | scottsdale - phoenix - restaurant - bbq - arizona | 586 | 5_scottsdale_phoenix_restaurant_bbq | | 6 | burger - good burger - burgers - burger fries - restaurant | 443 | 6_burger_good burger_burgers_burger fries | | 7 | restaurant - hostess - dinner - waiter - waitress | 354 | 7_restaurant_hostess_dinner_waiter | | 8 | best sushi - sushi - sushi place - sushi bar - spicy tuna | 321 | 8_best sushi_sushi_sushi place_sushi bar | | 9 | manicure - massage - pedicure - salon - nail | 294 | 9_manicure_massage_pedicure_salon | | 10 | hotels - hotel - resort - marriott - amenities | 288 | 10_hotels_hotel_resort_marriott | | 11 | coffee shop - coffee - starbucks - coffee shops - good coffee | 215 | 11_coffee shop_coffee_starbucks_coffee shops | | 12 | breakfast - pancakes - protein pancakes - bakery - lunch | 211 | 12_breakfast_pancakes_protein pancakes_bakery | | 13 | hike - hiking - trails - trail - south mountain | 135 | 13_hike_hiking_trails_trail | | 14 | downtown phoenix - central phoenix - restaurants - phoenix area - phoenix | 105 | 14_downtown phoenix_central phoenix_restaurants_phoenix area | | 15 | vets - vet - veterinary - pets - petsmart | 88 | 15_vets_vet_veterinary_pets | </details> ## Training hyperparameters * calculate_probabilities: True * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 5 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.24.3 * HDBSCAN: 0.8.33 * UMAP: 0.5.5 * Pandas: 2.0.3 * Scikit-Learn: 1.3.0 * Sentence-transformers: 2.2.2 * Transformers: 4.32.1 * Numba: 0.58.1 * Plotly: 5.9.0 * Python: 3.11.5
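To assign topics to new documents, the standard `transform` call works on the loaded model (example texts are invented):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("VegetaSama/bert_key")

docs = [
    "Best tacos and salsa I've had in Phoenix",
    "The resort spa offered a great massage and pedicure",
]
topics, probs = topic_model.transform(docs)
for doc, topic in zip(docs, topics):
    # top three keywords of the assigned topic
    print(topic, [word for word, _ in topic_model.get_topic(topic)[:3]], "<-", doc)
```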
hynky/codellama-7b-sft-lora-func-names-java-4bit
hynky
2024-01-09T11:35:40Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "region:us" ]
null
2023-12-29T13:39:19Z
--- library_name: peft base_model: codellama/CodeLlama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

config = PeftConfig.from_pretrained("hynky/codellama-7b-sft-lora-func-names-java-4bit")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf",
                                             torch_dtype='auto',
                                             device_map='auto',
                                             offload_folder="offload",
                                             offload_state_dict=True)
model = PeftModel.from_pretrained(model, "hynky/codellama-7b-sft-lora-func-names-java-4bit")
device = next(model.parameters()).device  # inputs must live on the same device as the model

def generate_code(sample, max_new_tokens=200):
    batch = tokenizer(sample, return_tensors='pt').to(device)
    with torch.cuda.amp.autocast():
        output_tokens = model.generate(**batch, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_tokens[0], skip_special_tokens=True)

print(generate_code("public class AddTwoIntegers("))
```
### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0
Aedelon/ppo-Huggy
Aedelon
2024-01-09T11:28:33Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-09T11:28:29Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Aedelon/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Spanicin/Fulcrum_Nova
Spanicin
2024-01-09T11:19:20Z
12
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T11:15:00Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # Fulcrum_Nova Fulcrum_Nova is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Spanicin/Fulcrum_Nova" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
slay/mistral_7b_guanaco
slay
2024-01-09T11:18:36Z
1
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-07T19:47:19Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
wave1art/q-taxi-v3
wave1art
2024-01-09T11:15:45Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T11:15:42Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage
```python
model = load_from_hub(repo_id="wave1art/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
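The snippet above assumes a `load_from_hub` helper from the course materials. A self-contained sketch of the same flow, assuming the pickle follows the Hugging Face Deep RL course layout (a dict with `qtable` and `env_id` keys, which is an assumption worth checking against the file; `gymnasium` is used here, while older course notebooks used `gym`):
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent dictionary
path = hf_hub_download(repo_id="wave1art/q-taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    # Act greedily with respect to the learned Q-table
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```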
Rinchindugar/my_awesome_opus_books_model
Rinchindugar
2024-01-09T11:13:49Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-08T04:54:45Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6068 - Bleu: 5.6146 - Gen Len: 17.628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8511 | 1.0 | 6355 | 1.6309 | 5.4324 | 17.638 | | 1.8186 | 2.0 | 12710 | 1.6068 | 5.6146 | 17.628 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
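The card gives no usage example. Since opus_books fine-tunes of t5-small in the Transformers docs translate English to French with a task prefix, here is a minimal inference sketch under that assumption (the language pair is not stated in the card itself):
```python
from transformers import pipeline

# T5 checkpoints expect the task as a text prefix in the input
translator = pipeline("translation", model="Rinchindugar/my_awesome_opus_books_model")
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text)[0]["translation_text"])
```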
s3nh/Eileithyia-toxicqa
s3nh
2024-01-09T11:13:46Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:Undi95/toxicqa-Llama2-13B", "base_model:merge:Undi95/toxicqa-Llama2-13B", "base_model:athirdpath/Eileithyia-13B", "base_model:merge:athirdpath/Eileithyia-13B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T11:09:31Z
--- base_model: - Undi95/toxicqa-Llama2-13B - athirdpath/Eileithyia-13B tags: - mergekit - merge --- # Eileithyia-toxicqa This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Undi95/toxicqa-Llama2-13B](https://huggingface.co/Undi95/toxicqa-Llama2-13B) * [athirdpath/Eileithyia-13B](https://huggingface.co/athirdpath/Eileithyia-13B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Undi95/toxicqa-Llama2-13B dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.3, 0.6, 0.4, 0.8, 1.0] - filter: mlp value: [0.7, 0.4, 0.6, 0.2, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: athirdpath/Eileithyia-13B - layer_range: [0, 32] model: Undi95/toxicqa-Llama2-13B ```
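The merge yields a standard Llama-architecture checkpoint, so it loads with plain Transformers. A minimal sketch (the prompt format is an assumption, since neither parent's template is reproduced in this card); note that the per-filter `t` lists vary the interpolation weight across layer depth for attention and MLP tensors, with the scalar 0.5 as the fallback for everything else:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "s3nh/Eileithyia-toxicqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Tell me a short story about a lighthouse.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```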
ntc-ai/SDXL-LoRA-slider.Crystal-Ball-Photography
ntc-ai
2024-01-09T11:13:32Z
40
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-09T11:13:29Z
--- language: - en thumbnail: "images/evaluate/Crystal Ball Photography.../Crystal Ball Photography_17_3.0.png" widget: - text: Crystal Ball Photography output: url: images/Crystal Ball Photography_17_3.0.png - text: Crystal Ball Photography output: url: images/Crystal Ball Photography_19_3.0.png - text: Crystal Ball Photography output: url: images/Crystal Ball Photography_20_3.0.png - text: Crystal Ball Photography output: url: images/Crystal Ball Photography_21_3.0.png - text: Crystal Ball Photography output: url: images/Crystal Ball Photography_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "Crystal Ball Photography" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - Crystal Ball Photography (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/Crystal Ball Photography_17_-3.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_17_0.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_17_3.0.png" width=256 height=256 /> | | <img src="images/Crystal Ball Photography_19_-3.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_19_0.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_19_3.0.png" width=256 height=256 /> | | <img src="images/Crystal Ball Photography_20_-3.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_20_0.0.png" width=256 height=256 /> | <img src="images/Crystal Ball Photography_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` Crystal Ball Photography ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.Crystal-Ball-Photography', weight_name='Crystal Ball Photography.safetensors', adapter_name="Crystal Ball Photography") # Activate the LoRA pipe.set_adapters(["Crystal Ball Photography"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, Crystal Ball Photography" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 960+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. 
## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
FeleliHasima/Urakata_Style
FeleliHasima
2024-01-09T11:07:01Z
2
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:Lykon/AnyLoRA", "base_model:adapter:Lykon/AnyLoRA", "license:unknown", "region:us" ]
text-to-image
2024-01-09T11:06:56Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: <lora:urakata20140107:1>,hatsune miku,stage, parameters: negative_prompt: >- verybadimagenegative_v1.3, negative_hand-neg, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.5), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), lowers, bad hands, missing fingers, extra digit,bad hands, missing fingers, (((extra arms and legs))), output: url: images/001.png - text: '<lora:urakata20140107:1>,cityhousemaid,maid headdress,maid apron, ' parameters: negative_prompt: >- verybadimagenegative_v1.3, negative_hand-neg, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.5), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), lowers, bad hands, missing fingers, extra digit,bad hands, missing fingers, (((extra arms and legs))), output: url: images/002.png - text: 1girl,<lora:urakata001010:1>,miko, parameters: negative_prompt: FastNegativeV2, output: url: images/003.png - text: 1girl,<lora:urakata001010:1>,maid, parameters: negative_prompt: FastNegativeV2, output: url: images/004.png base_model: Lykon/AnyLoRA instance_prompt: urakata license: unknown --- # 裏方/Urakata Style <Gallery /> ## Model description イラストレーター 裏方の画風lycorisです。 学習に使用した画像は全て自作したものであり、イラストレーター本人の著作物は一切使用していません。 このモデルはどういう使い方をしてもらっても構いません。 This model is a LyCORIS of the art style of the illustrator "Urakata". All images used for training are self-made, and no works of the illustrator themselves have been used. You may use this model in any way you see fit. ## Trigger words You should use `urakata` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/FeleliHasima/Urakata_Style/tree/main) them in the Files & versions tab.
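A minimal diffusers sketch for applying the style on the AnyLoRA base named above (the LoRA file name below is an assumption inferred from the widget prompts; check the Files tab for the real name, and note that LyCORIS files sometimes need conversion before `load_lora_weights` accepts them):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/AnyLoRA", torch_dtype=torch.float16
).to("cuda")

# weight_name is hypothetical; use the actual .safetensors file from the repo
pipe.load_lora_weights("FeleliHasima/Urakata_Style", weight_name="urakata20140107.safetensors")

image = pipe("1girl, urakata, maid", num_inference_steps=25).images[0]
image.save("urakata_maid.png")
```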
shaikhussain2344/distilbert-base-uncased-finetuned-imdb-1
shaikhussain2344
2024-01-09T11:05:04Z
55
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-09T11:00:57Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: shaikhussain2344/distilbert-base-uncased-finetuned-imdb-1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # shaikhussain2344/distilbert-base-uncased-finetuned-imdb-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8442 - Validation Loss: 2.5619 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8442 | 2.5619 | 0 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
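The card omits a usage example. A minimal fill-mask sketch, pinned to TensorFlow since the repo ships TF weights (an assumption based on its `tf` tag):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="shaikhussain2344/distilbert-base-uncased-finetuned-imdb-1",
    framework="tf",  # the repository carries TensorFlow weights
)
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```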
ashutoshsharma58/indian_food_image_detection
ashutoshsharma58
2024-01-09T10:46:00Z
175
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "code", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T12:07:26Z
--- license: apache-2.0 language: - en tags: - code --- ### Model Description This model classifies an uploaded image into various Indian dish categories. The categories are: adhirasam, aloo_gobi, aloo_matar, aloo_methi, aloo_shimla_mirch, aloo_tikki, anarsa, ariselu, bandar_laddu, basundi, bhatura, bhindi_masala, biryani, boondi, butter_chicken, chak_hao_kheer, cham_cham, chana_masala, chapati, chhena_kheeri, chicken_razala, chicken_tikka, chicken_tikka_masala, chikki, daal_baati_churma, daal_puri, dal_makhani, dal_tadka, dharwad_pedha, doodhpak, double_ka_meetha, dum_aloo, gajar_ka_halwa, gavvalu, ghevar, gulab_jamun, imarti, jalebi, kachori, kadai_paneer, kadhi_pakoda, kajjikaya, kakinada_khaja, kalakand, karela_bharta, kofta, kuzhi_paniyaram, lassi, ledikeni, litti_chokha, lyangcha, maach_jhol, makki_di_roti_sarson_da_saag, malapua, misi_roti, misti_doi, modak, mysore_pak, naan, navrattan_korma, palak_paneer, paneer_butter_masala, phirni, pithe, poha, poornalu, pootharekulu, qubani_ka_meetha, rabri, ras_malai, rasgulla, sandesh, shankarpali, sheer_korma, sheera, shrikhand, sohan_halwa, sohan_papdi, sutar_feni, unni_appam - **Developed by:** [Ashutosh Sharma]
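A minimal inference sketch using the standard image-classification pipeline (the sample file name is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ashutoshsharma58/indian_food_image_detection",
)
# Accepts a local path, URL, or PIL image
for pred in classifier("food_photo.jpg"):
    print(pred["label"], round(pred["score"], 3))
```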
cnat-08/BarbenheimerClassifier5
cnat-08
2024-01-09T10:34:24Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-01-09T10:33:52Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
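As a starting point for the empty card above, a minimal loading sketch via the fastai integration in `huggingface_hub` (the input type is an assumption; the repo name suggests an image classifier):
```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("cnat-08/BarbenheimerClassifier5")

# Hypothetical input image; the learner's expected input is not documented
pred_class, pred_idx, probs = learner.predict("poster.jpg")
print(pred_class, float(probs[pred_idx]))
```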
alexandreacff/zephyr_7b_1enem_apostilas_1ksteps_linear_warmup
alexandreacff
2024-01-09T10:32:51Z
5
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-alpha", "base_model:adapter:HuggingFaceH4/zephyr-7b-alpha", "license:mit", "region:us" ]
null
2024-01-08T13:08:53Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: HuggingFaceH4/zephyr-7b-alpha model-index: - name: zephyr_7b_1enem_apostilas_1ksteps_linear_warmup results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr_7b_1enem_apostilas_1ksteps_linear_warmup This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 6 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.3 - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 1.13.0+cu117 - Datasets 2.16.1 - Tokenizers 0.15.0
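The card does not show inference code. A minimal sketch for loading the adapter over its zephyr-7b-alpha base (the dtype, device placement, and zephyr-style prompt below are illustrative choices, not from the card):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alexandreacff/zephyr_7b_1enem_apostilas_1ksteps_linear_warmup"
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

# Resolves the base model from the adapter's config, then attaches the adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "<|user|>\nExplique o que é o ENEM.</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```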
wcyat/whisper-small-yue-lora
wcyat
2024-01-09T10:15:39Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:wcyat/whisper-small-yue", "base_model:adapter:wcyat/whisper-small-yue", "region:us" ]
null
2024-01-09T08:45:34Z
--- library_name: peft base_model: wcyat/whisper-small-yue --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
aimlnerd/bert-finetuned-legalentity-ner-accelerate
aimlnerd
2024-01-09T10:14:48Z
103
1
transformers
[ "transformers", "safetensors", "bert", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-09T08:40:44Z
--- license: apache-2.0 --- # Extract Legal Entities from Insurance Documents using BERT Transformers This model is a fine-tuned BERT transformer for NER of legal entities in Life Insurance demand letters. The dataset is publicly available here: https://github.com/aws-samples/aws-legal-entity-extraction.git The model extracts the following entities: * Law Firm * Law Office Address * Insurance Company * Insurance Company Address * Policy Holder Name * Beneficiary Name * Policy Number * Payout * Required Action * Sender ## HF Space https://huggingface.co/spaces/aimlnerd/legal-entity-ner-transformers This Space exposes the model as a Gradio app and contains the training dataset and training code. The dataset consists of legal requisition/demand letters for Life Insurance, but the approach can be applied to any industry and document type that may benefit from spatial data in NER training. ## Data preprocessing The OCRed data is present as JSON in `data/raw_data/annotations`. The code in `source/services/ner/awscomprehend_2_ner_format.py` converts the JSON data into a format suitable for HF TokenClassification. ## Finetuning BERT Transformers model `source/services/ner/train/train.py` This code fine-tunes the BERT model and uploads it to the Hugging Face Hub
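A minimal extraction sketch (the sample sentence is illustrative; entity labels follow the list above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="aimlnerd/bert-finetuned-legalentity-ner-accelerate",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
text = "This letter is sent on behalf of the beneficiary John Doe regarding policy number AB-12345."
for ent in ner(text):
    print(ent["entity_group"], "->", ent["word"], round(ent["score"], 3))
```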
jysssacc/roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01
jysssacc
2024-01-09T10:02:26Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-09T10:02:03Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 19.1081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 13.3356 | 1.0 | 157 | 19.1081 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/opt-350m_PrefixTuning_lr5e-05_bs4_epoch20_wd0.01
jysssacc
2024-01-09T09:59:07Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-08T00:32:54Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_PrefixTuning_lr5e-05_bs4_epoch20_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_PrefixTuning_lr5e-05_bs4_epoch20_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.3679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.7195 | 1.0 | 157 | 9.7698 | | 9.4441 | 2.0 | 314 | 9.3700 | | 9.1944 | 3.0 | 471 | 8.7393 | | 8.663 | 4.0 | 628 | 8.3164 | | 8.4504 | 5.0 | 785 | 8.0815 | | 8.201 | 6.0 | 942 | 7.9094 | | 8.0965 | 7.0 | 1099 | 7.7845 | | 7.9382 | 8.0 | 1256 | 7.6983 | | 7.7828 | 9.0 | 1413 | 7.6242 | | 7.7907 | 10.0 | 1570 | 7.5654 | | 7.7221 | 11.0 | 1727 | 7.5225 | | 7.5905 | 12.0 | 1884 | 7.4850 | | 7.5802 | 13.0 | 2041 | 7.4554 | | 7.5914 | 14.0 | 2198 | 7.4319 | | 7.5632 | 15.0 | 2355 | 7.4102 | | 7.4937 | 16.0 | 2512 | 7.3961 | | 7.4921 | 17.0 | 2669 | 7.3838 | | 7.457 | 18.0 | 2826 | 7.3748 | | 7.5307 | 19.0 | 2983 | 7.3697 | | 7.4599 | 20.0 | 3140 | 7.3679 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
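A minimal sketch for running the prefix-tuning adapter over its facebook/opt-350m base:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(
    base, "jysssacc/opt-350m_PrefixTuning_lr5e-05_bs4_epoch20_wd0.01"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Given the final validation loss of 7.37, outputs are unlikely to be fluent; the run reads more as a hyperparameter study than a usable checkpoint.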
duyne/distilbert-base-uncased-finetuned-emotion
duyne
2024-01-09T09:55:42Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-14T14:31:20Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.921 - name: F1 type: f1 value: 0.9209278298716373 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2239 - Accuracy: 0.921 - F1: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3194 | 0.907 | 0.9061 | | No log | 2.0 | 500 | 0.2239 | 0.921 | 0.9209 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
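A minimal inference sketch; the six classes come from the emotion dataset (sadness, joy, love, anger, fear, surprise), though they may surface as LABEL_0 to LABEL_5 if the id2label mapping was not set during training:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="duyne/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all six emotion labels
)
print(classifier("I can't believe I finally got the job!"))
```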
mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1
mmnga
2024-01-09T09:52:44Z
28
18
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-16T19:09:26Z
--- language: - fr - it - de - es - en license: apache-2.0 tags: - moe inference: false --- # Model Card for Mixtral-Fusion-4x7B-Instruct-v0.1 This is an experimental model created by merging [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) experts. # How we merged experts The merge was changed to use slerp. [Discussion](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/discussions/2) [old merge version](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/tree/v0.1.0) ~~We simply take the average of every two experts.weight.~~ ~~The same goes for gate.weight.~~ # How To Convert Use Colab with the CPU high-memory runtime. [convert_mixtral_8x7b_to_4x7b.ipynb](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/blob/main/notebook/convert_mixtral_8x7b_to_4x7b.ipynb) # OtherModels [mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1](https://huggingface.co/mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1) # Usage
~~~python
pip install git+https://github.com/huggingface/transformers --upgrade
pip install torch accelerate bitsandbytes flash_attn
~~~
~~~python
from transformers import AutoTokenizer, AutoModelForCausalLM, MixtralForCausalLM
import torch

model_name_or_path = "mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = MixtralForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=True)

text = "[INST] What was John Holt's vision on education? [/INST] "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
~~~
mmnga/TinyMixtral-x8-Clonebase-7b
mmnga
2024-01-09T09:52:08Z
29
4
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-17T17:22:25Z
--- language: - en license: apache-2.0 tags: - moe inference: false --- # Model Card for TinyMixtral-x8-Clonebase-7b This model is based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), converted to a mistral model, with the clones then placed into a mixtral. **This model was created experimentally for training a small mixtral.** **Without further training, the performance of this model is the same as TinyLlama.** # How it was made First, since TinyLlama is a llama model, I converted it to a mistral model. After that, I cloned the FFN part and turned the clones into experts. Since they are all the same tensor, the performance does not change. All gates have the same value. # How To Convert Use Colab with the CPU high-memory runtime. This model was created with experts=8, but since it is a clone, you can create as many experts as you like. [tinyllama_to_mixtral_clonebase.ipynb](https://huggingface.co/mmnga/TinyMixtral-x8-Clonebase-7b/blob/main/notebook/tinyllama_to_mixtral_clonebase.ipynb) # revision [main TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/mmnga/TinyMixtral-x8-Clonebase-7b) [old TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/mmnga/TinyMixtral-x8-Clonebase-7b/tree/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) # Usage
~~~python
pip install transformers --upgrade
pip install flash_attn bitsandbytes accelerate
~~~
~~~python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name_or_path = "mmnga/TinyMixtral-x8-Clonebase-7b"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", load_in_8bit=True)

prompt = "Introducing the recipe for today's dinner."

with torch.no_grad():
    token_ids = tokenizer.encode(prompt, return_tensors="pt")
    output_ids = model.generate(
        token_ids.to(model.device),
        do_sample=True,
        max_new_tokens=128,
        repetition_penalty=1.5
    )
output = tokenizer.decode(output_ids[0])
print(output)
~~~
Naati101/tb
Naati101
2024-01-09T09:47:58Z
0
0
keras
[ "keras", "tf-keras", "image-classification", "medical", "region:us" ]
image-classification
2024-01-09T09:47:19Z
--- library_name: keras tags: - image-classification - medical --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
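A minimal loading sketch via the Keras integration in `huggingface_hub`; the input shape below is a placeholder, since the card does not document any preprocessing:
```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("Naati101/tb")
model.summary()  # inspect the real expected input shape

# Hypothetical input; replace 224x224x3 with the shape reported above
dummy = np.random.rand(1, 224, 224, 3).astype("float32")
print(model.predict(dummy))
```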
Gayathri142214002/Pegasus_paraphraser_Com_9
Gayathri142214002
2024-01-09T09:46:09Z
139
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:Gayathri142214002/Pegasus_paraphraser_Com_8", "base_model:finetune:Gayathri142214002/Pegasus_paraphraser_Com_8", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-09T09:21:40Z
--- license: apache-2.0 base_model: Gayathri142214002/Pegasus_paraphraser_Com_8 tags: - generated_from_trainer model-index: - name: Pegasus_paraphraser_Com_9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Pegasus_paraphraser_Com_9 This model is a fine-tuned version of [Gayathri142214002/Pegasus_paraphraser_Com_8](https://huggingface.co/Gayathri142214002/Pegasus_paraphraser_Com_8) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
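No usage section is given; a minimal sketch assuming the model takes the raw sentence directly (some Pegasus paraphrasers expect a task prefix, which this chain of cards does not specify):
```python
from transformers import pipeline

paraphraser = pipeline(
    "text2text-generation", model="Gayathri142214002/Pegasus_paraphraser_Com_9"
)
outputs = paraphraser(
    "How can I improve my English speaking skills?",
    num_beams=5,
    num_return_sequences=3,  # return three candidate paraphrases
    max_length=64,
)
for o in outputs:
    print(o["generated_text"])
```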
racheltong/va_openai-whisper-tiny-en-colab_0.001_10
racheltong
2024-01-09T09:43:17Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-tiny", "base_model:adapter:openai/whisper-tiny", "region:us" ]
null
2024-01-09T09:43:06Z
--- library_name: peft base_model: openai/whisper-tiny --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
ybelkada/test-tags-model
ybelkada
2024-01-09T09:34:10Z
178
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "en", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T09:27:19Z
--- language: en tags: - transformers - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HackerCIS/distilbert-base-uncased-finetuned-emotion
HackerCIS
2024-01-09T09:29:43Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T09:11:34Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9175 - name: F1 type: f1 value: 0.9173530455189519 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2326 - Accuracy: 0.9175 - F1: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8354 | 1.0 | 250 | 0.3426 | 0.901 | 0.8997 | | 0.263 | 2.0 | 500 | 0.2326 | 0.9175 | 0.9174 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Arim/temp
Arim
2024-01-09T09:10:14Z
0
0
null
[ "region:us" ]
null
2023-12-20T06:36:20Z
Custom base model uploaded for LoRA training.
decruz07/llama-2-7b-miniguanaco
decruz07
2024-01-09T09:01:11Z
1,484
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T08:28:34Z
--- license: apache-2.0 --- ## llama-2-7b-miniguanaco This is my first model: Llama-2-7b fine-tuned on the miniguanaco dataset. This is a simple finetune based on a Google Colab notebook; the finetuning instructions were from Labonne's first tutorial. To run it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math

model_path = "decruz07/llama-2-7b-miniguanaco"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map='auto',
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
0x7o/nanoFialka-v1
0x7o
2024-01-09T09:00:16Z
103
4
transformers
[ "transformers", "onnx", "safetensors", "gpt2", "text-generation", "ru", "dataset:0x7194633/fialka-v3-data", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T08:53:01Z
--- license: apache-2.0 datasets: - 0x7194633/fialka-v3-data language: - ru pipeline_tag: text-generation --- # Nano Fialka v1.0 ## Description This is a test model trained for non-serious tasks. For a production environment, use [Fialka 13B](https://huggingface.co/collections/0x7194633/fialka-llms-658a87c2003ceee6937a0d2e). ## Usage The model has a query format as in zephyr. ``` <|user|> Что такое мем?</s> <|assistant|> Мем (англ. meme) — это единица культурной информации, которая распространяется в социальных сетях и других онлайн-платформах с помощью цифровых технологий или через физический контакт. Мемы могут быть связаны между собой тематически или иметь общие черты, такие как использование определенных слов или фраз для создания определенного настроения или выражения эмоций. Они также могут содержать информацию о культуре, истории или науке, которую можно использовать для обучения новым вещам или расширения кругозора. ```
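A minimal generation sketch that builds the documented zephyr-style prompt by hand (whether the repo also ships a chat template is not stated, so the string is assembled manually):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "0x7o/nanoFialka-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format as documented in the card
prompt = "<|user|>\nЧто такое мем?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```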
tuanHenry/distilbert_base_uncased_Movies_genre_predict
tuanHenry
2024-01-09T08:47:03Z
96
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-09T08:46:43Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0750 - Accuracy: 0.6202 - F1: 0.6107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 0.02 | 20 | 2.2641 | 0.2781 | 0.1211 | | No log | 0.04 | 40 | 2.0495 | 0.2786 | 0.1221 | | No log | 0.05 | 60 | 1.8930 | 0.4343 | 0.3708 | | No log | 0.07 | 80 | 1.8073 | 0.4046 | 0.3218 | | No log | 0.09 | 100 | 1.7021 | 0.4710 | 0.4005 | | No log | 0.11 | 120 | 1.6779 | 0.4715 | 0.4077 | | No log | 0.12 | 140 | 1.6050 | 0.4804 | 0.4139 | | No log | 0.14 | 160 | 1.5298 | 0.5032 | 0.4416 | | No log | 0.16 | 180 | 1.4980 | 0.5037 | 0.4427 | | No log | 0.18 | 200 | 1.4566 | 0.5112 | 0.4553 | | No log | 0.19 | 220 | 1.4303 | 0.5295 | 0.4736 | | No log | 0.21 | 240 | 1.4033 | 0.5340 | 0.4835 | | No log | 0.23 | 260 | 1.4549 | 0.5052 | 0.4588 | | No log | 0.25 | 280 | 1.3637 | 0.5404 | 0.5032 | | No log | 0.26 | 300 | 1.3285 | 0.5473 | 0.5067 | | No log | 0.28 | 320 | 1.3536 | 0.5454 | 0.5199 | | No log | 0.3 | 340 | 1.3069 | 0.5528 | 0.5166 | | No log | 0.32 | 360 | 1.2906 | 0.5464 | 0.5134 | | No log | 0.33 | 380 | 1.2867 | 0.5612 | 0.5355 | | No log | 0.35 | 400 | 1.2854 | 0.5493 | 0.5145 | | No log | 0.37 | 420 | 1.2707 | 0.5702 | 0.5465 | | No log | 0.39 | 440 | 1.2698 | 0.5667 | 0.5484 | | No log | 0.41 | 460 | 1.2531 | 0.5766 | 0.5637 | | No log | 0.42 | 480 | 1.2348 | 0.5801 | 0.5592 | | No log | 0.44 | 500 | 1.2392 | 0.5756 | 0.5531 | | No log | 0.46 | 520 | 1.2110 | 0.5821 | 0.5584 | | No log | 0.48 | 540 | 1.2050 | 0.5771 | 0.5546 | | No log | 0.49 | 560 | 1.2078 | 0.5756 | 0.5482 | | No log | 0.51 | 580 | 1.2088 | 0.5776 | 0.5518 | | No log | 0.53 | 600 | 1.1987 | 0.5905 | 0.5758 | | No log | 0.55 | 620 | 1.1893 | 0.5761 | 0.5548 | | No log | 0.56 | 640 | 1.1991 | 0.5801 | 0.5667 | | No log | 0.58 | 660 | 1.1939 | 0.5865 | 0.5755 | | No log | 0.6 | 680 | 1.1701 | 0.5875 | 0.5653 | | No log | 0.62 | 700 | 1.1852 | 0.5870 | 0.5709 | | No log | 0.63 | 720 | 1.1699 | 0.5845 | 0.5668 | | No log | 0.65 | 740 | 1.1778 | 0.5910 | 0.5731 | | No log | 0.67 | 760 | 1.1546 | 0.5984 | 0.5806 | | No log | 0.69 | 780 | 1.1720 | 0.5890 | 0.5687 | | No log | 0.7 | 800 | 1.1554 | 0.5940 | 0.5730 | | No log | 0.72 | 820 | 1.1454 | 0.5910 | 0.5748 | | No log | 0.74 | 840 | 1.1457 | 0.6034 | 0.5893 | | No log | 0.76 | 860 | 1.1631 | 0.5935 | 0.5663 | | No log | 0.78 | 880 | 1.1277 | 0.6059 | 0.5930 | | No log | 0.79 | 900 | 1.1291 | 0.5984 | 0.5839 | | No log | 0.81 | 920 | 1.1345 | 0.5954 | 0.5826 | | No log | 0.83 
| 940 | 1.1253 | 0.6034 | 0.5868 | | No log | 0.85 | 960 | 1.1397 | 0.5989 | 0.5899 | | No log | 0.86 | 980 | 1.1223 | 0.6009 | 0.5890 | | No log | 0.88 | 1000 | 1.1363 | 0.6059 | 0.5917 | | No log | 0.9 | 1020 | 1.1391 | 0.5944 | 0.5787 | | No log | 0.92 | 1040 | 1.1480 | 0.5860 | 0.5732 | | No log | 0.93 | 1060 | 1.1065 | 0.6063 | 0.5925 | | No log | 0.95 | 1080 | 1.1250 | 0.5984 | 0.5844 | | No log | 0.97 | 1100 | 1.1344 | 0.5935 | 0.5835 | | No log | 0.99 | 1120 | 1.1452 | 0.5880 | 0.5700 | | 1.3609 | 1.0 | 1140 | 1.1084 | 0.6093 | 0.5960 | | 1.3609 | 1.02 | 1160 | 1.1079 | 0.6098 | 0.6006 | | 1.3609 | 1.04 | 1180 | 1.1178 | 0.6073 | 0.5951 | | 1.3609 | 1.06 | 1200 | 1.1267 | 0.6004 | 0.5878 | | 1.3609 | 1.07 | 1220 | 1.1084 | 0.6098 | 0.5977 | | 1.3609 | 1.09 | 1240 | 1.1109 | 0.6039 | 0.5910 | | 1.3609 | 1.11 | 1260 | 1.1142 | 0.6078 | 0.5968 | | 1.3609 | 1.13 | 1280 | 1.1218 | 0.6024 | 0.5915 | | 1.3609 | 1.15 | 1300 | 1.0929 | 0.6197 | 0.6057 | | 1.3609 | 1.16 | 1320 | 1.1001 | 0.6108 | 0.5995 | | 1.3609 | 1.18 | 1340 | 1.1049 | 0.6068 | 0.5993 | | 1.3609 | 1.2 | 1360 | 1.1019 | 0.6098 | 0.5971 | | 1.3609 | 1.22 | 1380 | 1.1140 | 0.6029 | 0.5942 | | 1.3609 | 1.23 | 1400 | 1.0981 | 0.6078 | 0.5991 | | 1.3609 | 1.25 | 1420 | 1.1129 | 0.6078 | 0.5965 | | 1.3609 | 1.27 | 1440 | 1.0896 | 0.6173 | 0.6076 | | 1.3609 | 1.29 | 1460 | 1.1062 | 0.6059 | 0.5935 | | 1.3609 | 1.3 | 1480 | 1.1038 | 0.6143 | 0.5996 | | 1.3609 | 1.32 | 1500 | 1.0906 | 0.6182 | 0.6072 | | 1.3609 | 1.34 | 1520 | 1.0883 | 0.6182 | 0.6099 | | 1.3609 | 1.36 | 1540 | 1.1114 | 0.6024 | 0.5955 | | 1.3609 | 1.37 | 1560 | 1.0884 | 0.6158 | 0.6044 | | 1.3609 | 1.39 | 1580 | 1.0994 | 0.6088 | 0.6021 | | 1.3609 | 1.41 | 1600 | 1.0857 | 0.6187 | 0.6055 | | 1.3609 | 1.43 | 1620 | 1.1067 | 0.6098 | 0.6027 | | 1.3609 | 1.44 | 1640 | 1.0845 | 0.6237 | 0.6167 | | 1.3609 | 1.46 | 1660 | 1.0899 | 0.6177 | 0.6065 | | 1.3609 | 1.48 | 1680 | 1.0925 | 0.6177 | 0.6067 | | 1.3609 | 1.5 | 1700 | 1.0831 | 0.6148 | 0.6042 | | 1.3609 | 1.52 | 1720 | 1.0988 | 0.6108 | 0.6017 | | 1.3609 | 1.53 | 1740 | 1.0810 | 0.6173 | 0.6069 | | 1.3609 | 1.55 | 1760 | 1.0857 | 0.6163 | 0.6049 | | 1.3609 | 1.57 | 1780 | 1.1003 | 0.6103 | 0.5985 | | 1.3609 | 1.59 | 1800 | 1.0861 | 0.6143 | 0.6045 | | 1.3609 | 1.6 | 1820 | 1.1024 | 0.6073 | 0.5976 | | 1.3609 | 1.62 | 1840 | 1.0786 | 0.6222 | 0.6094 | | 1.3609 | 1.64 | 1860 | 1.0984 | 0.6088 | 0.5968 | | 1.3609 | 1.66 | 1880 | 1.0750 | 0.6227 | 0.6146 | | 1.3609 | 1.67 | 1900 | 1.0849 | 0.6108 | 0.6029 | | 1.3609 | 1.69 | 1920 | 1.0787 | 0.6192 | 0.6102 | | 1.3609 | 1.71 | 1940 | 1.0831 | 0.6133 | 0.6036 | | 1.3609 | 1.73 | 1960 | 1.0739 | 0.6197 | 0.6136 | | 1.3609 | 1.74 | 1980 | 1.0766 | 0.6187 | 0.6092 | | 1.3609 | 1.76 | 2000 | 1.0778 | 0.6207 | 0.6119 | | 1.3609 | 1.78 | 2020 | 1.0739 | 0.6168 | 0.6053 | | 1.3609 | 1.8 | 2040 | 1.0736 | 0.6182 | 0.6066 | | 1.3609 | 1.81 | 2060 | 1.0723 | 0.6212 | 0.6107 | | 1.3609 | 1.83 | 2080 | 1.0792 | 0.6222 | 0.6126 | | 1.3609 | 1.85 | 2100 | 1.0823 | 0.6177 | 0.6091 | | 1.3609 | 1.87 | 2120 | 1.0706 | 0.6197 | 0.6094 | | 1.3609 | 1.89 | 2140 | 1.0729 | 0.6217 | 0.6107 | | 1.3609 | 1.9 | 2160 | 1.0728 | 0.6202 | 0.6094 | | 1.3609 | 1.92 | 2180 | 1.0754 | 0.6187 | 0.6083 | | 1.3609 | 1.94 | 2200 | 1.0775 | 0.6192 | 0.6094 | | 1.3609 | 1.96 | 2220 | 1.0773 | 0.6187 | 0.6090 | | 1.3609 | 1.97 | 2240 | 1.0767 | 0.6177 | 0.6082 | | 1.3609 | 1.99 | 2260 | 1.0750 | 0.6202 | 0.6107 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
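No usage example is documented for this checkpoint; a minimal inference sketch with the `transformers` pipeline is given below. The checkpoint path is a placeholder for wherever this fine-tuned model is saved, and the label set is not documented in this card, so the output labels will be whatever was stored in the config:

```python
from transformers import pipeline

# "path/to/results" is a placeholder for the saved fine-tuned checkpoint directory
classifier = pipeline("text-classification", model="path/to/results")
print(classifier("Example input text to classify"))
```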
fblgit/UNAversal-2x7B-v1
fblgit
2024-01-09T08:46:15Z
1,488
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "llama-factory", "lora", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T07:44:56Z
---
license: apache-2.0
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: UNAversal-2x7B-v1
  results: []
---

# UNAversal-2x7B-v1

This is merely Phase 1 of UNA (applied to the MLPs only) and should be considered something of a beta. The goal was to produce a small but powerful MoE: a 2-expert mixture-of-experts model with a 7B expert each, based on the intel-neural v3 series.

| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|--------------|-------|------|-----:|----------|-----:|---|-----:|
| arc_challenge | Yaml | none | 25 | acc | 0.7133 | ± | 0.0132 |
| | | none | 25 | acc_norm | 0.7235 | ± | 0.0131 |
| arc_easy | Yaml | none | 0 | acc | 0.8674 | ± | 0.0070 |
| | | none | 0 | acc_norm | 0.8291 | ± | 0.0077 |
| boolq | Yaml | none | 0 | acc | 0.8768 | ± | 0.0057 |
| lambada_openai | Yaml | none | 0 | perplexity | 3.6656 | ± | 0.0841 |
| | | none | 0 | acc | 0.7017 | ± | 0.0064 |
| mathqa | Yaml | none | 0 | acc | 0.3474 | ± | 0.0087 |
| | | none | 0 | acc_norm | 0.3585 | ± | 0.0088 |
| piqa | Yaml | none | 0 | acc | 0.8411 | ± | 0.0085 |
| | | none | 0 | acc_norm | 0.8526 | ± | 0.0083 |
| sciq | Yaml | none | 0 | acc | 0.9600 | ± | 0.0062 |
| | | none | 0 | acc_norm | 0.9370 | ± | 0.0077 |
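A minimal generation sketch, assuming standard `transformers` loading for a Mixtral-architecture checkpoint (the sampling settings are illustrative, not the author's):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNAversal-2x7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The key idea behind mixture-of-experts models is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```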
uttam333/layoutlm-custom
uttam333
2024-01-09T08:41:02Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlm", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-09T08:34:30Z
--- tags: - generated_from_trainer model-index: - name: layoutlm-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-custom This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1583 - Noise: {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641} - Signal: {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640} - Overall Precision: 0.8716 - Overall Recall: 0.8634 - Overall F1: 0.8675 - Overall Accuracy: 0.9656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Noise | Signal | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.3882 | 1.0 | 18 | 0.2617 | {'precision': 0.6654804270462633, 'recall': 0.5834633385335414, 'f1': 0.6217788861180383, 'number': 641} | {'precision': 0.6149732620320856, 'recall': 0.5390625, 'f1': 0.5745212323064114, 'number': 640} | 0.6402 | 0.5613 | 0.5982 | 0.8986 | | 0.1694 | 2.0 | 36 | 0.1752 | {'precision': 0.7387820512820513, 'recall': 0.719188767550702, 'f1': 0.7288537549407115, 'number': 641} | {'precision': 0.709470304975923, 'recall': 0.690625, 'f1': 0.6999208234362629, 'number': 640} | 0.7241 | 0.7049 | 0.7144 | 0.9296 | | 0.1039 | 3.0 | 54 | 0.1356 | {'precision': 0.7865168539325843, 'recall': 0.7644305772230889, 'f1': 0.7753164556962026, 'number': 641} | {'precision': 0.77491961414791, 'recall': 0.753125, 'f1': 0.7638668779714739, 'number': 640} | 0.7807 | 0.7588 | 0.7696 | 0.9439 | | 0.064 | 4.0 | 72 | 0.1342 | {'precision': 0.8220472440944881, 'recall': 0.8143525741029641, 'f1': 0.8181818181818181, 'number': 641} | {'precision': 0.8028391167192429, 'recall': 0.7953125, 'f1': 0.7990580847723705, 'number': 640} | 0.8125 | 0.8048 | 0.8086 | 0.9522 | | 0.0433 | 5.0 | 90 | 0.1241 | {'precision': 0.8544303797468354, 'recall': 0.8424336973478939, 'f1': 0.8483896307934014, 'number': 641} | {'precision': 0.8320126782884311, 'recall': 0.8203125, 'f1': 0.8261211644374509, 'number': 640} | 0.8432 | 0.8314 | 0.8373 | 0.9601 | | 0.0293 | 6.0 | 108 | 0.1274 | {'precision': 0.8650793650793651, 'recall': 0.8502340093603744, 'f1': 0.8575924468922109, 'number': 641} | {'precision': 0.8378378378378378, 'recall': 0.8234375, 'f1': 0.830575256107171, 'number': 640} | 0.8515 | 0.8368 | 0.8441 | 0.9617 | | 0.0199 | 7.0 | 126 | 0.1372 | {'precision': 0.8722397476340694, 'recall': 0.8627145085803433, 'f1': 0.8674509803921568, 'number': 641} | {'precision': 0.8530805687203792, 'recall': 0.84375, 'f1': 0.8483896307934015, 
'number': 640} | 0.8627 | 0.8532 | 0.8579 | 0.9640 | | 0.0139 | 8.0 | 144 | 0.1386 | {'precision': 0.8839427662957074, 'recall': 0.8673946957878315, 'f1': 0.8755905511811023, 'number': 641} | {'precision': 0.856687898089172, 'recall': 0.840625, 'f1': 0.8485804416403785, 'number': 640} | 0.8703 | 0.8540 | 0.8621 | 0.9656 | | 0.0126 | 9.0 | 162 | 0.1467 | {'precision': 0.8829113924050633, 'recall': 0.8705148205928237, 'f1': 0.8766692851531814, 'number': 641} | {'precision': 0.8541996830427893, 'recall': 0.8421875, 'f1': 0.848151062155783, 'number': 640} | 0.8686 | 0.8564 | 0.8624 | 0.9654 | | 0.0114 | 10.0 | 180 | 0.1531 | {'precision': 0.8694968553459119, 'recall': 0.8627145085803433, 'f1': 0.8660924040720438, 'number': 641} | {'precision': 0.8472440944881889, 'recall': 0.840625, 'f1': 0.8439215686274509, 'number': 640} | 0.8584 | 0.8517 | 0.8550 | 0.9631 | | 0.0099 | 11.0 | 198 | 0.1581 | {'precision': 0.8703125, 'recall': 0.8689547581903276, 'f1': 0.8696330991412958, 'number': 641} | {'precision': 0.8450704225352113, 'recall': 0.84375, 'f1': 0.8444096950742768, 'number': 640} | 0.8577 | 0.8564 | 0.8570 | 0.9634 | | 0.0064 | 12.0 | 216 | 0.1543 | {'precision': 0.8866141732283465, 'recall': 0.8783151326053042, 'f1': 0.8824451410658307, 'number': 641} | {'precision': 0.8643533123028391, 'recall': 0.85625, 'f1': 0.8602825745682888, 'number': 640} | 0.8755 | 0.8673 | 0.8714 | 0.9659 | | 0.0059 | 13.0 | 234 | 0.1628 | {'precision': 0.8732394366197183, 'recall': 0.8705148205928237, 'f1': 0.871875, 'number': 641} | {'precision': 0.8526645768025078, 'recall': 0.85, 'f1': 0.8513302034428795, 'number': 640} | 0.8630 | 0.8603 | 0.8616 | 0.9645 | | 0.0056 | 14.0 | 252 | 0.1587 | {'precision': 0.878740157480315, 'recall': 0.8705148205928237, 'f1': 0.8746081504702194, 'number': 641} | {'precision': 0.8580441640378549, 'recall': 0.85, 'f1': 0.8540031397174254, 'number': 640} | 0.8684 | 0.8603 | 0.8643 | 0.9651 | | 0.005 | 15.0 | 270 | 0.1583 | {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641} | {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640} | 0.8716 | 0.8634 | 0.8675 | 0.9656 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
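No usage snippet is given in this card; below is a minimal token-classification sketch in the usual LayoutLM style. It assumes the base LayoutLM tokenizer and that `id2label` was stored in the checkpoint config; the words and their 0-1000 normalized boxes are hypothetical stand-ins for real OCR output:

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("uttam333/layoutlm-custom")

# Hypothetical OCR output: words plus their bounding boxes, normalized to a 0-1000 scale
words = ["low", "frequency", "noise"]
word_boxes = [[68, 120, 110, 138], [114, 120, 196, 138], [200, 120, 252, 138]]

# Expand word-level boxes to token level (subword tokens share their word's box)
tokens, token_boxes = [], []
for word, box in zip(words, word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    token_boxes.extend([box] * len(word_tokens))

# Special tokens get the conventional LayoutLM boxes
input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
bbox = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    outputs = model(input_ids=torch.tensor([input_ids]), bbox=torch.tensor([bbox]))

predictions = outputs.logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[p] for p in predictions])
```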
1DS/adapter-category-mapping-beauty_baby_hpc_grocery_computer_kitchen-Llama-2-7b-chat-hf-v1
1DS
2024-01-09T08:38:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-09T08:38:13Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
amd/ese_vovnet39b
amd
2024-01-09T08:35:03Z
0
0
null
[ "onnx", "RyzenAI", "vision", "classification", "pytorch", "dataset:imagenet-1k", "arxiv:1904.09730", "license:apache-2.0", "region:us" ]
null
2023-12-04T09:17:27Z
---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
tags:
- RyzenAI
- vision
- classification
- pytorch
---

# ESE_VoVNet39b

Quantized ESE_VoVNet39b model that could be supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).

## Model description

VoVNet was first introduced in the paper [An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection](https://arxiv.org/abs/1904.09730). Pretrained on ImageNet-1k in timm by Ross Wightman using the RandAugment RA recipe. The model implementation is from [timm](https://huggingface.co/timm/ese_vovnet39b.ra_in1k).

## How to use

### Installation

Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI. Run the following script to install pre-requisites for this model.

```bash
pip install -r requirements.txt
```

### Data Preparation

Follow [ImageNet](https://huggingface.co/datasets/imagenet-1k) to prepare the dataset.

### Model Evaluation

```shell
python eval_onnx.py --onnx_model ese_vovnet39b_int.onnx --ipu --provider_config Path\To\vaip_config.json --data_dir /Path/To/Your/Dataset
```

### Performance

| Metric | Accuracy on IPU |
| :----: | :----: |
| Top1/Top5 | 78.96% / 94.53% |

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```

```bibtex
@inproceedings{lee2019energy,
  title = {An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
  author = {Lee, Youngwan and Hwang, Joong-won and Lee, Sangrok and Bae, Yuseok and Park, Jongyoul},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  year = {2019}
}
```
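For a quick single-image sanity check outside `eval_onnx.py`, a minimal onnxruntime sketch along these lines should work. The preprocessing is assumed to be the standard ImageNet recipe (check `eval_onnx.py` for the exact pipeline), the image path is a placeholder, and the CPU provider is used here only for a functional check rather than the IPU provider configured above:

```python
import numpy as np
import onnxruntime
from PIL import Image

def preprocess(path, size=224):
    # Assumed standard ImageNet preprocessing: resize, scale to [0,1], normalize, NCHW
    img = Image.open(path).convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[None, ...]

session = onnxruntime.InferenceSession("ese_vovnet39b_int.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # avoids hard-coding the exported input name
logits = session.run(None, {input_name: preprocess("cat.jpg")})[0]
print("predicted class id:", int(logits[0].argmax()))
```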
ensound/labiezione_generator
ensound
2024-01-09T08:33:39Z
80
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T01:22:51Z
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
amd/efficientnet-es
amd
2024-01-09T08:32:48Z
0
0
null
[ "onnx", "RyzenAI", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
2023-12-04T09:02:55Z
--- license: apache-2.0 tags: - RyzenAI - image-classification - onnx datasets: - imagenet-1k --- # EfficientNet The EfficientNet model was proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, yet being an order-of-magnitude smaller and faster than previous models. The specific version of EfficientNet here is EfficientNet-ES (EdgeTPU-Small). We develop a modified version that could be supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/). ## Model description The abstract from the paper is the following: *Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.* The original code can be found [here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?sort=trending&search=efficientnet) to look for fine-tuned versions on a task that interests you. ## How to use ### Installation 1. Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI. 2. Run the following script to install pre-requisites for this model. 
```shell
pip install -r requirements.txt
```

### Test & Evaluation

- Inference one image (Image Classification):

```python
import onnxruntime
import argparse
from PIL import Image
import torchvision.transforms as transforms

parser = argparse.ArgumentParser()
parser.add_argument('--onnx_path', type=str, default="EfficientNet_int.onnx", required=False)
parser.add_argument('--image_path', type=str, required=True)
args = parser.parse_args()

def read_image():
    # Read a PIL image and apply standard ImageNet preprocessing
    image = Image.open(args.image_path)
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        normalize,
    ])
    img_tensor = transform(image).unsqueeze(0)
    return img_tensor.numpy()

def main():
    so = onnxruntime.SessionOptions()
    ort_session = onnxruntime.InferenceSession(
        args.onnx_path, so, providers=['CUDAExecutionProvider'])
    ort_inputs = {
        "WrapModel::input_0": read_image()
    }
    output = ort_session.run(None, ort_inputs)[0]
    print("class id =", output[0].argmax())

if __name__ == "__main__":
    main()
```

- Evaluate the ImageNet validation dataset (50,000 images), using `eval_onnx.py`:

```shell
python eval_onnx.py --onnx_model EfficientNet_int.onnx --ipu --provider_config Path\To\vaip_config.json --data_dir /Path/To/Your/Dataset
```

### Performance

Dataset: ImageNet validation dataset (50,000 images).

| Metric | Accuracy on IPU |
| :-----------------: | :-------------: |
| top1 & top5 accuracy | 77.72% / 93.78% |

## Citation

```bibtex
@article{EfficientNet,
  author = {Mingxing Tan and Quoc V. Le},
  title = {EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  year = {2019},
  url = {https://arxiv.org/abs/1905.11946},
}
```
santhosh/madlad400-3b-ct2
santhosh
2024-01-09T08:27:24Z
139
12
transformers
[ "transformers", "text2text-generation", "text-generation-inference", "translation", "multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb", "dataset:allenai/MADLAD-400", "arxiv:2309.04662", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2024-01-08T10:48:41Z
--- license: apache-2.0 language: - multilingual - en - ru - es - fr - de - it - pt - pl - nl - vi - tr - sv - id - ro - cs - zh - hu - ja - th - fi - fa - uk - da - el - "no" - bg - sk - ko - ar - lt - ca - sl - he - et - lv - hi - sq - ms - az - sr - ta - hr - kk - is - ml - mr - te - af - gl - fil - be - mk - eu - bn - ka - mn - bs - uz - ur - sw - yue - ne - kn - kaa - gu - si - cy - eo - la - hy - ky - tg - ga - mt - my - km - tt - so - ku - ps - pa - rw - lo - ha - dv - fy - lb - ckb - mg - gd - am - ug - ht - grc - hmn - sd - jv - mi - tk - ceb - yi - ba - fo - or - xh - su - kl - ny - sm - sn - co - zu - ig - yo - pap - st - haw - as - oc - cv - lus - tet - gsw - sah - br - rm - sa - bo - om - se - ce - cnh - ilo - hil - udm - os - lg - ti - vec - ts - tyv - kbd - ee - iba - av - kha - to - tn - nso - fj - zza - ak - ada - otq - dz - bua - cfm - ln - chm - gn - krc - wa - hif - yua - srn - war - rom - bik - pam - sg - lu - ady - kbp - syr - ltg - myv - iso - kac - bho - ay - kum - qu - za - pag - ngu - ve - pck - zap - tyz - hui - bbc - tzo - tiv - ksd - gom - min - ang - nhe - bgp - nzi - nnb - nv - zxx - bci - kv - new - mps - alt - meu - bew - fon - iu - abt - mgh - mnw - tvl - dov - tlh - ho - kw - mrj - meo - crh - mbt - emp - ace - ium - mam - gym - mai - crs - pon - ubu - fip - quc - gv - kj - btx - ape - chk - rcf - shn - tzh - mdf - ppk - ss - gag - cab - kri - seh - ibb - tbz - bru - enq - ach - cuk - kmb - wo - kek - qub - tab - bts - kos - rwo - cak - tuc - bum - cjk - gil - stq - tsg - quh - mak - arn - ban - jiv - sja - yap - tcy - toj - twu - xal - amu - rmc - hus - nia - kjh - bm - guh - mas - acf - dtp - ksw - bzj - din - zne - mad - msi - mag - mkn - kg - lhu - ch - qvi - mh - djk - sus - mfe - srm - dyu - ctu - gui - pau - inb - bi - mni - guc - jam - wal - jac - bas - gor - skr - nyu - noa - sda - gub - nog - cni - teo - tdx - sxn - rki - nr - frp - alz - taj - lrc - cce - rn - jvn - hvn - nij - dwr - izz - msm - bus - ktu - chr - maz - tzj - suz - knj - bim - gvl - bqc - tca - pis - prk - laj - mel - qxr - niq - ahk - shp - hne - spp - koi - krj - quf - luz - agr - tsc - mqy - gof - gbm - miq - dje - awa - bjj - qvz - sjp - tll - raj - kjg - bgz - quy - cbk - akb - oj - ify - mey - ks - cac - brx - qup - syl - jax - ff - ber - tks - trp - mrw - adh - smt - srr - ffm - qvc - mtr - ann - kaa - aa - noe - nut - gyn - kwi - xmm - msb library_name: transformers tags: - text2text-generation - text-generation-inference datasets: - allenai/MADLAD-400 pipeline_tag: translation --- # Model Card for MADLAD-400-3B-CT2 # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) # TL;DR MADLAD-400-3B-MT is a multilingual machine translation model based on the T5 architecture that was trained on 1 trillion tokens covering over 450 languages using publicly available data. It is competitive with models that are significantly larger. **Disclaimer**: [Santhosh Thottingal](https://huggingface.co/santhosh), who was not involved in this research, converted the original models to CTranslate2 optimized model and wrote the contents of this model card based on [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt). 
# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** Multilingual (400+ languages)
- **License:** Apache 2.0
- **Related Models:** [All MADLAD-400 Checkpoints](https://huggingface.co/models?search=madlad)
- **Original Checkpoints:** [All Original MADLAD-400 Checkpoints](https://github.com/google-research/google-research/tree/master/madlad_400)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2309.04662)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face MADLAD-400 Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/MADLAD-400)
  - [Pending PR](https://github.com/huggingface/transformers/pull/27471)

# Usage

Find below some example scripts on how to use the model:

## Running the model on a CPU or GPU

First, install the CTranslate2 packages that are required:

`pip install ctranslate2 sentencepiece`

```python
import ctranslate2
from sentencepiece import SentencePieceProcessor
from huggingface_hub import snapshot_download

model_name = "santhosh/madlad400-3b-ct2"
model_path = snapshot_download(model_name)

tokenizer = SentencePieceProcessor()
tokenizer.load(f"{model_path}/sentencepiece.model")
translator = ctranslate2.Translator(model_path)

target_language = "pt"  # target-language code for the <2xx> prefix; "pt" matches the expected output below
input_text = "I love pizza!"
input_tokens = tokenizer.encode(f"<2{target_language}> {input_text}", out_type=str)

results = translator.translate_batch(
    [input_tokens],
    batch_type="tokens",
    max_batch_size=1024,
    beam_size=1,
    no_repeat_ngram_size=1,
    repetition_penalty=2,
)
translated_sentence = tokenizer.decode(results[0].hypotheses[0])
print(translated_sentence)
# Eu adoro pizza!
```

# Uses

## Direct Use and Downstream Use

> Primary intended uses: Machine Translation and multilingual NLP tasks on over 400 languages.
> Primary intended users: Research community.

## Out-of-Scope Use

> These models are trained on general domain data and are therefore not meant to
> work on domain-specific models out-of-the box. Moreover, these research models have not been assessed
> for production usecases.

# Bias, Risks, and Limitations

> We note that we evaluate on only 204 of the languages supported by these models and on machine translation
> and few-shot machine translation tasks. Users must consider use of this model carefully for their own
> usecase.

## Ethical considerations and risks

> We trained these models with MADLAD-400 and publicly available data to create baseline models that
> support NLP for over 400 languages, with a focus on languages underrepresented in large-scale corpora.
> Given that these models were trained with web-crawled datasets that may contain sensitive, offensive or
> otherwise low-quality content despite extensive preprocessing, it is still possible that these issues in the
> underlying training data may cause differences in model performance and toxic (or otherwise problematic)
> output for certain domains. Moreover, large models are dual use technologies that have specific risks
> associated with their use and development. We point the reader to surveys such as those written by
> Weidinger et al. or Bommasani et al. for a more detailed discussion of these risks, and to Liebling
> et al. for a thorough discussion of the risks of machine translation systems.

## Known Limitations

More information needed

## Sensitive Use:

More information needed

# Training Details

> We train models of various sizes: a 3B, 32-layer parameter model,
> a 7.2B 48-layer parameter model and a 10.7B 32-layer parameter model.
> We share all parameters of the model across language pairs, > and use a Sentence Piece Model with 256k tokens shared on both the encoder and decoder > side. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target > language. See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. ## Training Data > For both the machine translation and language model, MADLAD-400 is used. For the machine translation > model, a combination of parallel datasources covering 157 languages is also used. Further details are > described in the [paper](https://arxiv.org/pdf/2309.04662.pdf). ## Training Procedure See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics > For evaluation, we used WMT, NTREX, Flores-200 and Gatones datasets as described in Section 4.3 in the [paper](https://arxiv.org/pdf/2309.04662.pdf). > The translation quality of this model varies based on language, as seen in the paper, and likely varies on > domain, though we have not assessed this. ## Results ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/EzsMD1AwCuFH0S0DeD-n8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/CJ5zCUVy7vTU76Lc8NZcK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7f632037d6452a321fa15/NK0S-yVeWuhKoidpLYh3m.png) See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details. # Environmental Impact More information needed # Citation **BibTeX:** ```bibtex @misc{kudugunta2023madlad400, title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset}, author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat}, year={2023}, eprint={2309.04662}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
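Building on the usage snippet above, a small helper that reuses the same `translator` and `tokenizer` objects for several target languages might look like this (the beam size is illustrative; language codes follow the `<2xx>` convention described above):

```python
def translate(translator, tokenizer, text, target_lang):
    # Prepend the <2xx> target-language token, as the model expects
    tokens = tokenizer.encode(f"<2{target_lang}> {text}", out_type=str)
    result = translator.translate_batch([tokens], beam_size=4)
    return tokenizer.decode(result[0].hypotheses[0])

for lang in ["pt", "de", "hi"]:
    print(lang, "->", translate(translator, tokenizer, "I love pizza!", lang))
```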
s3nh/GOAT-Finance-7B
s3nh
2024-01-09T08:27:09Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:AdaptLLM/finance-chat", "base_model:merge:AdaptLLM/finance-chat", "base_model:GOAT-AI/GOAT-7B-Community", "base_model:merge:GOAT-AI/GOAT-7B-Community", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T08:22:54Z
---
base_model:
- GOAT-AI/GOAT-7B-Community
- AdaptLLM/finance-chat
tags:
- mergekit
- merge
---

# GOAT-Finance-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [GOAT-AI/GOAT-7B-Community](https://huggingface.co/GOAT-AI/GOAT-7B-Community)
* [AdaptLLM/finance-chat](https://huggingface.co/AdaptLLM/finance-chat)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: GOAT-AI/GOAT-7B-Community
dtype: float16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.22, 0.61, 0.46, 0.77, 1.0]
  - filter: mlp
    value: [0.78, 0.39, 0.54, 0.23, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 32]
    model: AdaptLLM/finance-chat
  - layer_range: [0, 32]
    model: GOAT-AI/GOAT-7B-Community
```
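To reproduce a merge like this, a typical mergekit workflow is to save the YAML above to a file and invoke the CLI (a sketch; the config filename and output directory are placeholders):

```bash
pip install mergekit
# Run the merge described by the config; --cuda performs the tensor math on a GPU
mergekit-yaml goat-finance-7b.yml ./GOAT-Finance-7B-merged --cuda
```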
1DS/adapter-title-brand-mapping-Llama-2-7b-chat-hf-v1
1DS
2024-01-09T08:23:35Z
0
0
peft
[ "peft", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-09T08:23:35Z
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Inference Function

```python
import re

# Assumes `model` and `tokenizer` have already been loaded (PEFT adapter applied to the base model)
def generate(title):
    # Build the instruction prompt with the product title wrapped in <TITL> markers
    prompt = f"[INST]Identify the brand from the given product title.[/INST]\n\n<TITL> {title} </TITL>\n\n"
    print("Prompt:")
    print(prompt)
    encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(input_ids=encoding.input_ids,
                            attention_mask=encoding.attention_mask,
                            max_new_tokens=200,
                            do_sample=True,
                            temperature=0.01,
                            eos_token_id=tokenizer.eos_token_id,
                            top_k=0)
    print()
    # Decode only the newly generated tokens, skipping the prompt portion
    output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
    output_text = re.sub('\n+', '\n', output_text)  # remove excessive newline characters
    print("Generated Assistant Response:")
    print(output_text)
    return output_text
```
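With `model` and `tokenizer` already loaded, a call would look like this (the product title is a made-up example):

```python
brand = generate("Sony WH-1000XM5 Wireless Noise Cancelling Headphones")
```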
1DS/adapter-review-phrases-sentiments-Llama-2-7b-chat-hf-v2.1
1DS
2024-01-09T08:21:26Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-09T08:21:25Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
s3nh/beberik-Lonepino-11B-GGUF
s3nh
2024-01-09T08:19:46Z
1
2
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T07:30:13Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF Format model files for [This project](https://huggingface.co/beberik/Lonepino-11B).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### inference

TODO

# Original model card
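For the inference section above, one minimal option for running these GGUF files is llama-cpp-python; a sketch follows (the `.gguf` filename is an assumption, so check this repository's file list for the quantization you downloaded):

```python
from llama_cpp import Llama

# Filename is an assumption; pick an actual .gguf file from this repo
llm = Llama(model_path="lonepino-11b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about the GGUF format.", max_tokens=64)
print(out["choices"][0]["text"])
```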
satcos/Reinforce-CartPole-v1
satcos
2024-01-09T08:15:34Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-09T08:15:20Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
freshpearYoon/medium3
freshpearYoon
2024-01-09T08:03:25Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-09T01:54:25Z
--- language: - ko license: apache-2.0 base_model: openai/whisper-medium tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: whisper_medium results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the aihub dataset. It achieves the following results on the evaluation set: - Cer: 15.6625 - Loss: 1.4176 - Wer: 32.4788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | Wer | |:-------------:|:-----:|:----:|:-------:|:---------------:|:-------:| | 1.8819 | 0.01 | 100 | 11.9999 | 1.5851 | 29.7754 | | 1.6964 | 0.02 | 200 | 14.6066 | 1.4982 | 31.2945 | | 1.6783 | 0.02 | 300 | 14.8315 | 1.4504 | 31.7318 | | 1.6238 | 0.03 | 400 | 15.3631 | 1.4259 | 32.1490 | | 1.7569 | 0.04 | 500 | 15.6625 | 1.4176 | 32.4788 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.15.0 - Tokenizers 0.15.0
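No usage snippet is included in this card; a minimal transcription sketch with the `transformers` ASR pipeline is given below (the Korean audio file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="freshpearYoon/medium3")
print(asr("sample_korean_audio.wav")["text"])
```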
azambasheer99/azam-spam-detector-v1
azambasheer99
2024-01-09T07:56:46Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-01-09T07:54:07Z
---
license: mit
---

A spam detection model that classifies messages as spam or ham. Outputs 0 = ham or 1 = spam. Currently a test prototype model.
isha-ingersol/llama2-qlora-finetuned-french
isha-ingersol
2024-01-09T07:56:46Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us" ]
null
2024-01-09T07:56:27Z
--- library_name: peft base_model: TinyPixel/Llama-2-7B-bf16-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
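The quick-start section above is unfilled; based on the metadata (the base model plus a PEFT adapter), loading would typically follow the standard PEFT workflow. A minimal sketch, not the author's documented usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "isha-ingersol/llama2-qlora-finetuned-french"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the adapter weights

inputs = tokenizer("Bonjour, comment", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```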
baichuan-inc/Baichuan-13B-Chat
baichuan-inc
2024-01-09T07:56:42Z
3,287
631
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "zh", "en", "arxiv:2104.09864", "arxiv:2108.12409", "arxiv:2009.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T05:58:27Z
--- language: - zh - en pipeline_tag: text-generation inference: false --- # Baichuan-13B-Chat <!-- Provide a quick summary of what the model is/does. --> ## 介绍 Baichuan-13B-Chat为Baichuan-13B系列模型中对齐后的版本,预训练模型可见[Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)。 [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点: 1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。 2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。 3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。 4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。 Baichuan-13B-Chat is the aligned version in the Baichuan-13B series of models, and the pre-trained model can be found at [Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base). [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features: 1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096. 2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code. 3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss. 4. **Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email. 
## 使用方式

如下是一个使用Baichuan-13B-Chat进行对话的示例,正确输出为"乔戈里峰。世界第二高峰———乔戈里峰西方登山者称其为k2峰,海拔高度是8611米,位于喀喇昆仑山脉的中巴边境上"

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "世界上第二高的山峰是哪座"})
response = model.chat(tokenizer, messages)
print(response)
```

Here is an example of a conversation using Baichuan-13B-Chat; the correct output is "K2. The world's second highest peak - K2, also known as Mount Godwin-Austen or Chhogori, with an altitude of 8611 meters, is located on the China-Pakistan border in the Karakoram Range."

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "Which mountain is the second highest one in the world?"})
response = model.chat(tokenizer, messages)
print(response)
```

## 量化部署

Baichuan-13B 支持 int8 和 int4 量化,用户只需在推理代码中简单修改两行即可实现。请注意,如果是为了节省显存而进行量化,应加载原始精度模型到 CPU 后再开始量化;避免在 `from_pretrained` 时添加 `device_map='auto'` 或者其它会导致把原始精度模型直接加载到 GPU 的行为的参数。

Baichuan-13B supports int8 and int4 quantization; users only need to make a simple two-line change in the inference code to enable it. Please note: if quantization is done to save GPU memory, the original-precision model should be loaded onto the CPU before quantization starts. Avoid adding parameters such as `device_map='auto'`, or anything else that would load the original-precision model directly onto the GPU, when calling `from_pretrained`.

使用 int8 量化 (To use int8 quantization):
```python
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", torch_dtype=torch.float16, trust_remote_code=True)
model = model.quantize(8).cuda()
```

同样的,如需使用 int4 量化 (Similarly, to use int4 quantization):
```python
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", torch_dtype=torch.float16, trust_remote_code=True)
model = model.quantize(4).cuda()
```

## 模型详情

### 模型描述

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20模型社区许可协议.pdf) | [EN](Community%20License%20for%20Baichuan-13B%20Model.pdf))

**商业用途(For commercial use):** 请通过 [Email](mailto:[email protected]) 联系申请书面授权。(Contact us via [Email](mailto:[email protected]) above to apply for written authorization.)

### 模型结构

<!-- Provide the basic links for the model.
-->

整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%:

| Model       | tokens/s |
|-------------|----------|
| LLaMA-13B   | 19.4     |
| Baichuan-13B| 25.4     |

具体参数见下表

| 模型名称 | 隐含层维度 | 层数 | 头数 | 词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 |

The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding and significantly improves inference performance. Compared with the standard LLaMA-13B, the measured average inference speed (tokens/s) for generating 2000 tokens is 31.6% higher:

| Model       | tokens/s |
|-------------|----------|
| LLaMA-13B   | 19.4     |
| Baichuan-13B| 25.4     |

The specific parameters are as follows:

| Model Name | Hidden Size | Num Layers | Num Attention Heads | Vocab Size | Total Params | Training Data (tokens) | Position Embedding | Max Length |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2 trillion | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4 trillion | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 |

(A minimal illustrative sketch of the ALiBi bias construction appears after the usage notes below.)

## 使用须知

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### 免责声明

我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。

我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。

We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national or social security or are illegal. In addition, we ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment.

We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited.
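To make the ALiBi choice in the model-structure section above concrete, here is a minimal sketch of how ALiBi attention biases can be constructed. This is an illustration of the technique from the linked paper, not Baichuan's actual implementation; it also assumes the head count is a power of two (the paper describes an interpolation for other counts, such as Baichuan-13B's 40 heads).

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head slopes form a geometric sequence, as in the ALiBi paper
    # (https://arxiv.org/abs/2108.12409); assumes num_heads is a power of two.
    slopes = torch.tensor([2 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)])
    # Relative position j - i is <= 0 for allowed (past) key positions.
    rel = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    rel = rel.clamp(max=0)  # future positions get 0 here; causal masking hides them anyway
    # Shape (num_heads, seq_len, seq_len); added directly to the attention logits.
    return slopes[:, None, None] * rel[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=6)
```

Because the bias depends only on relative distance and is added once to the attention logits, it avoids the per-layer rotation work of rotary embeddings, which is the efficiency argument made above.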
## 训练详情

训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。

For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B).

## 测评结果

### [C-Eval](https://cevalbenchmark.com/index.html#home)

| Model 5-shot            | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| Baichuan-7B             | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 |
| Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 |
| Vicuna-13B              | 30.5 | 38.2 | 32.5 | 32.5 | 32.8 |
| Chinese-LLaMA-Plus-13B  | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 |
| Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 |
| LLaMA-13B               | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 |
| moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 |
| **Baichuan-13B-Base**   | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** |
| **Baichuan-13B-Chat**   | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** |

### [MMLU](https://arxiv.org/abs/2009.03300)

| Model 5-shot            | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| Vicuna-13B              | 40.4 | 60.5 | 49.5 | 58.4 | 52.0 |
| LLaMA-13B               | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 |
| Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 |
| Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 |
| Baichuan-7B             | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 |
| Chinese-LLaMA-Plus-13B  | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 |
| moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 |
| **Baichuan-13B-Base**   | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** |
| **Baichuan-13B-Chat**   | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** |

> 说明:我们采用了 MMLU 官方的[评测方案](https://github.com/hendrycks/test)。

### [CMMLU](https://github.com/haonan-li/CMMLU)

| Model 5-shot            | STEM | Humanities | Social Sciences | Others | China Specific | Average |
|-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:|
| Baichuan-7B             | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 |
| Vicuna-13B              | 31.8 | 36.2 | 37.6 | 39.5 | 34.3 | 36.3 |
| Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 |
| Chinese-LLaMA-Plus-13B  | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 |
| Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 |
| LLaMA-13B               | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 |
| moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 |
| **Baichuan-13B-Base**   | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** |
| **Baichuan-13B-Chat**   | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** |

> 说明:CMMLU 是一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力。我们采用了其官方的[评测方案](https://github.com/haonan-li/CMMLU)。

## 微信群组

![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)
jamesm808/ppo-LunarLander-v2
jamesm808
2024-01-09T07:52:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-07T09:02:45Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -154.64 +/- 53.16
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below follows the usual huggingface_sb3 convention and is an assumption; adjust it to the actual file in this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed artifact name; check the repo's files if loading fails.
checkpoint = load_from_hub(repo_id="jamesm808/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
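As a follow-up, a short sketch of scoring the loaded agent locally (the environment name comes from the card; the episode count is arbitrary, and `gymnasium` is assumed, which recent stable-baselines3 releases expect):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
# Reuses `model` from the loading sketch above.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```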
billborkowski/llava-NousResearch_Nous-Hermes-2-Vision-GGUF
billborkowski
2024-01-09T07:49:49Z
2,919
22
transformers
[ "transformers", "pytorch", "gguf", "llava_mistral", "text-generation", "mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "multimodal", "llava", "conversational", "en", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T04:58:30Z
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- multimodal
- llava
model-index:
- name: Nous-Hermes-2-Vision
  results: []
license: apache-2.0
language:
- en
---

GGUF Quants by Twobob, Thanks to @jartine and @cmp-nct for the assists

The prompt template is Vicuna; ref: [here](https://github.com/qnguyen3/hermes-llava/blob/173b4ef441b5371c1e7d99da7a2e7c14c77ad12f/llava/conversation.py#L252)

Caveat emptor: There is still some kind of bug in the inference that is likely to get fixed upstream. Just FYI

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a22257d3149e05bc6d259f/aF3VQrpwGJQLxbeyj1JOf.png)

# Nous-Hermes-2-Vision - Mistral 7B

![image/png](https://camo.githubusercontent.com/b09dc35a93b4b70748fa4e2f307b011cd3d548369dd926ec9a2d3a51f7b3721e/68747470733a2f2f66696c65732e6f616975736572636f6e74656e742e636f6d2f66696c652d6b4437565358734f5649576472624b3042353662686644363f73653d323032332d31322d3033543137253341333425334135385a2673703d722673763d323032312d30382d30362673723d6226727363633d6d61782d6167652533443331353336303030253243253230696d6d757461626c6526727363643d6174746163686d656e7425334225323066696c656e616d6525334439643530333039622d356236342d343964302d623832362d6165316638366132396661382e77656270267369673d50396973694b4679654a54435a47424b526d45494b3043586e6e55676c6334704a583071312532425478666a34253344)

*In the tapestry of Greek mythology, Hermes reigns as the eloquent Messenger of the Gods, a deity who deftly bridges the realms through the art of communication. It is in homage to this divine mediator that I name this advanced LLM "Hermes," a system crafted to navigate the complex intricacies of human discourse with celestial finesse.*

## Model description

Nous-Hermes-2-Vision stands as a pioneering Vision-Language Model, leveraging advancements from the renowned **OpenHermes-2.5-Mistral-7B** by teknium. This model incorporates two pivotal enhancements, setting it apart as a cutting-edge solution:

- **SigLIP-400M Integration**: Diverging from traditional approaches that rely on substantial 3B vision encoders, Nous-Hermes-2-Vision harnesses the formidable SigLIP-400M. This strategic choice not only streamlines the model's architecture, making it more lightweight, but also capitalizes on SigLIP's remarkable capabilities. The result? A remarkable boost in performance that defies conventional expectations.

- **Custom Dataset Enriched with Function Calling**: Our model's training data includes a unique feature – function calling. This distinctive addition transforms Nous-Hermes-2-Vision into a **Vision-Language Action Model**. Developers now have a versatile tool at their disposal, primed for crafting a myriad of ingenious automations.

This project is led by [qnguyen3](https://twitter.com/stablequan) and [teknium](https://twitter.com/Teknium1).

## Training

### Dataset
- 220K from **LVIS-INSTRUCT4V**
- 60K from **ShareGPT4V**
- 150K Private **Function Calling Data**
- 50K conversations from teknium's **OpenHermes-2.5**

## Usage

### Prompt Format
- Like other LLaVA variants, this model uses Vicuna-V1 as its prompt template. Please refer to `conv_llava_v1` in [this file](https://github.com/qnguyen3/hermes-llava/blob/main/llava/conversation.py)
- For the Gradio UI, please visit this [GitHub Repo](https://github.com/qnguyen3/hermes-llava)

### Function Calling
- For function calling, the message should start with a `<fn_call>` tag.
Here is an example:

```json
<fn_call>{
  "type": "object",
  "properties": {
    "bus_colors": {
      "type": "array",
      "description": "The colors of the bus in the image.",
      "items": {
        "type": "string",
        "enum": ["red", "blue", "green", "white"]
      }
    },
    "bus_features": {
      "type": "string",
      "description": "The features seen on the back of the bus."
    },
    "bus_location": {
      "type": "string",
      "description": "The location of the bus (driving or pulled off to the side).",
      "enum": ["driving", "pulled off to the side"]
    }
  }
}
```

Output:

```json
{
  "bus_colors": ["red", "white"],
  "bus_features": "An advertisement",
  "bus_location": "driving"
}
```

## Example

### Chat
![image/png](https://i.ibb.co/tMg8h2t/Screenshot-from-2023-12-04-00-13-59.png)

### Function Calling
Input image:

![image/png](https://www.slcmenu.com/wp-content/uploads/2017/11/In-N-Out-Burger-menu-2020-982x1024.jpg)

Input message:

```json
<fn_call>{
  "type": "object",
  "properties": {
    "food_list": {
      "type": "array",
      "description": "List of all the food",
      "items": {
        "type": "string"
      }
    }
  }
}
```

Output:

```json
{
  "food_list": [
    "Double Burger",
    "Cheeseburger",
    "French Fries",
    "Shakes",
    "Coffee"
  ]
}
```
mlx-community/Llama-2-7b-WikiChat-mlx
mlx-community
2024-01-09T07:49:22Z
2
0
mlx
[ "mlx", "llama", "en", "license:llama2", "region:us" ]
null
2024-01-09T06:55:05Z
---
language:
- en
license: llama2
tags:
- mlx
---

# Llama-2-7b-WikiChat-mlx
This model was converted to MLX format from [`stanford-oval/Llama-2-7b-WikiChat`](https://huggingface.co/stanford-oval/Llama-2-7b-WikiChat).
Refer to the [original model card](https://huggingface.co/stanford-oval/Llama-2-7b-WikiChat) for more details on the model.

## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Llama-2-7b-WikiChat-mlx --prompt "My name is"
```
baichuan-inc/Baichuan-7B
baichuan-inc
2024-01-09T07:45:22Z
19,658
839
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "zh", "en", "arxiv:1910.07467", "arxiv:2009.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-13T07:47:16Z
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-7B

<!-- Provide a quick summary of what the model is/does. -->

Baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。

如果希望使用Baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。

Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard, authoritative Chinese and English benchmarks (C-EVAL/MMLU).

If you wish to use Baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).

## Why use Baichuan-7B

- 在同尺寸模型中Baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
- Baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
- 不同于LLaMA完全禁止商业使用,Baichuan-7B使用更宽松的开源协议,允许用于商业目的

- Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, Baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.

## How to Get Started with the Model

如下是一个使用Baichuan-7B进行1-shot推理的任务,根据作品给出作者名,正确输出为"夜雨寄北->李商隐"
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

The following is a task of performing 1-shot inference using Baichuan-7B, where the author's name is given based on the work, with the correct output being "One Hundred Years of Solitude->Gabriel Garcia Marquez"
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan-7B License](https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)

### Model Sources

<!-- Provide the basic links for the model.
-->

整体模型基于标准的Transformer结构,我们采用了和LLaMA一样的模型设计

- **Position Embedding**:采用rotary-embedding,是现阶段被大多数模型采用的位置编码方案,具有很好的外推性。
- **Feedforward Layer**:采用SwiGLU,Feedforward变化为(8/3)倍的隐含层大小,即11008。
- **Layer Normalization**: 基于[RMSNorm](https://arxiv.org/abs/1910.07467)的Pre-Normalization。

具体参数见下表

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 7000559616 |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |

The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:

- Position Embedding: We use rotary embeddings, the position encoding scheme adopted by most models at this stage, which has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward size is (8/3) times the hidden size, that is, 11008.
- Layer Normalization: Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467) (a minimal sketch appears after the Training Details section below).

The specific parameters are as follows:

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 7000559616 |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。

We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

在没有充分评估风险和采取缓解措施的情况下投入生产使用;任何可能被视为不负责任或有害的使用案例。

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。Baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。

Baichuan-7B can produce factually incorrect output and should not be relied on to produce factually accurate information. Baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Training Details

训练具体设置参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。

For specific training settings, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
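Before the evaluation results, here is the minimal RMSNorm sketch referenced in the model-design list above. It illustrates the technique from the linked paper rather than the model's actual code; the hidden size in the usage line matches the `d_model` value in the table.

```python
import torch

class RMSNorm(torch.nn.Module):
    """Minimal RMSNorm (https://arxiv.org/abs/1910.07467): rescale by the
    root-mean-square of the features, with no mean subtraction or bias."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

norm = RMSNorm(dim=4096)  # d_model from the table above
```

Dropping the mean-centering of standard LayerNorm makes the operation slightly cheaper while preserving training stability, which is why it is popular in LLaMA-style stacks.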
## Evaluation

### 中文评测

#### C-Eval

[CEval数据集](https://cevalbenchmark.com/index.html)是一个全面的中文基础模型评测数据集,涵盖了52个学科和四个难度的级别。我们使用该数据集的dev集作为few-shot的来源,在test集上进行了5-shot测试。

| Model 5-shot                | Average | Avg(Hard) | STEM | Social Sciences | Humanities | Others |
|-----------------------------|---------|-----------|------|-----------------|------------|--------|
| GPT-4                       | 68.7    | 54.9      | 67.1 | 77.6            | 64.5       | 67.8   |
| ChatGPT                     | 54.4    | 41.4      | 52.9 | 61.8            | 50.9       | 53.6   |
| Claude-v1.3                 | 54.2    | 39.0      | 51.9 | 61.7            | 52.1       | 53.7   |
| Claude-instant-v1.0         | 45.9    | 35.5      | 43.1 | 53.8            | 44.2       | 45.4   |
| moss-moon-003-base (16B)    | 27.4    | 24.5      | 27.0 | 29.1            | 27.2       | 26.9   |
| Ziya-LLaMA-13B-pretrain     | 30.2    | 22.7      | 27.7 | 34.4            | 32.0       | 28.9   |
| LLaMA-7B-hf                 | 27.1    | 25.9      | 27.1 | 26.8            | 27.9       | 26.3   |
| ChatGLM-6B                  | 34.5    | 23.1      | 30.4 | 39.6            | 37.4       | 34.5   |
| Falcon-7B                   | 25.8    | 24.3      | 25.8 | 26.0            | 25.8       | 25.6   |
| Open-LLaMA-v2-pretrain (7B) | 24.0    | 22.5      | 23.1 | 25.3            | 25.2       | 23.2   |
| TigerBot-7B-base            | 25.7    | 27.0      | 27.3 | 24.7            | 23.4       | 26.1   |
| Aquila-7B<sup>*</sup>       | 25.5    | 25.2      | 25.6 | 24.6            | 25.2       | 26.6   |
| BLOOM-7B                    | 22.8    | 20.2      | 21.8 | 23.3            | 23.9       | 23.3   |
| BLOOMZ-7B                   | 35.7    | 25.8      | 31.3 | 43.5            | 36.6       | 35.6   |
| **Baichuan-7B**             | 42.8    | 31.5      | 38.2 | 52.0            | 46.2       | 39.3   |

#### Gaokao

[Gaokao](https://github.com/ExpressAI/AI-Gaokao) 是一个以中国高考题作为评测大语言模型能力的数据集,用以评估模型的语言能力和逻辑推理能力。我们只保留了其中的单项选择题,并对所有模型进行统一5-shot测试。

以下是测试的结果。

| Model                   | Average   |
|-------------------------|-----------|
| Open-LLaMA-v2-pretrain  | 21.41     |
| Ziya-LLaMA-13B-pretrain | 23.17     |
| Falcon-7B               | 23.98     |
| TigerBot-7B-base        | 25.94     |
| LLaMA-7B                | 27.81     |
| ChatGLM-6B              | 21.41     |
| BLOOM-7B                | 26.96     |
| BLOOMZ-7B               | 28.72     |
| Aquila-7B<sup>*</sup>   | 24.39     |
| **Baichuan-7B**         | **36.24** |

#### AGIEval

[AGIEval](https://github.com/microsoft/AGIEval) 旨在评估模型的认知和解决问题相关的任务中的一般能力。我们只保留了其中的四选一单项选择题,随机划分后对所有模型进行了统一5-shot测试。

| Model                   | Average   |
|-------------------------|-----------|
| Open-LLaMA-v2-pretrain  | 23.49     |
| Ziya-LLaMA-13B-pretrain | 27.64     |
| Falcon-7B               | 27.18     |
| TigerBot-7B-base        | 25.19     |
| LLaMA-7B                | 28.17     |
| ChatGLM-6B              | 23.49     |
| BLOOM-7B                | 26.55     |
| BLOOMZ-7B               | 30.27     |
| Aquila-7B<sup>*</sup>   | 25.58     |
| **Baichuan-7B**         | **34.44** |

<sup>*</sup>其中Aquila模型来源于[智源官方网站](https://model.baai.ac.cn/model-detail/100098),仅做参考

### English Leaderboard

In addition to Chinese, we also tested the model's performance in English.

#### MMLU

[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the [open-source](https://github.com/hendrycks/test) evaluation scheme, and the final 5-shot results are as follows:

| Model                                  | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup>                   | 34.0       | 38.3            | 30.5 | 38.1  | 35.1    |
| Falcon-7B<sup>1</sup>                  | -          | -               | -    | -     | 35.0    |
| mpt-7B<sup>1</sup>                     | -          | -               | -    | -     | 35.6    |
| ChatGLM-6B<sup>0</sup>                 | 35.4       | 41.0            | 31.3 | 40.5  | 36.9    |
| BLOOM 7B<sup>0</sup>                   | 25.0       | 24.4            | 26.5 | 26.4  | 25.5    |
| BLOOMZ 7B<sup>0</sup>                  | 31.3       | 42.1            | 34.4 | 39.0  | 36.1    |
| moss-moon-003-base (16B)<sup>0</sup>   | 24.2       | 22.8            | 22.4 | 24.4  | 23.6    |
| moss-moon-003-sft (16B)<sup>0</sup>    | 30.5       | 33.8            | 29.3 | 34.4  | 31.9    |
| **Baichuan-7B<sup>0</sup>**            | 38.4       | 48.9            | 35.6 | 48.1  | 42.3    |

The superscript in the Model column indicates the source of the results.

```
0: reimplemented
1: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
```

## Our Group

![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)
kwaikeg/kagentlms_qwen_7b_mat
kwaikeg
2024-01-09T07:45:10Z
42
15
transformers
[ "transformers", "pytorch", "qwen", "feature-extraction", "text-generation", "custom_code", "en", "zh", "dataset:kwaikeg/KAgentInstruct", "dataset:kwaikeg/KAgentBench", "license:cc-by-nc-nd-4.0", "region:us" ]
text-generation
2023-11-17T06:24:12Z
---
license: cc-by-nc-nd-4.0
datasets:
- kwaikeg/KAgentInstruct
- kwaikeg/KAgentBench
language:
- en
- zh
pipeline_tag: text-generation
---

KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)) is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). The open-sourced content includes:

1. **KAgentSys-Lite**: An experimental Agent Loop implemented based on open-source search engines, browsers, time, calendar, weather, and other tools, which is only missing the memory mechanism and some search capabilities compared to the system in the paper.
2. **KAgentLMs**: A series of large language models with Agent capabilities such as planning, reflection, and tool-use, acquired through the Meta-agent tuning proposed in the paper.
3. **KAgentInstruct**: Fine-tuned data of instructions generated by the Meta-agent in the paper.
4. **KAgentBench**: Over 3,000 human-edited, automated evaluation data for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.

## User Guide

### Direct usage

A tutorial can be found at [QwenLM/Qwen](https://github.com/QwenLM/Qwen).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_qwen_7b_mat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "kwaikeg/kagentlms_qwen_7b_mat",
    device_map="auto",
    trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
```

### AgentLMs as service

#### Serving by [vLLM](https://github.com/vllm-project/vllm) (GPU)

We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):

```bash
pip install vllm
pip install "fschat[model_worker,webui]"
```

To deploy KAgentLMs, first start the controller in one terminal:

```bash
python -m fastchat.serve.controller
```

Second, use the following command in another terminal to deploy a single-GPU inference service:

```bash
python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code
```

where `$model_path` is the local path of the downloaded model. If the GPU does not support Bfloat16, you can add `--dtype half` to the command line.

Third, start the REST API server in a third terminal:

```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8888
```

Finally, you can use the curl command to invoke the model in the same format as the OpenAI API. Here's an example:

```bash
curl http://localhost:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```

#### Serving by [Llama.cpp](https://github.com/ggerganov/llama.cpp) (CPU)

llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc). The converted model can be found in [kwaikeg/kagentlms_qwen_7b_mat_gguf](https://huggingface.co/kwaikeg/kagentlms_qwen_7b_mat_gguf).
To install the server package and get started:

```bash
pip install "llama-cpp-python[server]"
python3 -m llama_cpp.server --model kagentlms_qwen_7b_mat_gguf/ggml-model-q4_0.gguf --chat_format chatml --port 8888
```

### Citation

```
@article{pan2023kwaiagents,
  author  = {Haojie Pan and Zepeng Zhai and Hao Yuan and Yaojia Lv and Ruiji Fu and Ming Liu and Zhongyuan Wang and Bing Qin},
  title   = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
  journal = {CoRR},
  volume  = {abs/2312.04889},
  year    = {2023}
}
```
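Both servers above expose OpenAI-compatible endpoints, so the curl example can equally be driven from Python. Below is a minimal sketch using the pre-1.0 `openai` client; the port and model name are taken from the examples above and should be adjusted to your deployment.

```python
import openai

openai.api_key = "EMPTY"  # local servers do not check the key
openai.api_base = "http://localhost:8888/v1"  # port from the server commands above

response = openai.ChatCompletion.create(
    model="kagentlms_qwen_7b_mat",
    messages=[{"role": "user", "content": "Who is Andy Lau"}],
)
print(response["choices"][0]["message"]["content"])
```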
LI-ST/Mistral-7B-ko-v0.005
LI-ST
2024-01-09T07:36:16Z
39
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T10:22:04Z
---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko</h1></p>

base model: Open-Orca/Mistral-7B-OpenOrca

=================================================
<BR>
This model is a temporary model for testing.
<BR>
We will be deleting it soon.
<BR>
=================================================
zxhezexin/openlrm-large-obj-1.0
zxhezexin
2024-01-09T07:32:56Z
7
5
transformers
[ "transformers", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
image-to-3d
2024-01-09T05:59:51Z
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---

# Model Card for OpenLRM

## Overview

This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).

## Model Details

| Model | Training Data | Layers | Feat. Dim | Trip. Dim. | Render Res. | Link |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| openlrm-small-obj-1.0 | Objaverse | 12 | 768 | 32 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-small-obj-1.0) |
| openlrm-base-obj-1.0 | Objaverse | 12 | 1024 | 40 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-base-obj-1.0) |
| openlrm-large-obj-1.0 | Objaverse | 16 | 1024 | 80 | 384 | [HF](https://huggingface.co/zxhezexin/openlrm-large-obj-1.0) |
| openlrm-small | Objaverse + MVImgNet | 12 | 768 | 32 | 192 | To be released |
| openlrm-base | Objaverse + MVImgNet | 12 | 1024 | 40 | 192 | To be released |
| openlrm-large | Objaverse + MVImgNet | 16 | 1024 | 80 | 384 | To be released |

## Differences from the Original Paper

- We do not use the deferred back-propagation technique in the original paper.
- The triplane decoder contains 4 layers in our implementation.

## License

- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.

## Disclaimer

This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.

### Ethical Considerations

- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.

### Usage Considerations

- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.

---

*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
zxhezexin/openlrm-small-obj-1.0
zxhezexin
2024-01-09T07:32:35Z
41
6
transformers
[ "transformers", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
image-to-3d
2024-01-09T05:56:48Z
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---

# Model Card for OpenLRM

## Overview

This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).

## Model Details

| Model | Training Data | Layers | Feat. Dim | Trip. Dim. | Render Res. | Link |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| openlrm-small-obj-1.0 | Objaverse | 12 | 768 | 32 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-small-obj-1.0) |
| openlrm-base-obj-1.0 | Objaverse | 12 | 1024 | 40 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-base-obj-1.0) |
| openlrm-large-obj-1.0 | Objaverse | 16 | 1024 | 80 | 384 | [HF](https://huggingface.co/zxhezexin/openlrm-large-obj-1.0) |
| openlrm-small | Objaverse + MVImgNet | 12 | 768 | 32 | 192 | To be released |
| openlrm-base | Objaverse + MVImgNet | 12 | 1024 | 40 | 192 | To be released |
| openlrm-large | Objaverse + MVImgNet | 16 | 1024 | 80 | 384 | To be released |

## Differences from the Original Paper

- We do not use the deferred back-propagation technique in the original paper.
- The triplane decoder contains 4 layers in our implementation.

## License

- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.

## Disclaimer

This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.

### Ethical Considerations

- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.

### Usage Considerations

- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.

---

*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
ProjectsbyGaurav/donut-base-gaurav-receipt-epoch-5
ProjectsbyGaurav
2024-01-09T07:28:52Z
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:podbilabs/wildreceipt-donut", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-01-09T05:50:46Z
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- podbilabs/wildreceipt-donut
model-index:
- name: donut-base-gaurav-receipt-epoch-5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# donut-base-gaurav-receipt-epoch-5

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
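Since the card above leaves usage undocumented, here is a minimal inference sketch using the standard Donut loading path in transformers. The input filename is hypothetical, and the task prompt used during fine-tuning is not documented in this card, so the plain `generate` call below is an assumption to adapt as needed.

```python
# Minimal sketch, assuming standard Donut checkpoints in this repo.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "ProjectsbyGaurav/donut-base-gaurav-receipt-epoch-5"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")  # hypothetical input file
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```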