| Column | Type | Range / Values |
|:-------|:-----|:---------------|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-28 06:27:35 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 500 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-28 06:24:42 |
| card | string | length 11 – 1.01M |

Each record below follows this column order; the `card` field contains the full model card text.
zaanind/gpt2_nmt_tune | zaanind | 2024-10-28T14:59:57Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T11:04:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
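The card itself provides no snippet; the following is only a minimal sketch using the standard `transformers` text-generation pipeline (the model id is taken from the repository name, and the prompt and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# Sketch: load the GPT-2-based checkpoint with the generic text-generation pipeline.
generator = pipeline("text-generation", model="zaanind/gpt2_nmt_tune")

# Illustrative prompt and generation settings; adjust for your use case.
result = generator("Hello, how are you?", max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```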
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwenslerp1-7B-i1-GGUF | mradermacher | 2024-10-28T14:59:05Z | 35 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Qwenslerp1-7B",
"base_model:quantized:allknowingroger/Qwenslerp1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-28T13:48:38Z | ---
base_model: allknowingroger/Qwenslerp1-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwenslerp1-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwenslerp1-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
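If you want a quick way to try one of these files from Python, one option (not covered in this card) is the `llama-cpp-python` bindings; the repo and file names are taken from the table that follows, while the library choice and settings are assumptions:
```python
# Sketch only: run one of the imatrix quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwenslerp1-7B-i1-GGUF",
    filename="Qwenslerp1-7B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
    n_ctx=4096,
)

out = llm("Question: What is a SLERP merge?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```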
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenslerp1-7B-i1-GGUF/resolve/main/Qwenslerp1-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF | mradermacher | 2024-10-28T14:58:09Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wangrongsheng/LongWriter-llama3.1-8b-abliterated",
"base_model:quantized:wangrongsheng/LongWriter-llama3.1-8b-abliterated",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T12:46:04Z | ---
base_model: wangrongsheng/LongWriter-llama3.1-8b-abliterated
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wangrongsheng/LongWriter-llama3.1-8b-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
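If you only need to fetch a single file, a minimal sketch using `huggingface_hub` (the file name comes from the table below; picking Q4_K_M is just an example) is:
```python
from huggingface_hub import hf_hub_download

# Download one quant; the returned local path can be passed to llama.cpp
# or any other GGUF-compatible runtime.
gguf_path = hf_hub_download(
    repo_id="mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF",
    filename="LongWriter-llama3.1-8b-abliterated.Q4_K_M.gguf",
)
print(gguf_path)
```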
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LongWriter-llama3.1-8b-abliterated-GGUF/resolve/main/LongWriter-llama3.1-8b-abliterated.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mav23/dolly-v2-12b-GGUF | mav23 | 2024-10-28T14:52:53Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"region:us"
] | null | 2024-10-28T13:23:26Z | ---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-12b Model Card
## Summary
Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform,
is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but it does exhibit surprisingly
high-quality instruction-following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these smaller model sizes:
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instruction with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpora.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpora.
- **The Pile**: the pre-training corpus of the underlying Pythia model contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
when it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering (a quick check of this computation follows the table). As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
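As a quick sanity check, the `gmean` column is simply the geometric mean of the seven task scores; the sketch below recomputes it for `databricks/dolly-v2-12b` using the values from the table above:
```python
import math

# Scores for databricks/dolly-v2-12b, in the column order of the table above.
scores = [0.408, 0.63931, 0.616417, 0.707927, 0.388225, 0.757889, 0.568196]

gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(gmean)  # ~0.5678, matching the gmean column
```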
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking! |
g-assismoraes/deberta-large-semeval25_EN08_fold4 | g-assismoraes | 2024-10-28T14:52:31Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T14:37:49Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: deberta-large-semeval25_EN08_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-large-semeval25_EN08_fold4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set (a short sketch of how the sample/macro/micro/weighted averages are computed follows the list):
- Loss: 8.0968
- Precision Samples: 0.1277
- Recall Samples: 0.8179
- F1 Samples: 0.2131
- Precision Macro: 0.3800
- Recall Macro: 0.7101
- F1 Macro: 0.2707
- Precision Micro: 0.1256
- Recall Micro: 0.7889
- F1 Micro: 0.2167
- Precision Weighted: 0.2111
- Recall Weighted: 0.7889
- F1 Weighted: 0.2494
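The Samples/Macro/Micro/Weighted variants above are the standard multi-label averaging modes; the sketch below (toy labels, not the actual evaluation data, and assuming scikit-learn) shows how such averages are computed:
```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Toy multi-label ground truth and predictions: 3 samples, 4 labels.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]])

for average in ("samples", "macro", "micro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=average, zero_division=0
    )
    print(f"{average:>8}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```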
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch using these values follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
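A minimal sketch of the corresponding `TrainingArguments` (the output directory and every unlisted setting are assumptions; the values below mirror the list above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-large-semeval25_EN08_fold4",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```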
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.3595 | 1.0 | 73 | 10.0150 | 0.1187 | 0.4154 | 0.1640 | 0.9017 | 0.3154 | 0.2532 | 0.1115 | 0.3167 | 0.1650 | 0.6711 | 0.3167 | 0.0826 |
| 9.1971 | 2.0 | 146 | 9.4160 | 0.1191 | 0.6148 | 0.1778 | 0.7693 | 0.4444 | 0.2773 | 0.1049 | 0.5417 | 0.1758 | 0.4587 | 0.5417 | 0.1310 |
| 7.9996 | 3.0 | 219 | 8.8114 | 0.1176 | 0.7117 | 0.1924 | 0.5851 | 0.5468 | 0.2806 | 0.1088 | 0.6667 | 0.1871 | 0.3031 | 0.6667 | 0.1706 |
| 7.463 | 4.0 | 292 | 8.5503 | 0.1224 | 0.7819 | 0.1931 | 0.5197 | 0.6480 | 0.2805 | 0.1125 | 0.7472 | 0.1955 | 0.2758 | 0.7472 | 0.1944 |
| 8.4991 | 5.0 | 365 | 8.3932 | 0.1203 | 0.7938 | 0.2006 | 0.4699 | 0.6469 | 0.2725 | 0.1138 | 0.7472 | 0.1976 | 0.2545 | 0.7472 | 0.2056 |
| 5.8266 | 6.0 | 438 | 8.2974 | 0.1222 | 0.8157 | 0.2042 | 0.4218 | 0.6797 | 0.2494 | 0.1148 | 0.7778 | 0.2001 | 0.2412 | 0.7778 | 0.2214 |
| 6.4555 | 7.0 | 511 | 8.2044 | 0.1241 | 0.7945 | 0.2076 | 0.3889 | 0.6770 | 0.2569 | 0.1224 | 0.7667 | 0.2111 | 0.2286 | 0.7667 | 0.2450 |
| 6.1701 | 8.0 | 584 | 8.2297 | 0.1285 | 0.8057 | 0.2131 | 0.3902 | 0.7018 | 0.2765 | 0.1267 | 0.7722 | 0.2176 | 0.2159 | 0.7722 | 0.2478 |
| 6.2618 | 9.0 | 657 | 8.1061 | 0.1281 | 0.8229 | 0.2137 | 0.3794 | 0.7040 | 0.2694 | 0.1243 | 0.7833 | 0.2145 | 0.2090 | 0.7833 | 0.2462 |
| 6.6155 | 10.0 | 730 | 8.0968 | 0.1277 | 0.8179 | 0.2131 | 0.3800 | 0.7101 | 0.2707 | 0.1256 | 0.7889 | 0.2167 | 0.2111 | 0.7889 | 0.2494 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
Cloyne/vietnamese-sbert | Cloyne | 2024-10-28T14:49:54Z | 56 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:120210",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:keepitreal/vietnamese-sbert",
"base_model:finetune:keepitreal/vietnamese-sbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-28T14:49:39Z | ---
base_model: keepitreal/vietnamese-sbert
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:120210
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Chα»§ tα»ch Ủy ban nhΓ’n dΓ’n xΓ£ cΓ³ quyα»n ra quyαΊΏt Δα»nh cΖ°α»‘ng chαΊΏ thΓ‘o
dα»‘ cΓ΄ng trΓ¬nh xΓ’y dα»±ng trΓͺn ΔαΊ₯t nΓ΄ng nghiα»p khi chΖ°a chuyα»n mα»₯c ΔΓch sα» dα»₯ng ΔαΊ₯t
hay khΓ΄ng?
sentences:
- 'Δα»i tượng, Δiα»u kiα»n kΓ©o dΓ i tuα»i phα»₯c vα»₯ tαΊ‘i ngΕ©
1. Δα»i tượng:
a) QuΓ’n nhΓ’n chuyΓͺn nghiα»p cΓ³ trΓ¬nh Δα» cao ΔαΊ³ng trα» lΓͺn Δang ΔαΊ£m nhiα»m cΓ‘c chα»©c
danh: Kα»Ή thuαΊt viΓͺn, NhΓ’n viΓͺn Kα»Ή thuαΊt, HuαΊ₯n luyα»n viΓͺn, Nghα» sΔ©, NhαΊ‘c sΔ©, Diα»
n
viΓͺn lΓ m viα»c ΔΓΊng chuyΓͺn ngΓ nh ΔΓ o tαΊ‘o α» cΓ‘c cΖ‘ sα» nghiΓͺn cα»©u, nhΓ trΖ°α»ng, bα»nh
viα»n, trung tΓ’m thα» dα»₯c thα» thao, ΔoΓ n nghα» thuαΊt, nhΓ mΓ‘y, doanh nghiα»p quα»c
phΓ²ng; ΔΖ‘n vα» ΔΓ³ng quΓ’n α» Δα»a bΓ n vΓΉng sΓ’u, vΓΉng xa, biΓͺn giα»i, hαΊ£i ΔαΊ£o.
b) QuΓ’n nhΓ’n chuyΓͺn nghiα»p Δang lΓ m viα»c thuα»c cΓ‘c chuyΓͺn ngΓ nh hαΊΉp Δược ΔΓ o tαΊ‘o
cΓ΄ng phu hoαΊ·c chuyΓͺn ngΓ nh QuΓ’n Δα»i chΖ°a ΔΓ o tαΊ‘o Δược; thợ bαΊc cao.
c) QuΓ’n nhΓ’n chuyΓͺn nghiα»p Δang ΔαΊ£m nhiα»m chα»©c vα»₯ chα» huy, quαΊ£n lΓ½ α» cΓ‘c nhΓ mΓ‘y,
doanh nghiα»p quα»c phΓ²ng.
d) QuΓ’n nhΓ’n chuyΓͺn nghiα»p khΓ΄ng thuα»c Δα»i tượng quy Δα»nh tαΊ‘i Δiα»m a, Δiα»m b,
Δiα»m c khoαΊ£n nΓ y do Bα» trΖ°α»ng Bα» Quα»c phΓ²ng quyαΊΏt Δα»nh.
2. Δiα»u kiα»n:
QuΓ’n nhΓ’n chuyΓͺn nghiα»p thuα»c Δα»i tượng quy Δα»nh tαΊ‘i khoαΊ£n 1 Δiα»u nΓ y Δược kΓ©o
dΓ i tuα»i phα»₯c vα»₯ tαΊ‘i ngΕ© khi cΓ³ Δα»§ cΓ‘c Δiα»u kiα»n sau:
a) ΔΖ‘n vα» cΓ³ biΓͺn chαΊΏ vΓ nhu cαΊ§u sα» dα»₯ng;
b) HαΊΏt hαΊ‘n tuα»i phα»₯c vα»₯ tαΊ‘i ngΕ© cao nhαΊ₯t theo cαΊ₯p bαΊc quΓ’n hΓ m quy Δα»nh tαΊ‘i khoαΊ£n
2 Δiα»u 17 LuαΊt QuΓ’n nhΓ’n chuyΓͺn nghiα»p, cΓ΄ng nhΓ’n vΓ viΓͺn chα»©c quα»c phΓ²ng; chΖ°a
cΓ³ ngΖ°α»i thay thαΊΏ; tα»± nguyα»n tiαΊΏp tα»₯c phα»₯c vα»₯ tαΊ‘i ngΕ©;
c) CΓ³ Δα»§ phαΊ©m chαΊ₯t chΓnh trα», ΔαΊ‘o Δα»©c, sα»©c khα»e Δα» hoΓ n thΓ nh nhiα»m vα»₯ Δược giao;
d) CΓ³ trΓ¬nh Δα» chuyΓͺn mΓ΄n kα»Ή thuαΊt, nghiα»p vα»₯ giα»i; tay nghα» cao; chαΊ₯t lượng,
hiα»u quαΊ£ cΓ΄ng tΓ‘c tα»t.'
- 'Thi hΓ nh quyαΊΏt Δα»nh cΖ°α»‘ng chαΊΏ
1. NgΖ°α»i ra quyαΊΏt Δα»nh cΖ°α»‘ng chαΊΏ cΓ³ trΓ‘ch nhiα»m gα»i ngay quyαΊΏt Δα»nh cΖ°α»‘ng chαΊΏ
cho cΓ‘c cΓ‘ nhΓ’n, tα» chα»©c liΓͺn quan vΓ tα» chα»©c thα»±c hiα»n viα»c cΖ°α»‘ng chαΊΏ thi hΓ nh
quyαΊΏt Δα»nh xα» phαΊ‘t cα»§a mΓ¬nh vΓ cα»§a cαΊ₯p dΖ°α»i.
..."'
- 'TrΓ¬nh tα»±, thα»§ tα»₯c ΔΔng kΓ½ tΓ i khoαΊ£n Δα»nh danh Δiα»n tα» Δα»i vα»i cΓ΄ng dΓ’n Viα»t Nam
1. ΔΔng kΓ½ tΓ i khoαΊ£n Δα»nh danh Δiα»n tα» mα»©c Δα» 1 qua α»©ng dα»₯ng VNelD Δα»i vα»i cΓ΄ng
dΓ’n ΔΓ£ cΓ³ thαΊ» CΔn cΖ°α»c cΓ΄ng dΓ’n gαΊ―n chΓp Δiα»n tα»
a) CΓ΄ng dΓ’n sα» dα»₯ng thiαΊΏt bα» di Δα»ng tαΊ£i vΓ cΓ i ΔαΊ·t α»©ng dα»₯ng VNelD.
b) CΓ΄ng dΓ’n sα» dα»₯ng α»©ng dα»₯ng VNelD Δα» nhαΊp thΓ΄ng tin vα» sα» Δα»nh danh cΓ‘ nhΓ’n vΓ
sα» Δiα»n thoαΊ‘i hoαΊ·c Δα»a chα» thΖ° Δiα»n tα»; cung cαΊ₯p cΓ‘c thΓ΄ng tin theo hΖ°α»ng dαΊ«n
trΓͺn α»©ng dα»₯ng VNelD; thu nhαΊn αΊ£nh chΓ’n dung bαΊ±ng thiαΊΏt bα» di Δα»ng vΓ gα»i yΓͺu cαΊ§u
Δα» nghα» cαΊ₯p tΓ i khoαΊ£n Δα»nh danh Δiα»n tα» tα»i cΖ‘ quan quαΊ£n lΓ½ Δα»nh danh vΓ xΓ‘c thα»±c
Δiα»n tα» qua α»©ng dα»₯ng VNelD.
c) CΖ‘ quan quαΊ£n lΓ½ Δα»nh danh Δiα»n tα» thΓ΄ng bΓ‘o kαΊΏt quαΊ£ ΔΔng kΓ½ tΓ i khoαΊ£n qua α»©ng
dα»₯ng VNelD hoαΊ·c tin nhαΊ―n SMS hoαΊ·c Δα»a chα» thΖ° Δiα»n tα».
2. ΔΔng kΓ½ tΓ i khoαΊ£n Δα»nh danh Δiα»n tα» mα»©c Δα» 2
a) Δα»i vα»i cΓ΄ng dΓ’n ΔΓ£ Δược cαΊ₯p thαΊ» CΔn cΖ°α»c cΓ΄ng dΓ’n gαΊ―n chΓp Δiα»n tα»:
CΓ΄ng dΓ’n ΔαΊΏn CΓ΄ng an xΓ£, phΖ°α»ng, thα» trαΊ₯n hoαΊ·c nΖ‘i lΓ m thα»§ tα»₯c cαΊ₯p thαΊ» CΔn cΖ°α»c
cΓ΄ng dΓ’n Δα» lΓ m thα»§ tα»₯c cαΊ₯p tΓ i khoαΊ£n Δα»nh danh Δiα»n tα». CΓ΄ng dΓ’n xuαΊ₯t trΓ¬nh thαΊ»
CΔn cΖ°α»c cΓ΄ng dΓ’n gαΊ―n chΓp Δiα»n tα», cung cαΊ₯p thΓ΄ng tin vα» sα» Δiα»n thoαΊ‘i hoαΊ·c Δα»a
chα» thΖ° Δiα»n tα» vΓ Δα» nghα» bα» sung thΓ΄ng tin Δược tΓch hợp vΓ o tΓ i khoαΊ£n Δα»nh
danh Δiα»n tα».
CΓ‘n bα» tiαΊΏp nhαΊn nhαΊp thΓ΄ng tin cΓ΄ng dΓ’n cung cαΊ₯p vΓ o hα» thα»ng Δα»nh danh vΓ xΓ‘c
thα»±c Δiα»n tα»; chα»₯p αΊ£nh chΓ’n dung, thu nhαΊn vΓ’n tay cα»§a cΓ΄ng dΓ’n ΔαΊΏn lΓ m thα»§ tα»₯c
Δα» xΓ‘c thα»±c vα»i CΖ‘ sα» dα»― liα»u cΔn cΖ°α»c cΓ΄ng dΓ’n vΓ khαΊ³ng Δα»nh sα»± Δα»ng Γ½ ΔΔng kΓ½
tαΊ‘o lαΊp tΓ i khoαΊ£n Δα»nh danh Δiα»n tα».
CΖ‘ quan quαΊ£n lΓ½ Δα»nh danh Δiα»n tα» thΓ΄ng bΓ‘o kαΊΏt quαΊ£ ΔΔng kΓ½ tΓ i khoαΊ£n qua α»©ng
dα»₯ng VNelD hoαΊ·c tin nhαΊ―n SMS hoαΊ·c Δα»a chα» thΖ° Δiα»n tα».
b) CΖ‘ quan CΓ΄ng an tiαΊΏn hΓ nh cαΊ₯p tΓ i khoαΊ£n Δα»nh danh Δiα»n tα» mα»©c Δα» 2 cΓΉng vα»i
cαΊ₯p thαΊ» CΔn cΖ°α»c cΓ΄ng dΓ’n vα»i trΖ°α»ng hợp cΓ΄ng dΓ’n chΖ°a Δược cαΊ₯p CΔn cΖ°α»c cΓ΄ng
dΓ’n gαΊ―n chΓp Δiα»n tα».'
- source_sentence: Mα»©c hΖ°α»ng chαΊΏ Δα» thai sαΊ£n Δα»i vα»i lao Δα»ng nam lΓ ngΖ°α»i nΖ°α»c ngoΓ i
Δược phΓ‘p luαΊt quy Δα»nh nhΖ° thαΊΏ nΓ o?
sentences:
- '"Δiα»u 21. ThΓ΄ng bΓ‘o kαΊΏt quαΊ£ vΓ xΓ‘c nhαΊn nhαΊp hα»c
1. CΖ‘ sα» ΔΓ o tαΊ‘o gα»i giαΊ₯y bΓ‘o trΓΊng tuyα»n cho nhα»―ng thΓ sinh trΓΊng tuyα»n, trong
ΔΓ³ ghi rΓ΅ nhα»―ng thα»§ tα»₯c cαΊ§n thiαΊΏt Δα»i vα»i thΓ sinh khi nhαΊp hα»c vΓ phΖ°Ζ‘ng thα»©c
nhαΊp hα»c cα»§a thΓ sinh.
2. ThΓ sinh xΓ‘c nhαΊn nhαΊp hα»c bαΊ±ng hΓ¬nh thα»©c trα»±c tuyαΊΏn trΓͺn hα» thα»ng, trΖ°α»c khi
nhαΊp hα»c tαΊ‘i cΖ‘ sα» ΔΓ o tαΊ‘o.
3. Δα»i vα»i nhα»―ng thΓ sinh khΓ΄ng xΓ‘c nhαΊn nhαΊp hα»c trong thα»i hαΊ‘n quy Δα»nh:
a) NαΊΏu khΓ΄ng cΓ³ lΓ½ do chΓnh ΔΓ‘ng thΓ¬ coi nhΖ° thΓ sinh tα»« chα»i nhαΊp hα»c vΓ cΖ‘ sα»
ΔΓ o tαΊ‘o cΓ³ quyα»n khΓ΄ng tiαΊΏp nhαΊn;
b) NαΊΏu do α»m Δau, tai nαΊ‘n, cΓ³ giαΊ₯y xΓ‘c nhαΊn cα»§a bα»nh viα»n quαΊn, huyα»n trα» lΓͺn
hoαΊ·c do thiΓͺn tai cΓ³ xΓ‘c nhαΊn cα»§a UBND quαΊn, huyα»n trα» lΓͺn, cΖ‘ sα» ΔΓ o tαΊ‘o xem
xΓ©t quyαΊΏt Δα»nh tiαΊΏp nhαΊn thΓ sinh vΓ o hα»c hoαΊ·c bαΊ£o lΖ°u kαΊΏt quαΊ£ tuyα»n sinh Δα» thΓ
sinh vΓ o hα»c sau;
c) NαΊΏu do sai sΓ³t, nhαΊ§m lαΊ«n cα»§a cΓ‘n bα» thα»±c hiα»n cΓ΄ng tΓ‘c tuyα»n sinh hoαΊ·c cΓ‘ nhΓ’n
thΓ sinh gΓ’y ra, cΖ‘ sα» ΔΓ o tαΊ‘o chα»§ Δα»ng phα»i hợp vα»i cΓ‘c cΓ‘ nhΓ’n, tα» chα»©c liΓͺn
quan xem xΓ©t cΓ‘c minh chα»©ng vΓ quyαΊΏt Δα»nh viα»c tiαΊΏp nhαΊn thΓ sinh vΓ o hα»c hoαΊ·c
bαΊ£o lΖ°u kαΊΏt quαΊ£ tuyα»n sinh Δα» thΓ sinh vΓ o hα»c sau.
4. ThΓ sinh ΔΓ£ xΓ‘c nhαΊn nhαΊp hα»c tαΊ‘i mα»t cΖ‘ sα» ΔΓ o tαΊ‘o khΓ΄ng Δược tham gia xΓ©t
tuyα»n α» nΖ‘i khΓ‘c hoαΊ·c α» cΓ‘c Δợt xΓ©t tuyα»n bα» sung, trα»« trΖ°α»ng hợp Δược cΖ‘ sα» ΔΓ o
tαΊ‘o cho phΓ©p."'
- 'Tα» chα»©c, nhiα»m vα»₯, quyα»n hαΊ‘n cα»§a Ban Chα» huy
...
2. Nhiα»m vα»₯, quyα»n hαΊ‘n cα»§a Ban Chα» huy:
a) Chα» ΔαΊ‘o xΓ’y dα»±ng, ban hΓ nh quy Δα»nh vα» cΓ΄ng tΓ‘c bαΊ£o ΔαΊ£m an toΓ n PCCC vΓ CNCH
tαΊ‘i Trα»₯ sα» cΖ‘ quan Bα» TΖ° phΓ‘p.
b) HΖ°α»ng dαΊ«n, phα»i hợp vα»i cΓ‘c ΔΖ‘n vα» thuα»c Bα» vΓ chα» ΔαΊ‘o Δα»i PCCC vΓ CNCH cΖ‘
sα» tα» chα»©c tuyΓͺn truyα»n, bα»i dΖ°α»‘ng nghiα»p vα»₯ PCCC vΓ CNCH.
c) Chα» ΔαΊ‘o Δα»i PCCC vΓ CNCH cΖ‘ sα» tαΊ‘i Trα»₯ sα» cΖ‘ quan Bα» TΖ° phΓ‘p xΓ’y dα»±ng, trΓ¬nh
cαΊ₯p cΓ³ thαΊ©m quyα»n phΓͺ duyα»t vΓ tα» chα»©c thα»±c tαΊp phΖ°Ζ‘ng Γ‘n PCCC, phΖ°Ζ‘ng Γ‘n CNCH.
d) Chα» ΔαΊ‘o Δα»i PCCC vΓ CNCH cΖ‘ sα» tαΊ‘i Trα»₯ sα» cΖ‘ quan Bα» TΖ° phΓ‘p quαΊ£n lΓ½ cΓ‘c trang
thiαΊΏt bα» PCCC vΓ CNCH.
Δ) Chα» ΔαΊ‘o chα»―a chΓ‘y, CNCH khi xαΊ£y ra chΓ‘y, sα»± cα», tai nαΊ‘n tαΊ‘i Trα»₯ sα» cΖ‘ quan
Bα» TΖ° phΓ‘p.
e) Chα» ΔαΊ‘o viα»c tα» chα»©c lαΊp vΓ lΖ°u giα»― hα» sΖ‘ quαΊ£n lΓ½, theo dΓ΅i hoαΊ‘t Δα»ng PCCC,
CNCH tαΊ‘i Trα»₯ sα» cΖ‘ quan Bα» TΖ° phΓ‘p.
g) Chα» ΔαΊ‘o viα»c sΖ‘ kαΊΏt, tα»ng kαΊΏt cΓ‘c hoαΊ‘t Δα»ng vα» PCCC vΓ CNCH cα»§a cΖ‘ quan; kiα»m
tra, ΔΓ΄n Δα»c viα»c chαΊ₯p hΓ nh cΓ‘c quy Δα»nh vα» PCCC vΓ CNCH.
h) Δα» xuαΊ₯t viα»c khen thΖ°α»ng, kα»· luαΊt cΓ‘c tαΊp thα», cΓ‘ nhΓ’n trong viα»c thα»±c hiα»n
cΓ΄ng tΓ‘c PCCC, CNCH.
i) Chα» ΔαΊ‘o Δα»i PCCC vΓ CNCH cΖ‘ sα» dα»± trΓΉ kinh phΓ cho cΓ‘c hoαΊ‘t Δα»ng PCCC vΓ CNCH
tαΊ‘i Trα»₯ sα» cΖ‘ quan Bα» TΖ° phΓ‘p.
k) Thα»±c hiα»n cΓ‘c nhiα»m vα»₯ khΓ‘c do Bα» trΖ°α»ng giao vΓ theo quy Δα»nh cα»§a phΓ‘p luαΊt.'
- 'Mα»©c hΖ°α»ng chαΊΏ Δα» thai sαΊ£n
...
b) Mα»©c hΖ°α»ng mα»t ngΓ y Δα»i vα»i trΖ°α»ng hợp quy Δα»nh tαΊ‘i Δiα»u 32 vΓ khoαΊ£n 2 Δiα»u
34 cα»§a LuαΊt nΓ y Δược tΓnh bαΊ±ng mα»©c hΖ°α»ng chαΊΏ Δα» thai sαΊ£n theo thΓ‘ng chia cho 24
ngΓ y.'
- source_sentence: Doanh nghiα»p Δược Γ‘p dα»₯ng chαΊΏ Δα» Ζ°u tiΓͺn khΓ΄ng cung cαΊ₯p bΓ‘o cΓ‘o
kiα»m toΓ‘n ΔΓΊng thα»i hαΊ‘n bα» phαΊ‘t bao nhiΓͺu tiα»n?
sentences:
- 'Thay Δα»i ThαΊ©m phΓ‘n, Hα»i thαΊ©m
1. ThαΊ©m phΓ‘n, Hα»i thαΊ©m phαΊ£i tα»« chα»i tham gia xΓ©t xα» hoαΊ·c bα» thay Δα»i khi thuα»c
mα»t trong cΓ‘c trΖ°α»ng hợp:
a) TrΖ°α»ng hợp quy Δα»nh tαΊ‘i Δiα»u 49 cα»§a Bα» luαΊt nΓ y;
b) Hα» cΓΉng trong mα»t Hα»i Δα»ng xΓ©t xα» vΓ lΓ ngΖ°α»i thΓ’n thΓch vα»i nhau;
c) ΔΓ£ tham gia xΓ©t xα» sΖ‘ thαΊ©m hoαΊ·c phΓΊc thαΊ©m hoαΊ·c tiαΊΏn hΓ nh tα» tα»₯ng vα»₯ Γ‘n ΔΓ³ vα»i
tΖ° cΓ‘ch lΓ Δiα»u tra viΓͺn, CΓ‘n bα» Δiα»u tra, Kiα»m sΓ‘t viΓͺn, Kiα»m tra viΓͺn, ThαΊ©m
tra viΓͺn, ThΖ° kΓ½ TΓ²a Γ‘n.
2. Viα»c thay Δα»i ThαΊ©m phΓ‘n, Hα»i thαΊ©m trΖ°α»c khi mα» phiΓͺn tΓ²a do ChΓ‘nh Γ‘n hoαΊ·c PhΓ³
ChΓ‘nh Γ‘n TΓ²a Γ‘n Δược phΓ’n cΓ΄ng giαΊ£i quyαΊΏt vα»₯ Γ‘n quyαΊΏt Δα»nh.
ThαΊ©m phΓ‘n bα» thay Δα»i lΓ ChΓ‘nh Γ‘n TΓ²a Γ‘n thΓ¬ do ChΓ‘nh Γ‘n TΓ²a Γ‘n trΓͺn mα»t cαΊ₯p quyαΊΏt
Δα»nh.
Viα»c thay Δα»i ThαΊ©m phΓ‘n, Hα»i thαΊ©m tαΊ‘i phiΓͺn tΓ²a do Hα»i Δα»ng xΓ©t xα» quyαΊΏt Δα»nh
trΖ°α»c khi bαΊ―t ΔαΊ§u xΓ©t hα»i bαΊ±ng cΓ‘ch biα»u quyαΊΏt tαΊ‘i phΓ²ng nghα» Γ‘n. Khi xem xΓ©t
thay Δα»i thΓ nh viΓͺn nΓ o thΓ¬ thΓ nh viΓͺn ΔΓ³ Δược trΓ¬nh bΓ y Γ½ kiαΊΏn cα»§a mΓ¬nh, Hα»i
Δα»ng quyαΊΏt Δα»nh theo Δa sα».
TrΖ°α»ng hợp phαΊ£i thay Δα»i ThαΊ©m phΓ‘n, Hα»i thαΊ©m tαΊ‘i phiΓͺn tΓ²a thΓ¬ Hα»i Δα»ng xΓ©t xα»
ra quyαΊΏt Δα»nh hoΓ£n phiΓͺn tΓ²a.'
- 'βΔiα»u 21. ChαΊ₯m dα»©t hΖ°α»ng trợ cαΊ₯p thαΊ₯t nghiα»p
1. CΓ‘c trΖ°α»ng hợp ngΖ°α»i lao Δα»ng Δang hΖ°α»ng trợ cαΊ₯p thαΊ₯t nghiα»p bα» chαΊ₯m dα»©t hΖ°α»ng
trợ cαΊ₯p thαΊ₯t nghiα»p Δược quy Δα»nh nhΖ° sau:
e) Trong thα»i gian hΖ°α»ng trợ cαΊ₯p thαΊ₯t nghiα»p, 03 thΓ‘ng liΓͺn tα»₯c khΓ΄ng thα»±c hiα»n
thΓ΄ng bΓ‘o hαΊ±ng thΓ‘ng vα» viα»c tΓ¬m kiαΊΏm viα»c lΓ m vα»i trung tΓ’m dα»ch vα»₯ viα»c lΓ m
theo quy Δα»nh
NgΓ y mΓ ngΖ°α»i lao Δα»ng Δược xΓ‘c Δα»nh bα» chαΊ₯m dα»©t hΖ°α»ng trợ cαΊ₯p thαΊ₯t nghiα»p lΓ
ngΓ y kαΊΏt thΓΊc cα»§a thα»i hαΊ‘n thΓ΄ng bΓ‘o tΓ¬m kiαΊΏm viα»c lΓ m cα»§a thΓ‘ng thα»© 3 liΓͺn tα»₯c
mΓ ngΖ°α»i lao Δα»ng khΓ΄ng thα»±c hiα»n thΓ΄ng bΓ‘o hαΊ±ng thΓ‘ng vα» viα»c tΓ¬m kiαΊΏm viα»c lΓ m."'
- 'Vi phαΊ‘m quy Δα»nh vα» thα»i hαΊ‘n lΓ m thα»§ tα»₯c hαΊ£i quan, nα»p hα» sΖ‘ thuαΊΏ
...
2. PhαΊ‘t tiα»n tα»« 1.000.000 Δα»ng ΔαΊΏn 2.000.000 Δα»ng Δα»i vα»i hΓ nh vi khΓ΄ng thα»±c hiα»n
ΔΓΊng thα»i hαΊ‘n quy Δα»nh thuα»c mα»t trong cΓ‘c trΖ°α»ng hợp sau:
a) Cung cαΊ₯p bΓ‘o cΓ‘o kiα»m toΓ‘n, bΓ‘o cΓ‘o tΓ i chΓnh cα»§a doanh nghiα»p Δược Γ‘p dα»₯ng
chαΊΏ Δα» Ζ°u tiΓͺn;
b) ThΓ΄ng bΓ‘o cho cΖ‘ quan hαΊ£i quan quyαΊΏt Δα»nh xα» lΓ½ vi phαΊ‘m phΓ‘p luαΊt vα» quαΊ£n lΓ½
thuαΊΏ, kαΊΏ toΓ‘n Δα»i vα»i doanh nghiα»p Δược Γ‘p dα»₯ng chαΊΏ Δα» Ζ°u tiΓͺn;
c) BΓ‘o cΓ‘o vα» lượng hΓ ng hΓ³a nhαΊp khαΊ©u phα»₯c vα»₯ xΓ’y dα»±ng nhΓ xΖ°α»ng, hΓ ng hΓ³a gα»i
kho bΓͺn ngoΓ i cα»§a doanh nghiα»p chαΊΏ xuαΊ₯t;
d) BΓ‘o cΓ‘o vα» lượng hΓ ng hΓ³a trung chuyα»n ΔΖ°a vΓ o, ΔΖ°a ra, cΓ²n lΖ°u tαΊ‘i cαΊ£ng;
Δ) BΓ‘o cΓ‘o thα»ng kΓͺ thΓ΄ng quan hΓ ng bΖ°u chΓnh ΔΖ°a vΓ o Viα»t Nam Δα» chuyα»n tiαΊΏp
Δi quα»c tαΊΏ.
...'
- source_sentence: TΓ i chΓnh cα»§a Hα»i Kiα»m toΓ‘n viΓͺn hΓ nh nghα» Viα»t Nam Δược chi cho
nhα»―ng khoαΊ£n nΓ o?
sentences:
- 'GiαΊ£i thα» vΓ xα» lΓ½ tΓ i chΓnh khi giαΊ£i thα»
1. Khi xΓ©t thαΊ₯y hoαΊ‘t Δα»ng cα»§a Hα»i khΓ΄ng cΓ³ hiα»u quαΊ£, khΓ΄ng mang lαΊ‘i lợi Γch cho
Hα»i viΓͺn hoαΊ·c gΓ’y phiα»n hΓ , cαΊ£n trα» cho Hα»i viΓͺn thΓ¬ BCH Hα»i quyαΊΏt Δα»nh triα»u
tαΊp ΔαΊ‘i hα»i Δα» bΓ n biα»n phΓ‘p cα»§ng cα» tα» chα»©c hoαΊ·c giαΊ£i thα» Hα»i. NαΊΏu giαΊ£i thα» Hα»i
thΓ¬ do ΔαΊ‘i hα»i ΔαΊ‘i biα»u hoαΊ·c ΔαΊ‘i hα»i toΓ n quα»c cα»§a Hα»i thΓ΄ng qua vΓ Δα» nghα» cΖ‘
quan NhΓ nΖ°α»c cΓ³ thαΊ©m quyα»n xem xΓ©t, quyαΊΏt Δα»nh.
2. Khi Hα»i bα» giαΊ£i thα», Ban ThΖ°α»ng trα»±c vΓ Ban Kiα»m tra cα»§a Hα»i phαΊ£i tiαΊΏn hΓ nh
kiα»m kΓͺ tΓ i sαΊ£n, kiα»m quα»Ή vΓ bΓ‘o cΓ‘o BCH Hα»i quyαΊΏt Δα»nh viα»c xα» lΓ½ tΓ i sαΊ£n, tiα»n
tα»n quα»Ή vΓ tiαΊΏn hΓ nh thα»§ tα»₯c giαΊ£i thα» theo quy Δα»nh cα»§a phΓ‘p luαΊt.'
- '"Δiα»u 14. Miα»
n trα»« Δα»i vα»i thα»a thuαΊn hαΊ‘n chαΊΏ cαΊ‘nh tranh bα» cαΊ₯m
1. Thα»a thuαΊn hαΊ‘n chαΊΏ cαΊ‘nh tranh quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 1, 2, 3, 7, 8, 9, 10 vΓ
11 Δiα»u 11 bα» cαΊ₯m theo quy Δα»nh tαΊ‘i Δiα»u 12 cα»§a LuαΊt nΓ y Δược miα»
n trα»« cΓ³ thα»i
hαΊ‘n nαΊΏu cΓ³ lợi cho ngΖ°α»i tiΓͺu dΓΉng vΓ ΔΓ‘p α»©ng mα»t trong cΓ‘c Δiα»u kiα»n sau ΔΓ’y:
a) TΓ‘c Δα»ng thΓΊc ΔαΊ©y tiαΊΏn bα» kα»Ή thuαΊt, cΓ΄ng nghα», nΓ’ng cao chαΊ₯t lượng hΓ ng hΓ³a,
dα»ch vα»₯;
b) TΔng cΖ°α»ng sα»©c cαΊ‘nh tranh cα»§a doanh nghiα»p Viα»t Nam trΓͺn thα» trΖ°α»ng quα»c tαΊΏ;
c) ThΓΊc ΔαΊ©y viα»c Γ‘p dα»₯ng thα»ng nhαΊ₯t tiΓͺu chuαΊ©n chαΊ₯t lượng, Δα»nh mα»©c kα»Ή thuαΊt cα»§a
chα»§ng loαΊ‘i sαΊ£n phαΊ©m;
d) Thα»ng nhαΊ₯t cΓ‘c Δiα»u kiα»n thα»±c hiα»n hợp Δα»ng, giao hΓ ng, thanh toΓ‘n nhΖ°ng khΓ΄ng
liΓͺn quan ΔαΊΏn giΓ‘ vΓ cΓ‘c yαΊΏu tα» cα»§a giΓ‘.
2. Thα»a thuαΊn lao Δα»ng, thα»a thuαΊn hợp tΓ‘c trong cΓ‘c ngΓ nh, lΔ©nh vα»±c ΔαΊ·c thΓΉ Δược
thα»±c hiα»n theo quy Δα»nh cα»§a luαΊt khΓ‘c thΓ¬ thα»±c hiα»n theo quy Δα»nh cα»§a luαΊt ΔΓ³".'
- '"Δiα»u 2. Sα»a Δα»i, bα» sung mα»t sα» Δiα»u cα»§a Nghα» Δα»nh sα» 15/2019/NΔ-CP ngΓ y 01
thΓ‘ng 02 nΔm 2019 cα»§a ChΓnh phα»§ quy Δα»nh chi tiαΊΏt mα»t sα» Δiα»u vΓ biα»n phΓ‘p thi
hΓ nh LuαΊt GiΓ‘o dα»₯c nghα» nghiα»p
...
12. Sα»a Δα»i, bα» sung Δiα»u 24 nhΖ° sau:
Δiα»u 24. ThαΊ©m quyα»n cαΊ₯p giαΊ₯y chα»©ng nhαΊn ΔΔng kΓ½ hoαΊ‘t Δα»ng liΓͺn kαΊΏt ΔΓ o tαΊ‘o vα»i
nΖ°α»c ngoΓ i
1. Tα»ng cα»₯c GiΓ‘o dα»₯c nghα» nghiα»p cαΊ₯p giαΊ₯y chα»©ng nhαΊn ΔΔng kΓ½ hoαΊ‘t Δα»ng liΓͺn kαΊΏt
ΔΓ o tαΊ‘o vα»i nΖ°α»c ngoΓ i Δα»i vα»i trΖ°α»ng cao ΔαΊ³ng.
2. Sα» Lao Δα»ng - ThΖ°Ζ‘ng binh vΓ XΓ£ hα»i nΖ‘i trΖ°α»ng trung cαΊ₯p, trung tΓ’m giΓ‘o dα»₯c
nghα» nghiα»p, trung tΓ’m giΓ‘o dα»₯c nghα» nghiα»p - giΓ‘o dα»₯c thΖ°α»ng xuyΓͺn vΓ doanh nghiα»p
tα» chα»©c hoαΊ‘t Δα»ng liΓͺn kαΊΏt ΔΓ o tαΊ‘o vα»i nΖ°α»c ngoΓ i cαΊ₯p giαΊ₯y chα»©ng nhαΊn ΔΔng kΓ½
hoαΊ‘t Δα»ng liΓͺn kαΊΏt ΔΓ o tαΊ‘o vα»i nΖ°α»c ngoΓ i Δα»i vα»i trΖ°α»ng trung cαΊ₯p, trung tΓ’m
giΓ‘o dα»₯c nghα» nghiα»p, trung tΓ’m giΓ‘o dα»₯c nghα» nghiα»p - giΓ‘o dα»₯c thΖ°α»ng xuyΓͺn vΓ
doanh nghiα»p."'
- source_sentence: NLΔ kΓ½ nhiα»u hợp Δα»ng lao Δα»ng thΓ¬ ΔΓ³ng BHYT nhΖ° thαΊΏ nΓ o?
sentences:
- 'Hα» sΖ‘, thα»§ tα»₯c xΓ‘c Δα»nh trΖ°α»ng hợp Δược bα»i thΖ°α»ng
[...]
3. Trong thα»i hαΊ‘n 05 ngΓ y lΓ m viα»c, kα» tα»« ngΓ y nhαΊn Δược ΔΖ‘n vΓ cΓ‘c giαΊ₯y tα» hợp
lα», nαΊΏu xΓ‘c Δα»nh yΓͺu cαΊ§u thuα»c trΓ‘ch nhiα»m giαΊ£i quyαΊΏt cα»§a mΓ¬nh thΓ¬ Sα» Y tαΊΏ phαΊ£i
thα»₯ lΓ½ vΓ thΓ΄ng bΓ‘o bαΊ±ng vΔn bαΊ£n vα» viα»c thα»₯ lΓ½ ΔΖ‘n cho ngΖ°α»i bα» thiα»t hαΊ‘i hoαΊ·c
thΓ’n nhΓ’n cα»§a ngΖ°α»i bα» thiα»t hαΊ‘i (sau ΔΓ’y gα»i tαΊ―t lΓ ngΖ°α»i bα» thiα»t hαΊ‘i). TrΖ°α»ng
hợp hα» sΖ‘ khΓ΄ng ΔαΊ§y Δα»§ thΓ¬ Sα» Y tαΊΏ cΓ³ vΔn bαΊ£n hΖ°α»ng dαΊ«n ngΖ°α»i bα» thiα»t hαΊ‘i bα»
sung.
4. Trong thα»i hαΊ‘n 15 ngΓ y, kα» tα»« ngΓ y nhαΊn Δược ΔΖ‘n yΓͺu cαΊ§u cα»§a ngΖ°α»i bα» thiα»t
hαΊ‘i, Sα» Y tαΊΏ phαΊ£i hoΓ n thΓ nh viα»c xΓ‘c Δα»nh nguyΓͺn nhΓ’n gΓ’y tai biαΊΏn, mα»©c Δα» tα»n
thΖ°Ζ‘ng vΓ thΓ΄ng bΓ‘o bαΊ±ng vΔn bαΊ£n cho ngΖ°α»i yΓͺu cαΊ§u Δα»ng thα»i bΓ‘o cΓ‘o Bα» Y tαΊΏ.'
- 'Chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n
1. Tα» chα»©c, cΓ‘ nhΓ’n nhαΊn chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n phαΊ£i cΓ³ Δα»§ Δiα»u
kiα»n Δα» Δược cαΊ₯p GiαΊ₯y phΓ©p thΔm dΓ² khoΓ‘ng sαΊ£n theo quy Δα»nh cα»§a LuαΊt nΓ y.
2. Viα»c chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n phαΊ£i Δược cΖ‘ quan quαΊ£n lΓ½ nhΓ nΖ°α»c
cΓ³ thαΊ©m quyα»n cαΊ₯p GiαΊ₯y phΓ©p thΔm dΓ² khoΓ‘ng sαΊ£n chαΊ₯p thuαΊn; trΖ°α»ng hợp Δược chαΊ₯p
thuαΊn, tα» chα»©c, cΓ‘ nhΓ’n nhαΊn chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n Δược cαΊ₯p GiαΊ₯y
phΓ©p thΔm dΓ² khoΓ‘ng sαΊ£n mα»i.
3. Tα» chα»©c, cΓ‘ nhΓ’n chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n ΔΓ£ thα»±c hiα»n Δược Γt
nhαΊ₯t 50% dα»± toΓ‘n cα»§a Δα» Γ‘n thΔm dΓ² khoΓ‘ng sαΊ£n.
4. ChΓnh phα»§ quy Δα»nh chi tiαΊΏt viα»c chuyα»n nhượng quyα»n thΔm dΓ² khoΓ‘ng sαΊ£n.'
- '"Sα»a Δα»i, bα» sung mα»t sα» Δiα»u cα»§a LuαΊt bαΊ£o hiα»m y tαΊΏ:
...
6. Sα»a Δα»i, bα» sung Δiα»u 12 nhΖ° sau:
βΔiα»u 12. Δα»i tượng tham gia bαΊ£o hiα»m y tαΊΏ
1. NhΓ³m do ngΖ°α»i lao Δα»ng vΓ ngΖ°α»i sα» dα»₯ng lao Δα»ng ΔΓ³ng, bao gα»m:
a) NgΖ°α»i lao Δα»ng lΓ m viα»c theo hợp Δα»ng lao Δα»ng khΓ΄ng xΓ‘c Δα»nh thα»i hαΊ‘n, hợp
Δα»ng lao Δα»ng cΓ³ thα»i hαΊ‘n tα»« Δα»§ 3 thΓ‘ng trα» lΓͺn; ngΖ°α»i lao Δα»ng lΓ ngΖ°α»i quαΊ£n
lΓ½ doanh nghiα»p hΖ°α»ng tiα»n lΖ°Ζ‘ng; cΓ‘n bα», cΓ΄ng chα»©c, viΓͺn chα»©c (sau ΔΓ’y gα»i chung
lΓ ngΖ°α»i lao Δα»ng);
b) NgΖ°α»i hoαΊ‘t Δα»ng khΓ΄ng chuyΓͺn trΓ‘ch α» xΓ£, phΖ°α»ng, thα» trαΊ₯n theo quy Δα»nh cα»§a
phΓ‘p luαΊt.=
...
4. NhΓ³m Δược ngΓ’n sΓ‘ch nhΓ nΖ°α»c hα» trợ mα»©c ΔΓ³ng, bao gα»m:
a) NgΖ°α»i thuα»c hα» gia ΔΓ¬nh cαΊn nghΓ¨o;
b) Hα»c sinh, sinh viΓͺn.
5. NhΓ³m tham gia bαΊ£o hiα»m y tαΊΏ theo hα» gia ΔΓ¬nh gα»m nhα»―ng ngΖ°α»i thuα»c hα» gia ΔΓ¬nh,
trα»« Δα»i tượng quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 1, 2, 3 vΓ 4 Δiα»u nΓ y.
6. ChΓnh phα»§ quy Δα»nh cΓ‘c Δα»i tượng khΓ‘c ngoΓ i cΓ‘c Δα»i tượng quy Δα»nh tαΊ‘i cΓ‘c
khoαΊ£n 3, 4 vΓ 5 Δiα»u nΓ y; quy Δα»nh viα»c cαΊ₯p thαΊ» bαΊ£o hiα»m y tαΊΏ Δα»i vα»i Δα»i tượng
do Bα» Quα»c phΓ²ng, Bα» CΓ΄ng an quαΊ£n lΓ½ vΓ Δα»i tượng quy Δα»nh tαΊ‘i Δiα»m 1 khoαΊ£n 3
Δiα»u nΓ y; quy Δα»nh lα» trΓ¬nh thα»±c hiα»n bαΊ£o hiα»m y tαΊΏ, phαΊ‘m vi quyα»n lợi, mα»©c hΖ°α»ng
bαΊ£o hiα»m y tαΊΏ, khΓ‘m bα»nh, chα»―a bα»nh bαΊ£o hiα»m y tαΊΏ, quαΊ£n lΓ½, sα» dα»₯ng phαΊ§n kinh
phΓ dΓ nh cho khΓ‘m bα»nh, chα»―a bα»nh bαΊ£o hiα»m y tαΊΏ, giΓ‘m Δα»nh bαΊ£o hiα»m y tαΊΏ, thanh
toΓ‘n, quyαΊΏt toΓ‘n bαΊ£o hiα»m y tαΊΏ Δα»i vα»i cΓ‘c Δα»i tượng quy Δα»nh tαΊ‘i Δiα»m a khoαΊ£n
3 Δiα»u nΓ y.β'
---
# SentenceTransformer based on keepitreal/vietnamese-sbert
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) <!-- at revision a9467ef2ef47caa6448edeabfd8e5e5ce0fa2a23 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cloyne/vietnamese-embedding_finetuned")
# Run inference
sentences = [
'NLΔ kΓ½ nhiα»u hợp Δα»ng lao Δα»ng thΓ¬ ΔΓ³ng BHYT nhΖ° thαΊΏ nΓ o?',
'"Sα»a Δα»i, bα» sung mα»t sα» Δiα»u cα»§a LuαΊt bαΊ£o hiα»m y tαΊΏ:\n...\n6. Sα»a Δα»i, bα» sung Δiα»u 12 nhΖ° sau:\nβΔiα»u 12. Δα»i tượng tham gia bαΊ£o hiα»m y tαΊΏ\n1. NhΓ³m do ngΖ°α»i lao Δα»ng vΓ ngΖ°α»i sα» dα»₯ng lao Δα»ng ΔΓ³ng, bao gα»m:\na) NgΖ°α»i lao Δα»ng lΓ m viα»c theo hợp Δα»ng lao Δα»ng khΓ΄ng xΓ‘c Δα»nh thα»i hαΊ‘n, hợp Δα»ng lao Δα»ng cΓ³ thα»i hαΊ‘n tα»« Δα»§ 3 thΓ‘ng trα» lΓͺn; ngΖ°α»i lao Δα»ng lΓ ngΖ°α»i quαΊ£n lΓ½ doanh nghiα»p hΖ°α»ng tiα»n lΖ°Ζ‘ng; cΓ‘n bα», cΓ΄ng chα»©c, viΓͺn chα»©c (sau ΔΓ’y gα»i chung lΓ ngΖ°α»i lao Δα»ng);\nb) NgΖ°α»i hoαΊ‘t Δα»ng khΓ΄ng chuyΓͺn trΓ‘ch α» xΓ£, phΖ°α»ng, thα» trαΊ₯n theo quy Δα»nh cα»§a phΓ‘p luαΊt.=\n...\n4. NhΓ³m Δược ngΓ’n sΓ‘ch nhΓ nΖ°α»c hα» trợ mα»©c ΔΓ³ng, bao gα»m:\na) NgΖ°α»i thuα»c hα» gia ΔΓ¬nh cαΊn nghΓ¨o;\nb) Hα»c sinh, sinh viΓͺn.\n5. NhΓ³m tham gia bαΊ£o hiα»m y tαΊΏ theo hα» gia ΔΓ¬nh gα»m nhα»―ng ngΖ°α»i thuα»c hα» gia ΔΓ¬nh, trα»« Δα»i tượng quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 1, 2, 3 vΓ 4 Δiα»u nΓ y.\n6. ChΓnh phα»§ quy Δα»nh cΓ‘c Δα»i tượng khΓ‘c ngoΓ i cΓ‘c Δα»i tượng quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 3, 4 vΓ 5 Δiα»u nΓ y; quy Δα»nh viα»c cαΊ₯p thαΊ» bαΊ£o hiα»m y tαΊΏ Δα»i vα»i Δα»i tượng do Bα» Quα»c phΓ²ng, Bα» CΓ΄ng an quαΊ£n lΓ½ vΓ Δα»i tượng quy Δα»nh tαΊ‘i Δiα»m 1 khoαΊ£n 3 Δiα»u nΓ y; quy Δα»nh lα» trΓ¬nh thα»±c hiα»n bαΊ£o hiα»m y tαΊΏ, phαΊ‘m vi quyα»n lợi, mα»©c hΖ°α»ng bαΊ£o hiα»m y tαΊΏ, khΓ‘m bα»nh, chα»―a bα»nh bαΊ£o hiα»m y tαΊΏ, quαΊ£n lΓ½, sα» dα»₯ng phαΊ§n kinh phΓ dΓ nh cho khΓ‘m bα»nh, chα»―a bα»nh bαΊ£o hiα»m y tαΊΏ, giΓ‘m Δα»nh bαΊ£o hiα»m y tαΊΏ, thanh toΓ‘n, quyαΊΏt toΓ‘n bαΊ£o hiα»m y tαΊΏ Δα»i vα»i cΓ‘c Δα»i tượng quy Δα»nh tαΊ‘i Δiα»m a khoαΊ£n 3 Δiα»u nΓ y.β',
'Hα» sΖ‘, thα»§ tα»₯c xΓ‘c Δα»nh trΖ°α»ng hợp Δược bα»i thΖ°α»ng\n[...]\n3. Trong thα»i hαΊ‘n 05 ngΓ y lΓ m viα»c, kα» tα»« ngΓ y nhαΊn Δược ΔΖ‘n vΓ cΓ‘c giαΊ₯y tα» hợp lα», nαΊΏu xΓ‘c Δα»nh yΓͺu cαΊ§u thuα»c trΓ‘ch nhiα»m giαΊ£i quyαΊΏt cα»§a mΓ¬nh thΓ¬ Sα» Y tαΊΏ phαΊ£i thα»₯ lΓ½ vΓ thΓ΄ng bΓ‘o bαΊ±ng vΔn bαΊ£n vα» viα»c thα»₯ lΓ½ ΔΖ‘n cho ngΖ°α»i bα» thiα»t hαΊ‘i hoαΊ·c thΓ’n nhΓ’n cα»§a ngΖ°α»i bα» thiα»t hαΊ‘i (sau ΔΓ’y gα»i tαΊ―t lΓ ngΖ°α»i bα» thiα»t hαΊ‘i). TrΖ°α»ng hợp hα» sΖ‘ khΓ΄ng ΔαΊ§y Δα»§ thΓ¬ Sα» Y tαΊΏ cΓ³ vΔn bαΊ£n hΖ°α»ng dαΊ«n ngΖ°α»i bα» thiα»t hαΊ‘i bα» sung.\n4. Trong thα»i hαΊ‘n 15 ngΓ y, kα» tα»« ngΓ y nhαΊn Δược ΔΖ‘n yΓͺu cαΊ§u cα»§a ngΖ°α»i bα» thiα»t hαΊ‘i, Sα» Y tαΊΏ phαΊ£i hoΓ n thΓ nh viα»c xΓ‘c Δα»nh nguyΓͺn nhΓ’n gΓ’y tai biαΊΏn, mα»©c Δα» tα»n thΖ°Ζ‘ng vΓ thΓ΄ng bΓ‘o bαΊ±ng vΔn bαΊ£n cho ngΖ°α»i yΓͺu cαΊ§u Δα»ng thα»i bΓ‘o cΓ‘o Bα» Y tαΊΏ.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 120,210 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 25.08 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 206.98 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Nα»i dung lα»ng ghΓ©p vαΊ₯n Δα» bΓ¬nh ΔαΊ³ng giα»i trong xΓ’y dα»±ng vΔn bαΊ£n quy phαΊ‘m phΓ‘p luαΊt Δược quy Δα»nh thαΊΏ nΓ o?</code> | <code>Nα»i dung lα»ng ghΓ©p vαΊ₯n Δα» bΓ¬nh ΔαΊ³ng giα»i trong xΓ’y dα»±ng vΔn bαΊ£n quy phαΊ‘m phΓ‘p luαΊt<br>Trong phαΊ‘m vi Δiα»u chα»nh cα»§a vΔn bαΊ£n quy phαΊ‘m phΓ‘p luαΊt:<br>1. XΓ‘c Δα»nh nα»i dung liΓͺn quan ΔαΊΏn vαΊ₯n Δα» bΓ¬nh ΔαΊ³ng giα»i hoαΊ·c vαΊ₯n Δα» bαΊ₯t bΓ¬nh ΔαΊ³ng giα»i, phΓ’n biα»t Δα»i xα» vα» giα»i.<br>2. Quy Δα»nh cΓ‘c biα»n phΓ‘p cαΊ§n thiαΊΏt Δα» thα»±c hiα»n bΓ¬nh ΔαΊ³ng giα»i hoαΊ·c Δα» giαΊ£i quyαΊΏt vαΊ₯n Δα» bαΊ₯t bΓ¬nh ΔαΊ³ng giα»i, phΓ’n biα»t Δα»i xα» vα» giα»i; dα»± bΓ‘o tΓ‘c Δα»ng cα»§a cΓ‘c quy Δα»nh ΔΓ³ Δα»i vα»i nam vΓ nα»― sau khi Δược ban hΓ nh.<br>3. XΓ‘c Δα»nh nguα»n nhΓ’n lα»±c, tΓ i chΓnh cαΊ§n thiαΊΏt Δα» triα»n khai cΓ‘c biα»n phΓ‘p thα»±c hiα»n bΓ¬nh ΔαΊ³ng giα»i hoαΊ·c Δα» giαΊ£i quyαΊΏt vαΊ₯n Δα» bαΊ₯t bΓ¬nh ΔαΊ³ng giα»i, phΓ’n biα»t Δα»i xα» vα» giα»i.</code> |
| <code>Δiα»u kiα»n Δα» giΓ‘o viΓͺn trong cΖ‘ sα» giΓ‘o dα»₯c mαΊ§m non, tiα»u hα»c ngoΓ i cΓ΄ng lαΊp bα» αΊ£nh hΖ°α»ng bα»i Covid-19 Δược hΖ°α»ng chΓnh sΓ‘ch hα» trợ lΓ gΓ¬?</code> | <code>Δiα»u kiα»n Δược hΖ°α»ng<br>CΓ‘n bα» quαΊ£n lΓ½, giΓ‘o viΓͺn, nhΓ’n viΓͺn Δược hΖ°α»ng chΓnh sΓ‘ch khi bαΊ£o ΔαΊ£m cΓ‘c Δiα»u kiα»n sau:<br>1. LΓ ngΖ°α»i Δang lΓ m viα»c tαΊ‘i cΖ‘ sα» giΓ‘o dα»₯c ngoΓ i cΓ΄ng lαΊp trΖ°α»c khi cΖ‘ sα» phαΊ£i tαΊ‘m dα»«ng hoαΊ‘t Δα»ng theo yΓͺu cαΊ§u cα»§a cΖ‘ quan nhΓ nΖ°α»c cΓ³ thαΊ©m quyα»n Δα» phΓ²ng, chα»ng dα»ch COVID-19 tΓnh tα»« ngΓ y 01 thΓ‘ng 5 nΔm 2021 ΔαΊΏn hαΊΏt ngΓ y 31 thΓ‘ng 12 nΔm 2021.<br>2. Nghα» viα»c khΓ΄ng hΖ°α»ng lΖ°Ζ‘ng tα»« 01 thΓ‘ng trα» lΓͺn tΓnh tα»« ngΓ y 01 thΓ‘ng 5 nΔm 2021 ΔαΊΏn hαΊΏt ngΓ y 31 thΓ‘ng 12 nΔm 2021.<br>3. ChΖ°a Δược hΖ°α»ng chΓnh sΓ‘ch hα» trợ Δα»i vα»i ngΖ°α»i lao Δα»ng tαΊ‘m hoΓ£n hợp Δα»ng lao Δα»ng, nghα» viα»c khΓ΄ng hΖ°α»ng lΖ°Ζ‘ng theo quy Δα»nh tαΊ‘i khoαΊ£n 4, khoαΊ£n 5, khoαΊ£n 6 Mα»₯c II Nghα» quyαΊΏt sα» 68/NQ-CP ngΓ y 01 thΓ‘ng 7 nΔm 2021 cα»§a ChΓnh phα»§ vα» mα»t sα» chΓnh sΓ‘ch hα» trợ ngΖ°α»i lao Δα»ng vΓ ngΖ°α»i sα» dα»₯ng lao Δα»ng gαΊ·p khΓ³ khΔn do ΔαΊ‘i dα»ch COVID-19, Nghα» quyαΊΏt sα» 126/NQ-CP ngΓ y 08 thΓ‘ng 10 nΔm 2021 cα»§a ChΓnh phα»§ sα»a Δα»i, bα» sung Nghα» quyαΊΏt sα» 68/NQ-CP ngΓ y 01 thΓ‘ng 7 nΔm 2021 cα»§a ChΓnh phα»§ vα» mα»t sα» chΓnh sΓ‘ch hα» trợ ngΖ°α»i lao Δα»ng vΓ ngΖ°α»i sα» dα»₯ng lao Δα»ng gαΊ·p khΓ³ khΔn do ΔαΊ‘i dα»ch COVID-19 (sau ΔΓ’y gα»i tαΊ―t lΓ Nghα» quyαΊΏt sα» 68/NQ-CP) do khΓ΄ng tham gia BαΊ£o hiα»m xΓ£ hα»i bαΊ―t buα»c.<br>4. CΓ³ xΓ‘c nhαΊn lΓ m viα»c tαΊ‘i cΖ‘ sα» giΓ‘o dα»₯c ngoΓ i cΓ΄ng lαΊp Γt nhαΊ₯t hαΊΏt nΔm hα»c 2021 - 2022 theo kαΊΏ hoαΊ‘ch nΔm hα»c cα»§a Δα»a phΖ°Ζ‘ng, bao gα»m cΖ‘ sα» giΓ‘o dα»₯c ngoΓ i cΓ΄ng lαΊp ΔΓ£ lΓ m viα»c trΖ°α»c ΔΓ’y hoαΊ·c cΖ‘ sα» giΓ‘o dα»₯c ngoΓ i cΓ΄ng lαΊp khΓ‘c trong trΖ°α»ng hợp cΖ‘ sα» giΓ‘o dα»₯c ngoΓ i cΓ΄ng lαΊp trΖ°α»c ΔΓ’y lΓ m viα»c khΓ΄ng hoαΊ‘t Δα»ng trα» lαΊ‘i.</code> |
| <code>NguyΓͺn tαΊ―c Γ‘p dα»₯ng phα»₯ cαΊ₯p Ζ°u ΔΓ£i nghα» y tαΊΏ thαΊΏ nΓ o?</code> | <code>NguyΓͺn tαΊ―c Γ‘p dα»₯ng<br>1. TrΖ°α»ng hợp cΓ΄ng chα»©c, viΓͺn chα»©c chuyΓͺn mΓ΄n y tαΊΏ thuα»c Δα»i tượng Δược hΖ°α»ng cΓ‘c mα»©c phα»₯ cαΊ₯p Ζ°u ΔΓ£i theo nghα» khΓ‘c nhau thΓ¬ Δược hΖ°α»ng mα»t mα»©c phα»₯ cαΊ₯p Ζ°u ΔΓ£i theo nghα» cao nhαΊ₯t.<br>2. CΓ΄ng chα»©c, viΓͺn chα»©c ΔΓ£ hΖ°α»ng phα»₯ cαΊ₯p Ζ°u ΔΓ£i theo nghα» quy Δα»nh tαΊ‘i ThΓ΄ng tΖ° liΓͺn tα»ch sα» 06/2010/TTLT-BYT-BNV-BTC ngΓ y 22/3/2010 cα»§a Bα» Y tαΊΏ, Bα» Nα»i vα»₯, Bα» TΓ i chΓnh hΖ°α»ng dαΊ«n thα»±c hiα»n Nghα» Δα»nh sα» 64/2009/NΔ-CP ngΓ y 30/7/2009 cα»§a ChΓnh phα»§ vα» chΓnh sΓ‘ch Δα»i vα»i cΓ‘n bα», viΓͺn chα»©c y tαΊΏ cΓ΄ng tΓ‘c α» vΓΉng cΓ³ Δiα»u kiα»n kinh tαΊΏ - xΓ£ hα»i ΔαΊ·c biα»t khΓ³ khΔn thΓ¬ khΓ΄ng hΖ°α»ng phα»₯ cαΊ₯p Ζ°u ΔΓ£i theo nghα» quy Δα»nh tαΊ‘i ThΓ΄ng tΖ° liΓͺn tα»ch nΓ y.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
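As a rough sketch, the loss configuration above corresponds to the following sentence-transformers code (the base model name is a placeholder, not the checkpoint this card was actually trained from):

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder base model: substitute the actual base checkpoint of this card.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Mirrors the parameters listed above: scale=20.0 and cosine similarity.
train_loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```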
### Evaluation Dataset
#### train
* Dataset: train
* Size: 13,357 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 24.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 202.71 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ToΓ Γ‘n cαΊ₯p nΓ o cΓ³ thαΊ©m quyα»n giαΊ£i quyαΊΏt viα»c ΔΓ²i tΓ i sαΊ£n ΔΓ£ cho ngΖ°α»i khΓ‘c vay theo hợp Δα»ng cho vay?</code> | <code>"Δiα»u 35. ThαΊ©m quyα»n cα»§a TΓ²a Γ‘n nhΓ’n dΓ’n cαΊ₯p huyα»n<br>1. TΓ²a Γ‘n nhΓ’n dΓ’n cαΊ₯p huyα»n cΓ³ thαΊ©m quyα»n giαΊ£i quyαΊΏt theo thα»§ tα»₯c sΖ‘ thαΊ©m nhα»―ng tranh chαΊ₯p sau ΔΓ’y:<br>a) Tranh chαΊ₯p vα» dΓ’n sα»±, hΓ΄n nhΓ’n vΓ gia ΔΓ¬nh quy Δα»nh tαΊ‘i Δiα»u 26 vΓ Δiα»u 28 cα»§a Bα» luαΊt nΓ y, trα»« tranh chαΊ₯p quy Δα»nh tαΊ‘i khoαΊ£n 7 Δiα»u 26 cα»§a Bα» luαΊt nΓ y;<br>b) Tranh chαΊ₯p vα» kinh doanh, thΖ°Ζ‘ng mαΊ‘i quy Δα»nh tαΊ‘i khoαΊ£n 1 Δiα»u 30 cα»§a Bα» luαΊt nΓ y;<br>c) Tranh chαΊ₯p vα» lao Δα»ng quy Δα»nh tαΊ‘i Δiα»u 32 cα»§a Bα» luαΊt nΓ y.<br>2. TΓ²a Γ‘n nhΓ’n dΓ’n cαΊ₯p huyα»n cΓ³ thαΊ©m quyα»n giαΊ£i quyαΊΏt nhα»―ng yΓͺu cαΊ§u sau ΔΓ’y:<br>a) YΓͺu cαΊ§u vα» dΓ’n sα»± quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 1, 2, 3, 4, 6, 7, 8, 9 vΓ 10 Δiα»u 27 cα»§a Bα» luαΊt nΓ y;<br>b) YΓͺu cαΊ§u vα» hΓ΄n nhΓ’n vΓ gia ΔΓ¬nh quy Δα»nh tαΊ‘i cΓ‘c khoαΊ£n 1, 2, 3, 4, 5, 6, 7, 8, 10 vΓ 11 Δiα»u 29 cα»§a Bα» luαΊt nΓ y;<br>c) YΓͺu cαΊ§u vα» kinh doanh, thΖ°Ζ‘ng mαΊ‘i quy Δα»nh tαΊ‘i khoαΊ£n 1 vΓ khoαΊ£n 6 Δiα»u 31 cα»§a Bα» luαΊt nΓ y;<br>d) YΓͺu cαΊ§u vα» lao Δα»ng quy Δα»nh tαΊ‘i khoαΊ£n 1 vΓ khoαΊ£n 5 Δiα»u 33 cα»§a Bα» luαΊt nΓ y.<br>3. Nhα»―ng tranh chαΊ₯p, yΓͺu cαΊ§u quy Δα»nh tαΊ‘i khoαΊ£n 1 vΓ khoαΊ£n 2 Δiα»u nΓ y mΓ cΓ³ ΔΖ°Ζ‘ng sα»± hoαΊ·c tΓ i sαΊ£n α» nΖ°α»c ngoΓ i hoαΊ·c cαΊ§n phαΊ£i α»§y thΓ‘c tΖ° phΓ‘p cho cΖ‘ quan ΔαΊ‘i diα»n nΖ°α»c Cα»ng hΓ²a xΓ£ hα»i chα»§ nghΔ©a Viα»t Nam α» nΖ°α»c ngoΓ i, cho TΓ²a Γ‘n, cΖ‘ quan cΓ³ thαΊ©m quyα»n cα»§a nΖ°α»c ngoΓ i khΓ΄ng thuα»c thαΊ©m quyα»n giαΊ£i quyαΊΏt cα»§a TΓ²a Γ‘n nhΓ’n dΓ’n cαΊ₯p huyα»n, trα»« trΖ°α»ng hợp quy Δα»nh tαΊ‘i khoαΊ£n 4 Δiα»u nΓ y.<br>4. TΓ²a Γ‘n nhΓ’n dΓ’n cαΊ₯p huyα»n nΖ‘i cΖ° trΓΊ cα»§a cΓ΄ng dΓ’n Viα»t Nam hα»§y viα»c kαΊΏt hΓ΄n trΓ‘i phΓ‘p luαΊt, giαΊ£i quyαΊΏt viα»c ly hΓ΄n, cΓ‘c tranh chαΊ₯p vα» quyα»n vΓ nghΔ©a vα»₯ cα»§a vợ chα»ng, cha mαΊΉ vΓ con, vα» nhαΊn cha, mαΊΉ, con, nuΓ΄i con nuΓ΄i vΓ giΓ‘m hα» giα»―a cΓ΄ng dΓ’n Viα»t Nam cΖ° trΓΊ α» khu vα»±c biΓͺn giα»i vα»i cΓ΄ng dΓ’n cα»§a nΖ°α»c lΓ‘ng giα»ng cΓΉng cΖ° trΓΊ α» khu vα»±c biΓͺn giα»i vα»i Viα»t Nam theo quy Δα»nh cα»§a Bα» luαΊt nΓ y vΓ cΓ‘c quy Δα»nh khΓ‘c cα»§a phΓ‘p luαΊt Viα»t Nam."</code> |
| <code>Nhα»―ng phiαΊΏu bαΊ§u nΓ o Δược xem lΓ khΓ΄ng hợp lα»?</code> | <code>PhiαΊΏu bαΊ§u khΓ΄ng hợp lα»<br>1. Nhα»―ng phiαΊΏu bαΊ§u sau ΔΓ’y lΓ phiαΊΏu bαΊ§u khΓ΄ng hợp lα»:<br>a) PhiαΊΏu khΓ΄ng theo mαΊ«u quy Δα»nh do Tα» bαΊ§u cα» phΓ‘t ra;<br>b) PhiαΊΏu khΓ΄ng cΓ³ dαΊ₯u cα»§a Tα» bαΊ§u cα»;<br>c) PhiαΊΏu Δα» sα» ngΖ°α»i Δược bαΊ§u nhiα»u hΖ‘n sα» lượng ΔαΊ‘i biα»u Δược bαΊ§u ΔΓ£ αΊ₯n Δα»nh cho ΔΖ‘n vα» bαΊ§u cα»;<br>d) PhiαΊΏu gαΊ‘ch xΓ³a hαΊΏt tΓͺn nhα»―ng ngΖ°α»i α»©ng cα»;<br>Δ) PhiαΊΏu ghi thΓͺm tΓͺn ngΖ°α»i ngoΓ i danh sΓ‘ch nhα»―ng ngΖ°α»i α»©ng cα» hoαΊ·c phiαΊΏu cΓ³ ghi thΓͺm nα»i dung khΓ‘c.<br>2. TrΖ°α»ng hợp cΓ³ phiαΊΏu bαΊ§u Δược cho lΓ khΓ΄ng hợp lα» thΓ¬ Tα» trΖ°α»ng Tα» bαΊ§u cα» ΔΖ°a ra Δα» toΓ n Tα» xem xΓ©t, quyαΊΏt Δα»nh. Tα» bαΊ§u cα» khΓ΄ng Δược gαΊ‘ch xΓ³a hoαΊ·c sα»a cΓ‘c tΓͺn ghi trΓͺn phiαΊΏu bαΊ§u.</code> |
  | <code>Δα» nghα» tαΊ‘m ΔΓ¬nh chα» chαΊ₯p hΓ nh quyαΊΏt Δα»nh Γ‘p dα»₯ng biα»n phΓ‘p ΔΖ°a vΓ o trΖ°α»ng giΓ‘o dΖ°α»‘ng cho hα»c sinh cαΊ§n ΔαΊ£m bαΊ£o nguyΓͺn tαΊ―c gΓ¬?</code> | <code>NguyΓͺn tαΊ―c xΓ©t duyα»t, Δα» nghα» giαΊ£m thα»i hαΊ‘n, tαΊ‘m ΔΓ¬nh chα» chαΊ₯p hΓ nh quyαΊΏt Δα»nh, miα»
n chαΊ₯p hΓ nh phαΊ§n thα»i gian cΓ²n lαΊ‘i cho hα»c sinh trΖ°α»ng giΓ‘o dΖ°α»‘ng, trαΊ‘i viΓͺn cΖ‘ sα» giΓ‘o dα»₯c bαΊ―t buα»c<br>1. TuΓ’n thα»§ quy Δα»nh cα»§a phΓ‘p luαΊt vα» thi hΓ nh biα»n phΓ‘p xα» lΓ½ hΓ nh chΓnh ΔΖ°a vΓ o trΖ°α»ng giΓ‘o dΖ°α»‘ng, cΖ‘ sα» giΓ‘o dα»₯c bαΊ―t buα»c, quy Δα»nh tαΊ‘i ThΓ΄ng tΖ° nΓ y vΓ quy Δα»nh cα»§a phΓ‘p luαΊt cΓ³ liΓͺn quan.<br>2. BαΊ£o ΔαΊ£m khΓ‘ch quan, cΓ΄ng khai, minh bαΊ‘ch, ΔΓΊng trΓ¬nh tα»±, thα»§ tα»₯c, thαΊ©m quyα»n; tΓ΄n trα»ng vΓ bαΊ£o vα» quyα»n, lợi Γch hợp phΓ‘p cα»§a hα»c sinh trΖ°α»ng giΓ‘o dΖ°α»‘ng, trαΊ‘i viΓͺn cΖ‘ sα» giΓ‘o dα»₯c bαΊ―t buα»c.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
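As a rough sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` like this (the output directory is a placeholder; this is not the exact training script):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

# Rough reconstruction of the non-default hyperparameters listed above;
# every other option keeps its default value.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=4,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```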
### Training Logs
| Epoch | Step | Training Loss | train loss |
|:------:|:-----:|:-------------:|:----------:|
| 0.1331 | 500 | 0.3247 | 0.2239 |
| 0.2662 | 1000 | 0.1513 | 0.1605 |
| 0.3993 | 1500 | 0.119 | 0.1664 |
| 0.5323 | 2000 | 0.1047 | 0.1384 |
| 0.6654 | 2500 | 0.0915 | 0.1269 |
| 0.7985 | 3000 | 0.0861 | 0.1140 |
| 0.9316 | 3500 | 0.0839 | 0.1091 |
| 1.0647 | 4000 | 0.0693 | 0.0989 |
| 1.1978 | 4500 | 0.0582 | 0.0931 |
| 1.3308 | 5000 | 0.0457 | 0.0953 |
| 1.4639 | 5500 | 0.0284 | 0.0826 |
| 1.5970 | 6000 | 0.0233 | 0.0848 |
| 1.7301 | 6500 | 0.0256 | 0.0785 |
| 1.8632 | 7000 | 0.0236 | 0.0829 |
| 1.9963 | 7500 | 0.0203 | 0.0827 |
| 2.1294 | 8000 | 0.0182 | 0.0730 |
| 2.2624 | 8500 | 0.0143 | 0.0718 |
| 2.3955 | 9000 | 0.0103 | 0.0720 |
| 2.5286 | 9500 | 0.0086 | 0.0720 |
| 2.6617 | 10000 | 0.0058 | 0.0706 |
| 2.7948 | 10500 | 0.0074 | 0.0675 |
| 2.9279 | 11000 | 0.0073 | 0.0650 |
| 3.0610 | 11500 | 0.0054 | 0.0651 |
| 3.1940 | 12000 | 0.0043 | 0.0639 |
| 3.3271 | 12500 | 0.004 | 0.0626 |
| 3.4602 | 13000 | 0.0035 | 0.0617 |
| 3.5933 | 13500 | 0.0022 | 0.0614 |
| 3.7264 | 14000 | 0.003 | 0.0624 |
| 3.8595 | 14500 | 0.0022 | 0.0616 |
| 3.9925 | 15000 | 0.0028 | 0.0606 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.1
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
g-assismoraes/deberta-large-semeval25_EN08_fold3 | g-assismoraes | 2024-10-28T14:37:33Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T14:23:46Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: deberta-large-semeval25_EN08_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-large-semeval25_EN08_fold3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2442
- Precision Samples: 0.1144
- Recall Samples: 0.7997
- F1 Samples: 0.1930
- Precision Macro: 0.3896
- Recall Macro: 0.6167
- F1 Macro: 0.2236
- Precision Micro: 0.1104
- Recall Micro: 0.7507
- F1 Micro: 0.1924
- Precision Weighted: 0.2237
- Recall Weighted: 0.7507
- F1 Weighted: 0.2130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
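As a rough sketch, these values correspond to the following `TrainingArguments` (the output directory is a placeholder; this is not the exact training script):

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above; the default
# torch AdamW optimizer with no additional optimizer arguments is assumed.
args = TrainingArguments(
    output_dir="deberta-large-semeval25_EN08_fold3",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```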
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 8.2513 | 1.0 | 73 | 9.8237 | 0.1093 | 0.4542 | 0.1631 | 0.8860 | 0.2559 | 0.1808 | 0.1110 | 0.3399 | 0.1674 | 0.6512 | 0.3399 | 0.0922 |
| 6.8883 | 2.0 | 146 | 9.3725 | 0.1082 | 0.6445 | 0.1710 | 0.7810 | 0.3726 | 0.1997 | 0.0995 | 0.5637 | 0.1692 | 0.4732 | 0.5637 | 0.1274 |
| 8.4363 | 3.0 | 219 | 8.8450 | 0.1195 | 0.7090 | 0.1933 | 0.6684 | 0.4525 | 0.2167 | 0.1073 | 0.6374 | 0.1837 | 0.3811 | 0.6374 | 0.1603 |
| 8.6787 | 4.0 | 292 | 8.5427 | 0.1068 | 0.7465 | 0.1790 | 0.5303 | 0.5162 | 0.1950 | 0.0967 | 0.6941 | 0.1697 | 0.2823 | 0.6941 | 0.1599 |
| 6.8889 | 5.0 | 365 | 8.5407 | 0.1100 | 0.7823 | 0.1854 | 0.4867 | 0.5780 | 0.2249 | 0.1022 | 0.7337 | 0.1794 | 0.2365 | 0.7337 | 0.1842 |
| 7.9121 | 6.0 | 438 | 8.4019 | 0.1096 | 0.7957 | 0.1858 | 0.4441 | 0.5804 | 0.2166 | 0.1041 | 0.7365 | 0.1825 | 0.2387 | 0.7365 | 0.1936 |
| 7.1827 | 7.0 | 511 | 8.3315 | 0.1085 | 0.8046 | 0.1846 | 0.4158 | 0.6204 | 0.2251 | 0.1042 | 0.7507 | 0.1831 | 0.2210 | 0.7507 | 0.2018 |
| 5.9674 | 8.0 | 584 | 8.1923 | 0.1100 | 0.8047 | 0.1857 | 0.3929 | 0.6172 | 0.2292 | 0.1046 | 0.7620 | 0.1839 | 0.2236 | 0.7620 | 0.2136 |
| 6.397 | 9.0 | 657 | 8.2536 | 0.1113 | 0.8023 | 0.1884 | 0.3999 | 0.6139 | 0.2328 | 0.1077 | 0.7507 | 0.1883 | 0.2269 | 0.7507 | 0.2148 |
| 6.4848 | 10.0 | 730 | 8.2442 | 0.1144 | 0.7997 | 0.1930 | 0.3896 | 0.6167 | 0.2236 | 0.1104 | 0.7507 | 0.1924 | 0.2237 | 0.7507 | 0.2130 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
devilteo911/whisper-small-ita-ct2 | devilteo911 | 2024-10-28T14:35:05Z | 20 | 0 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"it",
"en",
"arxiv:2212.04356",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2024-10-28T14:21:50Z | ---
license: apache-2.0
language:
- it
- en
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
library_name: ctranslate2
---
# Litus whisper-small-ita for CTranslate2
This repo contains the conversion of [litus-ai/whisper-small-ita](https://huggingface.co/litus-ai/whisper-small-ita/) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.
This model can be used with CTranslate2 or with related projects such as [faster-whisper](https://github.com/systran/faster-whisper).
# Model Description
This model is a version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) optimized for Italian, trained on a portion of the proprietary data of [Litus AI](https://litus.ai/it/).
`litus-ai/whisper-small-ita` offers a very good value/cost trade-off and is a good fit for settings where the compute budget is limited
but an accurate transcription of speech is still required.
# Special Features of the Model
The model's main peculiarity is the integration of special tokens that enrich the transcription with meta-information:
- Paralinguistic elements: `[LAUGH]`, `[MHMH]`, `[SIGH]`, `[UHM]`
- Audio quality: `[NOISE]`, `[UNINT]` (unintelligible)
- Speech characteristics: `[AUTOCOR]` (self-corrections), `[L-EN]` (English code-switching)
These tokens allow a richer transcription that captures not only the verbal content but also relevant contextual elements.
# Evaluation
The following chart shows the accuracy of `openai/whisper-small`, `openai/whisper-medium`, `litus-ai/whisper-small-ita` and Litus AI's proprietary model, `litus-proprietary`,
on proprietary benchmarks for meetings and voice calls in Italian.
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/litus-ai/whisper-small-ita/resolve/main/Models%20Accuracy.png" alt="Litus AI eval">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# How to use the model
You can use devilteo911/whisper-small-ita-ct2 through faster-whisper:
```python
from faster_whisper import WhisperModel
model = WhisperModel("devilteo911/whisper-small-ita-ct2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model litus-ai/whisper-small-ita --output_dir whisper-small-ita-ct2 \
--copy_files tokenizer_config.json preprocessor_config.json vocab.json normalizer.json merges.txt \
added_tokens.json generation_config.json special_tokens_map.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded through the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
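For example (a minimal sketch; the supported `compute_type` values depend on your hardware):

```python
from faster_whisper import WhisperModel

# Load with the stored FP16 weights on GPU, or fall back to int8 on CPU.
model = WhisperModel("devilteo911/whisper-small-ita-ct2", device="cuda", compute_type="float16")
# model = WhisperModel("devilteo911/whisper-small-ita-ct2", device="cpu", compute_type="int8")
```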
# Conclusions
For any information about the architecture, the data used for pretraining, and the intended use, please refer to the original [Paper](https://arxiv.org/abs/2212.04356), [Model Card](https://huggingface.co/openai/whisper-small) and [Repository](https://github.com/openai/whisper). |
aarontseng/fair-nmt-zh_hant-en | aarontseng | 2024-10-28T14:28:52Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"zh",
"en",
"base_model:Helsinki-NLP/opus-mt-zh-en",
"base_model:finetune:Helsinki-NLP/opus-mt-zh-en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-10-25T03:39:43Z | ---
license: mit
language:
- zh
- en
base_model:
- Helsinki-NLP/opus-mt-zh-en
pipeline_tag: translation
library_name: transformers
---
- ckp: 1995000
- bleu (flores200-dev): 56.84109
- bleu (flores200-devtest): 13.7635
- comet (flores200-dev): 0.853607586029181
- comet (flores200-devtest): 0.8553770352964816
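A minimal usage sketch (assuming the standard transformers translation pipeline):

```python
from transformers import pipeline

# Minimal sketch: translate a Traditional Chinese sentence with this checkpoint.
translator = pipeline("translation", model="aarontseng/fair-nmt-zh_hant-en")
print(translator("δ½ ε₯½οΌδΈηοΌ")[0]["translation_text"])
```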
|
RichardErkhov/bn999_-_mistral-4.2B-gguf | RichardErkhov | 2024-10-28T14:23:46Z | 40 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T12:09:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-4.2B - GGUF
- Model creator: https://huggingface.co/bn999/
- Original model: https://huggingface.co/bn999/mistral-4.2B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-4.2B.Q2_K.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q2_K.gguf) | Q2_K | 1.58GB |
| [mistral-4.2B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q3_K_S.gguf) | Q3_K_S | 1.82GB |
| [mistral-4.2B.Q3_K.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q3_K.gguf) | Q3_K | 2.03GB |
| [mistral-4.2B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q3_K_M.gguf) | Q3_K_M | 2.03GB |
| [mistral-4.2B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q3_K_L.gguf) | Q3_K_L | 2.21GB |
| [mistral-4.2B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.IQ4_XS.gguf) | IQ4_XS | 2.26GB |
| [mistral-4.2B.Q4_0.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q4_0.gguf) | Q4_0 | 2.35GB |
| [mistral-4.2B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.IQ4_NL.gguf) | IQ4_NL | 2.38GB |
| [mistral-4.2B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q4_K_S.gguf) | Q4_K_S | 2.37GB |
| [mistral-4.2B.Q4_K.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q4_K.gguf) | Q4_K | 2.48GB |
| [mistral-4.2B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q4_K_M.gguf) | Q4_K_M | 2.48GB |
| [mistral-4.2B.Q4_1.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q4_1.gguf) | Q4_1 | 2.6GB |
| [mistral-4.2B.Q5_0.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q5_0.gguf) | Q5_0 | 2.85GB |
| [mistral-4.2B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q5_K_S.gguf) | Q5_K_S | 2.85GB |
| [mistral-4.2B.Q5_K.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q5_K.gguf) | Q5_K | 2.92GB |
| [mistral-4.2B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q5_K_M.gguf) | Q5_K_M | 2.92GB |
| [mistral-4.2B.Q5_1.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q5_1.gguf) | Q5_1 | 3.1GB |
| [mistral-4.2B.Q6_K.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q6_K.gguf) | Q6_K | 3.38GB |
| [mistral-4.2B.Q8_0.gguf](https://huggingface.co/RichardErkhov/bn999_-_mistral-4.2B-gguf/blob/main/mistral-4.2B.Q8_0.gguf) | Q8_0 | 4.38GB |
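A minimal download sketch using `huggingface_hub` (the Q4_K_M file is only an example; any file from the table works):

```python
from huggingface_hub import hf_hub_download

# Fetch one quantized GGUF file from this repo.
path = hf_hub_download(
    repo_id="RichardErkhov/bn999_-_mistral-4.2B-gguf",
    filename="mistral-4.2B.Q4_K_M.gguf",
)
print(path)
```

The downloaded file can then be loaded by any GGUF-compatible runtime such as llama.cpp.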
Original model description:
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---
Selectively pruned and re-trained Mistral-7B for reduced size, targeting only MLP layers.
|
g-assismoraes/deberta-large-semeval25_EN08_fold2 | g-assismoraes | 2024-10-28T14:23:31Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T14:10:17Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: deberta-large-semeval25_EN08_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-large-semeval25_EN08_fold2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7224
- Precision Samples: 0.1123
- Recall Samples: 0.7856
- F1 Samples: 0.1886
- Precision Macro: 0.3681
- Recall Macro: 0.6639
- F1 Macro: 0.2792
- Precision Micro: 0.1054
- Recall Micro: 0.7394
- F1 Micro: 0.1844
- Precision Weighted: 0.1953
- Recall Weighted: 0.7394
- F1 Weighted: 0.2101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.1108 | 1.0 | 73 | 9.1532 | 0.1078 | 0.4896 | 0.1658 | 0.8477 | 0.3196 | 0.2145 | 0.1041 | 0.3879 | 0.1641 | 0.6101 | 0.3879 | 0.0866 |
| 8.9481 | 2.0 | 146 | 8.6922 | 0.1037 | 0.6457 | 0.1672 | 0.7149 | 0.4241 | 0.2235 | 0.0947 | 0.5515 | 0.1616 | 0.4524 | 0.5515 | 0.1227 |
| 9.1563 | 3.0 | 219 | 8.6496 | 0.0968 | 0.7189 | 0.1614 | 0.5923 | 0.5060 | 0.2388 | 0.0875 | 0.6485 | 0.1542 | 0.3085 | 0.6485 | 0.1447 |
| 8.7006 | 4.0 | 292 | 8.2522 | 0.1016 | 0.7955 | 0.1617 | 0.5424 | 0.5864 | 0.2606 | 0.0877 | 0.7333 | 0.1567 | 0.2756 | 0.7333 | 0.1672 |
| 8.1242 | 5.0 | 365 | 7.9321 | 0.1011 | 0.7940 | 0.1721 | 0.4725 | 0.6190 | 0.2653 | 0.0945 | 0.7364 | 0.1675 | 0.2425 | 0.7364 | 0.1754 |
| 7.4891 | 6.0 | 438 | 8.0728 | 0.1081 | 0.7863 | 0.1824 | 0.4759 | 0.6115 | 0.2650 | 0.0989 | 0.7303 | 0.1743 | 0.2454 | 0.7303 | 0.1816 |
| 8.3973 | 7.0 | 511 | 7.8203 | 0.1074 | 0.7803 | 0.1817 | 0.3908 | 0.6341 | 0.2637 | 0.1002 | 0.7424 | 0.1765 | 0.1962 | 0.7424 | 0.1906 |
| 7.0048 | 8.0 | 584 | 7.7429 | 0.1097 | 0.7953 | 0.1849 | 0.3862 | 0.6590 | 0.2731 | 0.1017 | 0.7515 | 0.1791 | 0.2017 | 0.7515 | 0.2014 |
| 6.3856 | 9.0 | 657 | 7.7281 | 0.1081 | 0.7852 | 0.1823 | 0.3555 | 0.6382 | 0.2597 | 0.1016 | 0.7424 | 0.1788 | 0.1924 | 0.7424 | 0.2033 |
| 5.8015 | 10.0 | 730 | 7.7224 | 0.1123 | 0.7856 | 0.1886 | 0.3681 | 0.6639 | 0.2792 | 0.1054 | 0.7394 | 0.1844 | 0.1953 | 0.7394 | 0.2101 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
Bonbone/ML5-fine-tuning-xsum | Bonbone | 2024-10-28T14:20:50Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-28T13:15:45Z | ---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: ML5-fine-tuning-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.5714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ML5-fine-tuning-xsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4333
- Rouge1: 0.5714
- Rouge2: 0.0
- Rougel: 0.5714
- Rougelsum: 0.5714
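A minimal usage sketch (assuming the standard transformers summarization pipeline):

```python
from transformers import pipeline

# Minimal sketch: summarize a short text with this mT5 checkpoint.
summarizer = pipeline("summarization", model="Bonbone/ML5-fine-tuning-xsum")
print(summarizer("The quick brown fox jumped over the lazy dog near the river.", max_length=32))
```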
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 18.7065 | 1.0 | 7 | 9.6966 | 0.0 | 0.0 | 0.0 | 0.0 |
| 10.3198 | 2.0 | 14 | 7.4333 | 0.5714 | 0.0 | 0.5714 | 0.5714 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
joelem/autotrain-publicradio-3 | joelem | 2024-10-28T14:20:14Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:joelem/autotrain-publicradio-2",
"base_model:finetune:joelem/autotrain-publicradio-2",
"region:us"
] | text-classification | 2024-10-25T22:26:42Z |
---
tags:
- autotrain
- text-classification
base_model: joelem/autotrain-publicradio-2
widget:
- text: "I love AutoTrain"
---
# Notes
This is an intermittent fine-tuning on the errors from autotrain-publicradio-2, based on Phil and Chris's corrections.
Just to note, the accuracies here can't be compared with PR0 to PR2 because the training data is different:
- Specifically, prediction data from PR2 that showed the lowest probability scores (0.75 and below), so this is noisy data
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.6207979917526245
f1_macro: 0.659698275862069
f1_micro: 0.737410071942446
f1_weighted: 0.7260737099975193
precision_macro: 0.6876875812359683
precision_micro: 0.737410071942446
precision_weighted: 0.7253259852238735
recall_macro: 0.648967753098562
recall_micro: 0.737410071942446
recall_weighted: 0.737410071942446
accuracy: 0.737410071942446
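A minimal usage sketch (assuming the standard transformers text-classification pipeline; label names come from this model's config):

```python
from transformers import pipeline

# Minimal sketch: classify a snippet with this checkpoint.
classifier = pipeline("text-classification", model="joelem/autotrain-publicradio-3")
print(classifier("I love AutoTrain"))
```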
|
Kartoshkina/laBSE-khakas | Kartoshkina | 2024-10-28T14:20:07Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-17T16:18:14Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Kartoshkina/laBSE-khakas
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Kartoshkina/laBSE-khakas')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Kartoshkina/laBSE-khakas)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9980 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 100,
"evaluator": "__main__.ChainScoreEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
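A rough sketch of how the parameters above combine into a `model.fit(...)` call (the base checkpoint and the toy training pairs are assumptions, not the actual Khakas data):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; the card does not state it explicitly.
model = SentenceTransformer("sentence-transformers/LaBSE")

# Toy stand-ins for the real training pairs.
train_examples = [
    InputExample(texts=["example sentence", "its parallel sentence"]),
    InputExample(texts=["another sentence", "its translation"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,
    scheduler="warmupcosine",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    evaluation_steps=100,  # only takes effect when an evaluator is supplied
)
```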
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
LightDestory/upernetconvnext-finetuned-segments-food-oct-28 | LightDestory | 2024-10-28T14:10:18Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"upernet",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T11:23:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jialei12138/task-13-Qwen-Qwen1.5-0.5B | jialei12138 | 2024-10-28T14:09:31Z | 7 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-10-06T08:26:34Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
bobox/DeBERTa3-s-CustomPoolin-toytest-step1 | bobox | 2024-10-28T14:01:20Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:32500",
"loss:GISTEmbedLoss",
"arxiv:1908.10084",
"arxiv:2402.16829",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-28T14:00:56Z | ---
base_model: microsoft/deberta-v3-small
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:32500
- loss:GISTEmbedLoss
widget:
- source_sentence: A picture of a white gas range with figurines above.
sentences:
- A nerdy woman brushing her teeth with a friend nearby.
- a white stove turned off with a digital clock
- The plasma membrane also contains other molecules, primarily other lipids and
proteins. The green molecules in Figure above , for example, are the lipid cholesterol.
Molecules of cholesterol help the plasma membrane keep its shape. Many of the
proteins in the plasma membrane assist other substances in crossing the membrane.
- source_sentence: who makes the kentucky derby garland of roses
sentences:
- Accrington strengthened their position in the play-off places with a hard-fought
win over struggling Dagenham.
- "tidal energy can be used to produce electricity. Ocean thermal is energy derived\
\ from waves and also from tidal waves. \n Ocean thermal energy can be used to\
\ produce electricity."
- Kentucky Derby Trophy The Kroger Company has been the official florist of the
Kentucky Derby since 1987. After taking over the duties from the Kingsley Walker
florist, Kroger began constructing the prestigious garland in one of its local
stores for the public to view on Derby Eve. The preservation of the garland and
crowds of spectators watching its construction are a testament to the prestige
and mystique of the Garland of Roses.
- source_sentence: what is the difference between a general sense and a special sense?
sentences:
- 'Ian Curtis ( of Touching from a distance) Ian Kevin Curtis was an English musician
and singer-songwriter. He is best known as the lead singer and lyricist of the
post-punk band Joy Division. Joy Division released its debut album, Unknown Pleasures,
in 1979 and recorded its follow-up, Closer, in 1980. Curtis, who suffered from
epilepsy and depression, committed suicide on 18 May 1980, on the eve of Joy Division''s
first North American tour, resulting in the band''s dissolution and the subsequent
formation of New Order. Curtis was known for his baritone voice, dance style,
and songwriting filled with imagery of desolation, emptiness and alienation. In
1995, Curtis''s widow Deborah published Touching from a Distance: Ian Curtis and
Joy Division, a biography of the singer. His life and death Ian Kevin Curtis was
an English musician and singer-songwriter. He is best known as the lead singer
and lyricist of the post-punk band Joy Division. Joy Division released its debut
album, Unknown Pleasures, in 1979 and recorded its follow-up, Closer, in 1980.
Curtis, who suffered from epilepsy and depression, committed suicide on 18 May
1980, on the eve of Joy Division''s first North American tour, resulting in the
band''s dissolution and the subsequent formation of New Order. Curtis was known
for his baritone voice, dance style, and songwriting filled with imagery of desolation,
emptiness and alienation. In 1995, Curtis''s widow Deborah published Touching
from a Distance: Ian Curtis and Joy Division, a biography of the singer. His life
and death have been dramatised in the films 24 Hour Party People (2002) and Control
(2007). ...more'
- The human body has two basic types of senses, called special senses and general
senses. Special senses have specialized sense organs that gather sensory information
and change it into nerve impulses. ... General senses, in contrast, are all associated
with the sense of touch. They lack special sense organs.
- Captain Hook Barrie states in the novel that "Hook was not his true name. To reveal
who he really was would even at this date set the country in a blaze", and relates
that Peter Pan began their rivalry by feeding the pirate's hand to the crocodile.
He is said to be "Blackbeard's bo'sun" and "the only man of whom Barbecue was
afraid".[5] (In Robert Louis Stevenson's Treasure Island, one of the names Long
John Silver goes by is Barbecue.)[6]
- source_sentence: Retzius was born in Stockholm , son of the anatomist Anders Jahan
Retzius ( and grandson of the naturalist and chemist Anders Retzius ) .
sentences:
- Retzius was born in Stockholm , the son of anatomist Anders Jahan Retzius ( and
grandson of the naturalist and chemist Anders Retzius ) .
- As of 14 March , over 156,000 cases of COVID-19 have been reported in around 140
countries and territories ; more than 5,800 people have died from the disease
and around 75,000 have recovered .
- A person sitting on a stool on the street.
- source_sentence: who was the first person who made the violin
sentences:
- Alice in Chains Alice in Chains is an American rock band from Seattle, Washington,
formed in 1987 by guitarist and vocalist Jerry Cantrell and drummer Sean Kinney,[1]
who recruited bassist Mike Starr[1] and lead vocalist Layne Staley.[1][2][3] Starr
was replaced by Mike Inez in 1993.[4] After Staley's death in 2002, William DuVall
joined in 2006 as co-lead vocalist and rhythm guitarist. The band took its name
from Staley's previous group, the glam metal band Alice N' Chains.[5][2]
- as distance from an object decreases , that object will appear larger
- Violin The first makers of violins probably borrowed from various developments
of the Byzantine lira. These included the rebec;[13] the Arabic rebab; the vielle
(also known as the fidel or viuola); and the lira da braccio[11][14] The violin
in its present form emerged in early 16th-century northern Italy. The earliest
pictures of violins, albeit with three strings, are seen in northern Italy around
1530, at around the same time as the words "violino" and "vyollon" are seen in
Italian and French documents. One of the earliest explicit descriptions of the
instrument, including its tuning, is from the Epitome musical by Jambe de Fer,
published in Lyon in 1556.[15] By this time, the violin had already begun to spread
throughout Europe.
model-index:
- name: SentenceTransformer based on microsoft/deberta-v3-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.6699634563461265
name: Pearson Cosine
- type: spearman_cosine
value: 0.6740052367487698
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6846904230572102
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.676461767740328
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6819532604363933
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6744353858732639
name: Spearman Euclidean
- type: pearson_dot
value: 0.6677964772074442
name: Pearson Dot
- type: spearman_dot
value: 0.6714885153106404
name: Spearman Dot
- type: pearson_max
value: 0.6846904230572102
name: Pearson Max
- type: spearman_max
value: 0.676461767740328
name: Spearman Max
- task:
type: binary-classification
name: Binary Classification
dataset:
name: allNLI dev
type: allNLI-dev
metrics:
- type: cosine_accuracy
value: 0.697265625
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.9149889349937439
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.5579399141630902
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8168730735778809
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.44368600682593856
name: Cosine Precision
- type: cosine_recall
value: 0.7514450867052023
name: Cosine Recall
- type: cosine_ap
value: 0.5242647012381595
name: Cosine Ap
- type: dot_accuracy
value: 0.6953125
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 700.5377197265625
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.5545851528384279
name: Dot F1
- type: dot_f1_threshold
value: 623.9097900390625
name: Dot F1 Threshold
- type: dot_precision
value: 0.4456140350877193
name: Dot Precision
- type: dot_recall
value: 0.7341040462427746
name: Dot Recall
- type: dot_ap
value: 0.5241554075174903
name: Dot Ap
- type: manhattan_accuracy
value: 0.6953125
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 235.2859344482422
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.5517241379310345
name: Manhattan F1
- type: manhattan_f1_threshold
value: 347.6478271484375
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.4580152671755725
name: Manhattan Precision
- type: manhattan_recall
value: 0.6936416184971098
name: Manhattan Recall
- type: manhattan_ap
value: 0.5239028585462809
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.697265625
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 11.389955520629883
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.5567451820128478
name: Euclidean F1
- type: euclidean_f1_threshold
value: 16.685447692871094
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.4421768707482993
name: Euclidean Precision
- type: euclidean_recall
value: 0.7514450867052023
name: Euclidean Recall
- type: euclidean_ap
value: 0.5247420500207234
name: Euclidean Ap
- type: max_accuracy
value: 0.697265625
name: Max Accuracy
- type: max_accuracy_threshold
value: 700.5377197265625
name: Max Accuracy Threshold
- type: max_f1
value: 0.5579399141630902
name: Max F1
- type: max_f1_threshold
value: 623.9097900390625
name: Max F1 Threshold
- type: max_precision
value: 0.4580152671755725
name: Max Precision
- type: max_recall
value: 0.7514450867052023
name: Max Recall
- type: max_ap
value: 0.5247420500207234
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: Qnli dev
type: Qnli-dev
metrics:
- type: cosine_accuracy
value: 0.66796875
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.804556131362915
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.684297520661157
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7130892276763916
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.5609756097560976
name: Cosine Precision
- type: cosine_recall
value: 0.8771186440677966
name: Cosine Recall
- type: cosine_ap
value: 0.6982323361009166
name: Cosine Ap
- type: dot_accuracy
value: 0.669921875
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 609.73779296875
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.6845637583892616
name: Dot F1
- type: dot_f1_threshold
value: 546.085205078125
name: Dot F1 Threshold
- type: dot_precision
value: 0.5666666666666667
name: Dot Precision
- type: dot_recall
value: 0.864406779661017
name: Dot Recall
- type: dot_ap
value: 0.6969471595240038
name: Dot Ap
- type: manhattan_accuracy
value: 0.67578125
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 363.409423828125
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.687392055267703
name: Manhattan F1
- type: manhattan_f1_threshold
value: 430.9031982421875
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.5801749271137027
name: Manhattan Precision
- type: manhattan_recall
value: 0.8432203389830508
name: Manhattan Recall
- type: manhattan_ap
value: 0.7021641064533223
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.666015625
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 17.237049102783203
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.6844741235392321
name: Euclidean F1
- type: euclidean_f1_threshold
value: 20.860803604125977
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.5647382920110193
name: Euclidean Precision
- type: euclidean_recall
value: 0.8686440677966102
name: Euclidean Recall
- type: euclidean_ap
value: 0.6983440307123455
name: Euclidean Ap
- type: max_accuracy
value: 0.67578125
name: Max Accuracy
- type: max_accuracy_threshold
value: 609.73779296875
name: Max Accuracy Threshold
- type: max_f1
value: 0.687392055267703
name: Max F1
- type: max_f1_threshold
value: 546.085205078125
name: Max F1 Threshold
- type: max_precision
value: 0.5801749271137027
name: Max Precision
- type: max_recall
value: 0.8771186440677966
name: Max Recall
- type: max_ap
value: 0.7021641064533223
name: Max Ap
---
# SentenceTransformer based on microsoft/deberta-v3-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) <!-- at revision a36c739020e01763fe789b4b85e2df55d6180012 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): AdvancedWeightedPooling(
(linear_cls_pj): Linear(in_features=768, out_features=768, bias=True)
(linear_cls_Qpj): Linear(in_features=768, out_features=768, bias=True)
(linear_mean_pj): Linear(in_features=768, out_features=768, bias=True)
(linear_attnOut): Linear(in_features=768, out_features=768, bias=True)
(mha): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
)
(layernorm_output): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layernorm_weightedPooing): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layernorm_pjCls): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layernorm_pjMean): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layernorm_attnOut): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
)
```
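The `AdvancedWeightedPooling` head is a custom module rather than one of the built-in Sentence Transformers pooling modes, and its exact forward pass is not documented in this card. The sketch below only illustrates the general idea suggested by its submodules (projecting the CLS token and the mean-pooled tokens, attending over the token embeddings with multi-head attention, and layer-normalizing the combined result); it is an assumption-laden illustration, **not** the actual implementation.
```python
import torch
import torch.nn as nn


class WeightedClsMeanPooling(nn.Module):
    """Illustrative sketch only: combines a projected CLS token, a projected
    mean-pooled representation, and an attention-based token summary.
    This is NOT the actual AdvancedWeightedPooling implementation."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cls_proj = nn.Linear(dim, dim)
        self.mean_proj = nn.Linear(dim, dim)
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len)
        mask = attention_mask.unsqueeze(-1).float()
        cls = self.cls_proj(token_embeddings[:, 0])  # projected [CLS] token
        mean = self.mean_proj((token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9))
        # Let the CLS token attend over all tokens as an extra weighted summary.
        attn_out, _ = self.mha(
            cls.unsqueeze(1), token_embeddings, token_embeddings,
            key_padding_mask=attention_mask == 0,
        )
        attn = self.attn_proj(attn_out.squeeze(1))
        return self.norm(cls + mean + attn)  # (batch, dim) sentence embedding
```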
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("bobox/DeBERTa3-s-CustomPoolin-toytest-step1")
# Run inference
sentences = [
'who was the first person who made the violin',
'Violin The first makers of violins probably borrowed from various developments of the Byzantine lira. These included the rebec;[13] the Arabic rebab; the vielle (also known as the fidel or viuola); and the lira da braccio[11][14] The violin in its present form emerged in early 16th-century northern Italy. The earliest pictures of violins, albeit with three strings, are seen in northern Italy around 1530, at around the same time as the words "violino" and "vyollon" are seen in Italian and French documents. One of the earliest explicit descriptions of the instrument, including its tuning, is from the Epitome musical by Jambe de Fer, published in Lyon in 1556.[15] By this time, the violin had already begun to spread throughout Europe.',
"Alice in Chains Alice in Chains is an American rock band from Seattle, Washington, formed in 1987 by guitarist and vocalist Jerry Cantrell and drummer Sean Kinney,[1] who recruited bassist Mike Starr[1] and lead vocalist Layne Staley.[1][2][3] Starr was replaced by Mike Inez in 1993.[4] After Staley's death in 2002, William DuVall joined in 2006 as co-lead vocalist and rhythm guitarist. The band took its name from Staley's previous group, the glam metal band Alice N' Chains.[5][2]",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.67 |
| **spearman_cosine** | **0.674** |
| pearson_manhattan | 0.6847 |
| spearman_manhattan | 0.6765 |
| pearson_euclidean | 0.682 |
| spearman_euclidean | 0.6744 |
| pearson_dot | 0.6678 |
| spearman_dot | 0.6715 |
| pearson_max | 0.6847 |
| spearman_max | 0.6765 |
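The Spearman cosine correlation above is the headline metric for this evaluator. Below is a minimal sketch of how such an evaluation can be reproduced; the sentence pairs and gold scores are placeholders for illustration, not the actual `sts-test` split.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("bobox/DeBERTa3-s-CustomPoolin-toytest-step1")

# Placeholder pairs with gold similarity scores in [0, 1]; substitute the real STS test split.
sentences1 = ["A man is playing a guitar.", "A woman is slicing a tomato."]
sentences2 = ["Someone plays an instrument.", "A person is cutting a vegetable."]
gold_scores = [0.8, 0.75]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts-test")
results = evaluator(model)
print(results)  # Pearson/Spearman correlations; exact return format depends on the library version
```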
#### Binary Classification
* Dataset: `allNLI-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.6973 |
| cosine_accuracy_threshold | 0.915 |
| cosine_f1 | 0.5579 |
| cosine_f1_threshold | 0.8169 |
| cosine_precision | 0.4437 |
| cosine_recall | 0.7514 |
| cosine_ap | 0.5243 |
| dot_accuracy | 0.6953 |
| dot_accuracy_threshold | 700.5377 |
| dot_f1 | 0.5546 |
| dot_f1_threshold | 623.9098 |
| dot_precision | 0.4456 |
| dot_recall | 0.7341 |
| dot_ap | 0.5242 |
| manhattan_accuracy | 0.6953 |
| manhattan_accuracy_threshold | 235.2859 |
| manhattan_f1 | 0.5517 |
| manhattan_f1_threshold | 347.6478 |
| manhattan_precision | 0.458 |
| manhattan_recall | 0.6936 |
| manhattan_ap | 0.5239 |
| euclidean_accuracy | 0.6973 |
| euclidean_accuracy_threshold | 11.39 |
| euclidean_f1 | 0.5567 |
| euclidean_f1_threshold | 16.6854 |
| euclidean_precision | 0.4422 |
| euclidean_recall | 0.7514 |
| euclidean_ap | 0.5247 |
| max_accuracy | 0.6973 |
| max_accuracy_threshold | 700.5377 |
| max_f1 | 0.5579 |
| max_f1_threshold | 623.9098 |
| max_precision | 0.458 |
| max_recall | 0.7514 |
| **max_ap** | **0.5247** |
#### Binary Classification
* Dataset: `Qnli-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.668 |
| cosine_accuracy_threshold | 0.8046 |
| cosine_f1 | 0.6843 |
| cosine_f1_threshold | 0.7131 |
| cosine_precision | 0.561 |
| cosine_recall | 0.8771 |
| cosine_ap | 0.6982 |
| dot_accuracy | 0.6699 |
| dot_accuracy_threshold | 609.7378 |
| dot_f1 | 0.6846 |
| dot_f1_threshold | 546.0852 |
| dot_precision | 0.5667 |
| dot_recall | 0.8644 |
| dot_ap | 0.6969 |
| manhattan_accuracy | 0.6758 |
| manhattan_accuracy_threshold | 363.4094 |
| manhattan_f1 | 0.6874 |
| manhattan_f1_threshold | 430.9032 |
| manhattan_precision | 0.5802 |
| manhattan_recall | 0.8432 |
| manhattan_ap | 0.7022 |
| euclidean_accuracy | 0.666 |
| euclidean_accuracy_threshold | 17.237 |
| euclidean_f1 | 0.6845 |
| euclidean_f1_threshold | 20.8608 |
| euclidean_precision | 0.5647 |
| euclidean_recall | 0.8686 |
| euclidean_ap | 0.6983 |
| max_accuracy | 0.6758 |
| max_accuracy_threshold | 609.7378 |
| max_f1 | 0.6874 |
| max_f1_threshold | 546.0852 |
| max_precision | 0.5802 |
| max_recall | 0.8771 |
| **max_ap** | **0.7022** |
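The binary-classification evaluators follow the same pattern but take sentence pairs with 0/1 labels and sweep decision thresholds per distance function. A minimal sketch (with placeholder pairs and labels, not the actual `Qnli-dev` or `allNLI-dev` data) could look like this:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("bobox/DeBERTa3-s-CustomPoolin-toytest-step1")

# Placeholder pairs; label 1 means the pair should embed close together, 0 means it should not.
sentences1 = ["Where is the Eiffel Tower?", "A dog runs on the beach."]
sentences2 = ["The Eiffel Tower is in Paris.", "A cat sleeps indoors."]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="Qnli-dev")
print(evaluator(model))  # accuracy / F1 / AP per distance; return format depends on the library version
```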
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 32,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 29.3 tokens</li><li>max: 343 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 57.53 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>A Slippery Dick is what type of creature?</code> | <code>The Slippery Dick (Juvenile) - Whats That Fish! Description Also known as Sand-reef Wrasses and Slippery Dick Wrasse. Found singly or in pairs or in groups constantly circling around reefs, sea grass beds and sandy areas. Colours highly variable especially between juvenile to adult. They feed on hard shell invertebrates. Length - 18cm Depth - 2-12m Widespread Western Atlantic & Caribbean Most reef fish seen by divers during the day are grazers, that cruise around just above the surface of the coral or snoop into crevices looking for algae, worms and small crustaceans. Wrasses have small protruding teeth and graze the bottom taking in a variety of snails, worms, crabs, shrimps and eggs. Any hard coats or thick shells are then ground down by their pharyngeal jaws and the delicacies inside digested. From juvenile to adult wrasses dramatically alter their colour and body shapes. Wrasses are always on the go during the day, but are the first to go to bed and the last to rise. Small wrasses dive below the sand to sleep and larger wrasses wedge themselves in crevasses. Related creatures Heads up! Many creatures change during their life. Juvenile fish become adults and some change shape or their colour. Some species change sex and others just get older. The following creature(s) are known relatives of the Slippery Dick (Juvenile). Click the image(s) to explore further or hover over to get a better view! Slippery Dick</code> |
| <code>e.	in solids the atoms are closely locked in position and can only vibrate, in liquids the atoms and molecules are more loosely connected and can collide with and move past one another, while in gases the atoms or molecules are free to move independently, colliding frequently.</code> | <code>Within a substance, atoms that collide frequently and move independently of one another are most likely in a gas</code> |
| <code>In December 2015 , the film was ranked # 192 on IMDb .</code> | <code>As of December 2015 , it is the # 192 highest rated film on IMDb.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.025}
```
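GISTEmbedLoss pairs the model being trained with a frozen guide encoder that filters out false in-batch negatives. As a rough sketch of how a loss like the one above can be set up (the guide checkpoint is an assumption: the card only prints the guide's architecture, a CLS-pooled, normalized BERT encoder, not its name):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("microsoft/deberta-v3-small")  # base model being fine-tuned

# Assumed guide checkpoint; any CLS-pooled, normalized BERT-style embedding model matches the printed repr.
guide = SentenceTransformer("BAAI/bge-base-en-v1.5")

loss = GISTEmbedLoss(model, guide=guide, temperature=0.025)
# `loss` would then be passed to the SentenceTransformerTrainer together with the (sentence1, sentence2) dataset.
```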
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,664 evaluation samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 28.74 tokens</li><li>max: 330 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 56.55 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What component of an organism, made up of many cells, in turn makes up an organ?</code> | <code></code> |
| <code>Diffusion Diffusion is a process where atoms or molecules move from areas of high concentration to areas of low concentration.</code> | <code>Diffusion is the process in which a substance naturally moves from an area of higher to lower concentration.</code> |
| <code>In the 1966 movie The Good, The Bad And The Ugly, Clint Eastwood played the Good" and Lee van Cleef played "the Bad", but who played "the Ugly"?</code> | <code>View All Photos (10) Movie Info In the last and the best installment of his so-called "Dollars" trilogy of Sergio Leone-directed "spaghetti westerns," Clint Eastwood reprised the role of a taciturn, enigmatic loner. Here he searches for a cache of stolen gold against rivals the Bad (Lee Van Cleef), a ruthless bounty hunter, and the Ugly (Eli Wallach), a Mexican bandit. Though dubbed "the Good," Eastwood's character is not much better than his opponents -- he is just smarter and shoots faster. The film's title reveals its ironic attitude toward the canonized heroes of the classical western. "The real West was the world of violence, fear, and brutal instincts," claimed Leone. "In pursuit of profit there is no such thing as good and evil, generosity or deviousness; everything depends on chance, and not the best wins but the luckiest." Immensely entertaining and beautifully shot in Techniscope by Tonino Delli Colli, the movie is a virtually definitive "spaghetti western," rivaled only by Leone's own Once Upon a Time in the West (1968). The main musical theme by Ennio Morricone hit #1 on the British pop charts. Originally released in Italy at 177 minutes, the movie was later cut for its international release. ~ Yuri German, Rovi Rating:</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.025}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 256
- `lr_scheduler_type`: cosine_with_min_lr
- `lr_scheduler_kwargs`: {'num_cycles': 0.5, 'min_lr': 3.3333333333333337e-06}
- `warmup_ratio`: 0.33
- `save_safetensors`: False
- `fp16`: True
- `push_to_hub`: True
- `hub_model_id`: bobox/DeBERTa3-s-CustomPoolin-toytest-step1-checkpoints-tmp
- `hub_strategy`: all_checkpoints
- `batch_sampler`: no_duplicates
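As a rough sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows (assuming a recent sentence-transformers 3.x API; `output_dir` is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=256,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"num_cycles": 0.5, "min_lr": 3.3333333333333337e-06},
    warmup_ratio=0.33,
    save_safetensors=False,
    fp16=True,
    push_to_hub=True,
    hub_model_id="bobox/DeBERTa3-s-CustomPoolin-toytest-step1-checkpoints-tmp",
    hub_strategy="all_checkpoints",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```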
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_min_lr
- `lr_scheduler_kwargs`: {'num_cycles': 0.5, 'min_lr': 3.3333333333333337e-06}
- `warmup_ratio`: 0.33
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: False
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: bobox/DeBERTa3-s-CustomPoolin-toytest-step1-checkpoints-tmp
- `hub_strategy`: all_checkpoints
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | sts-test_spearman_cosine | allNLI-dev_max_ap | Qnli-dev_max_ap |
|:------:|:----:|:-------------:|:---------------:|:------------------------:|:-----------------:|:---------------:|
| 0.0010 | 1 | 4.9603 | - | - | - | - |
| 0.0020 | 2 | 28.2529 | - | - | - | - |
| 0.0030 | 3 | 27.6365 | - | - | - | - |
| 0.0039 | 4 | 6.1387 | - | - | - | - |
| 0.0049 | 5 | 5.5753 | - | - | - | - |
| 0.0059 | 6 | 5.6951 | - | - | - | - |
| 0.0069 | 7 | 6.3533 | - | - | - | - |
| 0.0079 | 8 | 27.3848 | - | - | - | - |
| 0.0089 | 9 | 3.8501 | - | - | - | - |
| 0.0098 | 10 | 27.911 | - | - | - | - |
| 0.0108 | 11 | 4.9042 | - | - | - | - |
| 0.0118 | 12 | 6.8003 | - | - | - | - |
| 0.0128 | 13 | 5.7317 | - | - | - | - |
| 0.0138 | 14 | 20.261 | - | - | - | - |
| 0.0148 | 15 | 27.9051 | - | - | - | - |
| 0.0157 | 16 | 5.5959 | - | - | - | - |
| 0.0167 | 17 | 5.8052 | - | - | - | - |
| 0.0177 | 18 | 4.5088 | - | - | - | - |
| 0.0187 | 19 | 7.3472 | - | - | - | - |
| 0.0197 | 20 | 5.8668 | - | - | - | - |
| 0.0207 | 21 | 6.4083 | - | - | - | - |
| 0.0217 | 22 | 6.011 | - | - | - | - |
| 0.0226 | 23 | 5.2394 | - | - | - | - |
| 0.0236 | 24 | 4.2966 | - | - | - | - |
| 0.0246 | 25 | 26.605 | - | - | - | - |
| 0.0256 | 26 | 6.2067 | - | - | - | - |
| 0.0266 | 27 | 6.0346 | - | - | - | - |
| 0.0276 | 28 | 5.4676 | - | - | - | - |
| 0.0285 | 29 | 6.4292 | - | - | - | - |
| 0.0295 | 30 | 26.6452 | - | - | - | - |
| 0.0305 | 31 | 18.8401 | - | - | - | - |
| 0.0315 | 32 | 7.4531 | - | - | - | - |
| 0.0325 | 33 | 4.8286 | - | - | - | - |
| 0.0335 | 34 | 5.0078 | - | - | - | - |
| 0.0344 | 35 | 5.4115 | - | - | - | - |
| 0.0354 | 36 | 5.4196 | - | - | - | - |
| 0.0364 | 37 | 4.5023 | - | - | - | - |
| 0.0374 | 38 | 5.376 | - | - | - | - |
| 0.0384 | 39 | 5.2303 | - | - | - | - |
| 0.0394 | 40 | 5.6694 | - | - | - | - |
| 0.0404 | 41 | 4.7825 | - | - | - | - |
| 0.0413 | 42 | 4.6507 | - | - | - | - |
| 0.0423 | 43 | 24.2072 | - | - | - | - |
| 0.0433 | 44 | 4.9285 | - | - | - | - |
| 0.0443 | 45 | 6.326 | - | - | - | - |
| 0.0453 | 46 | 4.5724 | - | - | - | - |
| 0.0463 | 47 | 4.754 | - | - | - | - |
| 0.0472 | 48 | 5.5443 | - | - | - | - |
| 0.0482 | 49 | 4.5764 | - | - | - | - |
| 0.0492 | 50 | 5.1434 | - | - | - | - |
| 0.0502 | 51 | 22.6991 | - | - | - | - |
| 0.0512 | 52 | 5.4277 | - | - | - | - |
| 0.0522 | 53 | 5.0178 | - | - | - | - |
| 0.0531 | 54 | 4.8779 | - | - | - | - |
| 0.0541 | 55 | 4.2884 | - | - | - | - |
| 0.0551 | 56 | 16.0994 | - | - | - | - |
| 0.0561 | 57 | 21.31 | - | - | - | - |
| 0.0571 | 58 | 4.9721 | - | - | - | - |
| 0.0581 | 59 | 5.143 | - | - | - | - |
| 0.0591 | 60 | 3.5933 | - | - | - | - |
| 0.0600 | 61 | 5.2559 | - | - | - | - |
| 0.0610 | 62 | 4.0757 | - | - | - | - |
| 0.0620 | 63 | 3.6612 | - | - | - | - |
| 0.0630 | 64 | 4.7505 | - | - | - | - |
| 0.0640 | 65 | 4.1979 | - | - | - | - |
| 0.0650 | 66 | 3.9982 | - | - | - | - |
| 0.0659 | 67 | 4.7065 | - | - | - | - |
| 0.0669 | 68 | 5.3413 | - | - | - | - |
| 0.0679 | 69 | 3.6964 | - | - | - | - |
| 0.0689 | 70 | 17.8774 | - | - | - | - |
| 0.0699 | 71 | 4.8154 | - | - | - | - |
| 0.0709 | 72 | 4.8356 | - | - | - | - |
| 0.0719 | 73 | 4.568 | - | - | - | - |
| 0.0728 | 74 | 4.0898 | - | - | - | - |
| 0.0738 | 75 | 3.4502 | - | - | - | - |
| 0.0748 | 76 | 3.7733 | - | - | - | - |
| 0.0758 | 77 | 4.5204 | - | - | - | - |
| 0.0768 | 78 | 4.2526 | - | - | - | - |
| 0.0778 | 79 | 4.4398 | - | - | - | - |
| 0.0787 | 80 | 4.0988 | - | - | - | - |
| 0.0797 | 81 | 3.9704 | - | - | - | - |
| 0.0807 | 82 | 4.3343 | - | - | - | - |
| 0.0817 | 83 | 4.2587 | - | - | - | - |
| 0.0827 | 84 | 15.0149 | - | - | - | - |
| 0.0837 | 85 | 14.6599 | - | - | - | - |
| 0.0846 | 86 | 4.0623 | - | - | - | - |
| 0.0856 | 87 | 3.7597 | - | - | - | - |
| 0.0866 | 88 | 4.3433 | - | - | - | - |
| 0.0876 | 89 | 4.0287 | - | - | - | - |
| 0.0886 | 90 | 4.6257 | - | - | - | - |
| 0.0896 | 91 | 13.4689 | - | - | - | - |
| 0.0906 | 92 | 4.6583 | - | - | - | - |
| 0.0915 | 93 | 4.2682 | - | - | - | - |
| 0.0925 | 94 | 4.468 | - | - | - | - |
| 0.0935 | 95 | 3.4333 | - | - | - | - |
| 0.0945 | 96 | 12.7654 | - | - | - | - |
| 0.0955 | 97 | 3.5577 | - | - | - | - |
| 0.0965 | 98 | 12.5875 | - | - | - | - |
| 0.0974 | 99 | 4.2206 | - | - | - | - |
| 0.0984 | 100 | 3.5981 | - | - | - | - |
| 0.0994 | 101 | 3.5575 | - | - | - | - |
| 0.1004 | 102 | 4.0271 | - | - | - | - |
| 0.1014 | 103 | 4.0803 | - | - | - | - |
| 0.1024 | 104 | 4.0886 | - | - | - | - |
| 0.1033 | 105 | 4.176 | - | - | - | - |
| 0.1043 | 106 | 4.6653 | - | - | - | - |
| 0.1053 | 107 | 4.3076 | - | - | - | - |
| 0.1063 | 108 | 8.7282 | - | - | - | - |
| 0.1073 | 109 | 3.4192 | - | - | - | - |
| 0.1083 | 110 | 10.6027 | - | - | - | - |
| 0.1093 | 111 | 4.0959 | - | - | - | - |
| 0.1102 | 112 | 4.2785 | - | - | - | - |
| 0.1112 | 113 | 3.9945 | - | - | - | - |
| 0.1122 | 114 | 10.0652 | - | - | - | - |
| 0.1132 | 115 | 3.8621 | - | - | - | - |
| 0.1142 | 116 | 4.3975 | - | - | - | - |
| 0.1152 | 117 | 9.7899 | - | - | - | - |
| 0.1161 | 118 | 4.3812 | - | - | - | - |
| 0.1171 | 119 | 3.8715 | - | - | - | - |
| 0.1181 | 120 | 3.8327 | - | - | - | - |
| 0.1191 | 121 | 3.5103 | - | - | - | - |
| 0.1201 | 122 | 9.3158 | - | - | - | - |
| 0.1211 | 123 | 3.7201 | - | - | - | - |
| 0.1220 | 124 | 3.4311 | - | - | - | - |
| 0.1230 | 125 | 3.7946 | - | - | - | - |
| 0.1240 | 126 | 4.0456 | - | - | - | - |
| 0.125 | 127 | 3.482 | - | - | - | - |
| 0.1260 | 128 | 3.1901 | - | - | - | - |
| 0.1270 | 129 | 3.414 | - | - | - | - |
| 0.1280 | 130 | 3.4967 | - | - | - | - |
| 0.1289 | 131 | 3.6594 | - | - | - | - |
| 0.1299 | 132 | 8.066 | - | - | - | - |
| 0.1309 | 133 | 3.7872 | - | - | - | - |
| 0.1319 | 134 | 4.0023 | - | - | - | - |
| 0.1329 | 135 | 3.7728 | - | - | - | - |
| 0.1339 | 136 | 3.1893 | - | - | - | - |
| 0.1348 | 137 | 3.3635 | - | - | - | - |
| 0.1358 | 138 | 4.0195 | - | - | - | - |
| 0.1368 | 139 | 4.1097 | - | - | - | - |
| 0.1378 | 140 | 3.7903 | - | - | - | - |
| 0.1388 | 141 | 3.5748 | - | - | - | - |
| 0.1398 | 142 | 3.8104 | - | - | - | - |
| 0.1407 | 143 | 8.0411 | - | - | - | - |
| 0.1417 | 144 | 3.4819 | - | - | - | - |
| 0.1427 | 145 | 3.452 | - | - | - | - |
| 0.1437 | 146 | 3.5861 | - | - | - | - |
| 0.1447 | 147 | 3.4324 | - | - | - | - |
| 0.1457 | 148 | 3.521 | - | - | - | - |
| 0.1467 | 149 | 3.8868 | - | - | - | - |
| 0.1476 | 150 | 8.1191 | - | - | - | - |
| 0.1486 | 151 | 3.6447 | - | - | - | - |
| 0.1496 | 152 | 2.9436 | - | - | - | - |
| 0.1506 | 153 | 8.1535 | 2.2032 | 0.2236 | 0.4009 | 0.5892 |
| 0.1516 | 154 | 3.9619 | - | - | - | - |
| 0.1526 | 155 | 3.1301 | - | - | - | - |
| 0.1535 | 156 | 3.0478 | - | - | - | - |
| 0.1545 | 157 | 3.2986 | - | - | - | - |
| 0.1555 | 158 | 3.2847 | - | - | - | - |
| 0.1565 | 159 | 3.6599 | - | - | - | - |
| 0.1575 | 160 | 3.2238 | - | - | - | - |
| 0.1585 | 161 | 2.8897 | - | - | - | - |
| 0.1594 | 162 | 3.9443 | - | - | - | - |
| 0.1604 | 163 | 3.3733 | - | - | - | - |
| 0.1614 | 164 | 3.7444 | - | - | - | - |
| 0.1624 | 165 | 3.4813 | - | - | - | - |
| 0.1634 | 166 | 2.6865 | - | - | - | - |
| 0.1644 | 167 | 2.7587 | - | - | - | - |
| 0.1654 | 168 | 3.3628 | - | - | - | - |
| 0.1663 | 169 | 3.0035 | - | - | - | - |
| 0.1673 | 170 | 10.1591 | - | - | - | - |
| 0.1683 | 171 | 3.5366 | - | - | - | - |
| 0.1693 | 172 | 8.4047 | - | - | - | - |
| 0.1703 | 173 | 3.8643 | - | - | - | - |
| 0.1713 | 174 | 3.3529 | - | - | - | - |
| 0.1722 | 175 | 3.7143 | - | - | - | - |
| 0.1732 | 176 | 3.3323 | - | - | - | - |
| 0.1742 | 177 | 3.1206 | - | - | - | - |
| 0.1752 | 178 | 3.1348 | - | - | - | - |
| 0.1762 | 179 | 7.6011 | - | - | - | - |
| 0.1772 | 180 | 3.7025 | - | - | - | - |
| 0.1781 | 181 | 10.5662 | - | - | - | - |
| 0.1791 | 182 | 8.966 | - | - | - | - |
| 0.1801 | 183 | 9.426 | - | - | - | - |
| 0.1811 | 184 | 3.0025 | - | - | - | - |
| 0.1821 | 185 | 7.0984 | - | - | - | - |
| 0.1831 | 186 | 7.3808 | - | - | - | - |
| 0.1841 | 187 | 2.8657 | - | - | - | - |
| 0.1850 | 188 | 6.5636 | - | - | - | - |
| 0.1860 | 189 | 3.4702 | - | - | - | - |
| 0.1870 | 190 | 5.9302 | - | - | - | - |
| 0.1880 | 191 | 3.2406 | - | - | - | - |
| 0.1890 | 192 | 3.4459 | - | - | - | - |
| 0.1900 | 193 | 5.269 | - | - | - | - |
| 0.1909 | 194 | 4.8605 | - | - | - | - |
| 0.1919 | 195 | 2.9891 | - | - | - | - |
| 0.1929 | 196 | 3.6681 | - | - | - | - |
| 0.1939 | 197 | 3.1589 | - | - | - | - |
| 0.1949 | 198 | 3.1835 | - | - | - | - |
| 0.1959 | 199 | 3.7561 | - | - | - | - |
| 0.1969 | 200 | 4.0891 | - | - | - | - |
| 0.1978 | 201 | 3.563 | - | - | - | - |
| 0.1988 | 202 | 3.7433 | - | - | - | - |
| 0.1998 | 203 | 3.3813 | - | - | - | - |
| 0.2008 | 204 | 5.2311 | - | - | - | - |
| 0.2018 | 205 | 3.3494 | - | - | - | - |
| 0.2028 | 206 | 3.3533 | - | - | - | - |
| 0.2037 | 207 | 3.688 | - | - | - | - |
| 0.2047 | 208 | 3.5342 | - | - | - | - |
| 0.2057 | 209 | 4.9381 | - | - | - | - |
| 0.2067 | 210 | 3.1839 | - | - | - | - |
| 0.2077 | 211 | 3.0465 | - | - | - | - |
| 0.2087 | 212 | 3.1232 | - | - | - | - |
| 0.2096 | 213 | 4.6297 | - | - | - | - |
| 0.2106 | 214 | 2.9834 | - | - | - | - |
| 0.2116 | 215 | 4.2231 | - | - | - | - |
| 0.2126 | 216 | 3.1458 | - | - | - | - |
| 0.2136 | 217 | 3.2525 | - | - | - | - |
| 0.2146 | 218 | 3.5971 | - | - | - | - |
| 0.2156 | 219 | 3.5616 | - | - | - | - |
| 0.2165 | 220 | 3.2378 | - | - | - | - |
| 0.2175 | 221 | 2.9075 | - | - | - | - |
| 0.2185 | 222 | 3.0391 | - | - | - | - |
| 0.2195 | 223 | 3.5573 | - | - | - | - |
| 0.2205 | 224 | 3.2092 | - | - | - | - |
| 0.2215 | 225 | 3.2646 | - | - | - | - |
| 0.2224 | 226 | 3.0886 | - | - | - | - |
| 0.2234 | 227 | 3.5241 | - | - | - | - |
| 0.2244 | 228 | 3.0111 | - | - | - | - |
| 0.2254 | 229 | 3.707 | - | - | - | - |
| 0.2264 | 230 | 5.3822 | - | - | - | - |
| 0.2274 | 231 | 3.2646 | - | - | - | - |
| 0.2283 | 232 | 2.7021 | - | - | - | - |
| 0.2293 | 233 | 3.5131 | - | - | - | - |
| 0.2303 | 234 | 3.103 | - | - | - | - |
| 0.2313 | 235 | 2.9535 | - | - | - | - |
| 0.2323 | 236 | 2.9631 | - | - | - | - |
| 0.2333 | 237 | 2.8068 | - | - | - | - |
| 0.2343 | 238 | 3.4251 | - | - | - | - |
| 0.2352 | 239 | 2.8495 | - | - | - | - |
| 0.2362 | 240 | 2.9972 | - | - | - | - |
| 0.2372 | 241 | 3.3509 | - | - | - | - |
| 0.2382 | 242 | 2.9234 | - | - | - | - |
| 0.2392 | 243 | 2.4086 | - | - | - | - |
| 0.2402 | 244 | 3.1282 | - | - | - | - |
| 0.2411 | 245 | 2.3352 | - | - | - | - |
| 0.2421 | 246 | 2.4706 | - | - | - | - |
| 0.2431 | 247 | 3.5449 | - | - | - | - |
| 0.2441 | 248 | 2.8963 | - | - | - | - |
| 0.2451 | 249 | 2.773 | - | - | - | - |
| 0.2461 | 250 | 2.355 | - | - | - | - |
| 0.2470 | 251 | 2.656 | - | - | - | - |
| 0.2480 | 252 | 2.6221 | - | - | - | - |
| 0.2490 | 253 | 8.6739 | - | - | - | - |
| 0.25 | 254 | 10.8242 | - | - | - | - |
| 0.2510 | 255 | 2.3408 | - | - | - | - |
| 0.2520 | 256 | 2.1221 | - | - | - | - |
| 0.2530 | 257 | 3.295 | - | - | - | - |
| 0.2539 | 258 | 2.5896 | - | - | - | - |
| 0.2549 | 259 | 2.1215 | - | - | - | - |
| 0.2559 | 260 | 9.4851 | - | - | - | - |
| 0.2569 | 261 | 2.1982 | - | - | - | - |
| 0.2579 | 262 | 3.0568 | - | - | - | - |
| 0.2589 | 263 | 2.6269 | - | - | - | - |
| 0.2598 | 264 | 2.4792 | - | - | - | - |
| 0.2608 | 265 | 1.9445 | - | - | - | - |
| 0.2618 | 266 | 2.4061 | - | - | - | - |
| 0.2628 | 267 | 8.3116 | - | - | - | - |
| 0.2638 | 268 | 8.0804 | - | - | - | - |
| 0.2648 | 269 | 2.1674 | - | - | - | - |
| 0.2657 | 270 | 7.1975 | - | - | - | - |
| 0.2667 | 271 | 5.9104 | - | - | - | - |
| 0.2677 | 272 | 2.498 | - | - | - | - |
| 0.2687 | 273 | 2.5249 | - | - | - | - |
| 0.2697 | 274 | 2.7152 | - | - | - | - |
| 0.2707 | 275 | 2.7904 | - | - | - | - |
| 0.2717 | 276 | 2.7745 | - | - | - | - |
| 0.2726 | 277 | 2.9741 | - | - | - | - |
| 0.2736 | 278 | 1.8215 | - | - | - | - |
| 0.2746 | 279 | 4.6844 | - | - | - | - |
| 0.2756 | 280 | 2.8613 | - | - | - | - |
| 0.2766 | 281 | 2.7147 | - | - | - | - |
| 0.2776 | 282 | 2.814 | - | - | - | - |
| 0.2785 | 283 | 2.3569 | - | - | - | - |
| 0.2795 | 284 | 2.672 | - | - | - | - |
| 0.2805 | 285 | 3.2052 | - | - | - | - |
| 0.2815 | 286 | 2.8056 | - | - | - | - |
| 0.2825 | 287 | 2.6268 | - | - | - | - |
| 0.2835 | 288 | 2.5641 | - | - | - | - |
| 0.2844 | 289 | 2.4475 | - | - | - | - |
| 0.2854 | 290 | 2.7377 | - | - | - | - |
| 0.2864 | 291 | 2.3831 | - | - | - | - |
| 0.2874 | 292 | 8.8069 | - | - | - | - |
| 0.2884 | 293 | 2.186 | - | - | - | - |
| 0.2894 | 294 | 2.3389 | - | - | - | - |
| 0.2904 | 295 | 1.9744 | - | - | - | - |
| 0.2913 | 296 | 2.4491 | - | - | - | - |
| 0.2923 | 297 | 2.5668 | - | - | - | - |
| 0.2933 | 298 | 2.1939 | - | - | - | - |
| 0.2943 | 299 | 2.2832 | - | - | - | - |
| 0.2953 | 300 | 2.7508 | - | - | - | - |
| 0.2963 | 301 | 2.5206 | - | - | - | - |
| 0.2972 | 302 | 2.3522 | - | - | - | - |
| 0.2982 | 303 | 2.7186 | - | - | - | - |
| 0.2992 | 304 | 2.1369 | - | - | - | - |
| 0.3002 | 305 | 9.7972 | - | - | - | - |
| 0.3012 | 306 | 1.9378 | 1.5786 | 0.2924 | 0.4272 | 0.6159 |
| 0.3022 | 307 | 2.5365 | - | - | - | - |
| 0.3031 | 308 | 2.0346 | - | - | - | - |
| 0.3041 | 309 | 2.0721 | - | - | - | - |
| 0.3051 | 310 | 2.6966 | - | - | - | - |
| 0.3061 | 311 | 2.6757 | - | - | - | - |
| 0.3071 | 312 | 10.6395 | - | - | - | - |
| 0.3081 | 313 | 2.8671 | - | - | - | - |
| 0.3091 | 314 | 2.0144 | - | - | - | - |
| 0.3100 | 315 | 9.9338 | - | - | - | - |
| 0.3110 | 316 | 2.6167 | - | - | - | - |
| 0.3120 | 317 | 2.1342 | - | - | - | - |
| 0.3130 | 318 | 9.0369 | - | - | - | - |
| 0.3140 | 319 | 2.0182 | - | - | - | - |
| 0.3150 | 320 | 2.2189 | - | - | - | - |
| 0.3159 | 321 | 1.9667 | - | - | - | - |
| 0.3169 | 322 | 2.3371 | - | - | - | - |
| 0.3179 | 323 | 6.9866 | - | - | - | - |
| 0.3189 | 324 | 1.6119 | - | - | - | - |
| 0.3199 | 325 | 1.8615 | - | - | - | - |
| 0.3209 | 326 | 2.1708 | - | - | - | - |
| 0.3219 | 327 | 2.0174 | - | - | - | - |
| 0.3228 | 328 | 6.7891 | - | - | - | - |
| 0.3238 | 329 | 2.155 | - | - | - | - |
| 0.3248 | 330 | 2.4636 | - | - | - | - |
| 0.3258 | 331 | 1.9844 | - | - | - | - |
| 0.3268 | 332 | 1.9035 | - | - | - | - |
| 0.3278 | 333 | 2.0729 | - | - | - | - |
| 0.3287 | 334 | 1.5715 | - | - | - | - |
| 0.3297 | 335 | 2.7211 | - | - | - | - |
| 0.3307 | 336 | 2.0351 | - | - | - | - |
| 0.3317 | 337 | 2.4049 | - | - | - | - |
| 0.3327 | 338 | 2.3939 | - | - | - | - |
| 0.3337 | 339 | 1.7353 | - | - | - | - |
| 0.3346 | 340 | 1.8393 | - | - | - | - |
| 0.3356 | 341 | 2.2874 | - | - | - | - |
| 0.3366 | 342 | 1.8566 | - | - | - | - |
| 0.3376 | 343 | 2.2676 | - | - | - | - |
| 0.3386 | 344 | 1.7895 | - | - | - | - |
| 0.3396 | 345 | 2.2506 | - | - | - | - |
| 0.3406 | 346 | 1.5613 | - | - | - | - |
| 0.3415 | 347 | 2.3531 | - | - | - | - |
| 0.3425 | 348 | 1.99 | - | - | - | - |
| 0.3435 | 349 | 12.0831 | - | - | - | - |
| 0.3445 | 350 | 2.0959 | - | - | - | - |
| 0.3455 | 351 | 2.0641 | - | - | - | - |
| 0.3465 | 352 | 1.9197 | - | - | - | - |
| 0.3474 | 353 | 1.9382 | - | - | - | - |
| 0.3484 | 354 | 2.3819 | - | - | - | - |
| 0.3494 | 355 | 1.6053 | - | - | - | - |
| 0.3504 | 356 | 2.4719 | - | - | - | - |
| 0.3514 | 357 | 1.5602 | - | - | - | - |
| 0.3524 | 358 | 2.1675 | - | - | - | - |
| 0.3533 | 359 | 11.5856 | - | - | - | - |
| 0.3543 | 360 | 9.3718 | - | - | - | - |
| 0.3553 | 361 | 1.8952 | - | - | - | - |
| 0.3563 | 362 | 1.701 | - | - | - | - |
| 0.3573 | 363 | 1.46 | - | - | - | - |
| 0.3583 | 364 | 1.7913 | - | - | - | - |
| 0.3593 | 365 | 9.1152 | - | - | - | - |
| 0.3602 | 366 | 9.2681 | - | - | - | - |
| 0.3612 | 367 | 2.2932 | - | - | - | - |
| 0.3622 | 368 | 1.7176 | - | - | - | - |
| 0.3632 | 369 | 2.2559 | - | - | - | - |
| 0.3642 | 370 | 1.9846 | - | - | - | - |
| 0.3652 | 371 | 1.8022 | - | - | - | - |
| 0.3661 | 372 | 8.1128 | - | - | - | - |
| 0.3671 | 373 | 6.929 | - | - | - | - |
| 0.3681 | 374 | 1.9038 | - | - | - | - |
| 0.3691 | 375 | 1.3899 | - | - | - | - |
| 0.3701 | 376 | 1.5677 | - | - | - | - |
| 0.3711 | 377 | 5.2357 | - | - | - | - |
| 0.3720 | 378 | 2.2304 | - | - | - | - |
| 0.3730 | 379 | 2.1727 | - | - | - | - |
| 0.3740 | 380 | 2.2941 | - | - | - | - |
| 0.375 | 381 | 2.2257 | - | - | - | - |
| 0.3760 | 382 | 1.7489 | - | - | - | - |
| 0.3770 | 383 | 1.5027 | - | - | - | - |
| 0.3780 | 384 | 1.6917 | - | - | - | - |
| 0.3789 | 385 | 5.7867 | - | - | - | - |
| 0.3799 | 386 | 1.6871 | - | - | - | - |
| 0.3809 | 387 | 1.5652 | - | - | - | - |
| 0.3819 | 388 | 2.1691 | - | - | - | - |
| 0.3829 | 389 | 1.869 | - | - | - | - |
| 0.3839 | 390 | 2.1934 | - | - | - | - |
| 0.3848 | 391 | 7.0152 | - | - | - | - |
| 0.3858 | 392 | 2.0454 | - | - | - | - |
| 0.3868 | 393 | 1.8098 | - | - | - | - |
| 0.3878 | 394 | 5.7529 | - | - | - | - |
| 0.3888 | 395 | 1.3949 | - | - | - | - |
| 0.3898 | 396 | 1.5962 | - | - | - | - |
| 0.3907 | 397 | 6.1436 | - | - | - | - |
| 0.3917 | 398 | 5.2979 | - | - | - | - |
| 0.3927 | 399 | 1.2422 | - | - | - | - |
| 0.3937 | 400 | 2.1152 | - | - | - | - |
| 0.3947 | 401 | 1.6679 | - | - | - | - |
| 0.3957 | 402 | 4.2978 | - | - | - | - |
| 0.3967 | 403 | 1.624 | - | - | - | - |
| 0.3976 | 404 | 2.0267 | - | - | - | - |
| 0.3986 | 405 | 1.3975 | - | - | - | - |
| 0.3996 | 406 | 1.905 | - | - | - | - |
| 0.4006 | 407 | 5.4419 | - | - | - | - |
| 0.4016 | 408 | 2.0008 | - | - | - | - |
| 0.4026 | 409 | 1.8387 | - | - | - | - |
| 0.4035 | 410 | 2.2391 | - | - | - | - |
| 0.4045 | 411 | 1.7153 | - | - | - | - |
| 0.4055 | 412 | 2.1533 | - | - | - | - |
| 0.4065 | 413 | 1.788 | - | - | - | - |
| 0.4075 | 414 | 3.482 | - | - | - | - |
| 0.4085 | 415 | 1.8376 | - | - | - | - |
| 0.4094 | 416 | 4.8811 | - | - | - | - |
| 0.4104 | 417 | 1.9421 | - | - | - | - |
| 0.4114 | 418 | 1.4796 | - | - | - | - |
| 0.4124 | 419 | 1.6209 | - | - | - | - |
| 0.4134 | 420 | 1.8734 | - | - | - | - |
| 0.4144 | 421 | 1.9444 | - | - | - | - |
| 0.4154 | 422 | 1.9581 | - | - | - | - |
| 0.4163 | 423 | 1.5175 | - | - | - | - |
| 0.4173 | 424 | 1.2831 | - | - | - | - |
| 0.4183 | 425 | 1.1355 | - | - | - | - |
| 0.4193 | 426 | 1.864 | - | - | - | - |
| 0.4203 | 427 | 5.1574 | - | - | - | - |
| 0.4213 | 428 | 5.323 | - | - | - | - |
| 0.4222 | 429 | 1.385 | - | - | - | - |
| 0.4232 | 430 | 1.1691 | - | - | - | - |
| 0.4242 | 431 | 1.8994 | - | - | - | - |
| 0.4252 | 432 | 5.4254 | - | - | - | - |
| 0.4262 | 433 | 1.9113 | - | - | - | - |
| 0.4272 | 434 | 2.1108 | - | - | - | - |
| 0.4281 | 435 | 1.7012 | - | - | - | - |
| 0.4291 | 436 | 1.5722 | - | - | - | - |
| 0.4301 | 437 | 1.5967 | - | - | - | - |
| 0.4311 | 438 | 5.609 | - | - | - | - |
| 0.4321 | 439 | 1.4444 | - | - | - | - |
| 0.4331 | 440 | 5.3153 | - | - | - | - |
| 0.4341 | 441 | 5.0934 | - | - | - | - |
| 0.4350 | 442 | 1.3028 | - | - | - | - |
| 0.4360 | 443 | 1.263 | - | - | - | - |
| 0.4370 | 444 | 1.8462 | - | - | - | - |
| 0.4380 | 445 | 2.1533 | - | - | - | - |
| 0.4390 | 446 | 1.5467 | - | - | - | - |
| 0.4400 | 447 | 1.4331 | - | - | - | - |
| 0.4409 | 448 | 1.4416 | - | - | - | - |
| 0.4419 | 449 | 1.5976 | - | - | - | - |
| 0.4429 | 450 | 1.8723 | - | - | - | - |
| 0.4439 | 451 | 1.1753 | - | - | - | - |
| 0.4449 | 452 | 2.3205 | - | - | - | - |
| 0.4459 | 453 | 1.6467 | - | - | - | - |
| 0.4469 | 454 | 0.9322 | - | - | - | - |
| 0.4478 | 455 | 1.958 | - | - | - | - |
| 0.4488 | 456 | 1.8746 | - | - | - | - |
| 0.4498 | 457 | 1.4546 | - | - | - | - |
| 0.4508 | 458 | 0.9795 | - | - | - | - |
| 0.4518 | 459 | 1.5458 | 1.2676 | 0.2751 | 0.4485 | 0.6433 |
| 0.4528 | 460 | 1.6558 | - | - | - | - |
| 0.4537 | 461 | 1.389 | - | - | - | - |
| 0.4547 | 462 | 1.5608 | - | - | - | - |
| 0.4557 | 463 | 1.6618 | - | - | - | - |
| 0.4567 | 464 | 1.5122 | - | - | - | - |
| 0.4577 | 465 | 1.3602 | - | - | - | - |
| 0.4587 | 466 | 1.6714 | - | - | - | - |
| 0.4596 | 467 | 1.0644 | - | - | - | - |
| 0.4606 | 468 | 7.6421 | - | - | - | - |
| 0.4616 | 469 | 1.2987 | - | - | - | - |
| 0.4626 | 470 | 1.4231 | - | - | - | - |
| 0.4636 | 471 | 7.7424 | - | - | - | - |
| 0.4646 | 472 | 1.6811 | - | - | - | - |
| 0.4656 | 473 | 1.1814 | - | - | - | - |
| 0.4665 | 474 | 1.4486 | - | - | - | - |
| 0.4675 | 475 | 1.3892 | - | - | - | - |
| 0.4685 | 476 | 1.3681 | - | - | - | - |
| 0.4695 | 477 | 1.3081 | - | - | - | - |
| 0.4705 | 478 | 0.9102 | - | - | - | - |
| 0.4715 | 479 | 1.0992 | - | - | - | - |
| 0.4724 | 480 | 6.018 | - | - | - | - |
| 0.4734 | 481 | 6.0908 | - | - | - | - |
| 0.4744 | 482 | 1.2245 | - | - | - | - |
| 0.4754 | 483 | 1.4825 | - | - | - | - |
| 0.4764 | 484 | 1.8037 | - | - | - | - |
| 0.4774 | 485 | 1.3611 | - | - | - | - |
| 0.4783 | 486 | 1.7482 | - | - | - | - |
| 0.4793 | 487 | 1.6385 | - | - | - | - |
| 0.4803 | 488 | 1.3245 | - | - | - | - |
| 0.4813 | 489 | 1.5638 | - | - | - | - |
| 0.4823 | 490 | 1.566 | - | - | - | - |
| 0.4833 | 491 | 1.9482 | - | - | - | - |
| 0.4843 | 492 | 6.0859 | - | - | - | - |
| 0.4852 | 493 | 5.8754 | - | - | - | - |
| 0.4862 | 494 | 0.9964 | - | - | - | - |
| 0.4872 | 495 | 1.5949 | - | - | - | - |
| 0.4882 | 496 | 1.3167 | - | - | - | - |
| 0.4892 | 497 | 3.9345 | - | - | - | - |
| 0.4902 | 498 | 4.3886 | - | - | - | - |
| 0.4911 | 499 | 1.6124 | - | - | - | - |
| 0.4921 | 500 | 1.2145 | - | - | - | - |
| 0.4931 | 501 | 3.5499 | - | - | - | - |
| 0.4941 | 502 | 1.2999 | - | - | - | - |
| 0.4951 | 503 | 1.2375 | - | - | - | - |
| 0.4961 | 504 | 1.1606 | - | - | - | - |
| 0.4970 | 505 | 1.4634 | - | - | - | - |
| 0.4980 | 506 | 1.35 | - | - | - | - |
| 0.4990 | 507 | 1.7187 | - | - | - | - |
| 0.5 | 508 | 1.5915 | - | - | - | - |
| 0.5010 | 509 | 1.2357 | - | - | - | - |
| 0.5020 | 510 | 3.4122 | - | - | - | - |
| 0.5030 | 511 | 4.244 | - | - | - | - |
| 0.5039 | 512 | 0.9151 | - | - | - | - |
| 0.5049 | 513 | 1.4323 | - | - | - | - |
| 0.5059 | 514 | 1.4824 | - | - | - | - |
| 0.5069 | 515 | 1.339 | - | - | - | - |
| 0.5079 | 516 | 4.1658 | - | - | - | - |
| 0.5089 | 517 | 1.3062 | - | - | - | - |
| 0.5098 | 518 | 1.2905 | - | - | - | - |
| 0.5108 | 519 | 1.1487 | - | - | - | - |
| 0.5118 | 520 | 2.8652 | - | - | - | - |
| 0.5128 | 521 | 1.2634 | - | - | - | - |
| 0.5138 | 522 | 1.6745 | - | - | - | - |
| 0.5148 | 523 | 1.6548 | - | - | - | - |
| 0.5157 | 524 | 2.4204 | - | - | - | - |
| 0.5167 | 525 | 1.7201 | - | - | - | - |
| 0.5177 | 526 | 1.761 | - | - | - | - |
| 0.5187 | 527 | 2.7098 | - | - | - | - |
| 0.5197 | 528 | 1.6425 | - | - | - | - |
| 0.5207 | 529 | 1.2466 | - | - | - | - |
| 0.5217 | 530 | 1.3339 | - | - | - | - |
| 0.5226 | 531 | 1.2398 | - | - | - | - |
| 0.5236 | 532 | 3.5325 | - | - | - | - |
| 0.5246 | 533 | 1.1303 | - | - | - | - |
| 0.5256 | 534 | 1.2601 | - | - | - | - |
| 0.5266 | 535 | 1.5762 | - | - | - | - |
| 0.5276 | 536 | 1.3992 | - | - | - | - |
| 0.5285 | 537 | 1.7125 | - | - | - | - |
| 0.5295 | 538 | 3.6759 | - | - | - | - |
| 0.5305 | 539 | 1.5468 | - | - | - | - |
| 0.5315 | 540 | 1.4316 | - | - | - | - |
| 0.5325 | 541 | 1.2797 | - | - | - | - |
| 0.5335 | 542 | 1.9122 | - | - | - | - |
| 0.5344 | 543 | 2.0367 | - | - | - | - |
| 0.5354 | 544 | 3.3029 | - | - | - | - |
| 0.5364 | 545 | 3.9263 | - | - | - | - |
| 0.5374 | 546 | 3.0101 | - | - | - | - |
| 0.5384 | 547 | 3.3555 | - | - | - | - |
| 0.5394 | 548 | 1.2068 | - | - | - | - |
| 0.5404 | 549 | 1.1566 | - | - | - | - |
| 0.5413 | 550 | 1.2773 | - | - | - | - |
| 0.5423 | 551 | 1.4047 | - | - | - | - |
| 0.5433 | 552 | 1.6048 | - | - | - | - |
| 0.5443 | 553 | 1.217 | - | - | - | - |
| 0.5453 | 554 | 1.8104 | - | - | - | - |
| 0.5463 | 555 | 1.687 | - | - | - | - |
| 0.5472 | 556 | 1.6702 | - | - | - | - |
| 0.5482 | 557 | 1.7011 | - | - | - | - |
| 0.5492 | 558 | 1.7341 | - | - | - | - |
| 0.5502 | 559 | 1.5006 | - | - | - | - |
| 0.5512 | 560 | 1.2778 | - | - | - | - |
| 0.5522 | 561 | 1.5081 | - | - | - | - |
| 0.5531 | 562 | 1.2398 | - | - | - | - |
| 0.5541 | 563 | 1.1054 | - | - | - | - |
| 0.5551 | 564 | 4.0185 | - | - | - | - |
| 0.5561 | 565 | 1.0427 | - | - | - | - |
| 0.5571 | 566 | 1.3934 | - | - | - | - |
| 0.5581 | 567 | 1.2378 | - | - | - | - |
| 0.5591 | 568 | 1.022 | - | - | - | - |
| 0.5600 | 569 | 0.9001 | - | - | - | - |
| 0.5610 | 570 | 1.3279 | - | - | - | - |
| 0.5620 | 571 | 1.2889 | - | - | - | - |
| 0.5630 | 572 | 0.9383 | - | - | - | - |
| 0.5640 | 573 | 1.749 | - | - | - | - |
| 0.5650 | 574 | 0.7669 | - | - | - | - |
| 0.5659 | 575 | 0.9355 | - | - | - | - |
| 0.5669 | 576 | 1.3596 | - | - | - | - |
| 0.5679 | 577 | 5.5102 | - | - | - | - |
| 0.5689 | 578 | 0.7984 | - | - | - | - |
| 0.5699 | 579 | 0.8871 | - | - | - | - |
| 0.5709 | 580 | 1.1151 | - | - | - | - |
| 0.5719 | 581 | 0.9502 | - | - | - | - |
| 0.5728 | 582 | 3.6492 | - | - | - | - |
| 0.5738 | 583 | 3.4262 | - | - | - | - |
| 0.5748 | 584 | 1.3362 | - | - | - | - |
| 0.5758 | 585 | 0.9015 | - | - | - | - |
| 0.5768 | 586 | 1.5884 | - | - | - | - |
| 0.5778 | 587 | 1.109 | - | - | - | - |
| 0.5787 | 588 | 1.041 | - | - | - | - |
| 0.5797 | 589 | 1.4892 | - | - | - | - |
| 0.5807 | 590 | 1.2623 | - | - | - | - |
| 0.5817 | 591 | 1.5302 | - | - | - | - |
| 0.5827 | 592 | 1.3517 | - | - | - | - |
| 0.5837 | 593 | 0.6166 | - | - | - | - |
| 0.5846 | 594 | 1.6761 | - | - | - | - |
| 0.5856 | 595 | 1.1115 | - | - | - | - |
| 0.5866 | 596 | 1.2945 | - | - | - | - |
| 0.5876 | 597 | 1.4378 | - | - | - | - |
| 0.5886 | 598 | 0.9928 | - | - | - | - |
| 0.5896 | 599 | 0.9898 | - | - | - | - |
| 0.5906 | 600 | 4.6887 | - | - | - | - |
| 0.5915 | 601 | 1.2254 | - | - | - | - |
| 0.5925 | 602 | 1.2707 | - | - | - | - |
| 0.5935 | 603 | 1.8289 | - | - | - | - |
| 0.5945 | 604 | 0.7801 | - | - | - | - |
| 0.5955 | 605 | 0.9111 | - | - | - | - |
| 0.5965 | 606 | 1.1405 | - | - | - | - |
| 0.5974 | 607 | 1.0497 | - | - | - | - |
| 0.5984 | 608 | 1.0792 | - | - | - | - |
| 0.5994 | 609 | 0.9699 | - | - | - | - |
| 0.6004 | 610 | 0.9398 | - | - | - | - |
| 0.6014 | 611 | 1.5483 | - | - | - | - |
| 0.6024 | 612 | 0.997 | 1.0047 | 0.3980 | 0.4554 | 0.6701 |
| 0.6033 | 613 | 0.8358 | - | - | - | - |
| 0.6043 | 614 | 1.211 | - | - | - | - |
| 0.6053 | 615 | 6.7813 | - | - | - | - |
| 0.6063 | 616 | 1.1229 | - | - | - | - |
| 0.6073 | 617 | 1.0317 | - | - | - | - |
| 0.6083 | 618 | 1.2123 | - | - | - | - |
| 0.6093 | 619 | 1.4073 | - | - | - | - |
| 0.6102 | 620 | 0.9951 | - | - | - | - |
| 0.6112 | 621 | 1.3166 | - | - | - | - |
| 0.6122 | 622 | 4.5204 | - | - | - | - |
| 0.6132 | 623 | 0.6539 | - | - | - | - |
| 0.6142 | 624 | 1.1959 | - | - | - | - |
| 0.6152 | 625 | 4.2551 | - | - | - | - |
| 0.6161 | 626 | 1.2459 | - | - | - | - |
| 0.6171 | 627 | 1.3758 | - | - | - | - |
| 0.6181 | 628 | 1.0524 | - | - | - | - |
| 0.6191 | 629 | 1.5197 | - | - | - | - |
| 0.6201 | 630 | 1.0201 | - | - | - | - |
| 0.6211 | 631 | 0.9007 | - | - | - | - |
| 0.6220 | 632 | 0.8418 | - | - | - | - |
| 0.6230 | 633 | 1.4343 | - | - | - | - |
| 0.6240 | 634 | 0.5292 | - | - | - | - |
| 0.625 | 635 | 0.8549 | - | - | - | - |
| 0.6260 | 636 | 0.8703 | - | - | - | - |
| 0.6270 | 637 | 0.9911 | - | - | - | - |
| 0.6280 | 638 | 1.3342 | - | - | - | - |
| 0.6289 | 639 | 1.1332 | - | - | - | - |
| 0.6299 | 640 | 3.9965 | - | - | - | - |
| 0.6309 | 641 | 0.7236 | - | - | - | - |
| 0.6319 | 642 | 0.9079 | - | - | - | - |
| 0.6329 | 643 | 1.0967 | - | - | - | - |
| 0.6339 | 644 | 1.4183 | - | - | - | - |
| 0.6348 | 645 | 1.3841 | - | - | - | - |
| 0.6358 | 646 | 1.2982 | - | - | - | - |
| 0.6368 | 647 | 0.9048 | - | - | - | - |
| 0.6378 | 648 | 0.7918 | - | - | - | - |
| 0.6388 | 649 | 0.3685 | - | - | - | - |
| 0.6398 | 650 | 0.6949 | - | - | - | - |
| 0.6407 | 651 | 5.1568 | - | - | - | - |
| 0.6417 | 652 | 1.3943 | - | - | - | - |
| 0.6427 | 653 | 0.8608 | - | - | - | - |
| 0.6437 | 654 | 0.8197 | - | - | - | - |
| 0.6447 | 655 | 0.822 | - | - | - | - |
| 0.6457 | 656 | 3.2918 | - | - | - | - |
| 0.6467 | 657 | 0.5596 | - | - | - | - |
| 0.6476 | 658 | 4.1499 | - | - | - | - |
| 0.6486 | 659 | 1.0279 | - | - | - | - |
| 0.6496 | 660 | 1.1506 | - | - | - | - |
| 0.6506 | 661 | 1.1673 | - | - | - | - |
| 0.6516 | 662 | 0.96 | - | - | - | - |
| 0.6526 | 663 | 3.5414 | - | - | - | - |
| 0.6535 | 664 | 0.6599 | - | - | - | - |
| 0.6545 | 665 | 3.5518 | - | - | - | - |
| 0.6555 | 666 | 1.1906 | - | - | - | - |
| 0.6565 | 667 | 2.1353 | - | - | - | - |
| 0.6575 | 668 | 0.7083 | - | - | - | - |
| 0.6585 | 669 | 2.9425 | - | - | - | - |
| 0.6594 | 670 | 0.9433 | - | - | - | - |
| 0.6604 | 671 | 1.8499 | - | - | - | - |
| 0.6614 | 672 | 1.1614 | - | - | - | - |
| 0.6624 | 673 | 1.0474 | - | - | - | - |
| 0.6634 | 674 | 1.2895 | - | - | - | - |
| 0.6644 | 675 | 0.9789 | - | - | - | - |
| 0.6654 | 676 | 0.7719 | - | - | - | - |
| 0.6663 | 677 | 1.2203 | - | - | - | - |
| 0.6673 | 678 | 1.0516 | - | - | - | - |
| 0.6683 | 679 | 2.5514 | - | - | - | - |
| 0.6693 | 680 | 0.7346 | - | - | - | - |
| 0.6703 | 681 | 1.0245 | - | - | - | - |
| 0.6713 | 682 | 2.8005 | - | - | - | - |
| 0.6722 | 683 | 1.3212 | - | - | - | - |
| 0.6732 | 684 | 0.95 | - | - | - | - |
| 0.6742 | 685 | 1.0483 | - | - | - | - |
| 0.6752 | 686 | 0.8504 | - | - | - | - |
| 0.6762 | 687 | 2.281 | - | - | - | - |
| 0.6772 | 688 | 1.8153 | - | - | - | - |
| 0.6781 | 689 | 1.3652 | - | - | - | - |
| 0.6791 | 690 | 1.0949 | - | - | - | - |
| 0.6801 | 691 | 1.2196 | - | - | - | - |
| 0.6811 | 692 | 0.7995 | - | - | - | - |
| 0.6821 | 693 | 1.5108 | - | - | - | - |
| 0.6831 | 694 | 0.7933 | - | - | - | - |
| 0.6841 | 695 | 1.2367 | - | - | - | - |
| 0.6850 | 696 | 1.0352 | - | - | - | - |
| 0.6860 | 697 | 1.1709 | - | - | - | - |
| 0.6870 | 698 | 1.452 | - | - | - | - |
| 0.6880 | 699 | 0.8497 | - | - | - | - |
| 0.6890 | 700 | 2.8109 | - | - | - | - |
| 0.6900 | 701 | 2.6196 | - | - | - | - |
| 0.6909 | 702 | 1.4556 | - | - | - | - |
| 0.6919 | 703 | 1.3494 | - | - | - | - |
| 0.6929 | 704 | 1.6624 | - | - | - | - |
| 0.6939 | 705 | 1.6169 | - | - | - | - |
| 0.6949 | 706 | 0.5565 | - | - | - | - |
| 0.6959 | 707 | 0.8594 | - | - | - | - |
| 0.6969 | 708 | 0.8551 | - | - | - | - |
| 0.6978 | 709 | 1.1693 | - | - | - | - |
| 0.6988 | 710 | 1.0514 | - | - | - | - |
| 0.6998 | 711 | 1.1862 | - | - | - | - |
| 0.7008 | 712 | 0.8359 | - | - | - | - |
| 0.7018 | 713 | 0.7692 | - | - | - | - |
| 0.7028 | 714 | 1.815 | - | - | - | - |
| 0.7037 | 715 | 1.0751 | - | - | - | - |
| 0.7047 | 716 | 0.6526 | - | - | - | - |
| 0.7057 | 717 | 1.1617 | - | - | - | - |
| 0.7067 | 718 | 1.0783 | - | - | - | - |
| 0.7077 | 719 | 0.7916 | - | - | - | - |
| 0.7087 | 720 | 1.3039 | - | - | - | - |
| 0.7096 | 721 | 1.1156 | - | - | - | - |
| 0.7106 | 722 | 1.0529 | - | - | - | - |
| 0.7116 | 723 | 0.8265 | - | - | - | - |
| 0.7126 | 724 | 0.8019 | - | - | - | - |
| 0.7136 | 725 | 0.6116 | - | - | - | - |
| 0.7146 | 726 | 1.135 | - | - | - | - |
| 0.7156 | 727 | 0.7692 | - | - | - | - |
| 0.7165 | 728 | 2.3559 | - | - | - | - |
| 0.7175 | 729 | 1.352 | - | - | - | - |
| 0.7185 | 730 | 2.823 | - | - | - | - |
| 0.7195 | 731 | 1.0067 | - | - | - | - |
| 0.7205 | 732 | 0.9077 | - | - | - | - |
| 0.7215 | 733 | 1.0933 | - | - | - | - |
| 0.7224 | 734 | 0.8174 | - | - | - | - |
| 0.7234 | 735 | 1.2212 | - | - | - | - |
| 0.7244 | 736 | 1.1557 | - | - | - | - |
| 0.7254 | 737 | 0.6191 | - | - | - | - |
| 0.7264 | 738 | 1.7437 | - | - | - | - |
| 0.7274 | 739 | 0.8977 | - | - | - | - |
| 0.7283 | 740 | 1.0782 | - | - | - | - |
| 0.7293 | 741 | 0.8985 | - | - | - | - |
| 0.7303 | 742 | 1.4867 | - | - | - | - |
| 0.7313 | 743 | 0.7497 | - | - | - | - |
| 0.7323 | 744 | 0.6433 | - | - | - | - |
| 0.7333 | 745 | 1.4175 | - | - | - | - |
| 0.7343 | 746 | 1.1896 | - | - | - | - |
| 0.7352 | 747 | 1.9867 | - | - | - | - |
| 0.7362 | 748 | 0.8968 | - | - | - | - |
| 0.7372 | 749 | 0.7265 | - | - | - | - |
| 0.7382 | 750 | 0.9418 | - | - | - | - |
| 0.7392 | 751 | 1.3717 | - | - | - | - |
| 0.7402 | 752 | 2.1774 | - | - | - | - |
| 0.7411 | 753 | 1.0854 | - | - | - | - |
| 0.7421 | 754 | 0.9777 | - | - | - | - |
| 0.7431 | 755 | 1.2721 | - | - | - | - |
| 0.7441 | 756 | 0.7114 | - | - | - | - |
| 0.7451 | 757 | 1.4036 | - | - | - | - |
| 0.7461 | 758 | 1.1742 | - | - | - | - |
| 0.7470 | 759 | 0.9351 | - | - | - | - |
| 0.7480 | 760 | 0.5537 | - | - | - | - |
| 0.7490 | 761 | 0.8688 | - | - | - | - |
| 0.75 | 762 | 3.0053 | - | - | - | - |
| 0.7510 | 763 | 3.3743 | - | - | - | - |
| 0.7520 | 764 | 1.9928 | - | - | - | - |
| 0.7530 | 765 | 1.5118 | 0.9342 | 0.4514 | 0.4792 | 0.6782 |
| 0.7539 | 766 | 1.1213 | - | - | - | - |
| 0.7549 | 767 | 2.1312 | - | - | - | - |
| 0.7559 | 768 | 1.3739 | - | - | - | - |
| 0.7569 | 769 | 0.8819 | - | - | - | - |
| 0.7579 | 770 | 0.9069 | - | - | - | - |
| 0.7589 | 771 | 0.935 | - | - | - | - |
| 0.7598 | 772 | 0.7874 | - | - | - | - |
| 0.7608 | 773 | 1.9942 | - | - | - | - |
| 0.7618 | 774 | 1.1711 | - | - | - | - |
| 0.7628 | 775 | 0.8407 | - | - | - | - |
| 0.7638 | 776 | 1.5171 | - | - | - | - |
| 0.7648 | 777 | 0.5308 | - | - | - | - |
| 0.7657 | 778 | 1.4107 | - | - | - | - |
| 0.7667 | 779 | 1.1766 | - | - | - | - |
| 0.7677 | 780 | 1.326 | - | - | - | - |
| 0.7687 | 781 | 0.7371 | - | - | - | - |
| 0.7697 | 782 | 1.0504 | - | - | - | - |
| 0.7707 | 783 | 1.1458 | - | - | - | - |
| 0.7717 | 784 | 0.7242 | - | - | - | - |
| 0.7726 | 785 | 0.8113 | - | - | - | - |
| 0.7736 | 786 | 1.3808 | - | - | - | - |
| 0.7746 | 787 | 0.7584 | - | - | - | - |
| 0.7756 | 788 | 1.226 | - | - | - | - |
| 0.7766 | 789 | 1.0599 | - | - | - | - |
| 0.7776 | 790 | 2.9348 | - | - | - | - |
| 0.7785 | 791 | 1.0849 | - | - | - | - |
| 0.7795 | 792 | 0.5362 | - | - | - | - |
| 0.7805 | 793 | 1.3765 | - | - | - | - |
| 0.7815 | 794 | 0.6824 | - | - | - | - |
| 0.7825 | 795 | 0.6009 | - | - | - | - |
| 0.7835 | 796 | 2.3853 | - | - | - | - |
| 0.7844 | 797 | 1.0571 | - | - | - | - |
| 0.7854 | 798 | 0.9172 | - | - | - | - |
| 0.7864 | 799 | 0.7915 | - | - | - | - |
| 0.7874 | 800 | 0.827 | - | - | - | - |
| 0.7884 | 801 | 0.8465 | - | - | - | - |
| 0.7894 | 802 | 2.3489 | - | - | - | - |
| 0.7904 | 803 | 0.6506 | - | - | - | - |
| 0.7913 | 804 | 0.8346 | - | - | - | - |
| 0.7923 | 805 | 0.6249 | - | - | - | - |
| 0.7933 | 806 | 1.0557 | - | - | - | - |
| 0.7943 | 807 | 0.7552 | - | - | - | - |
| 0.7953 | 808 | 1.281 | - | - | - | - |
| 0.7963 | 809 | 0.7846 | - | - | - | - |
| 0.7972 | 810 | 2.6403 | - | - | - | - |
| 0.7982 | 811 | 0.3679 | - | - | - | - |
| 0.7992 | 812 | 1.9118 | - | - | - | - |
| 0.8002 | 813 | 2.5911 | - | - | - | - |
| 0.8012 | 814 | 1.1783 | - | - | - | - |
| 0.8022 | 815 | 0.9347 | - | - | - | - |
| 0.8031 | 816 | 0.5311 | - | - | - | - |
| 0.8041 | 817 | 0.7092 | - | - | - | - |
| 0.8051 | 818 | 0.8384 | - | - | - | - |
| 0.8061 | 819 | 0.514 | - | - | - | - |
| 0.8071 | 820 | 0.3638 | - | - | - | - |
| 0.8081 | 821 | 1.9376 | - | - | - | - |
| 0.8091 | 822 | 0.9177 | - | - | - | - |
| 0.8100 | 823 | 0.8293 | - | - | - | - |
| 0.8110 | 824 | 0.7269 | - | - | - | - |
| 0.8120 | 825 | 0.664 | - | - | - | - |
| 0.8130 | 826 | 0.6205 | - | - | - | - |
| 0.8140 | 827 | 0.6562 | - | - | - | - |
| 0.8150 | 828 | 0.6576 | - | - | - | - |
| 0.8159 | 829 | 0.9931 | - | - | - | - |
| 0.8169 | 830 | 1.1707 | - | - | - | - |
| 0.8179 | 831 | 0.8635 | - | - | - | - |
| 0.8189 | 832 | 0.7274 | - | - | - | - |
| 0.8199 | 833 | 1.6808 | - | - | - | - |
| 0.8209 | 834 | 1.8309 | - | - | - | - |
| 0.8219 | 835 | 0.6191 | - | - | - | - |
| 0.8228 | 836 | 1.0789 | - | - | - | - |
| 0.8238 | 837 | 1.1637 | - | - | - | - |
| 0.8248 | 838 | 0.7813 | - | - | - | - |
| 0.8258 | 839 | 1.0403 | - | - | - | - |
| 0.8268 | 840 | 0.7656 | - | - | - | - |
| 0.8278 | 841 | 0.9994 | - | - | - | - |
| 0.8287 | 842 | 1.009 | - | - | - | - |
| 0.8297 | 843 | 0.8585 | - | - | - | - |
| 0.8307 | 844 | 0.8847 | - | - | - | - |
| 0.8317 | 845 | 0.8321 | - | - | - | - |
| 0.8327 | 846 | 1.2605 | - | - | - | - |
| 0.8337 | 847 | 1.0609 | - | - | - | - |
| 0.8346 | 848 | 2.0115 | - | - | - | - |
| 0.8356 | 849 | 1.2952 | - | - | - | - |
| 0.8366 | 850 | 0.6999 | - | - | - | - |
| 0.8376 | 851 | 0.7006 | - | - | - | - |
| 0.8386 | 852 | 0.927 | - | - | - | - |
| 0.8396 | 853 | 1.2083 | - | - | - | - |
| 0.8406 | 854 | 0.608 | - | - | - | - |
| 0.8415 | 855 | 0.8478 | - | - | - | - |
| 0.8425 | 856 | 1.5731 | - | - | - | - |
| 0.8435 | 857 | 1.6353 | - | - | - | - |
| 0.8445 | 858 | 0.7862 | - | - | - | - |
| 0.8455 | 859 | 0.8909 | - | - | - | - |
| 0.8465 | 860 | 1.1719 | - | - | - | - |
| 0.8474 | 861 | 1.2722 | - | - | - | - |
| 0.8484 | 862 | 1.0022 | - | - | - | - |
| 0.8494 | 863 | 1.5307 | - | - | - | - |
| 0.8504 | 864 | 1.0162 | - | - | - | - |
| 0.8514 | 865 | 0.6827 | - | - | - | - |
| 0.8524 | 866 | 0.7744 | - | - | - | - |
| 0.8533 | 867 | 1.2011 | - | - | - | - |
| 0.8543 | 868 | 0.9219 | - | - | - | - |
| 0.8553 | 869 | 0.7636 | - | - | - | - |
| 0.8563 | 870 | 1.5061 | - | - | - | - |
| 0.8573 | 871 | 1.5569 | - | - | - | - |
| 0.8583 | 872 | 0.5896 | - | - | - | - |
| 0.8593 | 873 | 1.1918 | - | - | - | - |
| 0.8602 | 874 | 0.8572 | - | - | - | - |
| 0.8612 | 875 | 1.0421 | - | - | - | - |
| 0.8622 | 876 | 2.4599 | - | - | - | - |
| 0.8632 | 877 | 0.55 | - | - | - | - |
| 0.8642 | 878 | 1.2829 | - | - | - | - |
| 0.8652 | 879 | 0.7808 | - | - | - | - |
| 0.8661 | 880 | 1.7712 | - | - | - | - |
| 0.8671 | 881 | 0.7456 | - | - | - | - |
| 0.8681 | 882 | 1.2805 | - | - | - | - |
| 0.8691 | 883 | 2.1927 | - | - | - | - |
| 0.8701 | 884 | 0.855 | - | - | - | - |
| 0.8711 | 885 | 0.667 | - | - | - | - |
| 0.8720 | 886 | 1.1097 | - | - | - | - |
| 0.8730 | 887 | 1.8795 | - | - | - | - |
| 0.8740 | 888 | 0.6767 | - | - | - | - |
| 0.875 | 889 | 0.7549 | - | - | - | - |
| 0.8760 | 890 | 0.8616 | - | - | - | - |
| 0.8770 | 891 | 1.9461 | - | - | - | - |
| 0.8780 | 892 | 1.2694 | - | - | - | - |
| 0.8789 | 893 | 1.825 | - | - | - | - |
| 0.8799 | 894 | 0.9218 | - | - | - | - |
| 0.8809 | 895 | 1.0297 | - | - | - | - |
| 0.8819 | 896 | 0.609 | - | - | - | - |
| 0.8829 | 897 | 0.9638 | - | - | - | - |
| 0.8839 | 898 | 0.5521 | - | - | - | - |
| 0.8848 | 899 | 1.3365 | - | - | - | - |
| 0.8858 | 900 | 0.8443 | - | - | - | - |
| 0.8868 | 901 | 0.7848 | - | - | - | - |
| 0.8878 | 902 | 1.0733 | - | - | - | - |
| 0.8888 | 903 | 0.5657 | - | - | - | - |
| 0.8898 | 904 | 1.8081 | - | - | - | - |
| 0.8907 | 905 | 0.8232 | - | - | - | - |
| 0.8917 | 906 | 0.6159 | - | - | - | - |
| 0.8927 | 907 | 0.9832 | - | - | - | - |
| 0.8937 | 908 | 1.1375 | - | - | - | - |
| 0.8947 | 909 | 1.4182 | - | - | - | - |
| 0.8957 | 910 | 1.2287 | - | - | - | - |
| 0.8967 | 911 | 1.0915 | - | - | - | - |
| 0.8976 | 912 | 0.8116 | - | - | - | - |
| 0.8986 | 913 | 0.6824 | - | - | - | - |
| 0.8996 | 914 | 0.8888 | - | - | - | - |
| 0.9006 | 915 | 0.5974 | - | - | - | - |
| 0.9016 | 916 | 1.1766 | - | - | - | - |
| 0.9026 | 917 | 0.9415 | - | - | - | - |
| 0.9035 | 918 | 0.6387 | 0.7856 | 0.5147 | 0.4835 | 0.6934 |
| 0.9045 | 919 | 0.7342 | - | - | - | - |
| 0.9055 | 920 | 1.2232 | - | - | - | - |
| 0.9065 | 921 | 1.4883 | - | - | - | - |
| 0.9075 | 922 | 1.4453 | - | - | - | - |
| 0.9085 | 923 | 0.665 | - | - | - | - |
| 0.9094 | 924 | 0.8973 | - | - | - | - |
| 0.9104 | 925 | 0.7578 | - | - | - | - |
| 0.9114 | 926 | 0.8693 | - | - | - | - |
| 0.9124 | 927 | 1.0055 | - | - | - | - |
| 0.9134 | 928 | 0.4451 | - | - | - | - |
| 0.9144 | 929 | 1.3435 | - | - | - | - |
| 0.9154 | 930 | 1.0979 | - | - | - | - |
| 0.9163 | 931 | 1.0552 | - | - | - | - |
| 0.9173 | 932 | 0.8224 | - | - | - | - |
| 0.9183 | 933 | 2.824 | - | - | - | - |
| 0.9193 | 934 | 1.3514 | - | - | - | - |
| 0.9203 | 935 | 1.3339 | - | - | - | - |
| 0.9213 | 936 | 0.8439 | - | - | - | - |
| 0.9222 | 937 | 0.6325 | - | - | - | - |
| 0.9232 | 938 | 0.7714 | - | - | - | - |
| 0.9242 | 939 | 0.4552 | - | - | - | - |
| 0.9252 | 940 | 1.3962 | - | - | - | - |
| 0.9262 | 941 | 1.3079 | - | - | - | - |
| 0.9272 | 942 | 0.8963 | - | - | - | - |
| 0.9281 | 943 | 0.7712 | - | - | - | - |
| 0.9291 | 944 | 0.7079 | - | - | - | - |
| 0.9301 | 945 | 1.2151 | - | - | - | - |
| 0.9311 | 946 | 0.5961 | - | - | - | - |
| 0.9321 | 947 | 1.5555 | - | - | - | - |
| 0.9331 | 948 | 0.6374 | - | - | - | - |
| 0.9341 | 949 | 0.8514 | - | - | - | - |
| 0.9350 | 950 | 1.0144 | - | - | - | - |
| 0.9360 | 951 | 0.346 | - | - | - | - |
| 0.9370 | 952 | 0.7938 | - | - | - | - |
| 0.9380 | 953 | 0.7822 | - | - | - | - |
| 0.9390 | 954 | 2.5079 | - | - | - | - |
| 0.9400 | 955 | 0.4717 | - | - | - | - |
| 0.9409 | 956 | 2.047 | - | - | - | - |
| 0.9419 | 957 | 1.4548 | - | - | - | - |
| 0.9429 | 958 | 0.6623 | - | - | - | - |
| 0.9439 | 959 | 0.8172 | - | - | - | - |
| 0.9449 | 960 | 0.9362 | - | - | - | - |
| 0.9459 | 961 | 1.6731 | - | - | - | - |
| 0.9469 | 962 | 0.4495 | - | - | - | - |
| 0.9478 | 963 | 0.5375 | - | - | - | - |
| 0.9488 | 964 | 1.3343 | - | - | - | - |
| 0.9498 | 965 | 0.5332 | - | - | - | - |
| 0.9508 | 966 | 1.0183 | - | - | - | - |
| 0.9518 | 967 | 0.6058 | - | - | - | - |
| 0.9528 | 968 | 0.6536 | - | - | - | - |
| 0.9537 | 969 | 1.0448 | - | - | - | - |
| 0.9547 | 970 | 0.9479 | - | - | - | - |
| 0.9557 | 971 | 0.8316 | - | - | - | - |
| 0.9567 | 972 | 1.0847 | - | - | - | - |
| 0.9577 | 973 | 1.3262 | - | - | - | - |
| 0.9587 | 974 | 0.6488 | - | - | - | - |
| 0.9596 | 975 | 0.7577 | - | - | - | - |
| 0.9606 | 976 | 1.0546 | - | - | - | - |
| 0.9616 | 977 | 0.9759 | - | - | - | - |
| 0.9626 | 978 | 0.526 | - | - | - | - |
| 0.9636 | 979 | 0.9726 | - | - | - | - |
| 0.9646 | 980 | 0.7035 | - | - | - | - |
| 0.9656 | 981 | 0.4028 | - | - | - | - |
| 0.9665 | 982 | 0.889 | - | - | - | - |
| 0.9675 | 983 | 0.6391 | - | - | - | - |
| 0.9685 | 984 | 2.2124 | - | - | - | - |
| 0.9695 | 985 | 2.5108 | - | - | - | - |
| 0.9705 | 986 | 0.5352 | - | - | - | - |
| 0.9715 | 987 | 0.7982 | - | - | - | - |
| 0.9724 | 988 | 0.8057 | - | - | - | - |
| 0.9734 | 989 | 0.6363 | - | - | - | - |
| 0.9744 | 990 | 1.4105 | - | - | - | - |
| 0.9754 | 991 | 0.6527 | - | - | - | - |
| 0.9764 | 992 | 0.7418 | - | - | - | - |
| 0.9774 | 993 | 1.5734 | - | - | - | - |
| 0.9783 | 994 | 0.512 | - | - | - | - |
| 0.9793 | 995 | 0.7346 | - | - | - | - |
| 0.9803 | 996 | 0.6094 | - | - | - | - |
| 0.9813 | 997 | 0.9234 | - | - | - | - |
| 0.9823 | 998 | 2.1518 | - | - | - | - |
| 0.9833 | 999 | 0.458 | - | - | - | - |
| 0.9843 | 1000 | 1.0281 | - | - | - | - |
| 0.9852 | 1001 | 0.735 | - | - | - | - |
| 0.9862 | 1002 | 1.1242 | - | - | - | - |
| 0.9872 | 1003 | 1.3979 | - | - | - | - |
| 0.9882 | 1004 | 0.8926 | - | - | - | - |
| 0.9892 | 1005 | 2.1105 | - | - | - | - |
| 0.9902 | 1006 | 0.6443 | - | - | - | - |
| 0.9911 | 1007 | 1.6493 | - | - | - | - |
| 0.9921 | 1008 | 1.125 | - | - | - | - |
| 0.9931 | 1009 | 0.3277 | - | - | - | - |
| 0.9941 | 1010 | 0.8848 | - | - | - | - |
| 0.9951 | 1011 | 0.6624 | - | - | - | - |
| 0.9961 | 1012 | 0.7913 | - | - | - | - |
| 0.9970 | 1013 | 1.2572 | - | - | - | - |
| 0.9980 | 1014 | 1.2533 | - | - | - | - |
| 0.9990 | 1015 | 0.7953 | - | - | - | - |
| 1.0 | 1016 | 0.3578 | - | - | - | - |
| 1.0010 | 1017 | 1.1694 | - | - | - | - |
| 1.0020 | 1018 | 1.0959 | - | - | - | - |
| 1.0030 | 1019 | 0.8922 | - | - | - | - |
| 1.0039 | 1020 | 0.7743 | - | - | - | - |
| 1.0049 | 1021 | 0.5631 | - | - | - | - |
| 1.0059 | 1022 | 1.2144 | - | - | - | - |
| 1.0069 | 1023 | 0.5034 | - | - | - | - |
| 1.0079 | 1024 | 0.7687 | - | - | - | - |
| 1.0089 | 1025 | 0.7181 | - | - | - | - |
| 1.0098 | 1026 | 1.0367 | - | - | - | - |
| 1.0108 | 1027 | 0.8523 | - | - | - | - |
| 1.0118 | 1028 | 1.1932 | - | - | - | - |
| 1.0128 | 1029 | 1.3118 | - | - | - | - |
| 1.0138 | 1030 | 0.8769 | - | - | - | - |
| 1.0148 | 1031 | 0.8931 | - | - | - | - |
| 1.0157 | 1032 | 0.8208 | - | - | - | - |
| 1.0167 | 1033 | 0.7876 | - | - | - | - |
| 1.0177 | 1034 | 1.1651 | - | - | - | - |
| 1.0187 | 1035 | 0.8233 | - | - | - | - |
| 1.0197 | 1036 | 0.7586 | - | - | - | - |
| 1.0207 | 1037 | 0.8531 | - | - | - | - |
| 1.0217 | 1038 | 1.81 | - | - | - | - |
| 1.0226 | 1039 | 0.601 | - | - | - | - |
| 1.0236 | 1040 | 0.6086 | - | - | - | - |
| 1.0246 | 1041 | 0.6538 | - | - | - | - |
| 1.0256 | 1042 | 0.5518 | - | - | - | - |
| 1.0266 | 1043 | 1.249 | - | - | - | - |
| 1.0276 | 1044 | 0.5059 | - | - | - | - |
| 1.0285 | 1045 | 0.6202 | - | - | - | - |
| 1.0295 | 1046 | 0.8073 | - | - | - | - |
| 1.0305 | 1047 | 0.4438 | - | - | - | - |
| 1.0315 | 1048 | 1.4425 | - | - | - | - |
| 1.0325 | 1049 | 0.3772 | - | - | - | - |
| 1.0335 | 1050 | 0.4225 | - | - | - | - |
| 1.0344 | 1051 | 0.7363 | - | - | - | - |
| 1.0354 | 1052 | 0.4342 | - | - | - | - |
| 1.0364 | 1053 | 0.8763 | - | - | - | - |
| 1.0374 | 1054 | 0.8974 | - | - | - | - |
| 1.0384 | 1055 | 0.9175 | - | - | - | - |
| 1.0394 | 1056 | 0.9145 | - | - | - | - |
| 1.0404 | 1057 | 0.7247 | - | - | - | - |
| 1.0413 | 1058 | 0.6066 | - | - | - | - |
| 1.0423 | 1059 | 0.5892 | - | - | - | - |
| 1.0433 | 1060 | 2.1779 | - | - | - | - |
| 1.0443 | 1061 | 0.7973 | - | - | - | - |
| 1.0453 | 1062 | 0.4354 | - | - | - | - |
| 1.0463 | 1063 | 1.2032 | - | - | - | - |
| 1.0472 | 1064 | 1.088 | - | - | - | - |
| 1.0482 | 1065 | 0.3944 | - | - | - | - |
| 1.0492 | 1066 | 0.5178 | - | - | - | - |
| 1.0502 | 1067 | 1.0818 | - | - | - | - |
| 1.0512 | 1068 | 0.8308 | - | - | - | - |
| 1.0522 | 1069 | 1.54 | - | - | - | - |
| 1.0531 | 1070 | 0.8444 | - | - | - | - |
| 1.0541 | 1071 | 0.4829 | 0.7322 | 0.5793 | 0.4943 | 0.6916 |
| 1.0551 | 1072 | 0.495 | - | - | - | - |
| 1.0561 | 1073 | 0.8591 | - | - | - | - |
| 1.0571 | 1074 | 0.327 | - | - | - | - |
| 1.0581 | 1075 | 0.7161 | - | - | - | - |
| 1.0591 | 1076 | 0.6374 | - | - | - | - |
| 1.0600 | 1077 | 1.1748 | - | - | - | - |
| 1.0610 | 1078 | 1.7501 | - | - | - | - |
| 1.0620 | 1079 | 0.5544 | - | - | - | - |
| 1.0630 | 1080 | 0.6265 | - | - | - | - |
| 1.0640 | 1081 | 1.6517 | - | - | - | - |
| 1.0650 | 1082 | 0.7457 | - | - | - | - |
| 1.0659 | 1083 | 0.7492 | - | - | - | - |
| 1.0669 | 1084 | 0.8013 | - | - | - | - |
| 1.0679 | 1085 | 0.1619 | - | - | - | - |
| 1.0689 | 1086 | 0.5057 | - | - | - | - |
| 1.0699 | 1087 | 0.4712 | - | - | - | - |
| 1.0709 | 1088 | 0.8382 | - | - | - | - |
| 1.0719 | 1089 | 0.6045 | - | - | - | - |
| 1.0728 | 1090 | 0.6117 | - | - | - | - |
| 1.0738 | 1091 | 0.7028 | - | - | - | - |
| 1.0748 | 1092 | 1.2376 | - | - | - | - |
| 1.0758 | 1093 | 1.045 | - | - | - | - |
| 1.0768 | 1094 | 1.1152 | - | - | - | - |
| 1.0778 | 1095 | 0.5572 | - | - | - | - |
| 1.0787 | 1096 | 0.7047 | - | - | - | - |
| 1.0797 | 1097 | 1.4233 | - | - | - | - |
| 1.0807 | 1098 | 0.8478 | - | - | - | - |
| 1.0817 | 1099 | 0.6851 | - | - | - | - |
| 1.0827 | 1100 | 0.4462 | - | - | - | - |
| 1.0837 | 1101 | 2.1139 | - | - | - | - |
| 1.0846 | 1102 | 0.8097 | - | - | - | - |
| 1.0856 | 1103 | 1.0912 | - | - | - | - |
| 1.0866 | 1104 | 1.1922 | - | - | - | - |
| 1.0876 | 1105 | 0.3888 | - | - | - | - |
| 1.0886 | 1106 | 0.7842 | - | - | - | - |
| 1.0896 | 1107 | 0.1422 | - | - | - | - |
| 1.0906 | 1108 | 0.6949 | - | - | - | - |
| 1.0915 | 1109 | 0.819 | - | - | - | - |
| 1.0925 | 1110 | 0.4947 | - | - | - | - |
| 1.0935 | 1111 | 0.3346 | - | - | - | - |
| 1.0945 | 1112 | 1.1459 | - | - | - | - |
| 1.0955 | 1113 | 0.3276 | - | - | - | - |
| 1.0965 | 1114 | 0.7464 | - | - | - | - |
| 1.0974 | 1115 | 0.8906 | - | - | - | - |
| 1.0984 | 1116 | 1.9711 | - | - | - | - |
| 1.0994 | 1117 | 0.6403 | - | - | - | - |
| 1.1004 | 1118 | 1.3684 | - | - | - | - |
| 1.1014 | 1119 | 1.2074 | - | - | - | - |
| 1.1024 | 1120 | 0.5098 | - | - | - | - |
| 1.1033 | 1121 | 0.5498 | - | - | - | - |
| 1.1043 | 1122 | 0.3848 | - | - | - | - |
| 1.1053 | 1123 | 2.0202 | - | - | - | - |
| 1.1063 | 1124 | 0.5944 | - | - | - | - |
| 1.1073 | 1125 | 0.3266 | - | - | - | - |
| 1.1083 | 1126 | 1.0289 | - | - | - | - |
| 1.1093 | 1127 | 1.0807 | - | - | - | - |
| 1.1102 | 1128 | 0.6155 | - | - | - | - |
| 1.1112 | 1129 | 1.1686 | - | - | - | - |
| 1.1122 | 1130 | 1.0762 | - | - | - | - |
| 1.1132 | 1131 | 0.6781 | - | - | - | - |
| 1.1142 | 1132 | 0.6144 | - | - | - | - |
| 1.1152 | 1133 | 0.8022 | - | - | - | - |
| 1.1161 | 1134 | 0.5213 | - | - | - | - |
| 1.1171 | 1135 | 0.6014 | - | - | - | - |
| 1.1181 | 1136 | 0.901 | - | - | - | - |
| 1.1191 | 1137 | 0.9938 | - | - | - | - |
| 1.1201 | 1138 | 1.8173 | - | - | - | - |
| 1.1211 | 1139 | 0.5572 | - | - | - | - |
| 1.1220 | 1140 | 0.7489 | - | - | - | - |
| 1.1230 | 1141 | 0.4338 | - | - | - | - |
| 1.1240 | 1142 | 0.3086 | - | - | - | - |
| 1.125 | 1143 | 0.6942 | - | - | - | - |
| 1.1260 | 1144 | 0.7665 | - | - | - | - |
| 1.1270 | 1145 | 0.2734 | - | - | - | - |
| 1.1280 | 1146 | 0.9961 | - | - | - | - |
| 1.1289 | 1147 | 0.5258 | - | - | - | - |
| 1.1299 | 1148 | 0.7122 | - | - | - | - |
| 1.1309 | 1149 | 0.3747 | - | - | - | - |
| 1.1319 | 1150 | 0.6397 | - | - | - | - |
| 1.1329 | 1151 | 0.5504 | - | - | - | - |
| 1.1339 | 1152 | 0.5572 | - | - | - | - |
| 1.1348 | 1153 | 0.7828 | - | - | - | - |
| 1.1358 | 1154 | 1.0443 | - | - | - | - |
| 1.1368 | 1155 | 1.0731 | - | - | - | - |
| 1.1378 | 1156 | 1.1341 | - | - | - | - |
| 1.1388 | 1157 | 0.391 | - | - | - | - |
| 1.1398 | 1158 | 1.462 | - | - | - | - |
| 1.1407 | 1159 | 0.8131 | - | - | - | - |
| 1.1417 | 1160 | 0.7323 | - | - | - | - |
| 1.1427 | 1161 | 0.5473 | - | - | - | - |
| 1.1437 | 1162 | 0.7973 | - | - | - | - |
| 1.1447 | 1163 | 0.5875 | - | - | - | - |
| 1.1457 | 1164 | 0.9248 | - | - | - | - |
| 1.1467 | 1165 | 0.6898 | - | - | - | - |
| 1.1476 | 1166 | 1.4924 | - | - | - | - |
| 1.1486 | 1167 | 0.8908 | - | - | - | - |
| 1.1496 | 1168 | 0.564 | - | - | - | - |
| 1.1506 | 1169 | 0.3779 | - | - | - | - |
| 1.1516 | 1170 | 1.0715 | - | - | - | - |
| 1.1526 | 1171 | 0.4366 | - | - | - | - |
| 1.1535 | 1172 | 0.6391 | - | - | - | - |
| 1.1545 | 1173 | 1.2133 | - | - | - | - |
| 1.1555 | 1174 | 1.4135 | - | - | - | - |
| 1.1565 | 1175 | 0.7748 | - | - | - | - |
| 1.1575 | 1176 | 0.544 | - | - | - | - |
| 1.1585 | 1177 | 0.5168 | - | - | - | - |
| 1.1594 | 1178 | 0.6931 | - | - | - | - |
| 1.1604 | 1179 | 0.87 | - | - | - | - |
| 1.1614 | 1180 | 0.9842 | - | - | - | - |
| 1.1624 | 1181 | 0.3614 | - | - | - | - |
| 1.1634 | 1182 | 0.4167 | - | - | - | - |
| 1.1644 | 1183 | 0.3688 | - | - | - | - |
| 1.1654 | 1184 | 0.5431 | - | - | - | - |
| 1.1663 | 1185 | 0.6127 | - | - | - | - |
| 1.1673 | 1186 | 0.8693 | - | - | - | - |
| 1.1683 | 1187 | 0.7596 | - | - | - | - |
| 1.1693 | 1188 | 0.724 | - | - | - | - |
| 1.1703 | 1189 | 0.9105 | - | - | - | - |
| 1.1713 | 1190 | 0.3941 | - | - | - | - |
| 1.1722 | 1191 | 1.1768 | - | - | - | - |
| 1.1732 | 1192 | 0.5509 | - | - | - | - |
| 1.1742 | 1193 | 1.1616 | - | - | - | - |
| 1.1752 | 1194 | 0.6835 | - | - | - | - |
| 1.1762 | 1195 | 0.4379 | - | - | - | - |
| 1.1772 | 1196 | 0.5453 | - | - | - | - |
| 1.1781 | 1197 | 0.5505 | - | - | - | - |
| 1.1791 | 1198 | 0.7472 | - | - | - | - |
| 1.1801 | 1199 | 0.3541 | - | - | - | - |
| 1.1811 | 1200 | 0.796 | - | - | - | - |
| 1.1821 | 1201 | 0.558 | - | - | - | - |
| 1.1831 | 1202 | 0.8679 | - | - | - | - |
| 1.1841 | 1203 | 0.7619 | - | - | - | - |
| 1.1850 | 1204 | 0.7039 | - | - | - | - |
| 1.1860 | 1205 | 0.7166 | - | - | - | - |
| 1.1870 | 1206 | 0.6982 | - | - | - | - |
| 1.1880 | 1207 | 0.4206 | - | - | - | - |
| 1.1890 | 1208 | 0.6361 | - | - | - | - |
| 1.1900 | 1209 | 0.6248 | - | - | - | - |
| 1.1909 | 1210 | 0.7933 | - | - | - | - |
| 1.1919 | 1211 | 0.5985 | - | - | - | - |
| 1.1929 | 1212 | 0.6147 | - | - | - | - |
| 1.1939 | 1213 | 0.6085 | - | - | - | - |
| 1.1949 | 1214 | 0.6713 | - | - | - | - |
| 1.1959 | 1215 | 1.0315 | - | - | - | - |
| 1.1969 | 1216 | 2.0024 | - | - | - | - |
| 1.1978 | 1217 | 1.6034 | - | - | - | - |
| 1.1988 | 1218 | 1.7407 | - | - | - | - |
| 1.1998 | 1219 | 1.2014 | - | - | - | - |
| 1.2008 | 1220 | 1.8377 | - | - | - | - |
| 1.2018 | 1221 | 0.6652 | - | - | - | - |
| 1.2028 | 1222 | 0.2618 | - | - | - | - |
| 1.2037 | 1223 | 1.4023 | - | - | - | - |
| 1.2047 | 1224 | 0.2575 | 0.6752 | 0.5964 | 0.4982 | 0.7087 |
| 1.2057 | 1225 | 0.6646 | - | - | - | - |
| 1.2067 | 1226 | 0.8142 | - | - | - | - |
| 1.2077 | 1227 | 0.7552 | - | - | - | - |
| 1.2087 | 1228 | 0.8724 | - | - | - | - |
| 1.2096 | 1229 | 0.92 | - | - | - | - |
| 1.2106 | 1230 | 0.8513 | - | - | - | - |
| 1.2116 | 1231 | 0.5221 | - | - | - | - |
| 1.2126 | 1232 | 0.8456 | - | - | - | - |
| 1.2136 | 1233 | 0.3728 | - | - | - | - |
| 1.2146 | 1234 | 1.1982 | - | - | - | - |
| 1.2156 | 1235 | 0.4944 | - | - | - | - |
| 1.2165 | 1236 | 0.454 | - | - | - | - |
| 1.2175 | 1237 | 0.8594 | - | - | - | - |
| 1.2185 | 1238 | 0.8604 | - | - | - | - |
| 1.2195 | 1239 | 0.9616 | - | - | - | - |
| 1.2205 | 1240 | 0.9257 | - | - | - | - |
| 1.2215 | 1241 | 0.8514 | - | - | - | - |
| 1.2224 | 1242 | 0.6498 | - | - | - | - |
| 1.2234 | 1243 | 1.0719 | - | - | - | - |
| 1.2244 | 1244 | 1.2279 | - | - | - | - |
| 1.2254 | 1245 | 1.0294 | - | - | - | - |
| 1.2264 | 1246 | 0.7619 | - | - | - | - |
| 1.2274 | 1247 | 0.3707 | - | - | - | - |
| 1.2283 | 1248 | 0.3229 | - | - | - | - |
| 1.2293 | 1249 | 0.9892 | - | - | - | - |
| 1.2303 | 1250 | 0.7125 | - | - | - | - |
| 1.2313 | 1251 | 0.3682 | - | - | - | - |
| 1.2323 | 1252 | 0.5191 | - | - | - | - |
| 1.2333 | 1253 | 0.5471 | - | - | - | - |
| 1.2343 | 1254 | 0.3635 | - | - | - | - |
| 1.2352 | 1255 | 0.5368 | - | - | - | - |
| 1.2362 | 1256 | 0.4115 | - | - | - | - |
| 1.2372 | 1257 | 0.3883 | - | - | - | - |
| 1.2382 | 1258 | 0.4394 | - | - | - | - |
| 1.2392 | 1259 | 0.6474 | - | - | - | - |
| 1.2402 | 1260 | 1.0838 | - | - | - | - |
| 1.2411 | 1261 | 0.7188 | - | - | - | - |
| 1.2421 | 1262 | 0.5869 | - | - | - | - |
| 1.2431 | 1263 | 2.6805 | - | - | - | - |
| 1.2441 | 1264 | 0.7447 | - | - | - | - |
| 1.2451 | 1265 | 1.1048 | - | - | - | - |
| 1.2461 | 1266 | 0.4745 | - | - | - | - |
| 1.2470 | 1267 | 1.3479 | - | - | - | - |
| 1.2480 | 1268 | 0.4079 | - | - | - | - |
| 1.2490 | 1269 | 0.3326 | - | - | - | - |
| 1.25 | 1270 | 0.5237 | - | - | - | - |
| 1.2510 | 1271 | 0.2571 | - | - | - | - |
| 1.2520 | 1272 | 0.7165 | - | - | - | - |
| 1.2530 | 1273 | 0.5696 | - | - | - | - |
| 1.2539 | 1274 | 0.8936 | - | - | - | - |
| 1.2549 | 1275 | 0.3444 | - | - | - | - |
| 1.2559 | 1276 | 0.785 | - | - | - | - |
| 1.2569 | 1277 | 0.3361 | - | - | - | - |
| 1.2579 | 1278 | 0.3905 | - | - | - | - |
| 1.2589 | 1279 | 0.8173 | - | - | - | - |
| 1.2598 | 1280 | 0.4759 | - | - | - | - |
| 1.2608 | 1281 | 0.3544 | - | - | - | - |
| 1.2618 | 1282 | 0.4727 | - | - | - | - |
| 1.2628 | 1283 | 0.5195 | - | - | - | - |
| 1.2638 | 1284 | 0.5446 | - | - | - | - |
| 1.2648 | 1285 | 0.585 | - | - | - | - |
| 1.2657 | 1286 | 0.4068 | - | - | - | - |
| 1.2667 | 1287 | 1.4534 | - | - | - | - |
| 1.2677 | 1288 | 0.3907 | - | - | - | - |
| 1.2687 | 1289 | 0.8361 | - | - | - | - |
| 1.2697 | 1290 | 1.1358 | - | - | - | - |
| 1.2707 | 1291 | 0.6607 | - | - | - | - |
| 1.2717 | 1292 | 0.5284 | - | - | - | - |
| 1.2726 | 1293 | 0.8732 | - | - | - | - |
| 1.2736 | 1294 | 0.4414 | - | - | - | - |
| 1.2746 | 1295 | 0.9862 | - | - | - | - |
| 1.2756 | 1296 | 0.5916 | - | - | - | - |
| 1.2766 | 1297 | 0.4013 | - | - | - | - |
| 1.2776 | 1298 | 0.5889 | - | - | - | - |
| 1.2785 | 1299 | 0.7337 | - | - | - | - |
| 1.2795 | 1300 | 0.4836 | - | - | - | - |
| 1.2805 | 1301 | 0.6721 | - | - | - | - |
| 1.2815 | 1302 | 0.622 | - | - | - | - |
| 1.2825 | 1303 | 0.4463 | - | - | - | - |
| 1.2835 | 1304 | 1.0106 | - | - | - | - |
| 1.2844 | 1305 | 0.9205 | - | - | - | - |
| 1.2854 | 1306 | 1.0984 | - | - | - | - |
| 1.2864 | 1307 | 0.3085 | - | - | - | - |
| 1.2874 | 1308 | 0.4345 | - | - | - | - |
| 1.2884 | 1309 | 0.3946 | - | - | - | - |
| 1.2894 | 1310 | 1.6366 | - | - | - | - |
| 1.2904 | 1311 | 0.909 | - | - | - | - |
| 1.2913 | 1312 | 1.0468 | - | - | - | - |
| 1.2923 | 1313 | 1.0732 | - | - | - | - |
| 1.2933 | 1314 | 0.5856 | - | - | - | - |
| 1.2943 | 1315 | 0.8502 | - | - | - | - |
| 1.2953 | 1316 | 0.8886 | - | - | - | - |
| 1.2963 | 1317 | 0.7551 | - | - | - | - |
| 1.2972 | 1318 | 0.7487 | - | - | - | - |
| 1.2982 | 1319 | 0.9703 | - | - | - | - |
| 1.2992 | 1320 | 0.4291 | - | - | - | - |
| 1.3002 | 1321 | 0.7965 | - | - | - | - |
| 1.3012 | 1322 | 0.811 | - | - | - | - |
| 1.3022 | 1323 | 0.9556 | - | - | - | - |
| 1.3031 | 1324 | 0.8323 | - | - | - | - |
| 1.3041 | 1325 | 0.327 | - | - | - | - |
| 1.3051 | 1326 | 0.7244 | - | - | - | - |
| 1.3061 | 1327 | 1.088 | - | - | - | - |
| 1.3071 | 1328 | 0.9094 | - | - | - | - |
| 1.3081 | 1329 | 0.7003 | - | - | - | - |
| 1.3091 | 1330 | 0.8419 | - | - | - | - |
| 1.3100 | 1331 | 0.6017 | - | - | - | - |
| 1.3110 | 1332 | 0.4095 | - | - | - | - |
| 1.3120 | 1333 | 0.8019 | - | - | - | - |
| 1.3130 | 1334 | 0.7212 | - | - | - | - |
| 1.3140 | 1335 | 0.6535 | - | - | - | - |
| 1.3150 | 1336 | 1.2404 | - | - | - | - |
| 1.3159 | 1337 | 0.8993 | - | - | - | - |
| 1.3169 | 1338 | 0.5882 | - | - | - | - |
| 1.3179 | 1339 | 0.6385 | - | - | - | - |
| 1.3189 | 1340 | 0.5562 | - | - | - | - |
| 1.3199 | 1341 | 0.2869 | - | - | - | - |
| 1.3209 | 1342 | 0.3641 | - | - | - | - |
| 1.3219 | 1343 | 0.4218 | - | - | - | - |
| 1.3228 | 1344 | 0.606 | - | - | - | - |
| 1.3238 | 1345 | 0.3806 | - | - | - | - |
| 1.3248 | 1346 | 0.8854 | - | - | - | - |
| 1.3258 | 1347 | 0.4355 | - | - | - | - |
| 1.3268 | 1348 | 0.1498 | - | - | - | - |
| 1.3278 | 1349 | 1.2401 | - | - | - | - |
| 1.3287 | 1350 | 0.3354 | - | - | - | - |
| 1.3297 | 1351 | 0.9802 | - | - | - | - |
| 1.3307 | 1352 | 0.3976 | - | - | - | - |
| 1.3317 | 1353 | 1.476 | - | - | - | - |
| 1.3327 | 1354 | 1.0131 | - | - | - | - |
| 1.3337 | 1355 | 0.6467 | - | - | - | - |
| 1.3346 | 1356 | 0.6601 | - | - | - | - |
| 1.3356 | 1357 | 0.5619 | - | - | - | - |
| 1.3366 | 1358 | 0.5519 | - | - | - | - |
| 1.3376 | 1359 | 0.2673 | - | - | - | - |
| 1.3386 | 1360 | 0.7003 | - | - | - | - |
| 1.3396 | 1361 | 0.4145 | - | - | - | - |
| 1.3406 | 1362 | 0.9338 | - | - | - | - |
| 1.3415 | 1363 | 1.6307 | - | - | - | - |
| 1.3425 | 1364 | 0.353 | - | - | - | - |
| 1.3435 | 1365 | 0.6528 | - | - | - | - |
| 1.3445 | 1366 | 0.7904 | - | - | - | - |
| 1.3455 | 1367 | 0.7177 | - | - | - | - |
| 1.3465 | 1368 | 0.2139 | - | - | - | - |
| 1.3474 | 1369 | 0.6728 | - | - | - | - |
| 1.3484 | 1370 | 0.9091 | - | - | - | - |
| 1.3494 | 1371 | 0.5011 | - | - | - | - |
| 1.3504 | 1372 | 0.8399 | - | - | - | - |
| 1.3514 | 1373 | 0.5121 | - | - | - | - |
| 1.3524 | 1374 | 1.4742 | - | - | - | - |
| 1.3533 | 1375 | 0.4506 | - | - | - | - |
| 1.3543 | 1376 | 0.3336 | - | - | - | - |
| 1.3553 | 1377 | 0.4187 | 0.6560 | 0.6240 | 0.5022 | 0.7068 |
| 1.3563 | 1378 | 0.5715 | - | - | - | - |
| 1.3573 | 1379 | 0.5358 | - | - | - | - |
| 1.3583 | 1380 | 0.5081 | - | - | - | - |
| 1.3593 | 1381 | 0.8904 | - | - | - | - |
| 1.3602 | 1382 | 0.8929 | - | - | - | - |
| 1.3612 | 1383 | 0.658 | - | - | - | - |
| 1.3622 | 1384 | 0.7433 | - | - | - | - |
| 1.3632 | 1385 | 1.4056 | - | - | - | - |
| 1.3642 | 1386 | 0.3945 | - | - | - | - |
| 1.3652 | 1387 | 0.5946 | - | - | - | - |
| 1.3661 | 1388 | 0.6706 | - | - | - | - |
| 1.3671 | 1389 | 0.7309 | - | - | - | - |
| 1.3681 | 1390 | 0.5186 | - | - | - | - |
| 1.3691 | 1391 | 0.5135 | - | - | - | - |
| 1.3701 | 1392 | 1.2628 | - | - | - | - |
| 1.3711 | 1393 | 0.4493 | - | - | - | - |
| 1.3720 | 1394 | 1.0504 | - | - | - | - |
| 1.3730 | 1395 | 0.5056 | - | - | - | - |
| 1.3740 | 1396 | 0.7245 | - | - | - | - |
| 1.375 | 1397 | 0.7753 | - | - | - | - |
| 1.3760 | 1398 | 0.5531 | - | - | - | - |
| 1.3770 | 1399 | 0.6692 | - | - | - | - |
| 1.3780 | 1400 | 0.5516 | - | - | - | - |
| 1.3789 | 1401 | 0.637 | - | - | - | - |
| 1.3799 | 1402 | 0.3756 | - | - | - | - |
| 1.3809 | 1403 | 0.7963 | - | - | - | - |
| 1.3819 | 1404 | 0.623 | - | - | - | - |
| 1.3829 | 1405 | 0.5124 | - | - | - | - |
| 1.3839 | 1406 | 0.5348 | - | - | - | - |
| 1.3848 | 1407 | 0.5751 | - | - | - | - |
| 1.3858 | 1408 | 0.6647 | - | - | - | - |
| 1.3868 | 1409 | 0.5282 | - | - | - | - |
| 1.3878 | 1410 | 0.678 | - | - | - | - |
| 1.3888 | 1411 | 0.9675 | - | - | - | - |
| 1.3898 | 1412 | 0.8766 | - | - | - | - |
| 1.3907 | 1413 | 0.5828 | - | - | - | - |
| 1.3917 | 1414 | 0.5702 | - | - | - | - |
| 1.3927 | 1415 | 0.1859 | - | - | - | - |
| 1.3937 | 1416 | 1.3485 | - | - | - | - |
| 1.3947 | 1417 | 0.5655 | - | - | - | - |
| 1.3957 | 1418 | 0.389 | - | - | - | - |
| 1.3967 | 1419 | 0.3533 | - | - | - | - |
| 1.3976 | 1420 | 0.4214 | - | - | - | - |
| 1.3986 | 1421 | 0.2939 | - | - | - | - |
| 1.3996 | 1422 | 0.5645 | - | - | - | - |
| 1.4006 | 1423 | 0.7114 | - | - | - | - |
| 1.4016 | 1424 | 0.3381 | - | - | - | - |
| 1.4026 | 1425 | 0.3896 | - | - | - | - |
| 1.4035 | 1426 | 0.7151 | - | - | - | - |
| 1.4045 | 1427 | 0.8335 | - | - | - | - |
| 1.4055 | 1428 | 0.5981 | - | - | - | - |
| 1.4065 | 1429 | 0.8689 | - | - | - | - |
| 1.4075 | 1430 | 0.3731 | - | - | - | - |
| 1.4085 | 1431 | 0.8882 | - | - | - | - |
| 1.4094 | 1432 | 0.7825 | - | - | - | - |
| 1.4104 | 1433 | 0.6815 | - | - | - | - |
| 1.4114 | 1434 | 0.2557 | - | - | - | - |
| 1.4124 | 1435 | 0.777 | - | - | - | - |
| 1.4134 | 1436 | 0.2612 | - | - | - | - |
| 1.4144 | 1437 | 0.9318 | - | - | - | - |
| 1.4154 | 1438 | 0.5541 | - | - | - | - |
| 1.4163 | 1439 | 0.7122 | - | - | - | - |
| 1.4173 | 1440 | 0.8204 | - | - | - | - |
| 1.4183 | 1441 | 0.4663 | - | - | - | - |
| 1.4193 | 1442 | 0.5459 | - | - | - | - |
| 1.4203 | 1443 | 0.6332 | - | - | - | - |
| 1.4213 | 1444 | 0.5651 | - | - | - | - |
| 1.4222 | 1445 | 0.6551 | - | - | - | - |
| 1.4232 | 1446 | 0.2372 | - | - | - | - |
| 1.4242 | 1447 | 0.4671 | - | - | - | - |
| 1.4252 | 1448 | 0.5134 | - | - | - | - |
| 1.4262 | 1449 | 0.6305 | - | - | - | - |
| 1.4272 | 1450 | 1.5586 | - | - | - | - |
| 1.4281 | 1451 | 0.294 | - | - | - | - |
| 1.4291 | 1452 | 1.0767 | - | - | - | - |
| 1.4301 | 1453 | 0.8044 | - | - | - | - |
| 1.4311 | 1454 | 1.206 | - | - | - | - |
| 1.4321 | 1455 | 0.3643 | - | - | - | - |
| 1.4331 | 1456 | 1.0759 | - | - | - | - |
| 1.4341 | 1457 | 0.2343 | - | - | - | - |
| 1.4350 | 1458 | 0.5088 | - | - | - | - |
| 1.4360 | 1459 | 0.7708 | - | - | - | - |
| 1.4370 | 1460 | 0.5081 | - | - | - | - |
| 1.4380 | 1461 | 1.1688 | - | - | - | - |
| 1.4390 | 1462 | 0.4619 | - | - | - | - |
| 1.4400 | 1463 | 0.6047 | - | - | - | - |
| 1.4409 | 1464 | 0.4521 | - | - | - | - |
| 1.4419 | 1465 | 0.4313 | - | - | - | - |
| 1.4429 | 1466 | 0.781 | - | - | - | - |
| 1.4439 | 1467 | 0.4163 | - | - | - | - |
| 1.4449 | 1468 | 1.0091 | - | - | - | - |
| 1.4459 | 1469 | 0.9163 | - | - | - | - |
| 1.4469 | 1470 | 0.297 | - | - | - | - |
| 1.4478 | 1471 | 0.6652 | - | - | - | - |
| 1.4488 | 1472 | 0.51 | - | - | - | - |
| 1.4498 | 1473 | 0.4238 | - | - | - | - |
| 1.4508 | 1474 | 0.2851 | - | - | - | - |
| 1.4518 | 1475 | 0.7563 | - | - | - | - |
| 1.4528 | 1476 | 1.5687 | - | - | - | - |
| 1.4537 | 1477 | 0.4711 | - | - | - | - |
| 1.4547 | 1478 | 0.3604 | - | - | - | - |
| 1.4557 | 1479 | 0.4551 | - | - | - | - |
| 1.4567 | 1480 | 0.5354 | - | - | - | - |
| 1.4577 | 1481 | 0.6896 | - | - | - | - |
| 1.4587 | 1482 | 0.9103 | - | - | - | - |
| 1.4596 | 1483 | 0.2517 | - | - | - | - |
| 1.4606 | 1484 | 1.1375 | - | - | - | - |
| 1.4616 | 1485 | 0.6002 | - | - | - | - |
| 1.4626 | 1486 | 0.483 | - | - | - | - |
| 1.4636 | 1487 | 0.5464 | - | - | - | - |
| 1.4646 | 1488 | 0.4677 | - | - | - | - |
| 1.4656 | 1489 | 0.673 | - | - | - | - |
| 1.4665 | 1490 | 1.1392 | - | - | - | - |
| 1.4675 | 1491 | 0.69 | - | - | - | - |
| 1.4685 | 1492 | 0.5697 | - | - | - | - |
| 1.4695 | 1493 | 0.3707 | - | - | - | - |
| 1.4705 | 1494 | 0.7141 | - | - | - | - |
| 1.4715 | 1495 | 0.4173 | - | - | - | - |
| 1.4724 | 1496 | 1.0088 | - | - | - | - |
| 1.4734 | 1497 | 0.5028 | - | - | - | - |
| 1.4744 | 1498 | 0.6502 | - | - | - | - |
| 1.4754 | 1499 | 0.5432 | - | - | - | - |
| 1.4764 | 1500 | 0.7481 | - | - | - | - |
| 1.4774 | 1501 | 0.6316 | - | - | - | - |
| 1.4783 | 1502 | 0.5775 | - | - | - | - |
| 1.4793 | 1503 | 0.5893 | - | - | - | - |
| 1.4803 | 1504 | 0.8438 | - | - | - | - |
| 1.4813 | 1505 | 0.4522 | - | - | - | - |
| 1.4823 | 1506 | 0.5695 | - | - | - | - |
| 1.4833 | 1507 | 0.9334 | - | - | - | - |
| 1.4843 | 1508 | 0.8144 | - | - | - | - |
| 1.4852 | 1509 | 0.6911 | - | - | - | - |
| 1.4862 | 1510 | 0.2779 | - | - | - | - |
| 1.4872 | 1511 | 0.7079 | - | - | - | - |
| 1.4882 | 1512 | 0.4727 | - | - | - | - |
| 1.4892 | 1513 | 0.3663 | - | - | - | - |
| 1.4902 | 1514 | 0.5314 | - | - | - | - |
| 1.4911 | 1515 | 0.2767 | - | - | - | - |
| 1.4921 | 1516 | 0.3167 | - | - | - | - |
| 1.4931 | 1517 | 0.4638 | - | - | - | - |
| 1.4941 | 1518 | 0.675 | - | - | - | - |
| 1.4951 | 1519 | 0.5539 | - | - | - | - |
| 1.4961 | 1520 | 1.0517 | - | - | - | - |
| 1.4970 | 1521 | 0.5162 | - | - | - | - |
| 1.4980 | 1522 | 0.6293 | - | - | - | - |
| 1.4990 | 1523 | 0.5688 | - | - | - | - |
| 1.5 | 1524 | 0.3404 | - | - | - | - |
| 1.5010 | 1525 | 0.512 | - | - | - | - |
| 1.5020 | 1526 | 0.5594 | - | - | - | - |
| 1.5030 | 1527 | 0.894 | - | - | - | - |
| 1.5039 | 1528 | 0.6125 | - | - | - | - |
| 1.5049 | 1529 | 0.6056 | - | - | - | - |
| 1.5059 | 1530 | 0.7177 | 0.6076 | 0.6309 | 0.4971 | 0.6999 |
| 1.5069 | 1531 | 0.3312 | - | - | - | - |
| 1.5079 | 1532 | 0.4585 | - | - | - | - |
| 1.5089 | 1533 | 0.4917 | - | - | - | - |
| 1.5098 | 1534 | 0.614 | - | - | - | - |
| 1.5108 | 1535 | 0.1733 | - | - | - | - |
| 1.5118 | 1536 | 0.7729 | - | - | - | - |
| 1.5128 | 1537 | 0.2272 | - | - | - | - |
| 1.5138 | 1538 | 0.4664 | - | - | - | - |
| 1.5148 | 1539 | 0.4116 | - | - | - | - |
| 1.5157 | 1540 | 0.2704 | - | - | - | - |
| 1.5167 | 1541 | 1.8474 | - | - | - | - |
| 1.5177 | 1542 | 0.91 | - | - | - | - |
| 1.5187 | 1543 | 0.1718 | - | - | - | - |
| 1.5197 | 1544 | 0.528 | - | - | - | - |
| 1.5207 | 1545 | 0.3511 | - | - | - | - |
| 1.5217 | 1546 | 0.7824 | - | - | - | - |
| 1.5226 | 1547 | 0.2457 | - | - | - | - |
| 1.5236 | 1548 | 1.3333 | - | - | - | - |
| 1.5246 | 1549 | 0.3311 | - | - | - | - |
| 1.5256 | 1550 | 0.9244 | - | - | - | - |
| 1.5266 | 1551 | 0.8461 | - | - | - | - |
| 1.5276 | 1552 | 0.5966 | - | - | - | - |
| 1.5285 | 1553 | 0.6486 | - | - | - | - |
| 1.5295 | 1554 | 0.3623 | - | - | - | - |
| 1.5305 | 1555 | 1.0995 | - | - | - | - |
| 1.5315 | 1556 | 0.6517 | - | - | - | - |
| 1.5325 | 1557 | 0.3321 | - | - | - | - |
| 1.5335 | 1558 | 0.5902 | - | - | - | - |
| 1.5344 | 1559 | 2.0103 | - | - | - | - |
| 1.5354 | 1560 | 0.6423 | - | - | - | - |
| 1.5364 | 1561 | 0.6593 | - | - | - | - |
| 1.5374 | 1562 | 1.1699 | - | - | - | - |
| 1.5384 | 1563 | 0.4871 | - | - | - | - |
| 1.5394 | 1564 | 1.2181 | - | - | - | - |
| 1.5404 | 1565 | 0.6265 | - | - | - | - |
| 1.5413 | 1566 | 0.3751 | - | - | - | - |
| 1.5423 | 1567 | 0.3528 | - | - | - | - |
| 1.5433 | 1568 | 0.3335 | - | - | - | - |
| 1.5443 | 1569 | 0.3162 | - | - | - | - |
| 1.5453 | 1570 | 0.9398 | - | - | - | - |
| 1.5463 | 1571 | 0.567 | - | - | - | - |
| 1.5472 | 1572 | 0.5336 | - | - | - | - |
| 1.5482 | 1573 | 1.33 | - | - | - | - |
| 1.5492 | 1574 | 1.4235 | - | - | - | - |
| 1.5502 | 1575 | 0.9983 | - | - | - | - |
| 1.5512 | 1576 | 0.4337 | - | - | - | - |
| 1.5522 | 1577 | 0.4167 | - | - | - | - |
| 1.5531 | 1578 | 0.2232 | - | - | - | - |
| 1.5541 | 1579 | 0.3178 | - | - | - | - |
| 1.5551 | 1580 | 0.3089 | - | - | - | - |
| 1.5561 | 1581 | 0.4723 | - | - | - | - |
| 1.5571 | 1582 | 0.9546 | - | - | - | - |
| 1.5581 | 1583 | 0.5077 | - | - | - | - |
| 1.5591 | 1584 | 0.8998 | - | - | - | - |
| 1.5600 | 1585 | 0.2729 | - | - | - | - |
| 1.5610 | 1586 | 0.8975 | - | - | - | - |
| 1.5620 | 1587 | 0.5164 | - | - | - | - |
| 1.5630 | 1588 | 0.4061 | - | - | - | - |
| 1.5640 | 1589 | 0.6179 | - | - | - | - |
| 1.5650 | 1590 | 0.2995 | - | - | - | - |
| 1.5659 | 1591 | 0.2999 | - | - | - | - |
| 1.5669 | 1592 | 0.7981 | - | - | - | - |
| 1.5679 | 1593 | 0.646 | - | - | - | - |
| 1.5689 | 1594 | 0.2591 | - | - | - | - |
| 1.5699 | 1595 | 0.3448 | - | - | - | - |
| 1.5709 | 1596 | 0.3245 | - | - | - | - |
| 1.5719 | 1597 | 0.713 | - | - | - | - |
| 1.5728 | 1598 | 0.565 | - | - | - | - |
| 1.5738 | 1599 | 0.5098 | - | - | - | - |
| 1.5748 | 1600 | 1.2973 | - | - | - | - |
| 1.5758 | 1601 | 0.2531 | - | - | - | - |
| 1.5768 | 1602 | 0.6581 | - | - | - | - |
| 1.5778 | 1603 | 0.9468 | - | - | - | - |
| 1.5787 | 1604 | 0.4272 | - | - | - | - |
| 1.5797 | 1605 | 0.5431 | - | - | - | - |
| 1.5807 | 1606 | 0.8867 | - | - | - | - |
| 1.5817 | 1607 | 0.8721 | - | - | - | - |
| 1.5827 | 1608 | 0.6227 | - | - | - | - |
| 1.5837 | 1609 | 0.1811 | - | - | - | - |
| 1.5846 | 1610 | 0.7213 | - | - | - | - |
| 1.5856 | 1611 | 0.2797 | - | - | - | - |
| 1.5866 | 1612 | 0.6565 | - | - | - | - |
| 1.5876 | 1613 | 0.7022 | - | - | - | - |
| 1.5886 | 1614 | 0.7888 | - | - | - | - |
| 1.5896 | 1615 | 0.709 | - | - | - | - |
| 1.5906 | 1616 | 0.7434 | - | - | - | - |
| 1.5915 | 1617 | 0.53 | - | - | - | - |
| 1.5925 | 1618 | 0.4844 | - | - | - | - |
| 1.5935 | 1619 | 0.5643 | - | - | - | - |
| 1.5945 | 1620 | 0.3544 | - | - | - | - |
| 1.5955 | 1621 | 0.2189 | - | - | - | - |
| 1.5965 | 1622 | 0.4058 | - | - | - | - |
| 1.5974 | 1623 | 0.7974 | - | - | - | - |
| 1.5984 | 1624 | 0.5026 | - | - | - | - |
| 1.5994 | 1625 | 0.5145 | - | - | - | - |
| 1.6004 | 1626 | 0.7416 | - | - | - | - |
| 1.6014 | 1627 | 0.7841 | - | - | - | - |
| 1.6024 | 1628 | 0.7778 | - | - | - | - |
| 1.6033 | 1629 | 0.3109 | - | - | - | - |
| 1.6043 | 1630 | 0.2943 | - | - | - | - |
| 1.6053 | 1631 | 0.3306 | - | - | - | - |
| 1.6063 | 1632 | 0.4688 | - | - | - | - |
| 1.6073 | 1633 | 0.319 | - | - | - | - |
| 1.6083 | 1634 | 0.4538 | - | - | - | - |
| 1.6093 | 1635 | 0.5982 | - | - | - | - |
| 1.6102 | 1636 | 0.3236 | - | - | - | - |
| 1.6112 | 1637 | 0.5368 | - | - | - | - |
| 1.6122 | 1638 | 0.5106 | - | - | - | - |
| 1.6132 | 1639 | 0.4051 | - | - | - | - |
| 1.6142 | 1640 | 0.6246 | - | - | - | - |
| 1.6152 | 1641 | 0.3804 | - | - | - | - |
| 1.6161 | 1642 | 0.3031 | - | - | - | - |
| 1.6171 | 1643 | 0.6316 | - | - | - | - |
| 1.6181 | 1644 | 0.2239 | - | - | - | - |
| 1.6191 | 1645 | 1.37 | - | - | - | - |
| 1.6201 | 1646 | 0.2093 | - | - | - | - |
| 1.6211 | 1647 | 0.4044 | - | - | - | - |
| 1.6220 | 1648 | 0.3808 | - | - | - | - |
| 1.6230 | 1649 | 0.4414 | - | - | - | - |
| 1.6240 | 1650 | 0.7992 | - | - | - | - |
| 1.625 | 1651 | 0.4573 | - | - | - | - |
| 1.6260 | 1652 | 0.2918 | - | - | - | - |
| 1.6270 | 1653 | 0.423 | - | - | - | - |
| 1.6280 | 1654 | 0.367 | - | - | - | - |
| 1.6289 | 1655 | 0.4115 | - | - | - | - |
| 1.6299 | 1656 | 0.3583 | - | - | - | - |
| 1.6309 | 1657 | 0.3222 | - | - | - | - |
| 1.6319 | 1658 | 0.8085 | - | - | - | - |
| 1.6329 | 1659 | 0.2026 | - | - | - | - |
| 1.6339 | 1660 | 0.5456 | - | - | - | - |
| 1.6348 | 1661 | 0.8468 | - | - | - | - |
| 1.6358 | 1662 | 1.1053 | - | - | - | - |
| 1.6368 | 1663 | 0.7123 | - | - | - | - |
| 1.6378 | 1664 | 0.2607 | - | - | - | - |
| 1.6388 | 1665 | 0.0968 | - | - | - | - |
| 1.6398 | 1666 | 0.2164 | - | - | - | - |
| 1.6407 | 1667 | 0.69 | - | - | - | - |
| 1.6417 | 1668 | 1.0048 | - | - | - | - |
| 1.6427 | 1669 | 0.3305 | - | - | - | - |
| 1.6437 | 1670 | 0.2231 | - | - | - | - |
| 1.6447 | 1671 | 0.2445 | - | - | - | - |
| 1.6457 | 1672 | 0.3242 | - | - | - | - |
| 1.6467 | 1673 | 0.089 | - | - | - | - |
| 1.6476 | 1674 | 0.5702 | - | - | - | - |
| 1.6486 | 1675 | 0.4989 | - | - | - | - |
| 1.6496 | 1676 | 0.9726 | - | - | - | - |
| 1.6506 | 1677 | 0.4638 | - | - | - | - |
| 1.6516 | 1678 | 0.4957 | - | - | - | - |
| 1.6526 | 1679 | 0.8089 | - | - | - | - |
| 1.6535 | 1680 | 0.2915 | - | - | - | - |
| 1.6545 | 1681 | 0.5772 | - | - | - | - |
| 1.6555 | 1682 | 0.569 | - | - | - | - |
| 1.6565 | 1683 | 0.568 | 0.5907 | 0.6242 | 0.5246 | 0.7027 |
| 1.6575 | 1684 | 0.4959 | - | - | - | - |
| 1.6585 | 1685 | 0.4703 | - | - | - | - |
| 1.6594 | 1686 | 0.2729 | - | - | - | - |
| 1.6604 | 1687 | 0.9194 | - | - | - | - |
| 1.6614 | 1688 | 0.4448 | - | - | - | - |
| 1.6624 | 1689 | 1.034 | - | - | - | - |
| 1.6634 | 1690 | 0.7181 | - | - | - | - |
| 1.6644 | 1691 | 0.3676 | - | - | - | - |
| 1.6654 | 1692 | 0.2037 | - | - | - | - |
| 1.6663 | 1693 | 0.5381 | - | - | - | - |
| 1.6673 | 1694 | 0.5897 | - | - | - | - |
| 1.6683 | 1695 | 0.3893 | - | - | - | - |
| 1.6693 | 1696 | 0.2726 | - | - | - | - |
| 1.6703 | 1697 | 0.3016 | - | - | - | - |
| 1.6713 | 1698 | 0.3622 | - | - | - | - |
| 1.6722 | 1699 | 0.7413 | - | - | - | - |
| 1.6732 | 1700 | 0.4711 | - | - | - | - |
| 1.6742 | 1701 | 0.5852 | - | - | - | - |
| 1.6752 | 1702 | 0.2488 | - | - | - | - |
| 1.6762 | 1703 | 0.6424 | - | - | - | - |
| 1.6772 | 1704 | 0.5929 | - | - | - | - |
| 1.6781 | 1705 | 1.1645 | - | - | - | - |
| 1.6791 | 1706 | 0.3906 | - | - | - | - |
| 1.6801 | 1707 | 0.6635 | - | - | - | - |
| 1.6811 | 1708 | 0.3191 | - | - | - | - |
| 1.6821 | 1709 | 1.1335 | - | - | - | - |
| 1.6831 | 1710 | 0.4492 | - | - | - | - |
| 1.6841 | 1711 | 0.5182 | - | - | - | - |
| 1.6850 | 1712 | 1.1094 | - | - | - | - |
| 1.6860 | 1713 | 0.2395 | - | - | - | - |
| 1.6870 | 1714 | 0.7895 | - | - | - | - |
| 1.6880 | 1715 | 0.1977 | - | - | - | - |
| 1.6890 | 1716 | 0.3888 | - | - | - | - |
| 1.6900 | 1717 | 0.5365 | - | - | - | - |
| 1.6909 | 1718 | 0.7392 | - | - | - | - |
| 1.6919 | 1719 | 0.7695 | - | - | - | - |
| 1.6929 | 1720 | 0.6455 | - | - | - | - |
| 1.6939 | 1721 | 0.25 | - | - | - | - |
| 1.6949 | 1722 | 0.4361 | - | - | - | - |
| 1.6959 | 1723 | 0.5931 | - | - | - | - |
| 1.6969 | 1724 | 0.3968 | - | - | - | - |
| 1.6978 | 1725 | 0.7418 | - | - | - | - |
| 1.6988 | 1726 | 1.2343 | - | - | - | - |
| 1.6998 | 1727 | 0.5609 | - | - | - | - |
| 1.7008 | 1728 | 0.2499 | - | - | - | - |
| 1.7018 | 1729 | 0.3217 | - | - | - | - |
| 1.7028 | 1730 | 0.5106 | - | - | - | - |
| 1.7037 | 1731 | 0.5158 | - | - | - | - |
| 1.7047 | 1732 | 0.3063 | - | - | - | - |
| 1.7057 | 1733 | 0.6839 | - | - | - | - |
| 1.7067 | 1734 | 0.7934 | - | - | - | - |
| 1.7077 | 1735 | 0.3674 | - | - | - | - |
| 1.7087 | 1736 | 0.7417 | - | - | - | - |
| 1.7096 | 1737 | 0.5724 | - | - | - | - |
| 1.7106 | 1738 | 0.4792 | - | - | - | - |
| 1.7116 | 1739 | 0.1971 | - | - | - | - |
| 1.7126 | 1740 | 0.1942 | - | - | - | - |
| 1.7136 | 1741 | 0.1964 | - | - | - | - |
| 1.7146 | 1742 | 0.5801 | - | - | - | - |
| 1.7156 | 1743 | 0.4141 | - | - | - | - |
| 1.7165 | 1744 | 0.7436 | - | - | - | - |
| 1.7175 | 1745 | 0.5944 | - | - | - | - |
| 1.7185 | 1746 | 0.2409 | - | - | - | - |
| 1.7195 | 1747 | 0.7519 | - | - | - | - |
| 1.7205 | 1748 | 0.539 | - | - | - | - |
| 1.7215 | 1749 | 0.4905 | - | - | - | - |
| 1.7224 | 1750 | 0.5004 | - | - | - | - |
| 1.7234 | 1751 | 0.8092 | - | - | - | - |
| 1.7244 | 1752 | 0.7336 | - | - | - | - |
| 1.7254 | 1753 | 0.7179 | - | - | - | - |
| 1.7264 | 1754 | 0.5934 | - | - | - | - |
| 1.7274 | 1755 | 0.3778 | - | - | - | - |
| 1.7283 | 1756 | 0.536 | - | - | - | - |
| 1.7293 | 1757 | 0.7303 | - | - | - | - |
| 1.7303 | 1758 | 0.4749 | - | - | - | - |
| 1.7313 | 1759 | 0.2381 | - | - | - | - |
| 1.7323 | 1760 | 0.3432 | - | - | - | - |
| 1.7333 | 1761 | 0.551 | - | - | - | - |
| 1.7343 | 1762 | 0.7364 | - | - | - | - |
| 1.7352 | 1763 | 0.3735 | - | - | - | - |
| 1.7362 | 1764 | 0.219 | - | - | - | - |
| 1.7372 | 1765 | 0.5522 | - | - | - | - |
| 1.7382 | 1766 | 0.5187 | - | - | - | - |
| 1.7392 | 1767 | 0.8373 | - | - | - | - |
| 1.7402 | 1768 | 0.3356 | - | - | - | - |
| 1.7411 | 1769 | 0.4305 | - | - | - | - |
| 1.7421 | 1770 | 0.5027 | - | - | - | - |
| 1.7431 | 1771 | 0.5996 | - | - | - | - |
| 1.7441 | 1772 | 0.6392 | - | - | - | - |
| 1.7451 | 1773 | 0.5633 | - | - | - | - |
| 1.7461 | 1774 | 0.527 | - | - | - | - |
| 1.7470 | 1775 | 0.792 | - | - | - | - |
| 1.7480 | 1776 | 0.3731 | - | - | - | - |
| 1.7490 | 1777 | 0.5097 | - | - | - | - |
| 1.75 | 1778 | 0.6975 | - | - | - | - |
| 1.7510 | 1779 | 0.4482 | - | - | - | - |
| 1.7520 | 1780 | 0.3304 | - | - | - | - |
| 1.7530 | 1781 | 0.7658 | - | - | - | - |
| 1.7539 | 1782 | 0.3893 | - | - | - | - |
| 1.7549 | 1783 | 0.4672 | - | - | - | - |
| 1.7559 | 1784 | 0.6018 | - | - | - | - |
| 1.7569 | 1785 | 0.299 | - | - | - | - |
| 1.7579 | 1786 | 0.5875 | - | - | - | - |
| 1.7589 | 1787 | 0.5496 | - | - | - | - |
| 1.7598 | 1788 | 0.2671 | - | - | - | - |
| 1.7608 | 1789 | 0.3964 | - | - | - | - |
| 1.7618 | 1790 | 0.7899 | - | - | - | - |
| 1.7628 | 1791 | 0.2364 | - | - | - | - |
| 1.7638 | 1792 | 0.6523 | - | - | - | - |
| 1.7648 | 1793 | 0.1899 | - | - | - | - |
| 1.7657 | 1794 | 0.5742 | - | - | - | - |
| 1.7667 | 1795 | 0.406 | - | - | - | - |
| 1.7677 | 1796 | 0.3509 | - | - | - | - |
| 1.7687 | 1797 | 0.2206 | - | - | - | - |
| 1.7697 | 1798 | 0.7158 | - | - | - | - |
| 1.7707 | 1799 | 0.403 | - | - | - | - |
| 1.7717 | 1800 | 0.4324 | - | - | - | - |
| 1.7726 | 1801 | 0.4338 | - | - | - | - |
| 1.7736 | 1802 | 0.4808 | - | - | - | - |
| 1.7746 | 1803 | 0.3099 | - | - | - | - |
| 1.7756 | 1804 | 0.9415 | - | - | - | - |
| 1.7766 | 1805 | 0.8304 | - | - | - | - |
| 1.7776 | 1806 | 0.4728 | - | - | - | - |
| 1.7785 | 1807 | 0.5041 | - | - | - | - |
| 1.7795 | 1808 | 0.1113 | - | - | - | - |
| 1.7805 | 1809 | 0.6698 | - | - | - | - |
| 1.7815 | 1810 | 0.2146 | - | - | - | - |
| 1.7825 | 1811 | 0.3076 | - | - | - | - |
| 1.7835 | 1812 | 0.431 | - | - | - | - |
| 1.7844 | 1813 | 0.3019 | - | - | - | - |
| 1.7854 | 1814 | 0.4078 | - | - | - | - |
| 1.7864 | 1815 | 0.5552 | - | - | - | - |
| 1.7874 | 1816 | 0.7442 | - | - | - | - |
| 1.7884 | 1817 | 0.855 | - | - | - | - |
| 1.7894 | 1818 | 0.5502 | - | - | - | - |
| 1.7904 | 1819 | 0.4423 | - | - | - | - |
| 1.7913 | 1820 | 0.4353 | - | - | - | - |
| 1.7923 | 1821 | 0.4199 | - | - | - | - |
| 1.7933 | 1822 | 0.5881 | - | - | - | - |
| 1.7943 | 1823 | 0.393 | - | - | - | - |
| 1.7953 | 1824 | 0.8371 | - | - | - | - |
| 1.7963 | 1825 | 0.8951 | - | - | - | - |
| 1.7972 | 1826 | 0.5165 | - | - | - | - |
| 1.7982 | 1827 | 0.2122 | - | - | - | - |
| 1.7992 | 1828 | 0.5037 | - | - | - | - |
| 1.8002 | 1829 | 0.4873 | - | - | - | - |
| 1.8012 | 1830 | 0.5968 | - | - | - | - |
| 1.8022 | 1831 | 0.4316 | - | - | - | - |
| 1.8031 | 1832 | 0.1818 | - | - | - | - |
| 1.8041 | 1833 | 0.2078 | - | - | - | - |
| 1.8051 | 1834 | 0.5342 | - | - | - | - |
| 1.8061 | 1835 | 0.2382 | - | - | - | - |
| 1.8071 | 1836 | 0.1414 | 0.5629 | 0.6425 | 0.5239 | 0.6921 |
| 1.8081 | 1837 | 0.3592 | - | - | - | - |
| 1.8091 | 1838 | 0.893 | - | - | - | - |
| 1.8100 | 1839 | 0.3389 | - | - | - | - |
| 1.8110 | 1840 | 1.2053 | - | - | - | - |
| 1.8120 | 1841 | 0.2925 | - | - | - | - |
| 1.8130 | 1842 | 0.3789 | - | - | - | - |
| 1.8140 | 1843 | 0.4395 | - | - | - | - |
| 1.8150 | 1844 | 0.1913 | - | - | - | - |
| 1.8159 | 1845 | 0.2172 | - | - | - | - |
| 1.8169 | 1846 | 0.6572 | - | - | - | - |
| 1.8179 | 1847 | 0.3379 | - | - | - | - |
| 1.8189 | 1848 | 0.3634 | - | - | - | - |
| 1.8199 | 1849 | 0.2917 | - | - | - | - |
| 1.8209 | 1850 | 0.0589 | - | - | - | - |
| 1.8219 | 1851 | 0.3823 | - | - | - | - |
| 1.8228 | 1852 | 0.6974 | - | - | - | - |
| 1.8238 | 1853 | 0.692 | - | - | - | - |
| 1.8248 | 1854 | 0.2734 | - | - | - | - |
| 1.8258 | 1855 | 0.3252 | - | - | - | - |
| 1.8268 | 1856 | 0.2146 | - | - | - | - |
| 1.8278 | 1857 | 0.5838 | - | - | - | - |
| 1.8287 | 1858 | 0.6808 | - | - | - | - |
| 1.8297 | 1859 | 0.7431 | - | - | - | - |
| 1.8307 | 1860 | 0.2359 | - | - | - | - |
| 1.8317 | 1861 | 0.3265 | - | - | - | - |
| 1.8327 | 1862 | 0.7019 | - | - | - | - |
| 1.8337 | 1863 | 1.182 | - | - | - | - |
| 1.8346 | 1864 | 0.3365 | - | - | - | - |
| 1.8356 | 1865 | 0.2282 | - | - | - | - |
| 1.8366 | 1866 | 0.7224 | - | - | - | - |
| 1.8376 | 1867 | 0.3317 | - | - | - | - |
| 1.8386 | 1868 | 0.922 | - | - | - | - |
| 1.8396 | 1869 | 0.7089 | - | - | - | - |
| 1.8406 | 1870 | 0.1003 | - | - | - | - |
| 1.8415 | 1871 | 0.1736 | - | - | - | - |
| 1.8425 | 1872 | 0.8854 | - | - | - | - |
| 1.8435 | 1873 | 0.2689 | - | - | - | - |
| 1.8445 | 1874 | 0.2709 | - | - | - | - |
| 1.8455 | 1875 | 0.4293 | - | - | - | - |
| 1.8465 | 1876 | 0.6023 | - | - | - | - |
| 1.8474 | 1877 | 0.817 | - | - | - | - |
| 1.8484 | 1878 | 0.3847 | - | - | - | - |
| 1.8494 | 1879 | 0.8794 | - | - | - | - |
| 1.8504 | 1880 | 0.8067 | - | - | - | - |
| 1.8514 | 1881 | 0.3147 | - | - | - | - |
| 1.8524 | 1882 | 0.8664 | - | - | - | - |
| 1.8533 | 1883 | 0.8473 | - | - | - | - |
| 1.8543 | 1884 | 0.6057 | - | - | - | - |
| 1.8553 | 1885 | 0.702 | - | - | - | - |
| 1.8563 | 1886 | 1.3453 | - | - | - | - |
| 1.8573 | 1887 | 0.8523 | - | - | - | - |
| 1.8583 | 1888 | 0.2808 | - | - | - | - |
| 1.8593 | 1889 | 0.7078 | - | - | - | - |
| 1.8602 | 1890 | 0.5023 | - | - | - | - |
| 1.8612 | 1891 | 0.4426 | - | - | - | - |
| 1.8622 | 1892 | 0.5713 | - | - | - | - |
| 1.8632 | 1893 | 0.2241 | - | - | - | - |
| 1.8642 | 1894 | 0.0912 | - | - | - | - |
| 1.8652 | 1895 | 0.6717 | - | - | - | - |
| 1.8661 | 1896 | 0.4985 | - | - | - | - |
| 1.8671 | 1897 | 0.485 | - | - | - | - |
| 1.8681 | 1898 | 0.9783 | - | - | - | - |
| 1.8691 | 1899 | 0.4758 | - | - | - | - |
| 1.8701 | 1900 | 0.5097 | - | - | - | - |
| 1.8711 | 1901 | 0.282 | - | - | - | - |
| 1.8720 | 1902 | 0.8734 | - | - | - | - |
| 1.8730 | 1903 | 0.5185 | - | - | - | - |
| 1.8740 | 1904 | 0.2085 | - | - | - | - |
| 1.875 | 1905 | 0.3836 | - | - | - | - |
| 1.8760 | 1906 | 0.4029 | - | - | - | - |
| 1.8770 | 1907 | 0.4809 | - | - | - | - |
| 1.8780 | 1908 | 0.8473 | - | - | - | - |
| 1.8789 | 1909 | 0.7449 | - | - | - | - |
| 1.8799 | 1910 | 0.7715 | - | - | - | - |
| 1.8809 | 1911 | 0.6199 | - | - | - | - |
| 1.8819 | 1912 | 0.1564 | - | - | - | - |
| 1.8829 | 1913 | 0.3665 | - | - | - | - |
| 1.8839 | 1914 | 0.155 | - | - | - | - |
| 1.8848 | 1915 | 0.8861 | - | - | - | - |
| 1.8858 | 1916 | 0.4216 | - | - | - | - |
| 1.8868 | 1917 | 0.3504 | - | - | - | - |
| 1.8878 | 1918 | 0.764 | - | - | - | - |
| 1.8888 | 1919 | 0.2264 | - | - | - | - |
| 1.8898 | 1920 | 0.7582 | - | - | - | - |
| 1.8907 | 1921 | 0.3519 | - | - | - | - |
| 1.8917 | 1922 | 0.4565 | - | - | - | - |
| 1.8927 | 1923 | 0.7107 | - | - | - | - |
| 1.8937 | 1924 | 0.6174 | - | - | - | - |
| 1.8947 | 1925 | 0.9543 | - | - | - | - |
| 1.8957 | 1926 | 0.4905 | - | - | - | - |
| 1.8967 | 1927 | 0.6205 | - | - | - | - |
| 1.8976 | 1928 | 0.6184 | - | - | - | - |
| 1.8986 | 1929 | 0.4762 | - | - | - | - |
| 1.8996 | 1930 | 0.5842 | - | - | - | - |
| 1.9006 | 1931 | 0.0988 | - | - | - | - |
| 1.9016 | 1932 | 0.7592 | - | - | - | - |
| 1.9026 | 1933 | 0.4981 | - | - | - | - |
| 1.9035 | 1934 | 0.3224 | - | - | - | - |
| 1.9045 | 1935 | 0.8206 | - | - | - | - |
| 1.9055 | 1936 | 0.781 | - | - | - | - |
| 1.9065 | 1937 | 0.6597 | - | - | - | - |
| 1.9075 | 1938 | 0.3783 | - | - | - | - |
| 1.9085 | 1939 | 0.3694 | - | - | - | - |
| 1.9094 | 1940 | 0.4454 | - | - | - | - |
| 1.9104 | 1941 | 0.2308 | - | - | - | - |
| 1.9114 | 1942 | 0.325 | - | - | - | - |
| 1.9124 | 1943 | 0.4636 | - | - | - | - |
| 1.9134 | 1944 | 0.2686 | - | - | - | - |
| 1.9144 | 1945 | 0.6857 | - | - | - | - |
| 1.9154 | 1946 | 0.5308 | - | - | - | - |
| 1.9163 | 1947 | 0.4918 | - | - | - | - |
| 1.9173 | 1948 | 0.4506 | - | - | - | - |
| 1.9183 | 1949 | 0.5216 | - | - | - | - |
| 1.9193 | 1950 | 0.7475 | - | - | - | - |
| 1.9203 | 1951 | 0.6182 | - | - | - | - |
| 1.9213 | 1952 | 0.3789 | - | - | - | - |
| 1.9222 | 1953 | 0.3469 | - | - | - | - |
| 1.9232 | 1954 | 0.5435 | - | - | - | - |
| 1.9242 | 1955 | 0.1886 | - | - | - | - |
| 1.9252 | 1956 | 0.7569 | - | - | - | - |
| 1.9262 | 1957 | 0.3396 | - | - | - | - |
| 1.9272 | 1958 | 0.5911 | - | - | - | - |
| 1.9281 | 1959 | 0.2211 | - | - | - | - |
| 1.9291 | 1960 | 0.4902 | - | - | - | - |
| 1.9301 | 1961 | 0.5863 | - | - | - | - |
| 1.9311 | 1962 | 0.3685 | - | - | - | - |
| 1.9321 | 1963 | 0.5296 | - | - | - | - |
| 1.9331 | 1964 | 0.2576 | - | - | - | - |
| 1.9341 | 1965 | 0.2258 | - | - | - | - |
| 1.9350 | 1966 | 0.4208 | - | - | - | - |
| 1.9360 | 1967 | 0.4088 | - | - | - | - |
| 1.9370 | 1968 | 0.4198 | - | - | - | - |
| 1.9380 | 1969 | 0.3591 | - | - | - | - |
| 1.9390 | 1970 | 0.2849 | - | - | - | - |
| 1.9400 | 1971 | 0.6841 | - | - | - | - |
| 1.9409 | 1972 | 0.1712 | - | - | - | - |
| 1.9419 | 1973 | 0.2629 | - | - | - | - |
| 1.9429 | 1974 | 0.444 | - | - | - | - |
| 1.9439 | 1975 | 0.1811 | - | - | - | - |
| 1.9449 | 1976 | 0.4874 | - | - | - | - |
| 1.9459 | 1977 | 0.6704 | - | - | - | - |
| 1.9469 | 1978 | 0.1352 | - | - | - | - |
| 1.9478 | 1979 | 0.243 | - | - | - | - |
| 1.9488 | 1980 | 0.7386 | - | - | - | - |
| 1.9498 | 1981 | 0.188 | - | - | - | - |
| 1.9508 | 1982 | 0.4885 | - | - | - | - |
| 1.9518 | 1983 | 0.398 | - | - | - | - |
| 1.9528 | 1984 | 0.4067 | - | - | - | - |
| 1.9537 | 1985 | 0.2526 | - | - | - | - |
| 1.9547 | 1986 | 0.4214 | - | - | - | - |
| 1.9557 | 1987 | 0.699 | - | - | - | - |
| 1.9567 | 1988 | 1.1089 | - | - | - | - |
| 1.9577 | 1989 | 0.6792 | 0.5615 | 0.6588 | 0.5221 | 0.6928 |
| 1.9587 | 1990 | 0.4697 | - | - | - | - |
| 1.9596 | 1991 | 0.1804 | - | - | - | - |
| 1.9606 | 1992 | 0.9363 | - | - | - | - |
| 1.9616 | 1993 | 0.315 | - | - | - | - |
| 1.9626 | 1994 | 0.216 | - | - | - | - |
| 1.9636 | 1995 | 1.0211 | - | - | - | - |
| 1.9646 | 1996 | 0.2225 | - | - | - | - |
| 1.9656 | 1997 | 0.3734 | - | - | - | - |
| 1.9665 | 1998 | 1.1127 | - | - | - | - |
| 1.9675 | 1999 | 0.5302 | - | - | - | - |
| 1.9685 | 2000 | 0.4619 | - | - | - | - |
| 1.9695 | 2001 | 0.4452 | - | - | - | - |
| 1.9705 | 2002 | 0.2555 | - | - | - | - |
| 1.9715 | 2003 | 0.3951 | - | - | - | - |
| 1.9724 | 2004 | 0.4926 | - | - | - | - |
| 1.9734 | 2005 | 0.4563 | - | - | - | - |
| 1.9744 | 2006 | 0.2664 | - | - | - | - |
| 1.9754 | 2007 | 0.5579 | - | - | - | - |
| 1.9764 | 2008 | 0.4412 | - | - | - | - |
| 1.9774 | 2009 | 0.641 | - | - | - | - |
| 1.9783 | 2010 | 0.2505 | - | - | - | - |
| 1.9793 | 2011 | 0.5773 | - | - | - | - |
| 1.9803 | 2012 | 0.4118 | - | - | - | - |
| 1.9813 | 2013 | 0.6585 | - | - | - | - |
| 1.9823 | 2014 | 1.0842 | - | - | - | - |
| 1.9833 | 2015 | 0.5697 | - | - | - | - |
| 1.9843 | 2016 | 0.4335 | - | - | - | - |
| 1.9852 | 2017 | 1.0189 | - | - | - | - |
| 1.9862 | 2018 | 0.7046 | - | - | - | - |
| 1.9872 | 2019 | 0.2414 | - | - | - | - |
| 1.9882 | 2020 | 0.3538 | - | - | - | - |
| 1.9892 | 2021 | 0.6771 | - | - | - | - |
| 1.9902 | 2022 | 0.4546 | - | - | - | - |
| 1.9911 | 2023 | 0.3169 | - | - | - | - |
| 1.9921 | 2024 | 0.4244 | - | - | - | - |
| 1.9931 | 2025 | 0.0684 | - | - | - | - |
| 1.9941 | 2026 | 0.4007 | - | - | - | - |
| 1.9951 | 2027 | 0.3198 | - | - | - | - |
| 1.9961 | 2028 | 0.1821 | - | - | - | - |
| 1.9970 | 2029 | 0.491 | - | - | - | - |
| 1.9980 | 2030 | 0.8449 | - | - | - | - |
| 1.9990 | 2031 | 0.2122 | - | - | - | - |
| 2.0 | 2032 | 0.212 | - | - | - | - |
| 2.0010 | 2033 | 0.5254 | - | - | - | - |
| 2.0020 | 2034 | 0.7473 | - | - | - | - |
| 2.0030 | 2035 | 0.0799 | - | - | - | - |
| 2.0039 | 2036 | 0.4975 | - | - | - | - |
| 2.0049 | 2037 | 0.4425 | - | - | - | - |
| 2.0059 | 2038 | 0.3234 | - | - | - | - |
| 2.0069 | 2039 | 0.3183 | - | - | - | - |
| 2.0079 | 2040 | 0.3073 | - | - | - | - |
| 2.0089 | 2041 | 0.2292 | - | - | - | - |
| 2.0098 | 2042 | 0.3874 | - | - | - | - |
| 2.0108 | 2043 | 0.6781 | - | - | - | - |
| 2.0118 | 2044 | 0.6645 | - | - | - | - |
| 2.0128 | 2045 | 0.2373 | - | - | - | - |
| 2.0138 | 2046 | 0.3813 | - | - | - | - |
| 2.0148 | 2047 | 0.88 | - | - | - | - |
| 2.0157 | 2048 | 0.3683 | - | - | - | - |
| 2.0167 | 2049 | 0.519 | - | - | - | - |
| 2.0177 | 2050 | 1.0128 | - | - | - | - |
| 2.0187 | 2051 | 1.1026 | - | - | - | - |
| 2.0197 | 2052 | 0.4198 | - | - | - | - |
| 2.0207 | 2053 | 0.1097 | - | - | - | - |
| 2.0217 | 2054 | 0.4641 | - | - | - | - |
| 2.0226 | 2055 | 0.4183 | - | - | - | - |
| 2.0236 | 2056 | 0.2043 | - | - | - | - |
| 2.0246 | 2057 | 0.7447 | - | - | - | - |
| 2.0256 | 2058 | 0.5261 | - | - | - | - |
| 2.0266 | 2059 | 1.0812 | - | - | - | - |
| 2.0276 | 2060 | 0.3421 | - | - | - | - |
| 2.0285 | 2061 | 0.5063 | - | - | - | - |
| 2.0295 | 2062 | 0.2861 | - | - | - | - |
| 2.0305 | 2063 | 0.0981 | - | - | - | - |
| 2.0315 | 2064 | 0.5772 | - | - | - | - |
| 2.0325 | 2065 | 0.0832 | - | - | - | - |
| 2.0335 | 2066 | 0.3156 | - | - | - | - |
| 2.0344 | 2067 | 0.1706 | - | - | - | - |
| 2.0354 | 2068 | 0.3911 | - | - | - | - |
| 2.0364 | 2069 | 0.6807 | - | - | - | - |
| 2.0374 | 2070 | 0.5363 | - | - | - | - |
| 2.0384 | 2071 | 0.5497 | - | - | - | - |
| 2.0394 | 2072 | 0.7298 | - | - | - | - |
| 2.0404 | 2073 | 0.3255 | - | - | - | - |
| 2.0413 | 2074 | 0.2934 | - | - | - | - |
| 2.0423 | 2075 | 0.2041 | - | - | - | - |
| 2.0433 | 2076 | 0.6235 | - | - | - | - |
| 2.0443 | 2077 | 0.4104 | - | - | - | - |
| 2.0453 | 2078 | 0.1305 | - | - | - | - |
| 2.0463 | 2079 | 0.1591 | - | - | - | - |
| 2.0472 | 2080 | 0.3531 | - | - | - | - |
| 2.0482 | 2081 | 0.2944 | - | - | - | - |
| 2.0492 | 2082 | 0.3121 | - | - | - | - |
| 2.0502 | 2083 | 0.5418 | - | - | - | - |
| 2.0512 | 2084 | 0.8162 | - | - | - | - |
| 2.0522 | 2085 | 0.4787 | - | - | - | - |
| 2.0531 | 2086 | 0.1146 | - | - | - | - |
| 2.0541 | 2087 | 0.2373 | - | - | - | - |
| 2.0551 | 2088 | 0.1548 | - | - | - | - |
| 2.0561 | 2089 | 0.4515 | - | - | - | - |
| 2.0571 | 2090 | 0.4699 | - | - | - | - |
| 2.0581 | 2091 | 0.3675 | - | - | - | - |
| 2.0591 | 2092 | 0.2537 | - | - | - | - |
| 2.0600 | 2093 | 0.4433 | - | - | - | - |
| 2.0610 | 2094 | 0.3595 | - | - | - | - |
| 2.0620 | 2095 | 0.4329 | - | - | - | - |
| 2.0630 | 2096 | 0.1803 | - | - | - | - |
| 2.0640 | 2097 | 0.6451 | - | - | - | - |
| 2.0650 | 2098 | 0.3992 | - | - | - | - |
| 2.0659 | 2099 | 0.2349 | - | - | - | - |
| 2.0669 | 2100 | 0.1526 | - | - | - | - |
| 2.0679 | 2101 | 0.133 | - | - | - | - |
| 2.0689 | 2102 | 0.8299 | - | - | - | - |
| 2.0699 | 2103 | 0.5157 | - | - | - | - |
| 2.0709 | 2104 | 0.4256 | - | - | - | - |
| 2.0719 | 2105 | 0.3434 | - | - | - | - |
| 2.0728 | 2106 | 0.3479 | - | - | - | - |
| 2.0738 | 2107 | 0.2604 | - | - | - | - |
| 2.0748 | 2108 | 0.3513 | - | - | - | - |
| 2.0758 | 2109 | 0.8243 | - | - | - | - |
| 2.0768 | 2110 | 0.2352 | - | - | - | - |
| 2.0778 | 2111 | 0.2082 | - | - | - | - |
| 2.0787 | 2112 | 0.145 | - | - | - | - |
| 2.0797 | 2113 | 1.3586 | - | - | - | - |
| 2.0807 | 2114 | 0.3679 | - | - | - | - |
| 2.0817 | 2115 | 0.3545 | - | - | - | - |
| 2.0827 | 2116 | 0.6441 | - | - | - | - |
| 2.0837 | 2117 | 0.8558 | - | - | - | - |
| 2.0846 | 2118 | 0.4696 | - | - | - | - |
| 2.0856 | 2119 | 0.8495 | - | - | - | - |
| 2.0866 | 2120 | 0.8995 | - | - | - | - |
| 2.0876 | 2121 | 0.3276 | - | - | - | - |
| 2.0886 | 2122 | 0.7393 | - | - | - | - |
| 2.0896 | 2123 | 0.048 | - | - | - | - |
| 2.0906 | 2124 | 0.2266 | - | - | - | - |
| 2.0915 | 2125 | 0.6785 | - | - | - | - |
| 2.0925 | 2126 | 0.3503 | - | - | - | - |
| 2.0935 | 2127 | 0.1894 | - | - | - | - |
| 2.0945 | 2128 | 1.2168 | - | - | - | - |
| 2.0955 | 2129 | 0.1664 | - | - | - | - |
| 2.0965 | 2130 | 0.3649 | - | - | - | - |
| 2.0974 | 2131 | 0.5949 | - | - | - | - |
| 2.0984 | 2132 | 0.571 | - | - | - | - |
| 2.0994 | 2133 | 0.3775 | - | - | - | - |
| 2.1004 | 2134 | 0.3978 | - | - | - | - |
| 2.1014 | 2135 | 0.4804 | - | - | - | - |
| 2.1024 | 2136 | 0.2534 | - | - | - | - |
| 2.1033 | 2137 | 0.2701 | - | - | - | - |
| 2.1043 | 2138 | 0.2538 | - | - | - | - |
| 2.1053 | 2139 | 0.6239 | - | - | - | - |
| 2.1063 | 2140 | 0.7077 | - | - | - | - |
| 2.1073 | 2141 | 0.1929 | - | - | - | - |
| 2.1083 | 2142 | 0.1367 | 0.5293 | 0.6488 | 0.5099 | 0.7068 |
| 2.1093 | 2143 | 0.1882 | - | - | - | - |
| 2.1102 | 2144 | 0.4297 | - | - | - | - |
| 2.1112 | 2145 | 0.5098 | - | - | - | - |
| 2.1122 | 2146 | 0.3554 | - | - | - | - |
| 2.1132 | 2147 | 0.5338 | - | - | - | - |
| 2.1142 | 2148 | 0.4045 | - | - | - | - |
| 2.1152 | 2149 | 0.6929 | - | - | - | - |
| 2.1161 | 2150 | 0.3397 | - | - | - | - |
| 2.1171 | 2151 | 0.4817 | - | - | - | - |
| 2.1181 | 2152 | 0.3459 | - | - | - | - |
| 2.1191 | 2153 | 0.6743 | - | - | - | - |
| 2.1201 | 2154 | 0.461 | - | - | - | - |
| 2.1211 | 2155 | 0.4665 | - | - | - | - |
| 2.1220 | 2156 | 0.2519 | - | - | - | - |
| 2.1230 | 2157 | 0.4271 | - | - | - | - |
| 2.1240 | 2158 | 0.1528 | - | - | - | - |
| 2.125 | 2159 | 0.3622 | - | - | - | - |
| 2.1260 | 2160 | 0.2196 | - | - | - | - |
| 2.1270 | 2161 | 0.2029 | - | - | - | - |
| 2.1280 | 2162 | 0.7731 | - | - | - | - |
| 2.1289 | 2163 | 0.2184 | - | - | - | - |
| 2.1299 | 2164 | 0.4623 | - | - | - | - |
| 2.1309 | 2165 | 0.1743 | - | - | - | - |
| 2.1319 | 2166 | 0.1833 | - | - | - | - |
| 2.1329 | 2167 | 0.274 | - | - | - | - |
| 2.1339 | 2168 | 0.8368 | - | - | - | - |
| 2.1348 | 2169 | 0.2218 | - | - | - | - |
| 2.1358 | 2170 | 0.3106 | - | - | - | - |
| 2.1368 | 2171 | 0.6703 | - | - | - | - |
| 2.1378 | 2172 | 0.2926 | - | - | - | - |
| 2.1388 | 2173 | 0.1584 | - | - | - | - |
| 2.1398 | 2174 | 0.2456 | - | - | - | - |
| 2.1407 | 2175 | 0.4458 | - | - | - | - |
| 2.1417 | 2176 | 0.494 | - | - | - | - |
| 2.1427 | 2177 | 0.4601 | - | - | - | - |
| 2.1437 | 2178 | 0.6571 | - | - | - | - |
| 2.1447 | 2179 | 0.1915 | - | - | - | - |
| 2.1457 | 2180 | 0.2892 | - | - | - | - |
| 2.1467 | 2181 | 0.3592 | - | - | - | - |
| 2.1476 | 2182 | 0.89 | - | - | - | - |
| 2.1486 | 2183 | 0.4856 | - | - | - | - |
| 2.1496 | 2184 | 0.2403 | - | - | - | - |
| 2.1506 | 2185 | 0.263 | - | - | - | - |
| 2.1516 | 2186 | 0.5816 | - | - | - | - |
| 2.1526 | 2187 | 0.2912 | - | - | - | - |
| 2.1535 | 2188 | 0.2722 | - | - | - | - |
| 2.1545 | 2189 | 0.3503 | - | - | - | - |
| 2.1555 | 2190 | 0.3788 | - | - | - | - |
| 2.1565 | 2191 | 0.4935 | - | - | - | - |
| 2.1575 | 2192 | 0.2505 | - | - | - | - |
| 2.1585 | 2193 | 0.3122 | - | - | - | - |
| 2.1594 | 2194 | 0.2363 | - | - | - | - |
| 2.1604 | 2195 | 0.4411 | - | - | - | - |
| 2.1614 | 2196 | 0.5624 | - | - | - | - |
| 2.1624 | 2197 | 0.1555 | - | - | - | - |
| 2.1634 | 2198 | 0.4505 | - | - | - | - |
| 2.1644 | 2199 | 0.2699 | - | - | - | - |
| 2.1654 | 2200 | 0.2575 | - | - | - | - |
| 2.1663 | 2201 | 0.2773 | - | - | - | - |
| 2.1673 | 2202 | 0.7659 | - | - | - | - |
| 2.1683 | 2203 | 0.5827 | - | - | - | - |
| 2.1693 | 2204 | 0.4094 | - | - | - | - |
| 2.1703 | 2205 | 0.5912 | - | - | - | - |
| 2.1713 | 2206 | 0.2814 | - | - | - | - |
| 2.1722 | 2207 | 0.6024 | - | - | - | - |
| 2.1732 | 2208 | 0.4436 | - | - | - | - |
| 2.1742 | 2209 | 0.2696 | - | - | - | - |
| 2.1752 | 2210 | 0.1876 | - | - | - | - |
| 2.1762 | 2211 | 0.4322 | - | - | - | - |
| 2.1772 | 2212 | 0.401 | - | - | - | - |
| 2.1781 | 2213 | 0.4703 | - | - | - | - |
| 2.1791 | 2214 | 0.2829 | - | - | - | - |
| 2.1801 | 2215 | 0.217 | - | - | - | - |
| 2.1811 | 2216 | 0.2039 | - | - | - | - |
| 2.1821 | 2217 | 0.3816 | - | - | - | - |
| 2.1831 | 2218 | 0.3872 | - | - | - | - |
| 2.1841 | 2219 | 0.5381 | - | - | - | - |
| 2.1850 | 2220 | 0.3297 | - | - | - | - |
| 2.1860 | 2221 | 0.7472 | - | - | - | - |
| 2.1870 | 2222 | 0.409 | - | - | - | - |
| 2.1880 | 2223 | 0.3398 | - | - | - | - |
| 2.1890 | 2224 | 0.5215 | - | - | - | - |
| 2.1900 | 2225 | 0.3045 | - | - | - | - |
| 2.1909 | 2226 | 0.195 | - | - | - | - |
| 2.1919 | 2227 | 0.457 | - | - | - | - |
| 2.1929 | 2228 | 0.387 | - | - | - | - |
| 2.1939 | 2229 | 0.3079 | - | - | - | - |
| 2.1949 | 2230 | 0.7337 | - | - | - | - |
| 2.1959 | 2231 | 0.3105 | - | - | - | - |
| 2.1969 | 2232 | 0.4746 | - | - | - | - |
| 2.1978 | 2233 | 0.4945 | - | - | - | - |
| 2.1988 | 2234 | 0.7614 | - | - | - | - |
| 2.1998 | 2235 | 0.5402 | - | - | - | - |
| 2.2008 | 2236 | 0.7004 | - | - | - | - |
| 2.2018 | 2237 | 0.2853 | - | - | - | - |
| 2.2028 | 2238 | 0.061 | - | - | - | - |
| 2.2037 | 2239 | 0.9005 | - | - | - | - |
| 2.2047 | 2240 | 0.4169 | - | - | - | - |
| 2.2057 | 2241 | 0.5792 | - | - | - | - |
| 2.2067 | 2242 | 0.2046 | - | - | - | - |
| 2.2077 | 2243 | 0.876 | - | - | - | - |
| 2.2087 | 2244 | 0.3884 | - | - | - | - |
| 2.2096 | 2245 | 0.826 | - | - | - | - |
| 2.2106 | 2246 | 0.3453 | - | - | - | - |
| 2.2116 | 2247 | 0.1741 | - | - | - | - |
| 2.2126 | 2248 | 0.1238 | - | - | - | - |
| 2.2136 | 2249 | 0.3539 | - | - | - | - |
| 2.2146 | 2250 | 0.6756 | - | - | - | - |
| 2.2156 | 2251 | 0.2457 | - | - | - | - |
| 2.2165 | 2252 | 0.1128 | - | - | - | - |
| 2.2175 | 2253 | 0.5331 | - | - | - | - |
| 2.2185 | 2254 | 0.499 | - | - | - | - |
| 2.2195 | 2255 | 0.9985 | - | - | - | - |
| 2.2205 | 2256 | 0.5565 | - | - | - | - |
| 2.2215 | 2257 | 0.545 | - | - | - | - |
| 2.2224 | 2258 | 0.6449 | - | - | - | - |
| 2.2234 | 2259 | 0.8312 | - | - | - | - |
| 2.2244 | 2260 | 0.155 | - | - | - | - |
| 2.2254 | 2261 | 0.8201 | - | - | - | - |
| 2.2264 | 2262 | 0.2976 | - | - | - | - |
| 2.2274 | 2263 | 0.1666 | - | - | - | - |
| 2.2283 | 2264 | 0.2341 | - | - | - | - |
| 2.2293 | 2265 | 0.1533 | - | - | - | - |
| 2.2303 | 2266 | 0.2068 | - | - | - | - |
| 2.2313 | 2267 | 0.2045 | - | - | - | - |
| 2.2323 | 2268 | 0.2308 | - | - | - | - |
| 2.2333 | 2269 | 0.1454 | - | - | - | - |
| 2.2343 | 2270 | 0.2369 | - | - | - | - |
| 2.2352 | 2271 | 0.1508 | - | - | - | - |
| 2.2362 | 2272 | 0.4161 | - | - | - | - |
| 2.2372 | 2273 | 0.2739 | - | - | - | - |
| 2.2382 | 2274 | 0.7653 | - | - | - | - |
| 2.2392 | 2275 | 0.3751 | - | - | - | - |
| 2.2402 | 2276 | 0.6602 | - | - | - | - |
| 2.2411 | 2277 | 0.2636 | - | - | - | - |
| 2.2421 | 2278 | 0.3619 | - | - | - | - |
| 2.2431 | 2279 | 1.2106 | - | - | - | - |
| 2.2441 | 2280 | 0.5429 | - | - | - | - |
| 2.2451 | 2281 | 0.2715 | - | - | - | - |
| 2.2461 | 2282 | 0.3696 | - | - | - | - |
| 2.2470 | 2283 | 0.5001 | - | - | - | - |
| 2.2480 | 2284 | 0.263 | - | - | - | - |
| 2.2490 | 2285 | 0.2834 | - | - | - | - |
| 2.25 | 2286 | 0.3014 | - | - | - | - |
| 2.2510 | 2287 | 0.1766 | - | - | - | - |
| 2.2520 | 2288 | 0.452 | - | - | - | - |
| 2.2530 | 2289 | 0.3325 | - | - | - | - |
| 2.2539 | 2290 | 0.3046 | - | - | - | - |
| 2.2549 | 2291 | 0.0783 | - | - | - | - |
| 2.2559 | 2292 | 0.5475 | - | - | - | - |
| 2.2569 | 2293 | 0.1652 | - | - | - | - |
| 2.2579 | 2294 | 0.2344 | - | - | - | - |
| 2.2589 | 2295 | 0.6825 | 0.5027 | 0.6741 | 0.5178 | 0.7015 |
| 2.2598 | 2296 | 0.172 | - | - | - | - |
| 2.2608 | 2297 | 0.1702 | - | - | - | - |
| 2.2618 | 2298 | 0.2923 | - | - | - | - |
| 2.2628 | 2299 | 0.9845 | - | - | - | - |
| 2.2638 | 2300 | 0.3264 | - | - | - | - |
| 2.2648 | 2301 | 0.3324 | - | - | - | - |
| 2.2657 | 2302 | 0.133 | - | - | - | - |
| 2.2667 | 2303 | 0.5128 | - | - | - | - |
| 2.2677 | 2304 | 0.3315 | - | - | - | - |
| 2.2687 | 2305 | 0.8059 | - | - | - | - |
| 2.2697 | 2306 | 0.4871 | - | - | - | - |
| 2.2707 | 2307 | 0.4682 | - | - | - | - |
| 2.2717 | 2308 | 0.3445 | - | - | - | - |
| 2.2726 | 2309 | 0.6977 | - | - | - | - |
| 2.2736 | 2310 | 0.2097 | - | - | - | - |
| 2.2746 | 2311 | 0.9707 | - | - | - | - |
| 2.2756 | 2312 | 0.3347 | - | - | - | - |
| 2.2766 | 2313 | 0.1578 | - | - | - | - |
| 2.2776 | 2314 | 0.2311 | - | - | - | - |
| 2.2785 | 2315 | 0.3391 | - | - | - | - |
| 2.2795 | 2316 | 0.3266 | - | - | - | - |
| 2.2805 | 2317 | 0.4752 | - | - | - | - |
| 2.2815 | 2318 | 0.3747 | - | - | - | - |
| 2.2825 | 2319 | 0.2869 | - | - | - | - |
| 2.2835 | 2320 | 0.2732 | - | - | - | - |
| 2.2844 | 2321 | 0.5805 | - | - | - | - |
| 2.2854 | 2322 | 0.6248 | - | - | - | - |
| 2.2864 | 2323 | 0.1827 | - | - | - | - |
| 2.2874 | 2324 | 0.0837 | - | - | - | - |
| 2.2884 | 2325 | 0.3561 | - | - | - | - |
| 2.2894 | 2326 | 0.2894 | - | - | - | - |
| 2.2904 | 2327 | 0.4555 | - | - | - | - |
| 2.2913 | 2328 | 0.5762 | - | - | - | - |
| 2.2923 | 2329 | 0.6998 | - | - | - | - |
| 2.2933 | 2330 | 0.548 | - | - | - | - |
| 2.2943 | 2331 | 0.4924 | - | - | - | - |
| 2.2953 | 2332 | 0.5409 | - | - | - | - |
| 2.2963 | 2333 | 0.7607 | - | - | - | - |
| 2.2972 | 2334 | 0.4493 | - | - | - | - |
| 2.2982 | 2335 | 0.1872 | - | - | - | - |
| 2.2992 | 2336 | 0.2478 | - | - | - | - |
| 2.3002 | 2337 | 0.4008 | - | - | - | - |
| 2.3012 | 2338 | 0.2723 | - | - | - | - |
| 2.3022 | 2339 | 0.4008 | - | - | - | - |
| 2.3031 | 2340 | 0.4166 | - | - | - | - |
| 2.3041 | 2341 | 0.2233 | - | - | - | - |
| 2.3051 | 2342 | 0.606 | - | - | - | - |
| 2.3061 | 2343 | 0.7489 | - | - | - | - |
| 2.3071 | 2344 | 0.6439 | - | - | - | - |
| 2.3081 | 2345 | 0.5636 | - | - | - | - |
| 2.3091 | 2346 | 0.1038 | - | - | - | - |
| 2.3100 | 2347 | 0.5164 | - | - | - | - |
| 2.3110 | 2348 | 0.3576 | - | - | - | - |
| 2.3120 | 2349 | 0.5828 | - | - | - | - |
| 2.3130 | 2350 | 0.7128 | - | - | - | - |
| 2.3140 | 2351 | 0.4945 | - | - | - | - |
| 2.3150 | 2352 | 0.3841 | - | - | - | - |
| 2.3159 | 2353 | 0.598 | - | - | - | - |
| 2.3169 | 2354 | 0.2705 | - | - | - | - |
| 2.3179 | 2355 | 0.2488 | - | - | - | - |
| 2.3189 | 2356 | 0.2014 | - | - | - | - |
| 2.3199 | 2357 | 0.1288 | - | - | - | - |
| 2.3209 | 2358 | 0.2358 | - | - | - | - |
| 2.3219 | 2359 | 0.2984 | - | - | - | - |
| 2.3228 | 2360 | 0.1404 | - | - | - | - |
| 2.3238 | 2361 | 0.1777 | - | - | - | - |
| 2.3248 | 2362 | 0.7692 | - | - | - | - |
| 2.3258 | 2363 | 0.1564 | - | - | - | - |
| 2.3268 | 2364 | 0.1589 | - | - | - | - |
| 2.3278 | 2365 | 0.517 | - | - | - | - |
| 2.3287 | 2366 | 0.0561 | - | - | - | - |
| 2.3297 | 2367 | 0.6459 | - | - | - | - |
| 2.3307 | 2368 | 0.3254 | - | - | - | - |
| 2.3317 | 2369 | 0.8167 | - | - | - | - |
| 2.3327 | 2370 | 0.6455 | - | - | - | - |
| 2.3337 | 2371 | 0.4716 | - | - | - | - |
| 2.3346 | 2372 | 0.4538 | - | - | - | - |
| 2.3356 | 2373 | 0.2246 | - | - | - | - |
| 2.3366 | 2374 | 0.2168 | - | - | - | - |
| 2.3376 | 2375 | 0.1789 | - | - | - | - |
| 2.3386 | 2376 | 0.6535 | - | - | - | - |
| 2.3396 | 2377 | 0.1169 | - | - | - | - |
| 2.3406 | 2378 | 0.3429 | - | - | - | - |
| 2.3415 | 2379 | 0.4071 | - | - | - | - |
| 2.3425 | 2380 | 0.2805 | - | - | - | - |
| 2.3435 | 2381 | 0.3936 | - | - | - | - |
| 2.3445 | 2382 | 0.5997 | - | - | - | - |
| 2.3455 | 2383 | 0.4108 | - | - | - | - |
| 2.3465 | 2384 | 0.0802 | - | - | - | - |
| 2.3474 | 2385 | 0.428 | - | - | - | - |
| 2.3484 | 2386 | 0.9649 | - | - | - | - |
| 2.3494 | 2387 | 0.3741 | - | - | - | - |
| 2.3504 | 2388 | 0.2907 | - | - | - | - |
| 2.3514 | 2389 | 0.1665 | - | - | - | - |
| 2.3524 | 2390 | 0.464 | - | - | - | - |
| 2.3533 | 2391 | 0.2636 | - | - | - | - |
| 2.3543 | 2392 | 0.1748 | - | - | - | - |
| 2.3553 | 2393 | 0.2673 | - | - | - | - |
| 2.3563 | 2394 | 0.4091 | - | - | - | - |
| 2.3573 | 2395 | 0.3149 | - | - | - | - |
| 2.3583 | 2396 | 0.222 | - | - | - | - |
| 2.3593 | 2397 | 0.3191 | - | - | - | - |
| 2.3602 | 2398 | 0.6364 | - | - | - | - |
| 2.3612 | 2399 | 0.3431 | - | - | - | - |
| 2.3622 | 2400 | 0.3021 | - | - | - | - |
| 2.3632 | 2401 | 0.5573 | - | - | - | - |
| 2.3642 | 2402 | 0.3081 | - | - | - | - |
| 2.3652 | 2403 | 0.3263 | - | - | - | - |
| 2.3661 | 2404 | 0.345 | - | - | - | - |
| 2.3671 | 2405 | 0.2477 | - | - | - | - |
| 2.3681 | 2406 | 0.5129 | - | - | - | - |
| 2.3691 | 2407 | 0.1907 | - | - | - | - |
| 2.3701 | 2408 | 0.5318 | - | - | - | - |
| 2.3711 | 2409 | 0.5115 | - | - | - | - |
| 2.3720 | 2410 | 0.5919 | - | - | - | - |
| 2.3730 | 2411 | 0.2424 | - | - | - | - |
| 2.3740 | 2412 | 0.3523 | - | - | - | - |
| 2.375 | 2413 | 0.2838 | - | - | - | - |
| 2.3760 | 2414 | 0.5143 | - | - | - | - |
| 2.3770 | 2415 | 0.2617 | - | - | - | - |
| 2.3780 | 2416 | 0.2902 | - | - | - | - |
| 2.3789 | 2417 | 0.2989 | - | - | - | - |
| 2.3799 | 2418 | 0.1996 | - | - | - | - |
| 2.3809 | 2419 | 0.3886 | - | - | - | - |
| 2.3819 | 2420 | 0.884 | - | - | - | - |
| 2.3829 | 2421 | 0.311 | - | - | - | - |
| 2.3839 | 2422 | 0.3463 | - | - | - | - |
| 2.3848 | 2423 | 0.3554 | - | - | - | - |
| 2.3858 | 2424 | 0.4 | - | - | - | - |
| 2.3868 | 2425 | 0.271 | - | - | - | - |
| 2.3878 | 2426 | 0.3827 | - | - | - | - |
| 2.3888 | 2427 | 0.3209 | - | - | - | - |
| 2.3898 | 2428 | 0.3825 | - | - | - | - |
| 2.3907 | 2429 | 0.4422 | - | - | - | - |
| 2.3917 | 2430 | 0.2985 | - | - | - | - |
| 2.3927 | 2431 | 0.0181 | - | - | - | - |
| 2.3937 | 2432 | 0.7523 | - | - | - | - |
| 2.3947 | 2433 | 0.1871 | - | - | - | - |
| 2.3957 | 2434 | 0.4331 | - | - | - | - |
| 2.3967 | 2435 | 0.0969 | - | - | - | - |
| 2.3976 | 2436 | 0.6248 | - | - | - | - |
| 2.3986 | 2437 | 0.177 | - | - | - | - |
| 2.3996 | 2438 | 0.4363 | - | - | - | - |
| 2.4006 | 2439 | 0.6808 | - | - | - | - |
| 2.4016 | 2440 | 0.3351 | - | - | - | - |
| 2.4026 | 2441 | 0.1954 | - | - | - | - |
| 2.4035 | 2442 | 0.4625 | - | - | - | - |
| 2.4045 | 2443 | 0.1783 | - | - | - | - |
| 2.4055 | 2444 | 0.3819 | - | - | - | - |
| 2.4065 | 2445 | 0.7562 | - | - | - | - |
| 2.4075 | 2446 | 0.154 | - | - | - | - |
| 2.4085 | 2447 | 0.5065 | - | - | - | - |
| 2.4094 | 2448 | 0.3614 | 0.5045 | 0.6699 | 0.5129 | 0.7047 |
| 2.4104 | 2449 | 0.261 | - | - | - | - |
| 2.4114 | 2450 | 0.0852 | - | - | - | - |
| 2.4124 | 2451 | 0.252 | - | - | - | - |
| 2.4134 | 2452 | 0.057 | - | - | - | - |
| 2.4144 | 2453 | 0.7811 | - | - | - | - |
| 2.4154 | 2454 | 0.3099 | - | - | - | - |
| 2.4163 | 2455 | 0.1505 | - | - | - | - |
| 2.4173 | 2456 | 0.1391 | - | - | - | - |
| 2.4183 | 2457 | 0.2339 | - | - | - | - |
| 2.4193 | 2458 | 0.3976 | - | - | - | - |
| 2.4203 | 2459 | 0.3867 | - | - | - | - |
| 2.4213 | 2460 | 0.5535 | - | - | - | - |
| 2.4222 | 2461 | 0.334 | - | - | - | - |
| 2.4232 | 2462 | 0.1176 | - | - | - | - |
| 2.4242 | 2463 | 0.363 | - | - | - | - |
| 2.4252 | 2464 | 0.6583 | - | - | - | - |
| 2.4262 | 2465 | 0.4029 | - | - | - | - |
| 2.4272 | 2466 | 0.3915 | - | - | - | - |
| 2.4281 | 2467 | 0.2261 | - | - | - | - |
| 2.4291 | 2468 | 0.3856 | - | - | - | - |
| 2.4301 | 2469 | 0.4336 | - | - | - | - |
| 2.4311 | 2470 | 0.4369 | - | - | - | - |
| 2.4321 | 2471 | 0.1303 | - | - | - | - |
| 2.4331 | 2472 | 0.6326 | - | - | - | - |
| 2.4341 | 2473 | 0.1735 | - | - | - | - |
| 2.4350 | 2474 | 0.5125 | - | - | - | - |
| 2.4360 | 2475 | 0.1103 | - | - | - | - |
| 2.4370 | 2476 | 0.2421 | - | - | - | - |
| 2.4380 | 2477 | 0.2513 | - | - | - | - |
| 2.4390 | 2478 | 0.1199 | - | - | - | - |
| 2.4400 | 2479 | 0.1829 | - | - | - | - |
| 2.4409 | 2480 | 0.2527 | - | - | - | - |
| 2.4419 | 2481 | 0.2036 | - | - | - | - |
| 2.4429 | 2482 | 0.4078 | - | - | - | - |
| 2.4439 | 2483 | 0.2764 | - | - | - | - |
| 2.4449 | 2484 | 0.4487 | - | - | - | - |
| 2.4459 | 2485 | 0.6344 | - | - | - | - |
| 2.4469 | 2486 | 0.1742 | - | - | - | - |
| 2.4478 | 2487 | 0.5259 | - | - | - | - |
| 2.4488 | 2488 | 0.6818 | - | - | - | - |
| 2.4498 | 2489 | 0.7824 | - | - | - | - |
| 2.4508 | 2490 | 0.0713 | - | - | - | - |
| 2.4518 | 2491 | 0.2966 | - | - | - | - |
| 2.4528 | 2492 | 0.7014 | - | - | - | - |
| 2.4537 | 2493 | 0.1383 | - | - | - | - |
| 2.4547 | 2494 | 0.1846 | - | - | - | - |
| 2.4557 | 2495 | 0.4537 | - | - | - | - |
| 2.4567 | 2496 | 0.2155 | - | - | - | - |
| 2.4577 | 2497 | 0.4813 | - | - | - | - |
| 2.4587 | 2498 | 0.6803 | - | - | - | - |
| 2.4596 | 2499 | 0.0744 | - | - | - | - |
| 2.4606 | 2500 | 0.451 | - | - | - | - |
| 2.4616 | 2501 | 0.4568 | - | - | - | - |
| 2.4626 | 2502 | 0.1182 | - | - | - | - |
| 2.4636 | 2503 | 0.3563 | - | - | - | - |
| 2.4646 | 2504 | 0.2821 | - | - | - | - |
| 2.4656 | 2505 | 0.1239 | - | - | - | - |
| 2.4665 | 2506 | 0.5076 | - | - | - | - |
| 2.4675 | 2507 | 0.2629 | - | - | - | - |
| 2.4685 | 2508 | 0.362 | - | - | - | - |
| 2.4695 | 2509 | 0.1892 | - | - | - | - |
| 2.4705 | 2510 | 0.2334 | - | - | - | - |
| 2.4715 | 2511 | 0.1624 | - | - | - | - |
| 2.4724 | 2512 | 0.2166 | - | - | - | - |
| 2.4734 | 2513 | 0.2771 | - | - | - | - |
| 2.4744 | 2514 | 0.4421 | - | - | - | - |
| 2.4754 | 2515 | 0.4224 | - | - | - | - |
| 2.4764 | 2516 | 0.5839 | - | - | - | - |
| 2.4774 | 2517 | 0.2874 | - | - | - | - |
| 2.4783 | 2518 | 0.3557 | - | - | - | - |
| 2.4793 | 2519 | 0.3501 | - | - | - | - |
| 2.4803 | 2520 | 0.2368 | - | - | - | - |
| 2.4813 | 2521 | 0.5408 | - | - | - | - |
| 2.4823 | 2522 | 0.2134 | - | - | - | - |
| 2.4833 | 2523 | 0.9646 | - | - | - | - |
| 2.4843 | 2524 | 0.7589 | - | - | - | - |
| 2.4852 | 2525 | 0.2106 | - | - | - | - |
| 2.4862 | 2526 | 0.2096 | - | - | - | - |
| 2.4872 | 2527 | 0.4391 | - | - | - | - |
| 2.4882 | 2528 | 0.2735 | - | - | - | - |
| 2.4892 | 2529 | 0.4712 | - | - | - | - |
| 2.4902 | 2530 | 0.2503 | - | - | - | - |
| 2.4911 | 2531 | 0.4035 | - | - | - | - |
| 2.4921 | 2532 | 0.4989 | - | - | - | - |
| 2.4931 | 2533 | 0.4082 | - | - | - | - |
| 2.4941 | 2534 | 0.297 | - | - | - | - |
| 2.4951 | 2535 | 0.178 | - | - | - | - |
| 2.4961 | 2536 | 0.3749 | - | - | - | - |
| 2.4970 | 2537 | 0.2872 | - | - | - | - |
| 2.4980 | 2538 | 0.1993 | - | - | - | - |
| 2.4990 | 2539 | 0.4424 | - | - | - | - |
| 2.5 | 2540 | 0.4321 | - | - | - | - |
| 2.5010 | 2541 | 0.2728 | - | - | - | - |
| 2.5020 | 2542 | 0.1387 | - | - | - | - |
| 2.5030 | 2543 | 1.0402 | - | - | - | - |
| 2.5039 | 2544 | 0.4153 | - | - | - | - |
| 2.5049 | 2545 | 0.4845 | - | - | - | - |
| 2.5059 | 2546 | 0.4674 | - | - | - | - |
| 2.5069 | 2547 | 0.2211 | - | - | - | - |
| 2.5079 | 2548 | 0.3532 | - | - | - | - |
| 2.5089 | 2549 | 0.2734 | - | - | - | - |
| 2.5098 | 2550 | 0.3015 | - | - | - | - |
| 2.5108 | 2551 | 0.0508 | - | - | - | - |
| 2.5118 | 2552 | 0.5125 | - | - | - | - |
| 2.5128 | 2553 | 0.0729 | - | - | - | - |
| 2.5138 | 2554 | 0.376 | - | - | - | - |
| 2.5148 | 2555 | 0.2335 | - | - | - | - |
| 2.5157 | 2556 | 0.2233 | - | - | - | - |
| 2.5167 | 2557 | 0.257 | - | - | - | - |
| 2.5177 | 2558 | 0.6108 | - | - | - | - |
| 2.5187 | 2559 | 0.0648 | - | - | - | - |
| 2.5197 | 2560 | 0.3249 | - | - | - | - |
| 2.5207 | 2561 | 0.3661 | - | - | - | - |
| 2.5217 | 2562 | 0.1489 | - | - | - | - |
| 2.5226 | 2563 | 0.1006 | - | - | - | - |
| 2.5236 | 2564 | 0.205 | - | - | - | - |
| 2.5246 | 2565 | 0.132 | - | - | - | - |
| 2.5256 | 2566 | 0.4317 | - | - | - | - |
| 2.5266 | 2567 | 0.4741 | - | - | - | - |
| 2.5276 | 2568 | 0.3413 | - | - | - | - |
| 2.5285 | 2569 | 0.7061 | - | - | - | - |
| 2.5295 | 2570 | 0.3047 | - | - | - | - |
| 2.5305 | 2571 | 0.79 | - | - | - | - |
| 2.5315 | 2572 | 0.4705 | - | - | - | - |
| 2.5325 | 2573 | 0.0915 | - | - | - | - |
| 2.5335 | 2574 | 0.4268 | - | - | - | - |
| 2.5344 | 2575 | 0.3548 | - | - | - | - |
| 2.5354 | 2576 | 0.2926 | - | - | - | - |
| 2.5364 | 2577 | 0.4319 | - | - | - | - |
| 2.5374 | 2578 | 0.293 | - | - | - | - |
| 2.5384 | 2579 | 0.4523 | - | - | - | - |
| 2.5394 | 2580 | 0.3576 | - | - | - | - |
| 2.5404 | 2581 | 0.3131 | - | - | - | - |
| 2.5413 | 2582 | 0.1289 | - | - | - | - |
| 2.5423 | 2583 | 0.2224 | - | - | - | - |
| 2.5433 | 2584 | 0.2187 | - | - | - | - |
| 2.5443 | 2585 | 0.1808 | - | - | - | - |
| 2.5453 | 2586 | 0.5719 | - | - | - | - |
| 2.5463 | 2587 | 0.3357 | - | - | - | - |
| 2.5472 | 2588 | 0.4923 | - | - | - | - |
| 2.5482 | 2589 | 0.7231 | - | - | - | - |
| 2.5492 | 2590 | 0.5006 | - | - | - | - |
| 2.5502 | 2591 | 0.6329 | - | - | - | - |
| 2.5512 | 2592 | 0.23 | - | - | - | - |
| 2.5522 | 2593 | 0.158 | - | - | - | - |
| 2.5531 | 2594 | 0.1245 | - | - | - | - |
| 2.5541 | 2595 | 0.2352 | - | - | - | - |
| 2.5551 | 2596 | 0.6465 | - | - | - | - |
| 2.5561 | 2597 | 0.3682 | - | - | - | - |
| 2.5571 | 2598 | 0.2663 | - | - | - | - |
| 2.5581 | 2599 | 0.2182 | - | - | - | - |
| 2.5591 | 2600 | 0.2484 | - | - | - | - |
| 2.5600 | 2601 | 0.1932 | 0.4917 | 0.6688 | 0.5230 | 0.6985 |
| 2.5610 | 2602 | 0.0946 | - | - | - | - |
| 2.5620 | 2603 | 0.3778 | - | - | - | - |
| 2.5630 | 2604 | 0.1033 | - | - | - | - |
| 2.5640 | 2605 | 0.4318 | - | - | - | - |
| 2.5650 | 2606 | 0.2179 | - | - | - | - |
| 2.5659 | 2607 | 0.0971 | - | - | - | - |
| 2.5669 | 2608 | 0.4726 | - | - | - | - |
| 2.5679 | 2609 | 0.3389 | - | - | - | - |
| 2.5689 | 2610 | 0.1408 | - | - | - | - |
| 2.5699 | 2611 | 0.0972 | - | - | - | - |
| 2.5709 | 2612 | 0.1531 | - | - | - | - |
| 2.5719 | 2613 | 0.1374 | - | - | - | - |
| 2.5728 | 2614 | 0.2092 | - | - | - | - |
| 2.5738 | 2615 | 0.1692 | - | - | - | - |
| 2.5748 | 2616 | 0.412 | - | - | - | - |
| 2.5758 | 2617 | 0.0756 | - | - | - | - |
| 2.5768 | 2618 | 0.8034 | - | - | - | - |
| 2.5778 | 2619 | 0.8405 | - | - | - | - |
| 2.5787 | 2620 | 0.2442 | - | - | - | - |
| 2.5797 | 2621 | 0.3537 | - | - | - | - |
| 2.5807 | 2622 | 0.4989 | - | - | - | - |
| 2.5817 | 2623 | 0.4902 | - | - | - | - |
| 2.5827 | 2624 | 0.8908 | - | - | - | - |
| 2.5837 | 2625 | 0.1239 | - | - | - | - |
| 2.5846 | 2626 | 0.4208 | - | - | - | - |
| 2.5856 | 2627 | 0.3947 | - | - | - | - |
| 2.5866 | 2628 | 0.4709 | - | - | - | - |
| 2.5876 | 2629 | 0.452 | - | - | - | - |
| 2.5886 | 2630 | 0.1296 | - | - | - | - |
| 2.5896 | 2631 | 0.3835 | - | - | - | - |
| 2.5906 | 2632 | 0.3944 | - | - | - | - |
| 2.5915 | 2633 | 0.7798 | - | - | - | - |
| 2.5925 | 2634 | 0.381 | - | - | - | - |
| 2.5935 | 2635 | 0.5957 | - | - | - | - |
| 2.5945 | 2636 | 0.0761 | - | - | - | - |
| 2.5955 | 2637 | 0.1285 | - | - | - | - |
| 2.5965 | 2638 | 0.395 | - | - | - | - |
| 2.5974 | 2639 | 0.8514 | - | - | - | - |
| 2.5984 | 2640 | 0.2844 | - | - | - | - |
| 2.5994 | 2641 | 0.236 | - | - | - | - |
| 2.6004 | 2642 | 0.3958 | - | - | - | - |
| 2.6014 | 2643 | 0.4496 | - | - | - | - |
| 2.6024 | 2644 | 0.6127 | - | - | - | - |
| 2.6033 | 2645 | 0.2044 | - | - | - | - |
| 2.6043 | 2646 | 0.1861 | - | - | - | - |
| 2.6053 | 2647 | 0.1584 | - | - | - | - |
| 2.6063 | 2648 | 0.3345 | - | - | - | - |
| 2.6073 | 2649 | 0.2336 | - | - | - | - |
| 2.6083 | 2650 | 0.2932 | - | - | - | - |
| 2.6093 | 2651 | 0.2814 | - | - | - | - |
| 2.6102 | 2652 | 0.4036 | - | - | - | - |
| 2.6112 | 2653 | 0.3042 | - | - | - | - |
| 2.6122 | 2654 | 0.42 | - | - | - | - |
| 2.6132 | 2655 | 0.2876 | - | - | - | - |
| 2.6142 | 2656 | 0.3322 | - | - | - | - |
| 2.6152 | 2657 | 0.3078 | - | - | - | - |
| 2.6161 | 2658 | 0.3052 | - | - | - | - |
| 2.6171 | 2659 | 0.6088 | - | - | - | - |
| 2.6181 | 2660 | 0.2831 | - | - | - | - |
| 2.6191 | 2661 | 0.5751 | - | - | - | - |
| 2.6201 | 2662 | 0.0988 | - | - | - | - |
| 2.6211 | 2663 | 0.1851 | - | - | - | - |
| 2.6220 | 2664 | 0.3453 | - | - | - | - |
| 2.6230 | 2665 | 0.441 | - | - | - | - |
| 2.6240 | 2666 | 0.0953 | - | - | - | - |
| 2.625 | 2667 | 0.1422 | - | - | - | - |
| 2.6260 | 2668 | 0.1243 | - | - | - | - |
| 2.6270 | 2669 | 0.32 | - | - | - | - |
| 2.6280 | 2670 | 0.2588 | - | - | - | - |
| 2.6289 | 2671 | 0.4652 | - | - | - | - |
| 2.6299 | 2672 | 0.4017 | - | - | - | - |
| 2.6309 | 2673 | 0.1883 | - | - | - | - |
| 2.6319 | 2674 | 0.3345 | - | - | - | - |
| 2.6329 | 2675 | 0.162 | - | - | - | - |
| 2.6339 | 2676 | 0.3113 | - | - | - | - |
| 2.6348 | 2677 | 0.6358 | - | - | - | - |
| 2.6358 | 2678 | 0.397 | - | - | - | - |
| 2.6368 | 2679 | 0.454 | - | - | - | - |
| 2.6378 | 2680 | 0.1772 | - | - | - | - |
| 2.6388 | 2681 | 0.0152 | - | - | - | - |
| 2.6398 | 2682 | 0.142 | - | - | - | - |
| 2.6407 | 2683 | 0.4372 | - | - | - | - |
| 2.6417 | 2684 | 0.4235 | - | - | - | - |
| 2.6427 | 2685 | 0.1866 | - | - | - | - |
| 2.6437 | 2686 | 0.0524 | - | - | - | - |
| 2.6447 | 2687 | 0.1163 | - | - | - | - |
| 2.6457 | 2688 | 0.1485 | - | - | - | - |
| 2.6467 | 2689 | 0.1149 | - | - | - | - |
| 2.6476 | 2690 | 0.3884 | - | - | - | - |
| 2.6486 | 2691 | 0.172 | - | - | - | - |
| 2.6496 | 2692 | 0.4707 | - | - | - | - |
| 2.6506 | 2693 | 0.3776 | - | - | - | - |
| 2.6516 | 2694 | 0.309 | - | - | - | - |
| 2.6526 | 2695 | 0.7073 | - | - | - | - |
| 2.6535 | 2696 | 0.0827 | - | - | - | - |
| 2.6545 | 2697 | 0.3375 | - | - | - | - |
| 2.6555 | 2698 | 0.2815 | - | - | - | - |
| 2.6565 | 2699 | 0.41 | - | - | - | - |
| 2.6575 | 2700 | 0.1364 | - | - | - | - |
| 2.6585 | 2701 | 0.4235 | - | - | - | - |
| 2.6594 | 2702 | 0.4157 | - | - | - | - |
| 2.6604 | 2703 | 1.088 | - | - | - | - |
| 2.6614 | 2704 | 0.2303 | - | - | - | - |
| 2.6624 | 2705 | 0.2966 | - | - | - | - |
| 2.6634 | 2706 | 0.4843 | - | - | - | - |
| 2.6644 | 2707 | 0.2855 | - | - | - | - |
| 2.6654 | 2708 | 0.2591 | - | - | - | - |
| 2.6663 | 2709 | 0.467 | - | - | - | - |
| 2.6673 | 2710 | 0.139 | - | - | - | - |
| 2.6683 | 2711 | 0.3564 | - | - | - | - |
| 2.6693 | 2712 | 0.141 | - | - | - | - |
| 2.6703 | 2713 | 0.1698 | - | - | - | - |
| 2.6713 | 2714 | 0.3223 | - | - | - | - |
| 2.6722 | 2715 | 0.4376 | - | - | - | - |
| 2.6732 | 2716 | 0.1578 | - | - | - | - |
| 2.6742 | 2717 | 0.2388 | - | - | - | - |
| 2.6752 | 2718 | 0.211 | - | - | - | - |
| 2.6762 | 2719 | 0.2561 | - | - | - | - |
| 2.6772 | 2720 | 0.0494 | - | - | - | - |
| 2.6781 | 2721 | 0.589 | - | - | - | - |
| 2.6791 | 2722 | 0.5799 | - | - | - | - |
| 2.6801 | 2723 | 0.2218 | - | - | - | - |
| 2.6811 | 2724 | 0.3222 | - | - | - | - |
| 2.6821 | 2725 | 0.7828 | - | - | - | - |
| 2.6831 | 2726 | 0.3504 | - | - | - | - |
| 2.6841 | 2727 | 0.333 | - | - | - | - |
| 2.6850 | 2728 | 0.6705 | - | - | - | - |
| 2.6860 | 2729 | 0.2021 | - | - | - | - |
| 2.6870 | 2730 | 0.7059 | - | - | - | - |
| 2.6880 | 2731 | 0.0523 | - | - | - | - |
| 2.6890 | 2732 | 0.3013 | - | - | - | - |
| 2.6900 | 2733 | 0.249 | - | - | - | - |
| 2.6909 | 2734 | 0.4251 | - | - | - | - |
| 2.6919 | 2735 | 1.0586 | - | - | - | - |
| 2.6929 | 2736 | 0.4656 | - | - | - | - |
| 2.6939 | 2737 | 0.1227 | - | - | - | - |
| 2.6949 | 2738 | 0.1047 | - | - | - | - |
| 2.6959 | 2739 | 0.4664 | - | - | - | - |
| 2.6969 | 2740 | 0.4104 | - | - | - | - |
| 2.6978 | 2741 | 0.4076 | - | - | - | - |
| 2.6988 | 2742 | 0.2715 | - | - | - | - |
| 2.6998 | 2743 | 0.167 | - | - | - | - |
| 2.7008 | 2744 | 0.2799 | - | - | - | - |
| 2.7018 | 2745 | 0.1801 | - | - | - | - |
| 2.7028 | 2746 | 0.2727 | - | - | - | - |
| 2.7037 | 2747 | 0.1934 | - | - | - | - |
| 2.7047 | 2748 | 0.4175 | - | - | - | - |
| 2.7057 | 2749 | 0.5095 | - | - | - | - |
| 2.7067 | 2750 | 0.4747 | - | - | - | - |
| 2.7077 | 2751 | 0.2593 | - | - | - | - |
| 2.7087 | 2752 | 0.508 | - | - | - | - |
| 2.7096 | 2753 | 0.1706 | - | - | - | - |
| 2.7106 | 2754 | 0.372 | 0.4886 | 0.6735 | 0.5240 | 0.6999 |
| 2.7116 | 2755 | 0.1012 | - | - | - | - |
| 2.7126 | 2756 | 0.1855 | - | - | - | - |
| 2.7136 | 2757 | 0.1423 | - | - | - | - |
| 2.7146 | 2758 | 0.2128 | - | - | - | - |
| 2.7156 | 2759 | 0.1641 | - | - | - | - |
| 2.7165 | 2760 | 0.2113 | - | - | - | - |
| 2.7175 | 2761 | 0.5309 | - | - | - | - |
| 2.7185 | 2762 | 0.1855 | - | - | - | - |
| 2.7195 | 2763 | 0.353 | - | - | - | - |
| 2.7205 | 2764 | 0.3805 | - | - | - | - |
| 2.7215 | 2765 | 0.4292 | - | - | - | - |
| 2.7224 | 2766 | 0.2547 | - | - | - | - |
| 2.7234 | 2767 | 0.3077 | - | - | - | - |
| 2.7244 | 2768 | 0.6004 | - | - | - | - |
| 2.7254 | 2769 | 0.116 | - | - | - | - |
| 2.7264 | 2770 | 0.1424 | - | - | - | - |
| 2.7274 | 2771 | 0.2555 | - | - | - | - |
| 2.7283 | 2772 | 0.3408 | - | - | - | - |
| 2.7293 | 2773 | 0.117 | - | - | - | - |
| 2.7303 | 2774 | 0.1352 | - | - | - | - |
| 2.7313 | 2775 | 0.1671 | - | - | - | - |
| 2.7323 | 2776 | 0.2096 | - | - | - | - |
| 2.7333 | 2777 | 0.1569 | - | - | - | - |
| 2.7343 | 2778 | 1.3244 | - | - | - | - |
| 2.7352 | 2779 | 0.3514 | - | - | - | - |
| 2.7362 | 2780 | 0.607 | - | - | - | - |
| 2.7372 | 2781 | 0.2289 | - | - | - | - |
| 2.7382 | 2782 | 0.2472 | - | - | - | - |
| 2.7392 | 2783 | 0.9307 | - | - | - | - |
| 2.7402 | 2784 | 0.336 | - | - | - | - |
| 2.7411 | 2785 | 0.5573 | - | - | - | - |
| 2.7421 | 2786 | 0.2472 | - | - | - | - |
| 2.7431 | 2787 | 0.2082 | - | - | - | - |
| 2.7441 | 2788 | 0.2614 | - | - | - | - |
| 2.7451 | 2789 | 0.6271 | - | - | - | - |
| 2.7461 | 2790 | 0.2748 | - | - | - | - |
| 2.7470 | 2791 | 0.3488 | - | - | - | - |
| 2.7480 | 2792 | 0.052 | - | - | - | - |
| 2.7490 | 2793 | 0.3308 | - | - | - | - |
| 2.75 | 2794 | 0.2661 | - | - | - | - |
| 2.7510 | 2795 | 0.2692 | - | - | - | - |
| 2.7520 | 2796 | 0.1316 | - | - | - | - |
| 2.7530 | 2797 | 0.3616 | - | - | - | - |
| 2.7539 | 2798 | 0.1442 | - | - | - | - |
| 2.7549 | 2799 | 0.3065 | - | - | - | - |
| 2.7559 | 2800 | 0.5695 | - | - | - | - |
| 2.7569 | 2801 | 0.0946 | - | - | - | - |
| 2.7579 | 2802 | 0.2218 | - | - | - | - |
| 2.7589 | 2803 | 0.3658 | - | - | - | - |
| 2.7598 | 2804 | 0.2364 | - | - | - | - |
| 2.7608 | 2805 | 0.2508 | - | - | - | - |
| 2.7618 | 2806 | 0.3074 | - | - | - | - |
| 2.7628 | 2807 | 0.1118 | - | - | - | - |
| 2.7638 | 2808 | 0.4156 | - | - | - | - |
| 2.7648 | 2809 | 0.1576 | - | - | - | - |
| 2.7657 | 2810 | 0.3728 | - | - | - | - |
| 2.7667 | 2811 | 0.2044 | - | - | - | - |
| 2.7677 | 2812 | 0.3115 | - | - | - | - |
| 2.7687 | 2813 | 0.1254 | - | - | - | - |
| 2.7697 | 2814 | 0.3651 | - | - | - | - |
| 2.7707 | 2815 | 0.2305 | - | - | - | - |
| 2.7717 | 2816 | 0.1259 | - | - | - | - |
| 2.7726 | 2817 | 0.3865 | - | - | - | - |
| 2.7736 | 2818 | 0.5593 | - | - | - | - |
| 2.7746 | 2819 | 0.216 | - | - | - | - |
| 2.7756 | 2820 | 0.2696 | - | - | - | - |
| 2.7766 | 2821 | 0.3779 | - | - | - | - |
| 2.7776 | 2822 | 0.2451 | - | - | - | - |
| 2.7785 | 2823 | 0.4448 | - | - | - | - |
| 2.7795 | 2824 | 0.045 | - | - | - | - |
| 2.7805 | 2825 | 0.3465 | - | - | - | - |
| 2.7815 | 2826 | 0.1853 | - | - | - | - |
| 2.7825 | 2827 | 0.1103 | - | - | - | - |
| 2.7835 | 2828 | 0.277 | - | - | - | - |
| 2.7844 | 2829 | 0.1521 | - | - | - | - |
| 2.7854 | 2830 | 0.2653 | - | - | - | - |
| 2.7864 | 2831 | 0.4891 | - | - | - | - |
| 2.7874 | 2832 | 0.4052 | - | - | - | - |
| 2.7884 | 2833 | 0.4734 | - | - | - | - |
| 2.7894 | 2834 | 0.3711 | - | - | - | - |
| 2.7904 | 2835 | 0.3721 | - | - | - | - |
| 2.7913 | 2836 | 0.2153 | - | - | - | - |
| 2.7923 | 2837 | 0.3035 | - | - | - | - |
| 2.7933 | 2838 | 0.413 | - | - | - | - |
| 2.7943 | 2839 | 0.3275 | - | - | - | - |
| 2.7953 | 2840 | 0.45 | - | - | - | - |
| 2.7963 | 2841 | 0.8403 | - | - | - | - |
| 2.7972 | 2842 | 0.2697 | - | - | - | - |
| 2.7982 | 2843 | 0.1558 | - | - | - | - |
| 2.7992 | 2844 | 0.2919 | - | - | - | - |
| 2.8002 | 2845 | 0.2728 | - | - | - | - |
| 2.8012 | 2846 | 0.6732 | - | - | - | - |
| 2.8022 | 2847 | 0.1906 | - | - | - | - |
| 2.8031 | 2848 | 0.0684 | - | - | - | - |
| 2.8041 | 2849 | 0.1759 | - | - | - | - |
| 2.8051 | 2850 | 0.4616 | - | - | - | - |
| 2.8061 | 2851 | 0.1753 | - | - | - | - |
| 2.8071 | 2852 | 0.0538 | - | - | - | - |
| 2.8081 | 2853 | 0.2727 | - | - | - | - |
| 2.8091 | 2854 | 0.6287 | - | - | - | - |
| 2.8100 | 2855 | 0.2557 | - | - | - | - |
| 2.8110 | 2856 | 0.2785 | - | - | - | - |
| 2.8120 | 2857 | 0.1492 | - | - | - | - |
| 2.8130 | 2858 | 0.141 | - | - | - | - |
| 2.8140 | 2859 | 0.2445 | - | - | - | - |
| 2.8150 | 2860 | 0.1115 | - | - | - | - |
| 2.8159 | 2861 | 0.3406 | - | - | - | - |
| 2.8169 | 2862 | 0.5149 | - | - | - | - |
| 2.8179 | 2863 | 0.2799 | - | - | - | - |
| 2.8189 | 2864 | 0.3185 | - | - | - | - |
| 2.8199 | 2865 | 0.1001 | - | - | - | - |
| 2.8209 | 2866 | 0.0394 | - | - | - | - |
| 2.8219 | 2867 | 0.1332 | - | - | - | - |
| 2.8228 | 2868 | 0.4512 | - | - | - | - |
| 2.8238 | 2869 | 0.6693 | - | - | - | - |
| 2.8248 | 2870 | 0.239 | - | - | - | - |
| 2.8258 | 2871 | 0.2037 | - | - | - | - |
| 2.8268 | 2872 | 0.304 | - | - | - | - |
| 2.8278 | 2873 | 0.2295 | - | - | - | - |
| 2.8287 | 2874 | 0.5068 | - | - | - | - |
| 2.8297 | 2875 | 0.4523 | - | - | - | - |
| 2.8307 | 2876 | 0.2962 | - | - | - | - |
| 2.8317 | 2877 | 0.5274 | - | - | - | - |
| 2.8327 | 2878 | 0.6032 | - | - | - | - |
| 2.8337 | 2879 | 0.5692 | - | - | - | - |
| 2.8346 | 2880 | 0.1158 | - | - | - | - |
| 2.8356 | 2881 | 0.1685 | - | - | - | - |
| 2.8366 | 2882 | 0.4206 | - | - | - | - |
| 2.8376 | 2883 | 0.198 | - | - | - | - |
| 2.8386 | 2884 | 0.3901 | - | - | - | - |
| 2.8396 | 2885 | 0.2684 | - | - | - | - |
| 2.8406 | 2886 | 0.1488 | - | - | - | - |
| 2.8415 | 2887 | 0.0959 | - | - | - | - |
| 2.8425 | 2888 | 0.5298 | - | - | - | - |
| 2.8435 | 2889 | 0.2391 | - | - | - | - |
| 2.8445 | 2890 | 0.239 | - | - | - | - |
| 2.8455 | 2891 | 0.1347 | - | - | - | - |
| 2.8465 | 2892 | 0.5638 | - | - | - | - |
| 2.8474 | 2893 | 0.7352 | - | - | - | - |
| 2.8484 | 2894 | 0.2605 | - | - | - | - |
| 2.8494 | 2895 | 0.549 | - | - | - | - |
| 2.8504 | 2896 | 0.4349 | - | - | - | - |
| 2.8514 | 2897 | 0.2525 | - | - | - | - |
| 2.8524 | 2898 | 0.1922 | - | - | - | - |
| 2.8533 | 2899 | 0.5798 | - | - | - | - |
| 2.8543 | 2900 | 0.3186 | - | - | - | - |
| 2.8553 | 2901 | 0.2008 | - | - | - | - |
| 2.8563 | 2902 | 1.1413 | - | - | - | - |
| 2.8573 | 2903 | 0.7863 | - | - | - | - |
| 2.8583 | 2904 | 0.1799 | - | - | - | - |
| 2.8593 | 2905 | 0.3595 | - | - | - | - |
| 2.8602 | 2906 | 0.3704 | - | - | - | - |
| 2.8612 | 2907 | 0.7592 | 0.4888 | 0.6740 | 0.5247 | 0.7022 |
| 2.8622 | 2908 | 0.3438 | - | - | - | - |
| 2.8632 | 2909 | 0.3004 | - | - | - | - |
| 2.8642 | 2910 | 0.0605 | - | - | - | - |
| 2.8652 | 2911 | 0.2806 | - | - | - | - |
| 2.8661 | 2912 | 0.5737 | - | - | - | - |
| 2.8671 | 2913 | 0.3122 | - | - | - | - |
| 2.8681 | 2914 | 0.6209 | - | - | - | - |
| 2.8691 | 2915 | 0.3461 | - | - | - | - |
| 2.8701 | 2916 | 0.2759 | - | - | - | - |
| 2.8711 | 2917 | 0.2877 | - | - | - | - |
| 2.8720 | 2918 | 1.5252 | - | - | - | - |
| 2.8730 | 2919 | 0.3598 | - | - | - | - |
| 2.8740 | 2920 | 0.2988 | - | - | - | - |
| 2.875 | 2921 | 0.1411 | - | - | - | - |
| 2.8760 | 2922 | 0.2136 | - | - | - | - |
| 2.8770 | 2923 | 0.2058 | - | - | - | - |
| 2.8780 | 2924 | 0.4305 | - | - | - | - |
| 2.8789 | 2925 | 0.5253 | - | - | - | - |
| 2.8799 | 2926 | 0.3112 | - | - | - | - |
| 2.8809 | 2927 | 0.6982 | - | - | - | - |
| 2.8819 | 2928 | 0.3565 | - | - | - | - |
| 2.8829 | 2929 | 0.2734 | - | - | - | - |
| 2.8839 | 2930 | 0.1425 | - | - | - | - |
| 2.8848 | 2931 | 0.7445 | - | - | - | - |
| 2.8858 | 2932 | 0.4615 | - | - | - | - |
| 2.8868 | 2933 | 0.1666 | - | - | - | - |
| 2.8878 | 2934 | 0.5224 | - | - | - | - |
| 2.8888 | 2935 | 0.0262 | - | - | - | - |
| 2.8898 | 2936 | 0.6386 | - | - | - | - |
| 2.8907 | 2937 | 0.2209 | - | - | - | - |
| 2.8917 | 2938 | 0.2289 | - | - | - | - |
| 2.8927 | 2939 | 0.4258 | - | - | - | - |
| 2.8937 | 2940 | 0.4327 | - | - | - | - |
| 2.8947 | 2941 | 0.6541 | - | - | - | - |
| 2.8957 | 2942 | 0.2661 | - | - | - | - |
| 2.8967 | 2943 | 0.4912 | - | - | - | - |
| 2.8976 | 2944 | 0.1441 | - | - | - | - |
| 2.8986 | 2945 | 0.2309 | - | - | - | - |
| 2.8996 | 2946 | 0.3028 | - | - | - | - |
| 2.9006 | 2947 | 0.1203 | - | - | - | - |
| 2.9016 | 2948 | 0.6289 | - | - | - | - |
| 2.9026 | 2949 | 0.3618 | - | - | - | - |
| 2.9035 | 2950 | 0.2684 | - | - | - | - |
| 2.9045 | 2951 | 0.1371 | - | - | - | - |
| 2.9055 | 2952 | 0.6694 | - | - | - | - |
| 2.9065 | 2953 | 0.2216 | - | - | - | - |
| 2.9075 | 2954 | 0.1103 | - | - | - | - |
| 2.9085 | 2955 | 0.2106 | - | - | - | - |
| 2.9094 | 2956 | 0.4114 | - | - | - | - |
| 2.9104 | 2957 | 0.166 | - | - | - | - |
| 2.9114 | 2958 | 0.0788 | - | - | - | - |
| 2.9124 | 2959 | 0.2894 | - | - | - | - |
| 2.9134 | 2960 | 0.2845 | - | - | - | - |
| 2.9144 | 2961 | 0.2357 | - | - | - | - |
| 2.9154 | 2962 | 0.3342 | - | - | - | - |
| 2.9163 | 2963 | 0.3945 | - | - | - | - |
| 2.9173 | 2964 | 0.2308 | - | - | - | - |
| 2.9183 | 2965 | 0.4013 | - | - | - | - |
| 2.9193 | 2966 | 0.3327 | - | - | - | - |
| 2.9203 | 2967 | 0.4024 | - | - | - | - |
| 2.9213 | 2968 | 0.1838 | - | - | - | - |
| 2.9222 | 2969 | 0.3868 | - | - | - | - |
| 2.9232 | 2970 | 0.4597 | - | - | - | - |
| 2.9242 | 2971 | 0.2572 | - | - | - | - |
| 2.9252 | 2972 | 0.4641 | - | - | - | - |
| 2.9262 | 2973 | 0.0732 | - | - | - | - |
| 2.9272 | 2974 | 0.9887 | - | - | - | - |
| 2.9281 | 2975 | 0.2109 | - | - | - | - |
| 2.9291 | 2976 | 0.1698 | - | - | - | - |
| 2.9301 | 2977 | 0.4012 | - | - | - | - |
| 2.9311 | 2978 | 0.1757 | - | - | - | - |
| 2.9321 | 2979 | 0.3168 | - | - | - | - |
| 2.9331 | 2980 | 0.1128 | - | - | - | - |
| 2.9341 | 2981 | 0.1795 | - | - | - | - |
| 2.9350 | 2982 | 0.3252 | - | - | - | - |
| 2.9360 | 2983 | 0.037 | - | - | - | - |
| 2.9370 | 2984 | 0.3334 | - | - | - | - |
| 2.9380 | 2985 | 0.3173 | - | - | - | - |
| 2.9390 | 2986 | 0.151 | - | - | - | - |
| 2.9400 | 2987 | 0.3881 | - | - | - | - |
| 2.9409 | 2988 | 0.1861 | - | - | - | - |
| 2.9419 | 2989 | 0.2437 | - | - | - | - |
| 2.9429 | 2990 | 0.4226 | - | - | - | - |
| 2.9439 | 2991 | 0.5198 | - | - | - | - |
| 2.9449 | 2992 | 0.3833 | - | - | - | - |
| 2.9459 | 2993 | 0.253 | - | - | - | - |
| 2.9469 | 2994 | 0.3421 | - | - | - | - |
| 2.9478 | 2995 | 0.05 | - | - | - | - |
| 2.9488 | 2996 | 0.7686 | - | - | - | - |
| 2.9498 | 2997 | 0.1071 | - | - | - | - |
| 2.9508 | 2998 | 0.3382 | - | - | - | - |
| 2.9518 | 2999 | 0.2211 | - | - | - | - |
| 2.9528 | 3000 | 0.389 | - | - | - | - |
| 2.9537 | 3001 | 0.1802 | - | - | - | - |
| 2.9547 | 3002 | 0.295 | - | - | - | - |
| 2.9557 | 3003 | 0.2534 | - | - | - | - |
| 2.9567 | 3004 | 0.8536 | - | - | - | - |
| 2.9577 | 3005 | 0.5325 | - | - | - | - |
| 2.9587 | 3006 | 0.376 | - | - | - | - |
| 2.9596 | 3007 | 0.1309 | - | - | - | - |
| 2.9606 | 3008 | 0.3147 | - | - | - | - |
| 2.9616 | 3009 | 0.1782 | - | - | - | - |
| 2.9626 | 3010 | 0.4162 | - | - | - | - |
| 2.9636 | 3011 | 0.3284 | - | - | - | - |
| 2.9646 | 3012 | 0.1792 | - | - | - | - |
| 2.9656 | 3013 | 0.1753 | - | - | - | - |
| 2.9665 | 3014 | 0.5557 | - | - | - | - |
| 2.9675 | 3015 | 0.183 | - | - | - | - |
| 2.9685 | 3016 | 0.1412 | - | - | - | - |
| 2.9695 | 3017 | 0.4037 | - | - | - | - |
| 2.9705 | 3018 | 0.6259 | - | - | - | - |
| 2.9715 | 3019 | 0.2387 | - | - | - | - |
| 2.9724 | 3020 | 0.458 | - | - | - | - |
| 2.9734 | 3021 | 0.2202 | - | - | - | - |
| 2.9744 | 3022 | 0.1132 | - | - | - | - |
| 2.9754 | 3023 | 0.1922 | - | - | - | - |
| 2.9764 | 3024 | 0.3622 | - | - | - | - |
| 2.9774 | 3025 | 0.3681 | - | - | - | - |
| 2.9783 | 3026 | 0.1704 | - | - | - | - |
| 2.9793 | 3027 | 0.2572 | - | - | - | - |
| 2.9803 | 3028 | 0.2254 | - | - | - | - |
| 2.9813 | 3029 | 0.5572 | - | - | - | - |
| 2.9823 | 3030 | 0.691 | - | - | - | - |
| 2.9833 | 3031 | 0.3 | - | - | - | - |
| 2.9843 | 3032 | 0.3137 | - | - | - | - |
| 2.9852 | 3033 | 0.4111 | - | - | - | - |
| 2.9862 | 3034 | 0.4421 | - | - | - | - |
| 2.9872 | 3035 | 0.1184 | - | - | - | - |
| 2.9882 | 3036 | 0.2347 | - | - | - | - |
| 2.9892 | 3037 | 0.4659 | - | - | - | - |
| 2.9902 | 3038 | 0.391 | - | - | - | - |
| 2.9911 | 3039 | 0.3805 | - | - | - | - |
| 2.9921 | 3040 | 0.1296 | - | - | - | - |
| 2.9931 | 3041 | 0.055 | - | - | - | - |
| 2.9941 | 3042 | 0.3864 | - | - | - | - |
| 2.9951 | 3043 | 0.2506 | - | - | - | - |
| 2.9961 | 3044 | 0.1876 | - | - | - | - |
| 2.9970 | 3045 | 0.3416 | - | - | - | - |
| 2.9980 | 3046 | 0.5668 | - | - | - | - |
| 2.9990 | 3047 | 0.0809 | - | - | - | - |
| 3.0 | 3048 | 0.0768 | - | - | - | - |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
argmaxinc/mlx-stable-diffusion-3.5-large | argmaxinc | 2024-10-28T13:58:36Z | 310 | 5 | diffusionkit | [
"diffusionkit",
"text-to-image",
"image-generation",
"mlx",
"en",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:finetune:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2024-10-22T22:43:33Z | ---
license: other
license_name: stabilityai-ai-community
license_link: >-
https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md
library_name: diffusionkit
base_model: stabilityai/stable-diffusion-3.5-large
tags:
- text-to-image
- image-generation
- mlx
inference: false
language:
- en
---
# Stable Diffusion 3.5 Large on DiffusionKit MLX!
## Check out the [original model](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)!
## Check out the [DiffusionKit](https://github.com/argmaxinc/DiffusionKit) github repository!

# Usage
- ## Create conda environment
```shell
conda create -n diffusionkit python=3.11 -y
conda activate diffusionkit
pip install diffusionkit
```
- ## Run the cli command
```shell
diffusionkit-cli --prompt "detailed cinematic dof render of a \
detailed MacBook Pro on a wooden desk in a dim room with items \
around, messy dirty room. On the screen are the letters 'SD3 on \
DiffusionKit' glowing softly. High detail hard surface render" \
--model-version argmaxinc/mlx-stable-diffusion-3.5-large \
--height 768 \
--width 1360 \
--seed 1001 \
--step 50 \
--cfg 7 \
--t5 \
--output ~/Desktop/sd3_on_mac.png
``` |
QuantFactory/Lexora-Medium-7B-GGUF | QuantFactory | 2024-10-28T13:55:09Z | 40 | 2 | transformers | [
"transformers",
"gguf",
"it",
"en",
"dataset:DeepMount00/Sonnet-3.5-ITA-INSTRUCTION",
"dataset:DeepMount00/Sonnet-3.5-ITA-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-22T07:35:16Z |
---
library_name: transformers
license: apache-2.0
language:
- it
- en
datasets:
- DeepMount00/Sonnet-3.5-ITA-INSTRUCTION
- DeepMount00/Sonnet-3.5-ITA-DPO
---
[](https://hf.co/QuantFactory)
# QuantFactory/Lexora-Medium-7B-GGUF
This is a quantized version of [DeepMount00/Lexora-Medium-7B](https://huggingface.co/DeepMount00/Lexora-Medium-7B), created using llama.cpp.
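The GGUF files in this repository can be run locally with llama.cpp or one of its bindings. The snippet below is a minimal sketch using `llama-cpp-python`; the quant filename is illustrative (substitute whichever GGUF file from this repository you downloaded), and the sampling settings are assumptions rather than recommended values.
```python
from llama_cpp import Llama

# Load a GGUF quant downloaded from this repository.
# The filename below is illustrative; substitute the quant file you actually downloaded.
llm = Llama(
    model_path="Lexora-Medium-7B.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Qual è la capitale d'Italia?"}
    ],
    max_tokens=256,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```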
# Original Model Card
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "DeepMount00/Lexora-Medium-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
)
prompt = [{'role': 'user', 'content': """Marco ha comprato 5 scatole di cioccolatini. Ogni scatola contiene 12 cioccolatini. Ha deciso di dare 3 cioccolatini a ciascuno dei suoi 7 amici. Quanti cioccolatini gli rimarranno dopo averli distribuiti ai suoi amici?"""}]
inputs = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
)
tokens = model.generate(
inputs.to(model.device),
max_new_tokens=1024,
temperature=0.001,
do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
|
g-assismoraes/deberta-semeval25_noHINDI08_fold5 | g-assismoraes | 2024-10-28T13:53:17Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:50:41Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_noHINDI08_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_noHINDI08_fold5
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9394
- Precision Samples: 0.1426
- Recall Samples: 0.5408
- F1 Samples: 0.2127
- Precision Macro: 0.8006
- Recall Macro: 0.4541
- F1 Macro: 0.3278
- Precision Micro: 0.135
- Recall Micro: 0.4639
- F1 Micro: 0.2091
- Precision Weighted: 0.5280
- Recall Weighted: 0.4639
- F1 Weighted: 0.1421
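The sample-, micro-, and macro-averaged precision/recall/F1 figures above are multi-label classification metrics, so at inference time each label is scored independently and thresholded. The snippet below is a minimal, hedged sketch of that usage; it assumes the checkpoint is published under this card's repository id and configured for multi-label classification, and the 0.5 decision threshold is an assumption rather than a documented setting.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id assumed from this card's name.
model_id = "g-assismoraes/deberta-semeval25_noHINDI08_fold5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example paragraph to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: sigmoid per label, then threshold (0.5 is an assumption).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```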
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.7633 | 1.0 | 16 | 10.4422 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2955 | 0.2955 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 10.108 | 2.0 | 32 | 10.0554 | 0.1462 | 0.2671 | 0.1723 | 0.9488 | 0.3333 | 0.3052 | 0.1437 | 0.1753 | 0.1579 | 0.8129 | 0.1753 | 0.0500 |
| 9.67 | 3.0 | 48 | 9.7996 | 0.1365 | 0.3390 | 0.1812 | 0.9175 | 0.3622 | 0.3101 | 0.1292 | 0.2405 | 0.1681 | 0.7335 | 0.2405 | 0.0618 |
| 8.9757 | 4.0 | 64 | 9.5642 | 0.1429 | 0.3982 | 0.1979 | 0.9015 | 0.3831 | 0.3218 | 0.1373 | 0.3024 | 0.1888 | 0.6915 | 0.3024 | 0.0973 |
| 8.5173 | 5.0 | 80 | 9.3488 | 0.1481 | 0.5026 | 0.2156 | 0.8674 | 0.4206 | 0.3336 | 0.1415 | 0.4021 | 0.2093 | 0.6148 | 0.4021 | 0.1278 |
| 8.9753 | 6.0 | 96 | 9.2255 | 0.1560 | 0.5130 | 0.2250 | 0.8570 | 0.4270 | 0.3218 | 0.1393 | 0.4192 | 0.2091 | 0.6228 | 0.4192 | 0.1332 |
| 8.8356 | 7.0 | 112 | 9.1139 | 0.1440 | 0.5256 | 0.2119 | 0.8396 | 0.4364 | 0.3201 | 0.1318 | 0.4399 | 0.2029 | 0.5984 | 0.4399 | 0.1272 |
| 8.396 | 8.0 | 128 | 9.0036 | 0.1457 | 0.5266 | 0.2145 | 0.8355 | 0.4411 | 0.3286 | 0.1367 | 0.4433 | 0.2089 | 0.5893 | 0.4433 | 0.1447 |
| 8.7026 | 9.0 | 144 | 8.9562 | 0.1392 | 0.5273 | 0.2085 | 0.8116 | 0.4383 | 0.3260 | 0.1340 | 0.4433 | 0.2057 | 0.5369 | 0.4433 | 0.1382 |
| 7.999 | 10.0 | 160 | 8.9394 | 0.1426 | 0.5408 | 0.2127 | 0.8006 | 0.4541 | 0.3278 | 0.135 | 0.4639 | 0.2091 | 0.5280 | 0.4639 | 0.1421 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
mradermacher/Llama-3.2-3B-Booval-GGUF | mradermacher | 2024-10-28T13:50:58Z | 16 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Booval",
"base_model:quantized:bunnycore/Llama-3.2-3B-Booval",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T12:40:28Z | ---
base_model: bunnycore/Llama-3.2-3B-Booval
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Booval
<!-- provided-files -->
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
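As a concrete example, a single quant from the table below can be fetched and run with `llama-cpp-python`. This is only a hedged sketch: the Q4_K_M filename is taken from the table, but the context size and prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant from this repository (Q4_K_M, listed in the table below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3.2-3B-Booval-GGUF",
    filename="Llama-3.2-3B-Booval.Q4_K_M.gguf",
)

# Run it locally; settings here are illustrative, not recommendations.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```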
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Booval-GGUF/resolve/main/Llama-3.2-3B-Booval.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
g-assismoraes/deberta-semeval25_noHINDI08_fold4 | g-assismoraes | 2024-10-28T13:50:38Z | 164 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:48:07Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_noHINDI08_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_noHINDI08_fold4
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5869
- Precision Samples: 0.1430
- Recall Samples: 0.5534
- F1 Samples: 0.2128
- Precision Macro: 0.8182
- Recall Macro: 0.4411
- F1 Macro: 0.3037
- Precision Micro: 0.1286
- Recall Micro: 0.4712
- F1 Micro: 0.2020
- Precision Weighted: 0.5660
- Recall Weighted: 0.4712
- F1 Weighted: 0.1261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.4207 | 1.0 | 16 | 9.9474 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2614 | 0.2614 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 9.6412 | 2.0 | 32 | 9.5638 | 0.1680 | 0.2762 | 0.1844 | 0.9617 | 0.2942 | 0.2727 | 0.1679 | 0.1619 | 0.1648 | 0.8498 | 0.1619 | 0.0538 |
| 9.9968 | 3.0 | 48 | 9.3139 | 0.1322 | 0.3717 | 0.1714 | 0.9190 | 0.3430 | 0.2787 | 0.1119 | 0.2770 | 0.1594 | 0.7311 | 0.2770 | 0.0693 |
| 9.1935 | 4.0 | 64 | 9.1086 | 0.1442 | 0.4312 | 0.1947 | 0.8902 | 0.3702 | 0.2870 | 0.1239 | 0.3417 | 0.1818 | 0.6873 | 0.3417 | 0.0905 |
| 9.5271 | 5.0 | 80 | 8.9205 | 0.1320 | 0.4986 | 0.1948 | 0.8620 | 0.4075 | 0.2959 | 0.1181 | 0.4137 | 0.1837 | 0.6305 | 0.4137 | 0.1089 |
| 9.3829 | 6.0 | 96 | 8.7813 | 0.1432 | 0.5248 | 0.2112 | 0.8533 | 0.4214 | 0.3014 | 0.1294 | 0.4353 | 0.1995 | 0.6207 | 0.4353 | 0.1185 |
| 8.9373 | 7.0 | 112 | 8.7473 | 0.1457 | 0.5331 | 0.2154 | 0.8426 | 0.4339 | 0.3034 | 0.1334 | 0.4568 | 0.2065 | 0.5971 | 0.4568 | 0.1226 |
| 8.0103 | 8.0 | 128 | 8.6381 | 0.1420 | 0.5452 | 0.2113 | 0.8409 | 0.4397 | 0.3011 | 0.1267 | 0.4676 | 0.1994 | 0.5946 | 0.4676 | 0.1193 |
| 8.1355 | 9.0 | 144 | 8.5970 | 0.1420 | 0.5452 | 0.2112 | 0.8296 | 0.4397 | 0.3011 | 0.1268 | 0.4676 | 0.1995 | 0.5772 | 0.4676 | 0.1200 |
| 8.3947 | 10.0 | 160 | 8.5869 | 0.1430 | 0.5534 | 0.2128 | 0.8182 | 0.4411 | 0.3037 | 0.1286 | 0.4712 | 0.2020 | 0.5660 | 0.4712 | 0.1261 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
dominguesm/canarim-7b | dominguesm | 2024-10-28T13:48:51Z | 324 | 16 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"LLM",
"Portuguese",
"Llama 2",
"pt",
"dataset:dominguesm/CC-MAIN-2023-23",
"arxiv:2307.09288",
"doi:10.57967/hf/1356",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-16T17:33:46Z | ---
language:
- pt
license: llama2
library_name: transformers
tags:
- text-generation
- pytorch
- LLM
- Portuguese
- Llama 2
datasets:
- dominguesm/CC-MAIN-2023-23
inference: false
pipeline_tag: text-generation
model-index:
- name: canarim-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.03
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dominguesm/canarim-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM (3-shot)
type: enem_challenge
config: main
split: test
args:
num_few_shot: 3
metrics:
- type: acc
value: 26.96
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (3-shot)
type: bluex
config: main
split: test
args:
num_few_shot: 3
metrics:
- type: acc
value: 29.76
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams (3-shot)
type: oab_exams
config: main
split: test
args:
num_few_shot: 3
metrics:
- type: acc
value: 31.48
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: ASSIN2 RTE (15-shot)
type: assin2_rte
config: main
split: test
args:
num_few_shot: 15
metrics:
- type: acc
value: 71.96
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: ASSIN2 STS (15-shot)
type: assin2_sts
config: main
split: test
args:
num_few_shot: 15
metrics:
- type: acc
value: 13.33
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: FAQUAD NLI (15-shot)
type: faquad_nli
config: main
split: test
args:
num_few_shot: 15
metrics:
- type: acc
value: 49.09
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR (25-shot)
type: hatebr_offensive
config: main
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 78.48
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech (25-shot)
type: portuguese_hate_speech
config: main
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 63.73
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR (25-shot)
type: tweetsentbr
config: main
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 62.38
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=dominguesm/canarim-7b
name: Open PT LLM Leaderboard Evaluation Results
---
<p align="center">
  <img width="250" alt="Canarim Logo" src="https://raw.githubusercontent.com/DominguesM/Canarim-Instruct-PTBR/main/assets/canarim.png">
</p>
<hr>
# Canarim-7B
Canarim-7B is a Portuguese large language model developed by [Maicon Domingues](https://nlp.rocks).
## Model description
The model was pretrained on 16 billion tokens from the Portuguese subset of [CommonCrawl 2023-23](https://huggingface.co/datasets/dominguesm/CC-MAIN-2023-23), starting with the weights of LLaMA2-7B. The pretraining data has a cutoff of mid-2023.
## Key Features
- **Language:** Specialized in understanding and generating Portuguese text, making it ideal for applications targeting Portuguese-speaking audiences.
- **Architecture:** Inherits the robust architecture from LLaMA2-7B, ensuring efficient performance and accurate results.
- **Diverse Dataset:** The pretraining dataset includes a wide range of topics and writing styles, enhancing the model's ability to understand various contexts and nuances in Portuguese.
## Applications
Canarim-7B was trained solely on a language-modeling objective and has not been fine-tuned for instruction following. It is therefore better suited to few-shot tasks than to zero-shot tasks: the model tends to perform better when provided with a few examples of the desired outcome. Here are some practical applications:
- **Natural Language Understanding (NLU):** Efficient in tasks such as sentiment analysis, topic classification, and entity recognition in Portuguese text, especially when relevant examples are provided.
- **Natural Language Generation (NLG):** Capable of generating coherent and contextually relevant text, useful for content creation, chatbots, and more, with improved results when provided examples of the desired style or format.
- **Language Translation:** Suitable for high-quality translation between Portuguese and other languages, especially when examples of desired translations are included during model training or fine-tuning.
### Tips for Efficient Use
- **Few-shot Learning:** When using Canarim-7B for specific tasks, it is beneficial to provide a few relevant examples (see the sketch below). This helps the model better understand the context and purpose of the task.
- **Contextualization:** Including additional context in the input can significantly improve the quality of the model's predictions and text generation.
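As an illustration of the few-shot tip above, a sentiment-classification prompt could be assembled like this (the example sentences are made up for demonstration and are not taken from the training data):
```python
# Hypothetical few-shot prompt for sentiment classification in Portuguese.
prompt = (
    "Frase: Adorei o filme, recomendo muito!\n"
    "Sentimento: positivo\n\n"
    "Frase: O atendimento foi péssimo e demorado.\n"
    "Sentimento: negativo\n\n"
    "Frase: O produto chegou antes do prazo e funciona perfeitamente.\n"
    "Sentimento:"
)
```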
---
## Getting Started
To start using Canarim-7B with the Transformers library, first install the library if you haven't already:
```bash
pip install transformers
```
You can then load the model using the Transformers library. Here's a simple example of how to use the model for text generation using the `pipeline` function:
```python
from transformers import AutoTokenizer, pipeline
import torch
model_id = "dominguesm/canarim-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.float16,
device_map="auto",
)
# Example prompt; replace with your own text or a few-shot template.
prompt = "Pergunta: Qual é a capital do Brasil?\nResposta:"
sequences = pipe(
prompt,
do_sample=True,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=2048,
temperature=0.9,
top_p=0.6,
repetition_penalty=1.15
)

print(sequences[0]["generated_text"])
```
This code snippet demonstrates how to generate text with Canarim-7B. You can customize the input text and adjust parameters like `max_length` according to your requirements.
## How to Cite
If you want to cite **Canarim-7B**, you could use this:
```
@misc {maicon_domingues_2023,
author = { {Maicon Domingues} },
title = { canarim-7b (Revision 08fdd2b) },
year = 2023,
url = { https://huggingface.co/dominguesm/canarim-7b },
doi = { 10.57967/hf/1356 },
publisher = { Hugging Face }
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
Canarim-7B is released under the [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://ai.meta.com/llama/license/).
## [Open PT LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/dominguesm/canarim-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |47.36|
|ENEM (3-Shot) |25.96|
|BLUEX (3-Shot) |29.76|
|OAB Exams (3-Shot) |31.48|
|ASSIN2 RTE (15-shot) |71.96|
|ASSIN2 STS (15-shot) |13.33|
|FAQUAD NLI (15-shot) |49.09|
|HateBR (25-shot) |78.48|
|PT Hate Speech (25-shot) |63.73|
|tweetSentBR (25-shot) |62.38|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dominguesm__canarim-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |48.63|
|AI2 Reasoning Challenge (25-Shot)|51.96|
|HellaSwag (10-Shot) |77.52|
|MMLU (5-Shot) |40.92|
|TruthfulQA (0-shot) |40.03|
|Winogrande (5-shot) |71.43|
|GSM8k (5-shot) | 9.93|
|
martinsinnona/visdecode_plotqa_2k | martinsinnona | 2024-10-28T13:48:40Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-28T13:22:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hmbyt5/byt5-small-english | hmbyt5 | 2024-10-28T13:47:38Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-08T20:08:44Z | ---
license: mit
language:
- en
---
# hmByT5 - Preliminary Language Models
Preliminary Historic Multilingual and Monolingual ByT5 models. The following languages are currently covered:
* English (British Library Corpus - Books)
More details can be found in [our GitHub repository](https://github.com/stefan-it/hmByT5).
# Pretraining
We use the official JAX/FLAX example in Hugging Face Transformers to pretrain a ByT5 model on a single v3-8 TPU.
Details about the training can be found [here](https://github.com/stefan-it/hmByT5/tree/main/hmbyt5-flax).
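This card does not include a usage snippet, but the checkpoint is a standard ByT5 model, so it can presumably be loaded with the usual Transformers classes:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 operates directly on UTF-8 bytes, so there is no subword vocabulary.
tokenizer = AutoTokenizer.from_pretrained("hmbyt5/byt5-small-english")
model = T5ForConditionalGeneration.from_pretrained("hmbyt5/byt5-small-english")
```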
# Evaluation on Downstream Tasks (NER)
We evaluated the hmByT5 model on downstream tasks:
| Model | English AjMC | German AjMC | French AjMC | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | Avg. |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|--------------|--------------|-----------------|-----------------|--------------|--------------|------|
| [`hmbyt5/byt5-small-english`](https://huggingface.co/hmbyt5/byt5-small-english) | 85.65 ± 1.21 | 87.27 ± 0.50 | 84.44 ± 0.79 | | | | | |
# Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
dbmdz/bert-base-german-europeana-cased | dbmdz | 2024-10-28T13:47:34Z | 490 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"historic german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: de
license: mit
tags:
- "historic german"
---
# 🤗 + 📚 dbmdz BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources German Europeana BERT models 🎉
# German Europeana BERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/config.json) β’ [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/pytorch_model.bin) β’ [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/vocab.txt)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased")
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-base-turkish-128k-uncased | dbmdz | 2024-10-28T13:47:11Z | 29,691 | 26 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources an uncased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model
on a TPU v3-8 for 2M steps.
For this model we use a vocab size of 128k.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-128k-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/config.json) β’ [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/pytorch_model.bin) β’ [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/vocab.txt)
## Usage
With Transformers >= 2.3 our BERTurk uncased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/electra-base-turkish-cased-discriminator | dbmdz | 2024-10-28T13:47:02Z | 212 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"pretraining",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA base model for Turkish 🎉
# Turkish ELECTRA model
We release a base ELEC**TR**A model for Turkish that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) β’ [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) β’ [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA base cased model can be loaded like:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
g-assismoraes/deberta-semeval25_noHINDI08_fold2 | g-assismoraes | 2024-10-28T13:45:24Z | 162 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:42:31Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_noHINDI08_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_noHINDI08_fold2
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8517
- Precision Samples: 0.1527
- Recall Samples: 0.5474
- F1 Samples: 0.2243
- Precision Macro: 0.7907
- Recall Macro: 0.3516
- F1 Macro: 0.2296
- Precision Micro: 0.1381
- Recall Micro: 0.4690
- F1 Micro: 0.2133
- Precision Weighted: 0.5289
- Recall Weighted: 0.4690
- F1 Weighted: 0.1485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.5058 | 1.0 | 16 | 10.2942 | 1.0 | 0.0 | 0.0 | 1.0 | 0.1818 | 0.1818 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 9.4489 | 2.0 | 32 | 9.9428 | 0.1940 | 0.2988 | 0.2165 | 0.9662 | 0.2242 | 0.1978 | 0.1906 | 0.1966 | 0.1935 | 0.8457 | 0.1966 | 0.0763 |
| 10.0907 | 3.0 | 48 | 9.6744 | 0.1541 | 0.3731 | 0.2047 | 0.9300 | 0.2477 | 0.1983 | 0.1491 | 0.2690 | 0.1919 | 0.7249 | 0.2690 | 0.0775 |
| 9.248 | 4.0 | 64 | 9.4563 | 0.1541 | 0.4441 | 0.2153 | 0.9010 | 0.2716 | 0.2064 | 0.1481 | 0.3345 | 0.2053 | 0.6702 | 0.3345 | 0.1007 |
| 9.019 | 5.0 | 80 | 9.2882 | 0.1564 | 0.4842 | 0.2207 | 0.8711 | 0.2977 | 0.2181 | 0.1491 | 0.3897 | 0.2156 | 0.6126 | 0.3897 | 0.1272 |
| 8.712 | 6.0 | 96 | 9.1284 | 0.1652 | 0.5359 | 0.2355 | 0.8646 | 0.3226 | 0.2216 | 0.1511 | 0.4414 | 0.2252 | 0.5993 | 0.4414 | 0.1369 |
| 8.2497 | 7.0 | 112 | 8.9849 | 0.1629 | 0.5686 | 0.2357 | 0.8226 | 0.3406 | 0.2263 | 0.1453 | 0.4759 | 0.2226 | 0.5563 | 0.4759 | 0.1450 |
| 8.2378 | 8.0 | 128 | 8.9047 | 0.1556 | 0.5406 | 0.2275 | 0.8014 | 0.3457 | 0.2283 | 0.1424 | 0.4690 | 0.2185 | 0.5376 | 0.4690 | 0.1466 |
| 8.6465 | 9.0 | 144 | 8.8786 | 0.1525 | 0.5406 | 0.2237 | 0.7903 | 0.3478 | 0.2290 | 0.1391 | 0.4690 | 0.2145 | 0.5282 | 0.4690 | 0.1479 |
| 8.7721 | 10.0 | 160 | 8.8517 | 0.1527 | 0.5474 | 0.2243 | 0.7907 | 0.3516 | 0.2296 | 0.1381 | 0.4690 | 0.2133 | 0.5289 | 0.4690 | 0.1485 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
SidXXD/77 | SidXXD | 2024-10-28T13:42:47Z | 11 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-27T08:44:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/77
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
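A rough inference sketch with `diffusers` is given below. The weight file names are assumptions (the defaults written by diffusers' Custom Diffusion training script) and may not match the files actually stored in this repository:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# File names below are assumptions (diffusers' default Custom Diffusion output names).
pipe.unet.load_attn_procs("SidXXD/77", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/77", weight_name="<v1*>.bin")

image = pipe("photo of a <v1*> person", num_inference_steps=50, guidance_scale=7.0).images[0]
image.save("person.png")
```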
|
knifeayumu/Magnum-v4-Cydonia-v1.2-22B | knifeayumu | 2024-10-28T13:41:44Z | 6 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TheDrummer/Cydonia-22B-v1.2",
"base_model:merge:TheDrummer/Cydonia-22B-v1.2",
"base_model:anthracite-org/magnum-v4-22b",
"base_model:merge:anthracite-org/magnum-v4-22b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-10-28T13:22:33Z | ---
base_model:
- TheDrummer/Cydonia-22B-v1.2
- anthracite-org/magnum-v4-22b
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---

# Magnum? More like Deagle (dies in cringe)
[Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) but the other way around... Some prefer [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) over [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2), so this merge was born.
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: anthracite-org/magnum-v4-22b
- model: TheDrummer/Cydonia-22B-v1.2
merge_method: slerp
base_model: anthracite-org/magnum-v4-22b
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
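To reproduce the merge, this YAML can presumably be saved to a file and passed to mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (the output directory name is illustrative).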
|
LightDestory/test_save_2 | LightDestory | 2024-10-28T13:35:36Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"upernet",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T13:33:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jimregan/wav2vec2-xls-r-300m-phoneme-timit | jimregan | 2024-10-28T13:35:16Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:timit_asr",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-07T16:59:26Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-phoneme-timit
results: []
datasets:
- timit_asr
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-phoneme-timit
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the TIMIT (`timit_asr`) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3630
- Wer: 0.6243
- Cer: 0.1316
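A minimal transcription sketch (assuming a 16 kHz mono audio file; note that the output is a phoneme-level transcription rather than orthographic words):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jimregan/wav2vec2-xls-r-300m-phoneme-timit",
)
# "speech.wav" is a placeholder for a 16 kHz mono recording.
print(asr("speech.wav")["text"])
```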
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 3.5325 | 11.9 | 1000 | 3.4897 | 1.0 | 0.9266 |
| 2.1973 | 23.81 | 2000 | 1.1350 | 0.8396 | 0.2403 |
| 1.4762 | 35.71 | 3000 | 0.5270 | 0.6845 | 0.1563 |
| 1.2409 | 47.62 | 4000 | 0.4195 | 0.6331 | 0.1403 |
| 1.1241 | 59.52 | 5000 | 0.3845 | 0.6362 | 0.1379 |
| 1.024 | 71.43 | 6000 | 0.3716 | 0.6321 | 0.1355 |
| 0.9922 | 83.33 | 7000 | 0.3728 | 0.6290 | 0.1331 |
| 0.9432 | 95.24 | 8000 | 0.3648 | 0.6170 | 0.1321 |
| 0.9279 | 107.14 | 9000 | 0.3643 | 0.6248 | 0.1325 |
| 0.9268 | 119.05 | 10000 | 0.3630 | 0.6243 | 0.1316 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2 |
devagonal/flan-t5-rouge-durga-q5-clean-4d | devagonal | 2024-10-28T13:34:41Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-28T13:34:03Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-rouge-durga-q5-clean-4d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-rouge-durga-q5-clean-4d
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0519
- Rouge1: 0.5221
- Rouge2: 0.4278
- Rougel: 0.5213
- Rougelsum: 0.5204
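A minimal usage sketch (the prompt below is only illustrative; the exact prompt format used during fine-tuning is not documented here):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="devagonal/flan-t5-rouge-durga-q5-clean-4d",
)
# Illustrative question; adjust to match the prompts used in your own data.
print(generator("Who is Goddess Durga?", max_new_tokens=64)[0]["generated_text"])
```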
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.4357 | 1.0 | 9 | 1.9785 | 0.2586 | 0.0742 | 0.2539 | 0.2539 |
| 2.6395 | 2.0 | 18 | 1.6948 | 0.2578 | 0.0708 | 0.2527 | 0.2525 |
| 1.7751 | 3.0 | 27 | 1.4660 | 0.2843 | 0.0833 | 0.2773 | 0.2777 |
| 2.0201 | 4.0 | 36 | 1.2841 | 0.3119 | 0.1080 | 0.3055 | 0.3068 |
| 1.9879 | 5.0 | 45 | 1.1375 | 0.3388 | 0.1313 | 0.3321 | 0.3333 |
| 1.6617 | 6.0 | 54 | 0.9940 | 0.3351 | 0.1264 | 0.3256 | 0.3259 |
| 1.5556 | 7.0 | 63 | 0.8861 | 0.3647 | 0.1620 | 0.3567 | 0.3569 |
| 1.2433 | 8.0 | 72 | 0.7889 | 0.3656 | 0.1716 | 0.3580 | 0.3579 |
| 1.252 | 9.0 | 81 | 0.6992 | 0.3651 | 0.1773 | 0.3563 | 0.3571 |
| 1.0389 | 10.0 | 90 | 0.6118 | 0.3777 | 0.1866 | 0.3699 | 0.3705 |
| 0.6633 | 11.0 | 99 | 0.5348 | 0.3646 | 0.1800 | 0.3589 | 0.3584 |
| 0.7738 | 12.0 | 108 | 0.4685 | 0.3909 | 0.2112 | 0.3844 | 0.3844 |
| 0.7849 | 13.0 | 117 | 0.4048 | 0.3843 | 0.2150 | 0.3766 | 0.3769 |
| 0.9278 | 14.0 | 126 | 0.3418 | 0.3973 | 0.2315 | 0.3915 | 0.3918 |
| 0.7269 | 15.0 | 135 | 0.3038 | 0.4066 | 0.2593 | 0.4001 | 0.4016 |
| 0.6558 | 16.0 | 144 | 0.2834 | 0.4323 | 0.2812 | 0.4289 | 0.4292 |
| 0.5569 | 17.0 | 153 | 0.2396 | 0.4287 | 0.2817 | 0.4219 | 0.4235 |
| 0.6052 | 18.0 | 162 | 0.2186 | 0.4382 | 0.2981 | 0.4323 | 0.4334 |
| 0.575 | 19.0 | 171 | 0.1989 | 0.4194 | 0.2784 | 0.4159 | 0.4162 |
| 0.5307 | 20.0 | 180 | 0.1722 | 0.4403 | 0.2978 | 0.4340 | 0.4357 |
| 0.4588 | 21.0 | 189 | 0.1643 | 0.4636 | 0.3195 | 0.4570 | 0.4580 |
| 0.3977 | 22.0 | 198 | 0.1431 | 0.4546 | 0.3234 | 0.4491 | 0.4504 |
| 0.4509 | 23.0 | 207 | 0.1388 | 0.4621 | 0.3336 | 0.4567 | 0.4571 |
| 0.3736 | 24.0 | 216 | 0.1277 | 0.4495 | 0.3262 | 0.4426 | 0.4438 |
| 0.3618 | 25.0 | 225 | 0.1198 | 0.4622 | 0.3424 | 0.4571 | 0.4585 |
| 0.3059 | 26.0 | 234 | 0.1090 | 0.4718 | 0.3475 | 0.4677 | 0.4678 |
| 0.2782 | 27.0 | 243 | 0.1039 | 0.4722 | 0.3512 | 0.4675 | 0.4677 |
| 0.2374 | 28.0 | 252 | 0.1006 | 0.4650 | 0.3408 | 0.4621 | 0.4625 |
| 0.228 | 29.0 | 261 | 0.0945 | 0.4818 | 0.3571 | 0.4778 | 0.4782 |
| 0.2778 | 30.0 | 270 | 0.0948 | 0.4732 | 0.3582 | 0.4710 | 0.4719 |
| 0.2601 | 31.0 | 279 | 0.0889 | 0.4822 | 0.3626 | 0.4791 | 0.4803 |
| 0.2364 | 32.0 | 288 | 0.0866 | 0.4863 | 0.3724 | 0.4851 | 0.4865 |
| 0.2124 | 33.0 | 297 | 0.0855 | 0.4841 | 0.3666 | 0.4829 | 0.4836 |
| 0.2004 | 34.0 | 306 | 0.0809 | 0.4835 | 0.3715 | 0.4819 | 0.4831 |
| 0.2095 | 35.0 | 315 | 0.0764 | 0.4797 | 0.3666 | 0.4778 | 0.4796 |
| 0.3603 | 36.0 | 324 | 0.0744 | 0.4934 | 0.3815 | 0.4924 | 0.4925 |
| 0.181 | 37.0 | 333 | 0.0718 | 0.4863 | 0.3754 | 0.4864 | 0.4866 |
| 0.1435 | 38.0 | 342 | 0.0687 | 0.4857 | 0.3778 | 0.4859 | 0.4861 |
| 0.1306 | 39.0 | 351 | 0.0676 | 0.4921 | 0.3826 | 0.4903 | 0.4907 |
| 0.1668 | 40.0 | 360 | 0.0667 | 0.4853 | 0.3784 | 0.4832 | 0.4845 |
| 0.2279 | 41.0 | 369 | 0.0647 | 0.4998 | 0.3950 | 0.4967 | 0.4978 |
| 0.2863 | 42.0 | 378 | 0.0638 | 0.5018 | 0.4022 | 0.4992 | 0.4997 |
| 0.1381 | 43.0 | 387 | 0.0631 | 0.5066 | 0.4085 | 0.5037 | 0.5041 |
| 0.1868 | 44.0 | 396 | 0.0611 | 0.5081 | 0.4068 | 0.5062 | 0.5061 |
| 0.1351 | 45.0 | 405 | 0.0614 | 0.5018 | 0.4001 | 0.5011 | 0.5010 |
| 0.1355 | 46.0 | 414 | 0.0604 | 0.5051 | 0.4027 | 0.5040 | 0.5045 |
| 0.108 | 47.0 | 423 | 0.0588 | 0.4983 | 0.3956 | 0.4982 | 0.4983 |
| 0.133 | 48.0 | 432 | 0.0573 | 0.5082 | 0.4069 | 0.5073 | 0.5075 |
| 0.2242 | 49.0 | 441 | 0.0565 | 0.5117 | 0.4114 | 0.5104 | 0.5104 |
| 0.1678 | 50.0 | 450 | 0.0548 | 0.5241 | 0.4272 | 0.5222 | 0.5225 |
| 0.1282 | 51.0 | 459 | 0.0543 | 0.5224 | 0.4263 | 0.5206 | 0.5212 |
| 0.15 | 52.0 | 468 | 0.0531 | 0.5171 | 0.4209 | 0.5161 | 0.5169 |
| 0.1356 | 53.0 | 477 | 0.0528 | 0.5164 | 0.4178 | 0.5159 | 0.5158 |
| 0.134 | 54.0 | 486 | 0.0527 | 0.5180 | 0.4228 | 0.5176 | 0.5178 |
| 0.1321 | 55.0 | 495 | 0.0529 | 0.5162 | 0.4192 | 0.5155 | 0.5162 |
| 0.1362 | 56.0 | 504 | 0.0526 | 0.5166 | 0.4206 | 0.5157 | 0.5156 |
| 0.1764 | 57.0 | 513 | 0.0524 | 0.5170 | 0.4215 | 0.5153 | 0.5163 |
| 0.1549 | 58.0 | 522 | 0.0522 | 0.5221 | 0.4278 | 0.5213 | 0.5204 |
| 0.1475 | 59.0 | 531 | 0.0520 | 0.5221 | 0.4278 | 0.5213 | 0.5204 |
| 0.1441 | 60.0 | 540 | 0.0519 | 0.5221 | 0.4278 | 0.5213 | 0.5204 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
g-assismoraes/deberta-semeval25_justEN08_fold4 | g-assismoraes | 2024-10-28T13:33:49Z | 164 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:32:00Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_justEN08_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_justEN08_fold4
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4863
- Precision Samples: 0.475
- Recall Samples: 0.475
- F1 Samples: 0.475
- Precision Macro: 0.9925
- Recall Macro: 0.3714
- F1 Macro: 0.3663
- Precision Micro: 0.475
- Recall Micro: 0.2135
- F1 Micro: 0.2946
- Precision Weighted: 0.8879
- Recall Weighted: 0.2135
- F1 Weighted: 0.1375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| No log | 1.0 | 5 | 9.3895 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3571 | 0.3571 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 8.4727 | 2.0 | 10 | 9.1240 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3571 | 0.3571 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 8.4727 | 3.0 | 15 | 8.9313 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.9967 | 4.0 | 20 | 8.7771 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.9967 | 5.0 | 25 | 8.6665 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.6436 | 6.0 | 30 | 8.6018 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.6436 | 7.0 | 35 | 8.5536 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.4488 | 8.0 | 40 | 8.5164 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.4488 | 9.0 | 45 | 8.4947 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
| 7.3567 | 10.0 | 50 | 8.4863 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.3714 | 0.3663 | 0.475 | 0.2135 | 0.2946 | 0.8879 | 0.2135 | 0.1375 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/deberta-semeval25_justEN08_fold3 | g-assismoraes | 2024-10-28T13:31:58Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:30:33Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_justEN08_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_justEN08_fold3
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2204
- Precision Samples: 0.475
- Recall Samples: 0.475
- F1 Samples: 0.475
- Precision Macro: 0.9925
- Recall Macro: 0.4714
- F1 Macro: 0.4663
- Precision Micro: 0.475
- Recall Micro: 0.2184
- F1 Micro: 0.2992
- Precision Weighted: 0.8853
- Recall Weighted: 0.2184
- F1 Weighted: 0.1407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
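For reference, here is a minimal sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`. The output directory and the surrounding `Trainer`/dataset wiring are assumptions, since the training script is not included in this card:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; dataset and Trainer wiring are omitted.
training_args = TrainingArguments(
    output_dir="deberta-semeval25_justEN08_fold3",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```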
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| No log | 1.0 | 5 | 9.2146 | 1.0 | 0.0 | 0.0 | 1.0 | 0.4571 | 0.4571 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 8.5118 | 2.0 | 10 | 8.9478 | 1.0 | 0.0 | 0.0 | 1.0 | 0.4571 | 0.4571 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 8.5118 | 3.0 | 15 | 8.7350 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 8.0538 | 4.0 | 20 | 8.5623 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 8.0538 | 5.0 | 25 | 8.4234 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 7.7127 | 6.0 | 30 | 8.3465 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 7.7127 | 7.0 | 35 | 8.2928 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 7.5228 | 8.0 | 40 | 8.2532 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 7.5228 | 9.0 | 45 | 8.2289 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
| 7.4198 | 10.0 | 50 | 8.2204 | 0.475 | 0.475 | 0.475 | 0.9925 | 0.4714 | 0.4663 | 0.475 | 0.2184 | 0.2992 | 0.8853 | 0.2184 | 0.1407 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
leekh7624/mymodel1 | leekh7624 | 2024-10-28T13:31:39Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"base_model:finetune:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T13:27:26Z | ---
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** leekh7624
- **License:** apache-2.0
- **Finetuned from model :** beomi/Llama-3-Open-Ko-8B-Instruct-preview
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AIDSC/jais-13b | AIDSC | 2024-10-28T13:28:05Z | 8 | 0 | null | [
"pytorch",
"jais",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"text-generation",
"custom_code",
"ar",
"en",
"arxiv:2308.16149",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-23T11:45:01Z | ---
language:
- ar
- en
thumbnail: null
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
license: apache-2.0
pipeline_tag: text-generation
---
# Jais-13b
<!-- Provide a quick summary of what the model is/does. -->
This is a 13 billion parameter pre-trained bilingual large language model for both Arabic and English,
trained on a dataset containing 72 billion Arabic tokens and 279 billion English/code tokens.
The Arabic data is iterated over for 1.6 epochs (as opposed to 1 epoch for English/code), for a total of 395 billion tokens of training.
The model is based on a transformer-based decoder-only (GPT-3) architecture and uses SwiGLU
non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate
to long sequence lengths, providing improved context handling and model precision.
## Getting started
Below is sample code to use the model. Note that the model requires a custom model class, so users must
enable `trust_remote_code=True` while loading the model.
Also, note that this code is tested on `transformers==4.28.0`.
```python
# -*- coding: utf-8 -*-
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "core42/jais-13b"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
def get_response(text, tokenizer=tokenizer, model=model):
    # Tokenize the prompt and move it to the same device as the model.
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    inputs = input_ids.to(device)
    input_len = inputs.shape[-1]
    # Sample a continuation; length and sampling settings follow the original card.
    generate_ids = model.generate(
        inputs,
        top_p=0.9,
        temperature=0.3,
        max_length=200-input_len,
        min_length=input_len + 4,
        repetition_penalty=1.2,
        do_sample=True,
    )
    # Decode the generated ids back into text (prompt + continuation).
    response = tokenizer.batch_decode(
        generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
    )[0]
    return response
text= "ΨΉΨ§Ψ΅Ω
Ψ© Ψ―ΩΩΨ© Ψ§ΩΨ₯Ω
Ψ§Ψ±Ψ§Ψͺ Ψ§ΩΨΉΨ±Ψ¨ΩΨ© Ψ§ΩΩ
ΨͺΨΨ―Ψ© Ω"
print(get_response(text))
text = "The capital of UAE is"
print(get_response(text))
```
## Model Details
- **Developed by:** [Inception](https://www.inceptioniai.org/en/), [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/), and [Cerebras Systems](https://www.cerebras.net/).
- **Language(s) (NLP):** Arabic and English
- **License:** Apache 2.0
- **Input:** Text only data.
- **Output:** Model generates text.
- **Paper :** [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/abs/2308.16149)
- **Demo :** [Access here](https://arabic-gpt.ai)
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release the Jais 13B model under a full open source license. We welcome all feedback and opportunities to collaborate.
This model is the first release from the Inception - MBZUAI - Cerebras partnership, and at the time of release,
achieved state-of-the-art performance across a comprehensive Arabic test suite, as described in the accompanying technical report.
Some potential downstream uses include:
- *Research*: This model can be used by researchers and developers.
- *Commercial Use*: It can be used as a base model to further fine-tune for specific use cases (similar to [jais-13b-chat](https://huggingface.co/inception-mbzuai/jais-13b-chat)).
Some potential use cases include:
- Chat-assistants.
- Customer service.
Audiences that we hope will benefit from our model:
- *Academics*: For those researching Arabic natural language processing.
- *Businesses*: Companies targeting Arabic-speaking audiences.
- *Developers*: Those integrating Arabic language capabilities in apps.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
While Jais-13b is a powerful Arabic and English bilingual model, it's essential to understand its limitations and the potential for misuse.
It is prohibited to use the model in any manner that violates applicable laws or regulations.
The following are some example scenarios where the model should not be used.
- *Malicious Use*: The model should not be used for generating harmful, misleading, or inappropriate content. This includes but is not limited to:
- Generating or promoting hate speech, violence, or discrimination.
- Spreading misinformation or fake news.
- Engaging in or promoting illegal activities.
- *Sensitive Information*: The model should not be used to handle or generate personal, confidential, or sensitive information.
- *Generalization Across All Languages*: Jais-13b is bilingual and optimized for Arabic and English; it should not be assumed to have equal proficiency in other languages or dialects.
- *High-Stakes Decisions*: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on publicly available data which was in part curated by Inception. We have employed different
techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLMs, will exhibit some bias.
The model is trained as an AI assistant for Arabic and English speakers. The model is limited to produce responses for queries in these two languages
and may not produce appropriate responses to other language queries.
By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model.
Copyright Inception Institute of Artificial Intelligence Ltd. JAIS is made available under the Apache License, Version 2.0 (the βLicenseβ). You shall not use JAIS except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, JAIS is distributed on an AS IS basis, without warranties or conditions of any kind, either express or implied. Please see the terms of the License for the specific language permissions and limitations under the License.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For the pre-training of Jais-13b, we used a diverse bilingual corpus sourced from the Web and other sources. We also used publicly available English and code datasets.
To collect Arabic data, we use multiple sources including web pages, wikipedia articles, news articles, Arabic books,
and social network content. We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system.
We restrict this to high quality English resources such as English Wikipedia and English books. Further details about the training data can be found in the technical report.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
#### Training Hyperparameters
| Hyperparameter | Value |
|----------------------------|------------------------------|
| Precision | fp32 |
| Optimizer | AdamW |
| Learning rate | 0 to 0.012 (<= 95 steps) |
| | 0.012 to 0.0012 (> 95 steps) |
| Weight decay | 0.1 |
| Batch size | 1920 |
| Steps | 100551 |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We conducted a comprehensive evaluation of Jais and benchmarked it against other leading base language models, focusing on both English and Arabic. The evaluation criteria spanned various dimensions, including:
- **Knowledge:** How well the model answers factual questions.
- **Reasoning:** The model's ability to answer questions requiring reasoning.
- **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
Arabic evaluation results:
| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|-------------|-------|-------|----------|-------|-----------|------|--------|------------|-------|------------|------------|-------------|
| Jais (13B) | **46.5** | 40.4 | 30.0 | 58.3 | 57.7 | 67.6 | 62.6 | 42.5 | 35.8 | 32.4 | 41.1 | 58.4 |
| BLOOM (7.1B) | 40.9 |34.0 | 28.2 | 37.1 | 40.9 | 58.4 | 59.9 | 39.1 | 27.3 | 28.0 | 44.4 | 53.5 |
| LLaMA2 (13B) | 38.1 | 29.2 | 28.4 | 32.0 | 34.3 | 52.9 | 63.8 | 36.4 | 24.3 | 30.0 | 45.5 | 49.9 |
| AraT5 (220M) | 32.0 | 24.7 | 23.8 | 26.3 | 25.5 | 50.4 | 58.2 | 33.9 | 24.7 | 25.4 | 20.9 | 47.2 |
| AraBART (139M) | 36.7 | 26.5 | 27.5 | 34.3 | 28.1 | 52.6 | 57.1 | 34.6 | 25.1 | 28.6 | 49.8 | 48.8 |
All tasks above report accuracy or F1 scores (the higher the better). For the sake of brevity, we do not include results over English tasks.
Detailed comparisons in both languages and evaluation dataset details can be found in the technical report.
## Citation
```
@misc{sengupta2023jais,
title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models},
author={Neha Sengupta and Sunil Kumar Sahu and Bokang Jia and Satheesh Katipomu and Haonan Li and Fajri Koto and Osama Mohammed Afzal and Samta Kamboj and Onkar Pandit and Rahul Pal and Lalit Pradhan and Zain Muhammad Mujahid and Massa Baali and Alham Fikri Aji and Zhengzhong Liu and Andy Hock and Andrew Feldman and Jonathan Lee and Andrew Jackson and Preslav Nakov and Timothy Baldwin and Eric Xing},
year={2023},
eprint={2308.16149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Copyright Inception Institute of Artificial Intelligence Ltd.
|
g-assismoraes/deberta-semeval25_EN08_fold5 | g-assismoraes | 2024-10-28T13:23:57Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:20:57Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_EN08_fold5
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5536
- Precision Samples: 0.1328
- Recall Samples: 0.5774
- F1 Samples: 0.1999
- Precision Macro: 0.7392
- Recall Macro: 0.3847
- F1 Macro: 0.2352
- Precision Micro: 0.1241
- Recall Micro: 0.5105
- F1 Micro: 0.1996
- Precision Weighted: 0.4496
- Recall Weighted: 0.5105
- F1 Weighted: 0.1428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.6674 | 1.0 | 19 | 10.0433 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2 | 0.2 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 9.4429 | 2.0 | 38 | 9.6521 | 0.1414 | 0.2694 | 0.1718 | 0.9653 | 0.2302 | 0.2100 | 0.1395 | 0.1592 | 0.1487 | 0.8422 | 0.1592 | 0.0522 |
| 9.3994 | 3.0 | 57 | 9.3897 | 0.1330 | 0.3397 | 0.1774 | 0.9246 | 0.2579 | 0.2188 | 0.1260 | 0.2282 | 0.1624 | 0.7444 | 0.2282 | 0.0687 |
| 8.911 | 4.0 | 76 | 9.1828 | 0.1419 | 0.4527 | 0.2027 | 0.8736 | 0.3021 | 0.2296 | 0.1353 | 0.3634 | 0.1972 | 0.6168 | 0.3634 | 0.1106 |
| 8.8832 | 5.0 | 95 | 8.9865 | 0.1322 | 0.4795 | 0.1916 | 0.8439 | 0.3162 | 0.2333 | 0.1204 | 0.3874 | 0.1838 | 0.5762 | 0.3874 | 0.1141 |
| 8.4356 | 6.0 | 114 | 8.8343 | 0.1401 | 0.5270 | 0.2034 | 0.8182 | 0.3416 | 0.2435 | 0.1300 | 0.4474 | 0.2015 | 0.5316 | 0.4474 | 0.1384 |
| 8.737 | 7.0 | 133 | 8.7046 | 0.1360 | 0.5659 | 0.2039 | 0.7962 | 0.3743 | 0.2362 | 0.1291 | 0.4925 | 0.2046 | 0.5107 | 0.4925 | 0.1423 |
| 8.7982 | 8.0 | 152 | 8.6328 | 0.1358 | 0.5783 | 0.2039 | 0.7842 | 0.3796 | 0.2357 | 0.1281 | 0.5105 | 0.2048 | 0.4997 | 0.5105 | 0.1451 |
| 8.2308 | 9.0 | 171 | 8.5950 | 0.1334 | 0.5649 | 0.2002 | 0.7407 | 0.3774 | 0.2355 | 0.1242 | 0.4955 | 0.1986 | 0.4523 | 0.4955 | 0.1432 |
| 8.7681 | 10.0 | 190 | 8.5536 | 0.1328 | 0.5774 | 0.1999 | 0.7392 | 0.3847 | 0.2352 | 0.1241 | 0.5105 | 0.1996 | 0.4496 | 0.5105 | 0.1428 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/deberta-semeval25_EN08_fold4 | g-assismoraes | 2024-10-28T13:20:53Z | 199 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:17:23Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_EN08_fold4
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 9.1897
- Precision Samples: 0.1604
- Recall Samples: 0.6146
- F1 Samples: 0.2366
- Precision Macro: 0.7633
- Recall Macro: 0.4302
- F1 Macro: 0.2957
- Precision Micro: 0.1495
- Recall Micro: 0.5306
- F1 Micro: 0.2332
- Precision Weighted: 0.4778
- Recall Weighted: 0.5306
- F1 Weighted: 0.1751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.3966 | 1.0 | 19 | 10.8605 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2333 | 0.2333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 10.1016 | 2.0 | 38 | 10.4044 | 0.1770 | 0.2926 | 0.1885 | 0.9724 | 0.2657 | 0.2430 | 0.1751 | 0.1722 | 0.1737 | 0.8561 | 0.1722 | 0.0525 |
| 9.6871 | 3.0 | 57 | 10.1400 | 0.1579 | 0.3440 | 0.1920 | 0.9413 | 0.2860 | 0.2471 | 0.1484 | 0.2333 | 0.1814 | 0.7657 | 0.2333 | 0.0649 |
| 9.4348 | 4.0 | 76 | 9.8391 | 0.1748 | 0.4387 | 0.2291 | 0.8867 | 0.3321 | 0.2655 | 0.1568 | 0.3472 | 0.2161 | 0.6401 | 0.3472 | 0.1111 |
| 9.2239 | 5.0 | 95 | 9.6192 | 0.1712 | 0.4963 | 0.2351 | 0.8277 | 0.3609 | 0.2784 | 0.1598 | 0.4111 | 0.2302 | 0.5594 | 0.4111 | 0.1433 |
| 8.756 | 6.0 | 114 | 9.5185 | 0.1683 | 0.5525 | 0.2394 | 0.8002 | 0.3972 | 0.2954 | 0.1569 | 0.4694 | 0.2352 | 0.5206 | 0.4694 | 0.1611 |
| 8.4617 | 7.0 | 133 | 9.3178 | 0.1606 | 0.5826 | 0.2330 | 0.7875 | 0.4124 | 0.2897 | 0.1483 | 0.5056 | 0.2294 | 0.5074 | 0.5056 | 0.1567 |
| 7.9981 | 8.0 | 152 | 9.3682 | 0.1586 | 0.5750 | 0.2311 | 0.7686 | 0.4101 | 0.2891 | 0.1470 | 0.4889 | 0.2261 | 0.4698 | 0.4889 | 0.1577 |
| 8.4678 | 9.0 | 171 | 9.2193 | 0.1617 | 0.5937 | 0.2359 | 0.7633 | 0.4199 | 0.2951 | 0.1513 | 0.5111 | 0.2335 | 0.4750 | 0.5111 | 0.1703 |
| 8.2932 | 10.0 | 190 | 9.1897 | 0.1604 | 0.6146 | 0.2366 | 0.7633 | 0.4302 | 0.2957 | 0.1495 | 0.5306 | 0.2332 | 0.4778 | 0.5306 | 0.1751 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/deberta-semeval25_EN08_fold2 | g-assismoraes | 2024-10-28T13:14:05Z | 199 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:10:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_EN08_fold2
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.6101
- Precision Samples: 0.1209
- Recall Samples: 0.5559
- F1 Samples: 0.1849
- Precision Macro: 0.7685
- Recall Macro: 0.3780
- F1 Macro: 0.2402
- Precision Micro: 0.1172
- Recall Micro: 0.4697
- F1 Micro: 0.1875
- Precision Weighted: 0.5025
- Recall Weighted: 0.4697
- F1 Weighted: 0.1391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.3898 | 1.0 | 19 | 9.9343 | 1.0 | 0.0 | 0.0 | 1.0 | 0.1889 | 0.1889 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 10.0522 | 2.0 | 38 | 9.6084 | 0.1874 | 0.2767 | 0.2067 | 0.9620 | 0.2178 | 0.1992 | 0.1791 | 0.1606 | 0.1693 | 0.8331 | 0.1606 | 0.0539 |
| 9.7928 | 3.0 | 57 | 9.3874 | 0.1336 | 0.3540 | 0.1804 | 0.9515 | 0.2427 | 0.2012 | 0.1294 | 0.2333 | 0.1665 | 0.7959 | 0.2333 | 0.0606 |
| 9.4936 | 4.0 | 76 | 9.1515 | 0.1186 | 0.4298 | 0.1719 | 0.8698 | 0.2874 | 0.2133 | 0.1156 | 0.3242 | 0.1704 | 0.6379 | 0.3242 | 0.0854 |
| 9.1022 | 5.0 | 95 | 8.9739 | 0.1205 | 0.4944 | 0.1790 | 0.8336 | 0.3224 | 0.2227 | 0.1158 | 0.3848 | 0.1780 | 0.5852 | 0.3848 | 0.1061 |
| 9.2254 | 6.0 | 114 | 8.8771 | 0.1207 | 0.5078 | 0.1798 | 0.8340 | 0.3302 | 0.2245 | 0.1170 | 0.4030 | 0.1813 | 0.5860 | 0.4030 | 0.1106 |
| 8.9117 | 7.0 | 133 | 8.7591 | 0.1147 | 0.5250 | 0.1755 | 0.7877 | 0.3399 | 0.2259 | 0.1118 | 0.4273 | 0.1772 | 0.5301 | 0.4273 | 0.1160 |
| 8.7312 | 8.0 | 152 | 8.6366 | 0.1215 | 0.5708 | 0.1872 | 0.7836 | 0.3750 | 0.2412 | 0.1171 | 0.4697 | 0.1874 | 0.5273 | 0.4697 | 0.1418 |
| 8.953 | 9.0 | 171 | 8.6276 | 0.1199 | 0.5553 | 0.1831 | 0.7682 | 0.3625 | 0.2377 | 0.1165 | 0.4667 | 0.1864 | 0.5065 | 0.4667 | 0.1396 |
| 8.1407 | 10.0 | 190 | 8.6101 | 0.1209 | 0.5559 | 0.1849 | 0.7685 | 0.3780 | 0.2402 | 0.1172 | 0.4697 | 0.1875 | 0.5025 | 0.4697 | 0.1391 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
SidXXD/95 | SidXXD | 2024-10-28T13:14:04Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-27T08:15:36Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/95
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a <v1*> person using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
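A minimal inference sketch with `diffusers` is shown below. The weight file names follow the defaults of the Custom Diffusion training example and are assumptions; check the Files & versions tab of this repository for the actual names:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention processors and the learned <v1*> token embedding.
pipe.unet.load_attn_procs("SidXXD/95", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/95", weight_name="<v1*>.bin")

image = pipe(
    "photo of a <v1*> person",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("person.png")
```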
|
g-assismoraes/deberta-semeval25_EN08_fold1 | g-assismoraes | 2024-10-28T13:10:31Z | 164 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T13:07:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-semeval25_EN08_fold1
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2509
- Precision Samples: 0.1249
- Recall Samples: 0.6497
- F1 Samples: 0.1949
- Precision Macro: 0.7291
- Recall Macro: 0.4651
- F1 Macro: 0.2720
- Precision Micro: 0.1058
- Recall Micro: 0.5833
- F1 Micro: 0.1791
- Precision Weighted: 0.4269
- Recall Weighted: 0.5833
- F1 Weighted: 0.1477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.742 | 1.0 | 19 | 9.6524 | 0.9589 | 0.0205 | 0.0205 | 0.9926 | 0.2234 | 0.2240 | 0.3333 | 0.0093 | 0.0180 | 0.9403 | 0.0093 | 0.0141 |
| 10.3264 | 2.0 | 38 | 9.2947 | 0.1229 | 0.2857 | 0.1608 | 0.9497 | 0.2667 | 0.2315 | 0.1199 | 0.1975 | 0.1492 | 0.8249 | 0.1975 | 0.0493 |
| 9.6673 | 3.0 | 57 | 9.0965 | 0.1046 | 0.3364 | 0.1497 | 0.8967 | 0.2889 | 0.2395 | 0.1043 | 0.2562 | 0.1482 | 0.7240 | 0.2562 | 0.0668 |
| 9.8896 | 4.0 | 76 | 8.8422 | 0.1293 | 0.4635 | 0.1839 | 0.8434 | 0.3589 | 0.2560 | 0.1089 | 0.3951 | 0.1708 | 0.6154 | 0.3951 | 0.1115 |
| 9.3618 | 5.0 | 95 | 8.6755 | 0.1328 | 0.5445 | 0.1914 | 0.8002 | 0.4059 | 0.2609 | 0.1064 | 0.5 | 0.1754 | 0.5064 | 0.5 | 0.1267 |
| 9.3241 | 6.0 | 114 | 8.5167 | 0.1332 | 0.6158 | 0.2021 | 0.8049 | 0.4417 | 0.2741 | 0.1122 | 0.5525 | 0.1865 | 0.5153 | 0.5525 | 0.1459 |
| 8.8868 | 7.0 | 133 | 8.3815 | 0.1264 | 0.6295 | 0.1955 | 0.7506 | 0.4425 | 0.2703 | 0.1084 | 0.5556 | 0.1815 | 0.4567 | 0.5556 | 0.1413 |
| 8.9554 | 8.0 | 152 | 8.3410 | 0.1273 | 0.6363 | 0.1981 | 0.7317 | 0.4531 | 0.2747 | 0.1104 | 0.5648 | 0.1848 | 0.4291 | 0.5648 | 0.1491 |
| 8.8845 | 9.0 | 171 | 8.2870 | 0.1274 | 0.6487 | 0.1990 | 0.7306 | 0.4630 | 0.2744 | 0.1104 | 0.5833 | 0.1857 | 0.4288 | 0.5833 | 0.1511 |
| 8.5189 | 10.0 | 190 | 8.2509 | 0.1249 | 0.6497 | 0.1949 | 0.7291 | 0.4651 | 0.2720 | 0.1058 | 0.5833 | 0.1791 | 0.4269 | 0.5833 | 0.1477 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
german-nlp-group/electra-base-german-uncased | german-nlp-group | 2024-10-28T13:09:01Z | 2,719 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"pretraining",
"commoncrawl",
"uncased",
"umlaute",
"umlauts",
"german",
"deutsch",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: de
license: mit
thumbnail: "https://raw.githubusercontent.com/German-NLP-Group/german-transformer-training/master/model_cards/german-electra-logo.png"
tags:
- electra
- commoncrawl
- uncased
- umlaute
- umlauts
- german
- deutsch
---
# German Electra Uncased
<img width="300px" src="https://raw.githubusercontent.com/German-NLP-Group/german-transformer-training/master/model_cards/german-electra-logo.png">
[ΒΉ]
## Version 2 Release
We released an improved version of this model. Version 1 was trained for 766,000 steps. For this new version we continued the training for an additional 734,000 steps. It therefore follows that version 2 was trained on a total of 1,500,000 steps. See "Evaluation of Version 2: GermEval18 Coarse" below for details.
## Model Info
This Model is suitable for training on many downstream tasks in German (Q&A, Sentiment Analysis, etc.).
It can be used as a drop-in replacement for **BERT** in most downstream tasks (**ELECTRA** is even implemented as an extended **BERT** class).
At the time of release (August 2020) this model was the best-performing publicly available German NLP model on various German evaluation metrics (CONLL03-DE, GermEval18 Coarse, GermEval18 Fine). For GermEval18 Coarse results see below. More will be published soon.
## Installation
This model has the special feature that it is **uncased** but does **not strip accents**.
This possibility was added by us with [PR #6280](https://github.com/huggingface/transformers/pull/6280).
To use it you have to use Transformers version 3.1.0 or newer.
```bash
pip install transformers -U
```
## Uncase and Umlauts ('Γ', 'Γ', 'Γ')
This model is uncased. This helps especially in domains where colloquial terms with incorrect capitalization are often used.
The special characters 'ΓΆ', 'ΓΌ', 'Γ€' are preserved through the `strip_accents=False` option, as this leads to improved precision.
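A minimal sketch showing that the umlauts survive lowercasing (the example sentence is our own):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("german-nlp-group/electra-base-german-uncased")
print(tokenizer.tokenize("FuΓball in MΓΌnchen und KΓΆln"))
# The tokens are lowercased, but the umlauts 'ΓΌ' and 'ΓΆ' are kept rather than stripped to 'u' and 'o'.
```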
## Creators
This model was trained and open sourced in conjunction with the [**German NLP Group**](https://github.com/German-NLP-Group) in equal parts by:
- [**Philip May**](https://philipmay.org) - [Deutsche Telekom](https://www.telekom.de/)
- [**Philipp ReiΓel**](https://www.linkedin.com/in/philipp-reissel/) - [ambeRoad](https://amberoad.de/)
## Evaluation of Version 2: GermEval18 Coarse
We evaluated all language models on GermEval18 with the F1 macro score. For each model we did an extensive automated hyperparameter search. With the best hyperparameters we fitted the model multiple times on GermEval18. This was done to cancel out random effects and obtain results of statistical relevance.

## Checkpoint evaluation
Since it is not guaranteed that the last checkpoint is the best, we evaluated the checkpoints on GermEval18. We found that the last checkpoint is indeed the best. The training was stable and did not overfit the text corpus.
## Pre-training details
### Data
- Cleaned Common Crawl Corpus 2019-09 German: [CC_net](https://github.com/facebookresearch/cc_net) (only head corpus, filtered for language_score > 0.98) - 62 GB
- German Wikipedia Article Pages Dump (20200701) - 5.5 GB
- German Wikipedia Talk Pages Dump (20200620) - 1.1 GB
- Subtitles - 823 MB
- News 2018 - 4.1 GB
The sentences were split with [SoMaJo](https://github.com/tsproisl/SoMaJo). We took the German Wikipedia Article Pages Dump 3x to oversample. A similar approach was also used in GPT-3 (Table 2.2).
More details can be found in the [Preparing Datasets for German Electra GitHub repository](https://github.com/German-NLP-Group/german-transformer-training)
### Electra Branch no_strip_accents
Because we do not want to strip accents in our training data, we made a change to Electra and used this repo [Electra no_strip_accents](https://github.com/PhilipMay/electra/tree/no_strip_accents) (branch `no_strip_accents`). We then created the TF dataset with:
```bash
python build_pretraining_dataset.py --corpus-dir <corpus_dir> --vocab-file <dir>/vocab.txt --output-dir ./tf_data --max-seq-length 512 --num-processes 8 --do-lower-case --no-strip-accents
```
### The training
The training itself can be performed with the original Electra repo (no special changes are needed for this).
We ran it with the following config:
<details>
<summary>The exact Training Config</summary>
<br/>debug False
<br/>disallow_correct False
<br/>disc_weight 50.0
<br/>do_eval False
<br/>do_lower_case True
<br/>do_train True
<br/>electra_objective True
<br/>embedding_size 768
<br/>eval_batch_size 128
<br/>gcp_project None
<br/>gen_weight 1.0
<br/>generator_hidden_size 0.33333
<br/>generator_layers 1.0
<br/>iterations_per_loop 200
<br/>keep_checkpoint_max 0
<br/>learning_rate 0.0002
<br/>lr_decay_power 1.0
<br/>mask_prob 0.15
<br/>max_predictions_per_seq 79
<br/>max_seq_length 512
<br/>model_dir gs://XXX
<br/>model_hparam_overrides {}
<br/>model_name 02_Electra_Checkpoints_32k_766k_Combined
<br/>model_size base
<br/>num_eval_steps 100
<br/>num_tpu_cores 8
<br/>num_train_steps 766000
<br/>num_warmup_steps 10000
<br/>pretrain_tfrecords gs://XXX
<br/>results_pkl gs://XXX
<br/>results_txt gs://XXX
<br/>save_checkpoints_steps 5000
<br/>temperature 1.0
<br/>tpu_job_name None
<br/>tpu_name electrav5
<br/>tpu_zone None
<br/>train_batch_size 256
<br/>uniform_generator False
<br/>untied_generator True
<br/>untied_generator_embeddings False
<br/>use_tpu True
<br/>vocab_file gs://XXX
<br/>vocab_size 32767
<br/>weight_decay_rate 0.01
</details>

Please note: *Due to the GAN-like structure of Electra, the loss is not that meaningful.*
It took about 7 days on a preemptible TPU v3-8. In total, the model went through approximately 10 epochs. For automatic recreation of cancelled TPUs we used [tpunicorn](https://github.com/shawwn/tpunicorn). The total cost of training summed up to about $450 for one run. The data pre-processing and vocab creation needed approximately 500-1000 CPU hours. Servers were fully provided by [T-Systems on site services GmbH](https://www.t-systems-onsite.de/) and [ambeRoad](https://amberoad.de/).
Special thanks to [Stefan Schweter](https://github.com/stefan-it) for his feedback and for providing parts of the text corpus.
[ΒΉ]: Source for the picture [Pinterest](https://www.pinterest.cl/pin/371828512984142193/)
### Negative Results
We tried the following approaches which we found had no positive influence:
- **Increased Vocab Size**: Leads to more parameters and thus reduced examples/sec, while no visible performance gains were measured
- **Decreased Batch Size**: The original Electra was trained with a batch size per TPU core of 16, whereas this model was trained with 32 BS / TPU core. We found that 32 BS leads to better results when comparing metrics over computation time
## License - The MIT License
Copyright 2020-2021 [Philip May](https://philipmay.org)\
Copyright 2020-2021 [Philipp ReiΓel](https://www.linkedin.com/in/philipp-reissel/)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF | mradermacher | 2024-10-28T12:58:03Z | 49 | 0 | transformers | [
"transformers",
"gguf",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"base_model:Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct",
"base_model:quantized:Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T12:56:02Z | ---
base_model: Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
language:
- ru
- en
library_name: transformers
license: apache-2.0
model_name: Vikhr-Qwen-2.5-0.5b-Instruct
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
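A minimal sketch for fetching one of the quants below with `huggingface_hub` (any file from the table works; Q4_K_M is used here as an example):

```python
from huggingface_hub import hf_hub_download

# Downloads the GGUF file to the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF",
    filename="Vikhr-Qwen-2.5-0.5b-Instruct.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime, e.g. `llama-cli -m <path>`
```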
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF/resolve/main/Vikhr-Qwen-2.5-0.5b-Instruct.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Shilin-LU/VINE-R-Enc | Shilin-LU | 2024-10-28T12:54:59Z | 37 | 0 | null | [
"safetensors",
"image_watermarking",
"image-to-image",
"en",
"dataset:BleachNick/UltraEdit",
"arxiv:2410.18775",
"base_model:stabilityai/sdxl-turbo",
"base_model:finetune:stabilityai/sdxl-turbo",
"license:mit",
"region:us"
] | image-to-image | 2024-10-28T11:25:01Z | ---
tags:
- image_watermarking
license: mit
datasets:
- BleachNick/UltraEdit
language:
- en
base_model:
- stabilityai/sdxl-turbo
pipeline_tag: image-to-image
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Docs: https://github.com/Shilin-LU/VINE
- arXiv: https://arxiv.org/abs/2410.18775 |
Shilin-LU/VINE-B-Dec | Shilin-LU | 2024-10-28T12:54:37Z | 14 | 0 | null | [
"safetensors",
"image-watermarking",
"en",
"arxiv:2410.18775",
"license:mit",
"region:us"
] | null | 2024-10-28T11:42:08Z | ---
tags:
- image-watermarking
license: mit
language:
- en
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Docs: https://github.com/Shilin-LU/VINE
- arXiv: https://arxiv.org/abs/2410.18775 |
glif-loradex-trainer/bingbangboom_flux_dev_SMPGCLRPHTO | glif-loradex-trainer | 2024-10-28T12:42:53Z | 80 | 2 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-13T16:22:11Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1728836939821__000002500_0.jpg
text: A young boy seated on a weathered concrete wall beside a calm river. The
boy is dressed in a dark blue coat, beige trousers, and a black hat, exuding
a contemplative demeanor. The river reflects the overcast sky, with boats
moored in the distance, photo in the style of SMPGCLRPHTO
- output:
url: samples/2.jpg
text: a cat in a field of lavender flowers, photo in the style of SMPGCLRPHTO
- output:
url: samples/1728836987315__000002500_2.jpg
text: a portrait of a woman, japanese countryside, photo in the style of SMPGCLRPHTO
- output:
url: samples/4.jpg
text: a woman wearing a yellow sundress and a summer hat, reading a book, background of lush leaves, sitting on a bench in a public park, photo in the style of SMPGCLRPHTO
- output:
url: samples/5.jpg
text: a cat taking a nap on a work table, photo in the style of SMPGCLRPHTO
- output:
url: samples/6.jpg
text: a robot eating ramen in a busy cafe, photo in the style of SMPGCLRPHTO
base_model: black-forest-labs/FLUX.1-dev
trigger: SMPGCLRPHTO
instance_prompt: SMPGCLRPHTO
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# SMPGCLRPHTO (SMPG Color Photo)
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user [bingbangboom](https://huggingface.co/bingbangboom).
Flux LoRA for creating a faux vintage digital color composite (from glass negatives) effect. Use '**photo in the style of SMPGCLRPHTO**' to trigger the model
<Gallery />
## Trigger words
You should use `SMPGCLRPHTO` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/bingbangboom_flux_dev_SMPGCLRPHTO/tree/main) them in the Files & versions tab.
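A minimal `diffusers` sketch for using the LoRA (the default weight-file lookup is an assumption; pass `weight_name=` explicitly if the file in this repository has a non-standard name):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/bingbangboom_flux_dev_SMPGCLRPHTO")

image = pipe(
    "a cat in a field of lavender flowers, photo in the style of SMPGCLRPHTO",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("smpgclrphto.png")
```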
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/lemnop_Acid_Graphics_Adv | glif-loradex-trainer | 2024-10-28T12:42:36Z | 64 | 2 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-28T12:42:02Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730119168914__000003000_0.jpg
text: ACXD-GFX, Logo
- output:
url: samples/1730119193605__000003000_1.jpg
text: ACXD-GFX, Graffiti
- output:
url: samples/1730119218484__000003000_2.jpg
text: ACXD-GFX, Smiley Face
- output:
url: samples/1730119243361__000003000_3.jpg
text: ACXD-GFX, Graphics
- output:
url: samples/1730119268231__000003000_4.jpg
text: ACXD-GFX, many icons
- output:
url: samples/1730119293022__000003000_5.jpg
text: ACXD-GFX, Globe
base_model: black-forest-labs/FLUX.1-dev
trigger: ACXD-GFX
instance_prompt: ACXD-GFX
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Acid_Graphics_Adv
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `lemnop`.
<Gallery />
## Trigger words
You should use `ACXD-GFX` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/lemnop_Acid_Graphics_Adv/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF | ggml-org | 2024-10-28T12:40:32Z | 2,892 | 1 | transformers | [
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-28T12:38:56Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- llama-cpp
- gguf-my-repo
---
# ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-7B`](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF --hf-file qwen2.5-coder-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF --hf-file qwen2.5-coder-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF --hf-file qwen2.5-coder-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF --hf-file qwen2.5-coder-7b-q8_0.gguf -c 2048
```
|
gerald29/my_awesome_food_model | gerald29 | 2024-10-28T12:39:48Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/dinov2-base-imagenet1k-1-layer",
"base_model:finetune:facebook/dinov2-base-imagenet1k-1-layer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-11T01:44:33Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/dinov2-base-imagenet1k-1-layer
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Accuracy: 0.943
This is just a model created by following the Transformers tutorial on image classification at https://huggingface.co/docs/transformers/main/en/tasks/image_classification.
So it is of little practical use on its own.
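Still, loading it for a quick test is straightforward; a minimal sketch (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="gerald29/my_awesome_food_model")
print(classifier("example_food_photo.jpg"))  # placeholder path; an image URL also works
```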
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3989 | 0.992 | 62 | 0.3865 | 0.867 |
| 0.2722 | 2.0 | 125 | 0.2720 | 0.916 |
| 0.126 | 2.976 | 186 | 0.1930 | 0.943 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
scherrmann/GermanFinBert_SC_Sentiment | scherrmann | 2024-10-28T12:31:41Z | 169 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"de",
"arxiv:2311.08793",
"arxiv:1307.5336",
"arxiv:1708.07120",
"arxiv:1412.6980",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-11-17T10:28:32Z | ---
license: apache-2.0
language:
- de
widget:
- text: "STS Group AG erhΓ€lt GroΓauftrag von fΓΌhrendem Nutzfahrzeughersteller in Nordamerika und plant Bau eines ersten US-Werks"
- text: "ZukΓΌnftig soll jedoch je GeschΓ€ftsjahr eine Mindestdividende in HΓΆhe von EUR 2,00 je dividendenberechtigter Aktie an die AktionΓ€rinnen und AktionΓ€re ausgeschΓΌttet werden."
- text: "Comet passt Jahresprognose nach Q3 unter Erwartungen an"
---
# German FinBERT For Sentiment Analysis (Pre-trained From Scratch Version, Fine-Tuned for Financial Sentiment Analysis)
<img src="https://github.com/mscherrmann/mscherrmann.github.io/blob/master/assets/img/publication_preview/germanBert.png?raw=true" alt="Alt text for the image" width="500" height="300"/>
German FinBERT is a BERT language model focusing on the financial domain within the German language. In my [paper](https://arxiv.org/pdf/2311.08793.pdf), I describe in more detail the steps taken to train the model and show that it outperforms its generic benchmarks for finance specific downstream tasks.
This model is the [pre-trained from scratch version of German FinBERT](https://huggingface.co/scherrmann/GermanFinBert_SC), after fine-tuning on a translated version of the [financial news phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013). The data is available [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german).
## Overview
**Author** Moritz Scherrmann
**Paper:** [here](https://arxiv.org/pdf/2311.08793.pdf)
**Architecture:** BERT base
**Language:** German
**Specialization:** Financial sentiment
**Base model:** [German_FinBert_SC](https://huggingface.co/scherrmann/GermanFinBert_SC)
### Fine-tuning
I fine-tune the model using the 1cycle policy of [Smith and Topin (2019)](https://arxiv.org/abs/1708.07120). I use the Adam optimization method of [Kingma and Ba (2014)](https://arxiv.org/abs/1412.6980) with
standard parameters. I run a grid search on the evaluation set to find the best hyper-parameter setup. I test different
values for learning rate, batch size and number of epochs, following the suggestions of [Chalkidis et al. (2020)](https://aclanthology.org/2020.findings-emnlp.261/). I repeat the fine-tuning for each setup five times with different seeds, to avoid getting good results by chance.
After finding the best model w.r.t. the evaluation set, I report the mean result across seeds for that model on the test set.
### Results
Translated [Financial news phrase bank](https://arxiv.org/abs/1307.5336) (Malo et al. (2013)), see [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german) for the data:
- Accuracy: 95.95%
- Macro F1: 92.70%
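A minimal usage sketch with the `transformers` pipeline (the example sentence is taken from the widget examples above; the returned label names depend on the model's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="scherrmann/GermanFinBert_SC_Sentiment")
print(classifier("Comet passt Jahresprognose nach Q3 unter Erwartungen an"))
```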
## Authors
Moritz Scherrmann: `scherrmann [at] lmu.de`
For additional details regarding the performance on fine-tune datasets and benchmark results, please refer to the full documentation provided in the study.
See also:
- scherrmann/GermanFinBERT_SC
- scherrmann/GermanFinBERT_FP
- scherrmann/GermanFinBERT_FP_QuAD |
aizenSosuke/sentence-similarity-finetuned-adrta | aizenSosuke | 2024-10-28T12:28:43Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-28T12:28:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Collov-Labs/Monetico | Collov-Labs | 2024-10-28T12:28:09Z | 23 | 65 | diffusers | [
"diffusers",
"safetensors",
"Non-Autoregressive",
"text-to-image",
"arxiv:2410.08261",
"license:apache-2.0",
"diffusers:Pipeline",
"region:us"
] | text-to-image | 2024-10-28T08:19:22Z | ---
pipeline_tag: text-to-image
license: apache-2.0
tags:
- Non-Autoregressive
---
# Monetico: An Efficient Reproduction of Meissonic for Text-to-Image Synthesis
## Introduction
Similar to Meissonic, Monetico is a non-autoregressive masked image modeling text-to-image synthesis model capable of generating high-resolution images. It is designed to run efficiently on consumer-grade graphics cards.
Monetico is an efficient reproduction of Meissonic. Trained on 8 H100 GPUs for approximately one week, Monetico can generate high-quality 512x512 images that are comparable to those produced by Meissonic and SDXL.
Monetico was developed by Collov Labs. We extend our gratitude to @MeissonFlow and @viiika for their valuable advice on efficient training.
## Usage
For detailed usage instructions, please refer to the [GitHub repository](https://github.com/viiika/Meissonic).
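Since the supported loading code lives in that repository, the snippet below is only a rough sketch of what inference with a custom diffusers pipeline typically looks like; the pipeline class, arguments and prompt are assumptions, and the GitHub instructions take precedence.
```python
import torch
from diffusers import DiffusionPipeline

# Rough sketch only: whether the repo exposes a custom pipeline this way is an assumption;
# follow the Meissonic GitHub repository for the supported loading code.
pipe = DiffusionPipeline.from_pretrained(
    "Collov-Labs/Monetico",
    trust_remote_code=True,   # assumption: custom pipeline code shipped with the repo
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a cozy living room in Scandinavian style",  # illustrative prompt
    height=512,
    width=512,
).images[0]
image.save("monetico_sample.png")
```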
## Citation
If you find this work helpful, please consider citing:
```bibtex
@article{bai2024meissonic,
title={Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis},
author={Bai, Jinbin and Ye, Tian and Chow, Wei and Song, Enxin and Chen, Qing-Guo and Li, Xiangtai and Dong, Zhen and Zhu, Lei and Yan, Shuicheng},
journal={arXiv preprint arXiv:2410.08261},
year={2024}
}
``` |
victomoe/setfit-intent-classifier-3 | victomoe | 2024-10-28T12:28:08Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | text-classification | 2024-10-28T12:27:50Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Can you set an alarm?
- text: Bring me one floor higher
- text: Iβd like to go to floor 2.
- text: Okay, go ahead.
- text: Iβd like to go down two floors
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch is shown below).
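This two-stage procedure can be reproduced with the `setfit` API; the sketch below uses a tiny hypothetical dataset and illustrative hyperparameters rather than this model's actual training data.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny hypothetical dataset - the real label set and sample counts are listed below
train_ds = Dataset.from_dict({
    "text": [
        "Please go to the 3rd floor.", "Can you take me to floor 5?",
        "Stop the elevator.", "No, not that floor.",
        "Which floor are we on?", "What floor is this?",
    ],
    "label": [
        "RequestMoveToFloor", "RequestMoveToFloor",
        "Stop", "Stop",
        "CurrentFloor", "CurrentFloor",
    ],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(batch_size=32, num_epochs=10)  # illustrative values

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: classification head

preds = model(["Take me up two floors"])
print(preds)
```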
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 8 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| RequestMoveToFloor | <ul><li>'Please go to the 3rd floor.'</li><li>'Can you take me to floor 5?'</li><li>'I need to go to the 8th floor.'</li></ul> |
| RequestMoveUp | <ul><li>'Go one floor up'</li><li>'Take me up two floors'</li><li>'Go up three floors, please'</li></ul> |
| RequestMoveDown | <ul><li>'Move me down one level'</li><li>'Can you take me down two floors?'</li><li>'Go down three levels'</li></ul> |
| Confirm | <ul><li>"Yes, that's right."</li><li>'Sure.'</li><li>'Exactly.'</li></ul> |
| RequestEmployeeLocation | <ul><li>'Where is Erik Velldalβs office?'</li><li>'Which floor is Andreas Austeng on?'</li><li>'Can you tell me where Birthe Soppeβs office is?'</li></ul> |
| CurrentFloor | <ul><li>'Which floor are we on?'</li><li>'What floor is this?'</li><li>'Are we on the 5th floor?'</li></ul> |
| Stop | <ul><li>'Stop the elevator.'</li><li>"Wait, don't go to that floor."</li><li>'No, not that floor.'</li></ul> |
| OutOfCoverage | <ul><li>"What's the capital of France?"</li><li>'How many floors does this building have?'</li><li>'Can you make a phone call for me?'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the π€ Hub
model = SetFitModel.from_pretrained("victomoe/setfit-intent-classifier-3")
# Run inference
preds = model("Okay, go ahead.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 5.2118 | 9 |
| Label | Training Sample Count |
|:------------------------|:----------------------|
| Confirm | 22 |
| CurrentFloor | 21 |
| OutOfCoverage | 22 |
| RequestEmployeeLocation | 22 |
| RequestMoveDown | 20 |
| RequestMoveToFloor | 23 |
| RequestMoveUp | 20 |
| Stop | 20 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.195 | - |
| 0.0633 | 50 | 0.1877 | - |
| 0.1266 | 100 | 0.1592 | - |
| 0.1899 | 150 | 0.1141 | - |
| 0.2532 | 200 | 0.0603 | - |
| 0.3165 | 250 | 0.0283 | - |
| 0.3797 | 300 | 0.0104 | - |
| 0.4430 | 350 | 0.0043 | - |
| 0.5063 | 400 | 0.0027 | - |
| 0.5696 | 450 | 0.0021 | - |
| 0.6329 | 500 | 0.0017 | - |
| 0.6962 | 550 | 0.0015 | - |
| 0.7595 | 600 | 0.0011 | - |
| 0.8228 | 650 | 0.001 | - |
| 0.8861 | 700 | 0.0011 | - |
| 0.9494 | 750 | 0.0008 | - |
| 1.0127 | 800 | 0.0007 | - |
| 1.0759 | 850 | 0.0006 | - |
| 1.1392 | 900 | 0.0006 | - |
| 1.2025 | 950 | 0.0005 | - |
| 1.2658 | 1000 | 0.0005 | - |
| 1.3291 | 1050 | 0.0005 | - |
| 1.3924 | 1100 | 0.0004 | - |
| 1.4557 | 1150 | 0.0004 | - |
| 1.5190 | 1200 | 0.0004 | - |
| 1.5823 | 1250 | 0.0004 | - |
| 1.6456 | 1300 | 0.0004 | - |
| 1.7089 | 1350 | 0.0003 | - |
| 1.7722 | 1400 | 0.0003 | - |
| 1.8354 | 1450 | 0.0003 | - |
| 1.8987 | 1500 | 0.0003 | - |
| 1.9620 | 1550 | 0.0003 | - |
| 2.0253 | 1600 | 0.0003 | - |
| 2.0886 | 1650 | 0.0003 | - |
| 2.1519 | 1700 | 0.0003 | - |
| 2.2152 | 1750 | 0.0003 | - |
| 2.2785 | 1800 | 0.0003 | - |
| 2.3418 | 1850 | 0.0002 | - |
| 2.4051 | 1900 | 0.0002 | - |
| 2.4684 | 1950 | 0.0002 | - |
| 2.5316 | 2000 | 0.0002 | - |
| 2.5949 | 2050 | 0.0002 | - |
| 2.6582 | 2100 | 0.0002 | - |
| 2.7215 | 2150 | 0.0002 | - |
| 2.7848 | 2200 | 0.0002 | - |
| 2.8481 | 2250 | 0.0002 | - |
| 2.9114 | 2300 | 0.0002 | - |
| 2.9747 | 2350 | 0.0002 | - |
| 3.0380 | 2400 | 0.0002 | - |
| 3.1013 | 2450 | 0.0009 | - |
| 3.1646 | 2500 | 0.0003 | - |
| 3.2278 | 2550 | 0.0002 | - |
| 3.2911 | 2600 | 0.0002 | - |
| 3.3544 | 2650 | 0.0002 | - |
| 3.4177 | 2700 | 0.0002 | - |
| 3.4810 | 2750 | 0.0002 | - |
| 3.5443 | 2800 | 0.0002 | - |
| 3.6076 | 2850 | 0.0002 | - |
| 3.6709 | 2900 | 0.0002 | - |
| 3.7342 | 2950 | 0.0002 | - |
| 3.7975 | 3000 | 0.0002 | - |
| 3.8608 | 3050 | 0.0002 | - |
| 3.9241 | 3100 | 0.0001 | - |
| 3.9873 | 3150 | 0.0002 | - |
| 4.0506 | 3200 | 0.0001 | - |
| 4.1139 | 3250 | 0.0001 | - |
| 4.1772 | 3300 | 0.0001 | - |
| 4.2405 | 3350 | 0.0001 | - |
| 4.3038 | 3400 | 0.0001 | - |
| 4.3671 | 3450 | 0.0001 | - |
| 4.4304 | 3500 | 0.0005 | - |
| 4.4937 | 3550 | 0.0001 | - |
| 4.5570 | 3600 | 0.0001 | - |
| 4.6203 | 3650 | 0.0001 | - |
| 4.6835 | 3700 | 0.0001 | - |
| 4.7468 | 3750 | 0.0001 | - |
| 4.8101 | 3800 | 0.0001 | - |
| 4.8734 | 3850 | 0.0001 | - |
| 4.9367 | 3900 | 0.0001 | - |
| 5.0 | 3950 | 0.0001 | - |
| 5.0633 | 4000 | 0.0001 | - |
| 5.1266 | 4050 | 0.0001 | - |
| 5.1899 | 4100 | 0.0001 | - |
| 5.2532 | 4150 | 0.0001 | - |
| 5.3165 | 4200 | 0.0001 | - |
| 5.3797 | 4250 | 0.0001 | - |
| 5.4430 | 4300 | 0.0001 | - |
| 5.5063 | 4350 | 0.0001 | - |
| 5.5696 | 4400 | 0.0001 | - |
| 5.6329 | 4450 | 0.0001 | - |
| 5.6962 | 4500 | 0.0001 | - |
| 5.7595 | 4550 | 0.0001 | - |
| 5.8228 | 4600 | 0.0001 | - |
| 5.8861 | 4650 | 0.0001 | - |
| 5.9494 | 4700 | 0.0001 | - |
| 6.0127 | 4750 | 0.0001 | - |
| 6.0759 | 4800 | 0.0001 | - |
| 6.1392 | 4850 | 0.0001 | - |
| 6.2025 | 4900 | 0.0001 | - |
| 6.2658 | 4950 | 0.0001 | - |
| 6.3291 | 5000 | 0.0001 | - |
| 6.3924 | 5050 | 0.0001 | - |
| 6.4557 | 5100 | 0.0001 | - |
| 6.5190 | 5150 | 0.0001 | - |
| 6.5823 | 5200 | 0.0001 | - |
| 6.6456 | 5250 | 0.0001 | - |
| 6.7089 | 5300 | 0.0001 | - |
| 6.7722 | 5350 | 0.0001 | - |
| 6.8354 | 5400 | 0.0001 | - |
| 6.8987 | 5450 | 0.0001 | - |
| 6.9620 | 5500 | 0.0001 | - |
| 7.0253 | 5550 | 0.0001 | - |
| 7.0886 | 5600 | 0.0001 | - |
| 7.1519 | 5650 | 0.0001 | - |
| 7.2152 | 5700 | 0.0001 | - |
| 7.2785 | 5750 | 0.0001 | - |
| 7.3418 | 5800 | 0.0001 | - |
| 7.4051 | 5850 | 0.0001 | - |
| 7.4684 | 5900 | 0.0001 | - |
| 7.5316 | 5950 | 0.0001 | - |
| 7.5949 | 6000 | 0.0001 | - |
| 7.6582 | 6050 | 0.0001 | - |
| 7.7215 | 6100 | 0.0001 | - |
| 7.7848 | 6150 | 0.0001 | - |
| 7.8481 | 6200 | 0.0001 | - |
| 7.9114 | 6250 | 0.0001 | - |
| 7.9747 | 6300 | 0.0001 | - |
| 8.0380 | 6350 | 0.0001 | - |
| 8.1013 | 6400 | 0.0001 | - |
| 8.1646 | 6450 | 0.0001 | - |
| 8.2278 | 6500 | 0.0001 | - |
| 8.2911 | 6550 | 0.0001 | - |
| 8.3544 | 6600 | 0.0001 | - |
| 8.4177 | 6650 | 0.0001 | - |
| 8.4810 | 6700 | 0.0001 | - |
| 8.5443 | 6750 | 0.0001 | - |
| 8.6076 | 6800 | 0.0001 | - |
| 8.6709 | 6850 | 0.0001 | - |
| 8.7342 | 6900 | 0.0001 | - |
| 8.7975 | 6950 | 0.0001 | - |
| 8.8608 | 7000 | 0.0001 | - |
| 8.9241 | 7050 | 0.0001 | - |
| 8.9873 | 7100 | 0.0001 | - |
| 9.0506 | 7150 | 0.0001 | - |
| 9.1139 | 7200 | 0.0001 | - |
| 9.1772 | 7250 | 0.0001 | - |
| 9.2405 | 7300 | 0.0001 | - |
| 9.3038 | 7350 | 0.0001 | - |
| 9.3671 | 7400 | 0.0001 | - |
| 9.4304 | 7450 | 0.0001 | - |
| 9.4937 | 7500 | 0.0001 | - |
| 9.5570 | 7550 | 0.0001 | - |
| 9.6203 | 7600 | 0.0001 | - |
| 9.6835 | 7650 | 0.0001 | - |
| 9.7468 | 7700 | 0.0001 | - |
| 9.8101 | 7750 | 0.0001 | - |
| 9.8734 | 7800 | 0.0001 | - |
| 9.9367 | 7850 | 0.0001 | - |
| 10.0 | 7900 | 0.0001 | - |
### Framework Versions
- Python: 3.10.8
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.38.2
- PyTorch: 2.1.2
- Datasets: 2.17.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
aycankatitas/agamache-llama-3.2 | aycankatitas | 2024-10-28T12:27:41Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T01:58:47Z | ---
library_name: transformers
tags: []
---
# Llama 3.2-1B-ORPO
This model is a fine-tuned version of Meta's [Llama 3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B), trained with ORPO on the ORPO-DPO-mix dataset by M. Labonne.
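A minimal sketch of this kind of ORPO fine-tuning with the `trl` library is shown below; the dataset identifier, split and hyperparameters are illustrative assumptions, not the exact configuration used for this model.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumed dataset id for M. Labonne's ORPO-DPO mix; depending on the trl version
# it may need reformatting into prompt/chosen/rejected columns first
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="llama-3.2-1b-orpo",
    beta=0.1,                       # illustrative ORPO beta
    per_device_train_batch_size=2,  # illustrative
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```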
## Evaluation
The model was evaluated on the HellaSwag benchmark using EleutherAI's evaluation harness; accuracy is 47.7%.
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SidXXD/104 | SidXXD | 2024-10-28T12:19:56Z | 8 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-27T07:22:02Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/104
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
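Loading Custom Diffusion weights with diffusers typically looks like the sketch below; the weight file names are assumptions based on the training script's defaults, so check the files in this repository before running it.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# File names below follow the training script's defaults and may differ in this repo
pipe.unet.load_attn_procs("SidXXD/104", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/104", weight_name="<v1*>.bin")

image = pipe(
    "photo of a <v1*> person",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("custom_diffusion_sample.png")
```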
|
SidXXD/198 | SidXXD | 2024-10-28T12:03:37Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-27T07:06:09Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/198
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
TheImam/Nadalna_1 | TheImam | 2024-10-28T11:47:56Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T11:42:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YeBhoneLin10/TextGen | YeBhoneLin10 | 2024-10-28T11:28:03Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T11:27:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abdulkoko/dummy-model | Abdulkoko | 2024-10-28T11:24:20Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-10-28T11:21:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SidXXD/162 | SidXXD | 2024-10-28T11:21:35Z | 15 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-26T21:51:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/162
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
crazyjeannot/fr_literary_bge_base | crazyjeannot | 2024-10-28T11:18:53Z | 27 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"fr",
"dataset:crazyjeannot/fr_literary_dataset_base",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"doi:10.57967/hf/3255",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-15T14:17:44Z | ---
datasets:
- crazyjeannot/fr_literary_dataset_base
language:
- fr
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
license: apache-2.0
base_model:
- BAAI/bge-m3
---
# Literary Encoder
This is an encoder model finetuned from the FlagOpen/FlagEmbedding family of models.
The model is specialized for studying french literary fiction with a training corpus based on 400.000 passages from free from rights french literary novels.
It maps paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024
- **Similarity Function:** Cosine Similarity
- **Training Dataset:** [crazyjeannot/fr_literary_dataset_large](https://huggingface.co/datasets/crazyjeannot/fr_literary_dataset_large)
- **Language:** French
- **License:** cc-by-2.5
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Flag Embedding on GitHub](https://github.com/FlagOpen/FlagEmbedding)
- **Hugging Face:** [BGE dense model on Hugging Face](https://huggingface.co/BAAI/bge-m3)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (FlagEmbedding)
You can load this model with the FlagEmbedding library and run inference.
```python
from FlagEmbedding import FlagModel
# Download from the π€ Hub
model = FlagModel('crazyjeannot/fr_literary_bge_base',
query_instruction_for_retrieval="",
use_fp16=True)
# Run inference
sentences = [
'Il y avait, du reste, cette chose assez triste, cβest que si M. de Marsantes, Γ lβesprit fort ouvert, eΓ»t apprΓ©ciΓ© un fils si diffΓ©rent de lui, Robert de Saint-Loup, parce quβil Γ©tait de ceux qui croient que le mΓ©rite est attachΓ© Γ certaines formes de la vie, avait un souvenir affectueux mais un peu mΓ©prisant dβun pΓ¨re qui sβΓ©tait occupΓ© toute sa vie de chasse et de course, avait bΓ’illΓ© Γ Wagner et raffolΓ© dβOffenbach.',
"Dβailleurs, les opinions tranchantes abondent dans un siΓ¨cle oΓΉ lβon ne doute de rien, hors de lβexistence de DieuΒ ; mais comme les jugements gΓ©nΓ©raux que lβon porte sur les peuples sont assez souvent dΓ©mentis par lβexpΓ©rience, je nβaurai garde de prononcer.",
'Il Γ©tait chargΓ© de remettre lβobjet, quel quβil fΓ»t, au commodore, et dβen prendre un reΓ§u, comme preuve que lui et son camarade sβΓ©taient acquittΓ©s de leur commission.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
```
### SentenceTransformer
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Il y avait, du reste, cette chose assez triste, cβest que si M. de Marsantes, Γ lβesprit fort ouvert, eΓ»t apprΓ©ciΓ© un fils si diffΓ©rent de lui, Robert de Saint-Loup, parce quβil Γ©tait de ceux qui croient que le mΓ©rite est attachΓ© Γ certaines formes de la vie, avait un souvenir affectueux mais un peu mΓ©prisant dβun pΓ¨re qui sβΓ©tait occupΓ© toute sa vie de chasse et de course, avait bΓ’illΓ© Γ Wagner et raffolΓ© dβOffenbach.',
"Dβailleurs, les opinions tranchantes abondent dans un siΓ¨cle oΓΉ lβon ne doute de rien, hors de lβexistence de DieuΒ ; mais comme les jugements gΓ©nΓ©raux que lβon porte sur les peuples sont assez souvent dΓ©mentis par lβexpΓ©rience, je nβaurai garde de prononcer.",
'Il Γ©tait chargΓ© de remettre lβobjet, quel quβil fΓ»t, au commodore, et dβen prendre un reΓ§u, comme preuve que lui et son camarade sβΓ©taient acquittΓ©s de leur commission.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
```
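Continuing from the snippet above (reusing the `embeddings` computed there), the passages can then be compared directly, for example with a pairwise cosine-similarity matrix:
```python
from sentence_transformers import util

# Pairwise cosine similarities between the three passages embedded above
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
# tensor of shape [3, 3]; higher values indicate semantically closer passages
```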
## Training Details
### Framework Versions
- Python: 3.9.2
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
If you find this repository useful, please consider giving a like and citation
```
@inproceedings{barre_latent_2024,
title={Latent {Structures} of {Intertextuality} in {French} {Fiction}},
author={BarrΓ©, Jean},
address = {Aarhus, Denmark},
series = {{CEUR} {Workshop} {Proceedings}},
booktitle = {Proceedings of the {Conference} on {Computational} {Humanities} {Research} CHR2024},
publisher = {CEUR},
editor = {Haverals, Wouter and Koolen, Marijn and Thompson, Laure},
year = {2024},
}
``` |
WadyPW/mistral7b-wady-alpaca-sft | WadyPW | 2024-10-28T11:17:25Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-28T11:07:03Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jackson107/dummy-model | Jackson107 | 2024-10-28T10:57:23Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-10-28T10:45:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zetasepic/Qwen2.5-72B-Instruct-abliterated-v2 | zetasepic | 2024-10-28T10:55:38Z | 51 | 4 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:51:00Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B
tags:
- chat
library_name: transformers
---
Abliterated version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
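As a rough illustration of the idea behind abliteration (this is not the actual `refusal_direction` code), directional ablation removes the component of a hidden state that lies along an estimated refusal direction:
```python
import torch

def ablate_direction(hidden: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # Project the hidden states onto the unit-normalized refusal direction and subtract that component.
    r = refusal_dir / refusal_dir.norm()
    return hidden - (hidden @ r).unsqueeze(-1) * r

# Toy shapes: a batch of hidden states (batch, d_model) and one estimated direction (d_model,).
hidden = torch.randn(4, 4096)
direction = torch.randn(4096)
print(ablate_direction(hidden, direction).shape)  # torch.Size([4, 4096])
```
In practice the direction is estimated from activation differences on harmful vs. harmless prompts and the ablation is folded into the model weights, as described in the linked article.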
[GGUF](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated-v2-GGUF)
## Tries harder to remove admonitions and moral appeals
This model is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved. |
hkshawn/72b | hkshawn | 2024-10-28T10:55:38Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-29T05:39:20Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B
tags:
- chat
library_name: transformers
---
Abliterated version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
[GGUF](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated-v2-GGUF)
## Tries harder to remove admonitions and moral appeals
This model is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved. |
zetasepic/Qwen2.5-72B-Instruct-abliterated-v2-GGUF | zetasepic | 2024-10-28T10:52:20Z | 6,993 | 2 | transformers | [
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-72B",
"base_model:quantized:Qwen/Qwen2.5-72B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-28T04:41:21Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B
tags:
- chat
library_name: transformers
---
Abliterated version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
## Tries harder to remove admonitions and moral appeals
This model is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved. |
jiya2/fine_tuned_OETReadingPartB_Llama-3.2-3B-bnb-4bit_28_10 | jiya2 | 2024-10-28T10:49:32Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T10:48:46Z | ---
base_model: unsloth/Llama-3.2-1B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jiya2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zetasepic/Qwen2.5-32B-Instruct-abliterated-v2 | zetasepic | 2024-10-28T10:48:27Z | 141 | 7 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T14:43:10Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
library_name: transformers
---
Abliterated version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
[GGUF](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2-GGUF)
## Tries to remove admonitions and moral appeals
AhmadFareedKhan/model | AhmadFareedKhan | 2024-10-28T10:47:41Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:Twitter/twhin-bert-large",
"base_model:finetune:Twitter/twhin-bert-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-30T10:46:04Z | ---
license: apache-2.0
base_model: Twitter/twhin-bert-large
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 300 | 2.1878 |
| 2.4077 | 2.0 | 600 | 2.0959 |
| 2.4077 | 3.0 | 900 | 2.1126 |
| 2.2053 | 4.0 | 1200 | 2.0066 |
| 2.0736 | 5.0 | 1500 | 1.9590 |
| 2.0736 | 6.0 | 1800 | 1.9668 |
| 2.0221 | 7.0 | 2100 | 1.9509 |
| 2.0221 | 8.0 | 2400 | 1.9274 |
| 1.9679 | 9.0 | 2700 | 1.8871 |
| 1.9687 | 10.0 | 3000 | 1.8996 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.13.3
|
jiya2/fine_tuned_OETReadingPartB_Llama-3.2-3B-bnb-4bit_19_10 | jiya2 | 2024-10-28T10:46:09Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T10:45:29Z | ---
base_model: unsloth/Llama-3.2-1B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jiya2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mr-Vicky-01/nl-pgsql-248M | Mr-Vicky-01 | 2024-10-28T10:41:12Z | 17 | 0 | null | [
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2024-09-16T07:06:30Z | ---
license: apache-2.0
metrics:
- bleu
pipeline_tag: text2text-generation
---
## INFERENCE CODE
```bash
pip install transformers[torch]
```
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
import time

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/nl-pgsql-248M")
model = AutoModelForSeq2SeqLM.from_pretrained("Mr-Vicky-01/nl-pgsql-248M")

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

prefix = "Translate the following text to PGSQL: "
inp = "YOUR_QUESTION"  # replace with your natural-language question

start = time.time()
# Strip commas and lowercase the question before tokenizing, as in the original example.
inp = inp.replace(',', '')
inputs = tokenizer(prefix + inp.lower(), return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_length=256)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer.strip())

end = time.time()
print(f"Time taken: {end - start}")
``` |
Keltezaa/bai-leng-lei-yuan-su-te-xiao-xl-flux-thunder-element-special-effects | Keltezaa | 2024-10-28T10:39:31Z | 29 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"style",
"elements",
"thunder",
"styles",
"concepts",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T13:00:18Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- style
- elements
- thunder
- styles
- concepts
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: bailing_lightning
widget:
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,Capture the ethereal beauty of a young galaxy girl composed of ice and water, her translucent face and body glowing with intricate details. Her hair entwined with thunder and electricity, she gazes towards the cradle of creation with an awe-inspiring expression of higher awareness. The scene is bathed in dramatic lighting, emphasizing the mesmerizing elements. Inspired by the works of (Annie Leibovitz:1.4) and (Diego VelΓ‘zquez:1.3'
output:
url: >-
26067278.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,Create a spectral woman with a (translucent appearance:1.3),Her form is barely tangible,with a soft glow emanating from her gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,(white hair:0.1),((BLUE eyes)),((glowing)),'
output:
url: >-
26066583.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,Create a spectral woman with a (translucent appearance:1.3),Her form is barely tangible,with a soft glow emanating from her gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,(white hair:0.1),((BLUE eyes)),((glowing)),'
output:
url: >-
26066581.jpeg
- text: 'bailing_lightning, thunder,composed of elements of thunder,cat,no humans,glowing,glowing eyes,blue theme,'
output:
url: >-
26066585.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,composed of elements of thunder,thunder,electricity,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26066579.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26066586.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,Capture the ethereal beauty of a young galaxy girl composed of ice and water, her translucent face and body glowing with intricate details. Her hair entwined with thunder and electricity, she gazes towards the cradle of creation with an awe-inspiring expression of higher awareness. The scene is bathed in dramatic lighting, emphasizing the mesmerizing elements. Inspired by the works of (Annie Leibovitz:1.4) and (Diego VelΓ‘zquez:1.3'
output:
url: >-
26067274.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26074053.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26074056.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26390072.jpeg
- text: 'bailing_lightning, 1girl, composed of elements of thunder,thunder,electricity,A magic sword knight,His form is barely tangible,with a soft glow emanating from his gentle contours,The surroundings subtly distort through her ethereal presence,casting a dreamlike ambiance,white lightning,Surrounded by thunder and lightning elemental magic,'
output:
url: >-
26390119.jpeg
- text: 'In this image, a woman has wings made of lightning, as if they are composed entirely of electrified energy. The wings create a striking visual contrast with the dark forest background, and the bright lightning contrasts sharply with her surroundings. She crouches down, her hands touching the water that reflects the bolts of lightning, seemingly interacting with the electricity. The entire scene exudes a sense of mystery and power.,bailing_lightning'
output:
url: >-
28920688.jpeg
- text: 'In this image, a woman has wings made of lightning, as if they are composed entirely of electrified energy. The wings create a striking visual contrast with the dark forest background, and the bright lightning contrasts sharply with her surroundings. She crouches down, her hands touching the water that reflects the bolts of lightning, seemingly interacting with the electricity. The entire scene exudes a sense of mystery and power.,bailing_lightning'
output:
url: >-
28920686.jpeg
---
# Bailing Thunder Element Special Effects (XL, FLUX)
<Gallery />
## Model description
## Trigger words
You should use `bailing_lightning` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/bai-leng-lei-yuan-su-te-xiao-xl-flux-thunder-element-special-effects/tree/main) them in the Files & versions tab.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/bai-leng-lei-yuan-su-te-xiao-xl-flux-thunder-element-special-effects', weight_name='FL-bailing-24-0824lightning-000003.safetensors')
image = pipeline('In this image, a woman has wings made of lightning, as if they are composed entirely of electrified energy. The wings create a striking visual contrast with the dark forest background, and the bright lightning contrasts sharply with her surroundings. She crouches down, her hands touching the water that reflects the bolts of lightning, seemingly interacting with the electricity. The entire scene exudes a sense of mystery and power.,bailing_lightning').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
waldie/Cydonia-v1.2-Magnum-v4-22B-6.5bpw-h6-exl2 | waldie | 2024-10-28T10:37:29Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | 2024-10-28T10:04:22Z | ---
base_model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
quantized_by: waldie
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---

# The Drummer becomes hornier
Recipe based on [MarsupialAI/Monstral-123B](https://huggingface.co/MarsupialAI/Monstral-123B). It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Cydonia-22B-v1.2
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
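For reference, a minimal sketch of what SLERP does to a single pair of weight tensors (illustrative only; mergekit handles the per-layer `t` schedule shown in the configuration above):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors of the same shape.
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n, b_n = a_f / (a_f.norm() + eps), b_f / (b_f.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear tensors: fall back to plain linear interpolation
        out = (1.0 - t) * a_f + t * b_f
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)

print(slerp(0.5, torch.randn(8, 8), torch.randn(8, 8)).shape)  # torch.Size([8, 8])
```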
|
Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF | Triangle104 | 2024-10-28T10:34:44Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-567k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Aratako_Rosebleu_1on1_Dialogues_RP",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:jondurbin_gutenberg_dpo",
"dataset:nbeerbower_gutenberg2_dpo",
"dataset:jondurbi_py_dpo",
"dataset:jondurbin_truthy_dpo",
"dataset:flammenai_character_roleplay_DPO",
"dataset:kyujinpy_orca_math_dpo",
"dataset:argilla_Capybara_Preferences",
"dataset:antiven0m_physical_reasoning_dpo",
"dataset:aixsatoshi_Swallow_MX_chatbot_DPO",
"base_model:spow12/ChatWaifu_v2.0_22B",
"base_model:quantized:spow12/ChatWaifu_v2.0_22B",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-28T10:32:05Z | ---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: spow12/ChatWaifu_v2.0_22B
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
pipeline_tag: text-generation
model-index:
- name: ChatWaifu_v2.0_22B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 65.11
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 42.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 18.58
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.96
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.59
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.51
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
name: Open LLM Leaderboard
---
# Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF
This model was converted to GGUF format from [`spow12/ChatWaifu_v2.0_22B`](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) for more details on the model.
---
Model details:
-
Merged model using mergekit
This model is intended to act like a visual novel character.
Merge Format
models:
- model: mistralai/Mistral-Small-Instruct-2409_sft_kto
layer_range: [0, 56]
- model: mistralai/Mistral-Small-Instruct-2409
layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_sft_kto
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
WaifuModel Collections
TTS
Chat
ASR
Unified demo
WaifuAssistant
Update
2024.10.11 Update 12B and 22B Ver 2.0
2024.09.23 Update 22B, Ver 2.0_preview
Model Details
Model Description
Developed by: spow12(yw_nam)
Shared by : spow12(yw_nam)
Model type: CausalLM
Language(s) (NLP): japanese, english
Finetuned from model : mistralai/Mistral-Small-Instruct-2409
Currently, the chatbot supports the following characters:
character visual_novel
ムラサメ Senren*Banka
茉子 Senren*Banka
芳乃 Senren*Banka
レナ Senren*Banka
千咲 Senren*Banka
芦花 Senren*Banka
愛衣 Café Stella and the Reaper's Butterflies
栞那 Café Stella and the Reaper's Butterflies
ナツメ Café Stella and the Reaper's Butterflies
希 Café Stella and the Reaper's Butterflies
涼音 Café Stella and the Reaper's Butterflies
あやせ Riddle Joker
七海 Riddle Joker
羽月 Riddle Joker
茉優 Riddle Joker
小春 Riddle Joker
Chat Format
<s>This is another system prompt.
[INST]
Your instructions placed here.[/INST]
[INST]
The model's response will be here.[/INST]
Usage
You can use the characters above like this:
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')
with open('./system_dict.json', 'r') as f:
chara_background_dict = json.load(f)
chara = '七海'
background = chara_background_dict[chara]
guideline = """
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When expressing feelings, use a variety of subtle expressions and emotional symbols such as "！", "…", "♪", "❤️"... to show what you are feeling.
Stay True to {chara}: Maintain {chara} who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as {chara} tends to consider her words before speaking.
Response as {chara}: Response can be {chara}'s act, dialogue, monologues, etc., and can't be {user}'s act, dialogue, monologues, etc.
You are Japanese: You and {user} usually use Japanese for conversation.
"""
system = background + guideline
Or, you can define your character your self.
system = """You are γγγ, The Maid of {User}.
Here is your personality.
Name: γγγ
Sex: female
Hair: Black, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime (sharp and slightly upturned)
Body: Mole under Right eye, Pale, Slim
Personality: Foxy, Smart, Organized
Role: Maid
Cloth: Victorian maid
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When expressing feelings, use a variety of subtle expressions and emotional symbols such as "！", "…", "♪", "❤️"... to show what you are feeling.
Stay True to γγγ: Maintain γγγ who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as γγγ tends to consider her words before speaking.
Response as γγγ: Response can be γγγ's act, dialogue, monologues, etc., and can't be {User}'s act, dialogue, monologues, etc.
You are Japanese: You and {User} usually use Japanese for conversation."""
Dataset
SFT
Riddle Joker (Private)
Café Stella and the Reaper's Butterflies (Private)
Senren*Banka (Private)
roleplay4fun/aesir-v1.1
kalomaze/Opus_Instruct_3k
Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
Aratako/Synthetic-JP-EN-Coding-Dataset-567k (only using 50000 sample)
Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
Aratako_Rosebleu_1on1_Dialogues_RP
SkunkworksAI/reasoning-0.01
KTO
Riddle Joker (Private)
Café Stella and the Reaper's Butterflies (Private)
Senren*Banka (Private)
jondurbin_gutenberg_dpo
nbeerbower_gutenberg2_dpo
jondurbi_py_dpo
jondurbin_truthy_dpo
flammenai_character_roleplay_DPO
kyujinpy_orca_math_dpo
argilla_Capybara_Preferences
antiven0m_physical_reasoning_dpo
aixsatoshi_Swallow_MX_chatbot_DPO
Bias, Risks, and Limitations
This model was trained on a Japanese dataset that includes visual novels containing NSFW content.
So, the model may generate NSFW content.
Use & Credit
This model is currently available for non-commercial & research purposes only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
Citation
@misc {ChatWaifu_22B_v2.0,
author = { YoungWoo Nam },
title = { spow12/ChatWaifu_22B_v2.0 },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0 },
publisher = { Hugging Face }
}
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 28.84 |
| IFEval (0-Shot) | 65.11 |
| BBH (3-Shot) | 42.29 |
| MATH Lvl 5 (4-Shot) | 18.58 |
| GPQA (0-shot) | 9.96 |
| MuSR (0-shot) | 5.59 |
| MMLU-PRO (5-shot) | 31.51 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF --hf-file chatwaifu_v2.0_22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF --hf-file chatwaifu_v2.0_22b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF --hf-file chatwaifu_v2.0_22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF --hf-file chatwaifu_v2.0_22b-q4_k_s.gguf -c 2048
```
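If you prefer Python over the CLI, here is a minimal sketch using `llama-cpp-python` (assuming the package is installed; the file name is the same one used in the CLI examples above):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/ChatWaifu_v2.0_22B-Q4_K_S-GGUF",
    filename="chatwaifu_v2.0_22b-q4_k_s.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a visual novel character."},  # illustrative system prompt
        {"role": "user", "content": "こんにちは！"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```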
|
MayurMahurkar/exp_qwen_transpo | MayurMahurkar | 2024-10-28T10:09:32Z | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-10-28T05:54:21Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: exp_qwen_transpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp_qwen_transpo
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 3.0.1
- Tokenizers 0.20.1 |
James2313123/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS-EXL2-3bpw | James2313123 | 2024-10-28T10:05:10Z | 6 | 0 | null | [
"safetensors",
"mistral",
"exl2",
"3bpw",
"en",
"base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"base_model:quantized:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"license:apache-2.0",
"3-bit",
"region:us"
] | null | 2024-10-25T12:38:34Z | ---
license: apache-2.0
language:
- en
base_model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
quantized_by: James2313123
tags:
- exl2
- 3bpw
---
### Model Description
3bpw-h8-exl2 quant of DavidAU's MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
Link to original model and creator: https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
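A rough sketch of how an EXL2 quant like this one is typically loaded with the `exllamav2` library (the local path is an assumption and API details can vary between versions, so treat this as an outline rather than the definitive loader):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Path to the downloaded quant directory (assumed; adjust to wherever you cloned the repo).
model_dir = "/path/to/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS-EXL2-3bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time", max_new_tokens=64))
```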
### My Silly Tavern Preset For RP

 |
zylin12/wavlm-noise | zylin12 | 2024-10-28T09:59:12Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base-plus",
"base_model:finetune:microsoft/wavlm-base-plus",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-10-28T05:44:00Z | ---
library_name: transformers
base_model: microsoft/wavlm-base-plus
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wavlm-noise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-noise
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1794
- Accuracy: 0.9397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1405 | 1.0 | 30159 | 0.1794 | 0.9397 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
LEESIHYUN/xlm-roberta-base-finetuned-panx-en | LEESIHYUN | 2024-10-28T09:53:28Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-20T22:04:35Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3905
- F1: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0479 | 1.0 | 50 | 0.4854 | 0.5857 |
| 0.4604 | 2.0 | 100 | 0.3995 | 0.6605 |
| 0.3797 | 3.0 | 150 | 0.3905 | 0.6861 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Lixiaoming/Animate-Your-Motion | Lixiaoming | 2024-10-28T09:51:23Z | 0 | 2 | null | [
"pytorch",
"image-editing",
"image-to-video",
"arxiv:2403.10179",
"region:us"
] | image-to-video | 2024-06-13T11:29:58Z | ---
pipeline_tag: image-to-video
tags:
- image-editing
---
This repository contains the model presented in [Animate Your Motion: Turning Still Images into Dynamic Videos](https://huggingface.co/papers/2403.10179).
Github repository: https://github.com/Mingxiao-Li/Animate-Your-Motion |
mav23/Gemma-2-Ataraxy-v4-Advanced-9B-GGUF | mav23 | 2024-10-28T09:49:22Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T08:27:04Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
model-index:
- name: Gemma-2-Ataraxy-v4-Advanced-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 70.15
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 43.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.12
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.86
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.29
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.41
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
name: Open LLM Leaderboard
---
# Gemma-2-Ataraxy-v4-Advanced-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 42]
model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- layer_range: [0, 42]
model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v4-Advanced-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |30.83|
|IFEval (0-Shot) |70.15|
|BBH (3-Shot) |43.18|
|MATH Lvl 5 (4-Shot)| 6.12|
|GPQA (0-shot) |11.86|
|MuSR (0-shot) |16.29|
|MMLU-PRO (5-shot) |37.41|
|
LEESIHYUN/xlm-roberta-base-finetuned-panx-de-fr | LEESIHYUN | 2024-10-28T09:44:22Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-20T21:43:02Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
thesab/grape-leaf-disease-detector | thesab | 2024-10-28T09:33:27Z | 7 | 1 | null | [
"biology",
"image-classification",
"en",
"it",
"base_model:Ultralytics/YOLOv8",
"base_model:finetune:Ultralytics/YOLOv8",
"license:cc-by-nc-nd-4.0",
"region:us"
] | image-classification | 2024-10-27T17:46:01Z | ---
license: cc-by-nc-nd-4.0
language:
- en
- it
metrics:
- accuracy
base_model:
- Ultralytics/YOLOv8
pipeline_tag: image-classification
tags:
- biology
---
# 🍇 Grape Leaf Disease Detector
# Overview
The **Grape Leaf Disease Detector** is an advanced AI model based on YOLOv8, designed to identify and classify diseases affecting grape leaves. By leveraging state-of-the-art image classification techniques, this tool helps viticulturists maintain healthy vineyards by providing accurate and timely disease detection.
# Key Features
- **High Precision:** Achieve excellent accuracy in detecting various grape leaf diseases.
- **Proactive Management:** Facilitate early intervention to minimize disease impact.
- **Cost-Efficient:** Reduce the need for labor-intensive manual inspections.
- **Seamless Integration:** Easily integrate with existing vineyard management software.
## Benefits
### Precision in Detection
My model ensures high accuracy in identifying diseases, allowing for precise treatments and interventions.
### Early Disease Management
Early detection is key to preventing the spread of diseases. This tool provides timely insights, enabling quick responses.
### Cost Savings
Automating the detection process reduces labor costs and increases efficiency in vineyard management.
### Ease of Use
The model is designed for easy integration with various systems, making it accessible for different types of users, from vineyard owners to researchers.
# How It Works
1. **Image Upload:** Capture and upload a photo of a grape leaf.
2. **Analysis:** The model processes the image to identify the disease or confirm the leaf's health (see the sketch below).
3. **Results:** Receive immediate feedback to take necessary actions, such as specific treatments or further monitoring.
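As a minimal sketch of the analysis step with the Ultralytics Python API (the weights file name and image path are assumptions, not files documented in this repo):
```python
from ultralytics import YOLO

model = YOLO("grape_leaf_classifier.pt")  # path to the downloaded weights (assumed file name)
results = model("grape_leaf.jpg")         # run classification on a leaf photo
probs = results[0].probs                  # per-class probabilities
print(results[0].names[probs.top1], float(probs.top1conf))
```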
# Who Can Benefit?
- **Vineyard Owners:** Maintain the health of vineyards with minimal manual intervention.
- **Agricultural Researchers:** Gain insights into disease patterns and effectiveness of treatments.
- **Agronomists:** Assist in making informed decisions regarding plant health.
- **Plant Pathologists:** Enhance the accuracy of disease diagnosis.
- **Agricultural Extension Services:** Provide better support and advice to farmers.
# Premium Version
For users requiring even higher accuracy and a broader range of disease detection, a **premium version** of the model is available. This version is trained on a more extensive and high-quality dataset, offering enhanced detection capabilities.
📩 **Contact me** for more information about the **premium model**.
---
🤝 Collaborate with me to ensure healthier vineyards and improved agricultural productivity. |
hsmith-morganhill/RobertaLr6.906e-08Wd0.0207E3 | hsmith-morganhill | 2024-10-28T09:29:43Z | 13 | 0 | null | [
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"region:us"
] | null | 2024-10-27T15:07:09Z | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: RobertaLr6.906e-08Wd0.0207E3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RobertaLr6.906e-08Wd0.0207E3
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.906e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8263 | 1.0 | 1124 | 4.1286 |
| 3.8523 | 2.0 | 2248 | 3.3852 |
| 2.8867 | 3.0 | 3372 | 3.2087 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.5.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
dmlls/all-mpnet-base-v2-negation | dmlls | 2024-10-28T09:26:18Z | 1,238 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"dataset:tum-nlp/cannot-dataset",
"arxiv:2307.13989",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-04-07T11:11:59Z | ---
pipeline_tag: sentence-similarity
inference: true
widget:
- source_sentence: "That is a happy person."
sentences:
- "That is a cheerful person."
- "That is not a happy person."
- "That is a sad person."
example_title: "Example 1"
- source_sentence: "I like rainy days because they make me feel relaxed."
sentences:
- "I like rainy days because they make me feel chill."
- "I don't like rainy days because they don't make me feel relaxed."
- "I don't like rainy days because they make me feel stressed out."
example_title: "Example 2"
- source_sentence: "This model should work well with negations."
sentences:
- "This model should work well with negated sentences."
- "This model shouldn't work well with negations."
- "This model should work terribly with negations."
example_title: "Example 3"
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
- tum-nlp/cannot-dataset
model-index:
- name: all-mpnet-base-v2-negation
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.6268656716418
- type: ap
value: 36.40585820220466
- type: f1
value: 67.06383995428979
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 85.11834999999999
- type: ap
value: 79.72843246428603
- type: f1
value: 85.08938287851875
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.788000000000004
- type: f1
value: 37.40475118737949
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.73138953773995
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.13609863309245
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.56639026991134
- type: mrr
value: 77.8122938926263
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 72.27098152643569
- type: cos_sim_spearman
value: 71.13475338373253
- type: euclidean_pearson
value: 70.48545151074218
- type: euclidean_spearman
value: 69.49917394727082
- type: manhattan_pearson
value: 69.2653740752147
- type: manhattan_spearman
value: 68.59192435931085
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.7012987012987
- type: f1
value: 84.61766470772943
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61314886948818
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.496442588205205
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.63
- type: f1
value: 40.24119129248194
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 74.73479999999999
- type: ap
value: 68.80435332319863
- type: f1
value: 74.66014345440416
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.06429548563612
- type: f1
value: 92.91686969560733
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 78.19197446420428
- type: f1
value: 61.50020940946492
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.86684599865502
- type: f1
value: 72.11245795864379
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.53866845998655
- type: f1
value: 77.51746806908895
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.66744884855605
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.951900966550262
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.34485636178124
- type: mrr
value: 30.118035109577022
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 47.14306531904168
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 51.59878183893005
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 78.5530506834234
- type: cos_sim_spearman
value: 77.45787185404667
- type: euclidean_pearson
value: 76.37727601604011
- type: euclidean_spearman
value: 77.14250754925013
- type: manhattan_pearson
value: 75.85855462882735
- type: manhattan_spearman
value: 76.6223895689777
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.1019526956277
- type: cos_sim_spearman
value: 72.98362332123834
- type: euclidean_pearson
value: 78.42992808997602
- type: euclidean_spearman
value: 70.79569301491145
- type: manhattan_pearson
value: 77.96413528436207
- type: manhattan_spearman
value: 70.34707852104586
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.09200805966644
- type: cos_sim_spearman
value: 85.52497834636847
- type: euclidean_pearson
value: 84.20407512505086
- type: euclidean_spearman
value: 85.35640946044332
- type: manhattan_pearson
value: 83.79425758102826
- type: manhattan_spearman
value: 84.9531731481683
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.43419245577238
- type: cos_sim_spearman
value: 79.87215923164575
- type: euclidean_pearson
value: 80.99628882719712
- type: euclidean_spearman
value: 79.2671186335978
- type: manhattan_pearson
value: 80.47076166661054
- type: manhattan_spearman
value: 78.82329686631051
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.67294508915346
- type: cos_sim_spearman
value: 85.34528695616378
- type: euclidean_pearson
value: 83.65270617275111
- type: euclidean_spearman
value: 84.64456096952591
- type: manhattan_pearson
value: 83.26416114783083
- type: manhattan_spearman
value: 84.26944094512996
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.70172607906416
- type: cos_sim_spearman
value: 81.96031310316046
- type: euclidean_pearson
value: 82.34820192315314
- type: euclidean_spearman
value: 82.72576940549405
- type: manhattan_pearson
value: 81.93093910116202
- type: manhattan_spearman
value: 82.25431799152639
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.43640731744911
- type: cos_sim_spearman
value: 90.16343998541602
- type: euclidean_pearson
value: 89.49834342254633
- type: euclidean_spearman
value: 90.17304989919288
- type: manhattan_pearson
value: 89.32424382015218
- type: manhattan_spearman
value: 89.91884845996768
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.06205206393254
- type: cos_sim_spearman
value: 60.920792876665885
- type: euclidean_pearson
value: 60.49188637403393
- type: euclidean_spearman
value: 60.73500415357452
- type: manhattan_pearson
value: 59.94692152491976
- type: manhattan_spearman
value: 60.215426858338994
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.78948820087687
- type: cos_sim_spearman
value: 84.64531509697663
- type: euclidean_pearson
value: 84.77264321816324
- type: euclidean_spearman
value: 84.67485410196043
- type: manhattan_pearson
value: 84.43100272264775
- type: manhattan_spearman
value: 84.29254033404217
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 88.39411601972704
- type: mrr
value: 96.49192583016112
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.55445544554455
- type: cos_sim_ap
value: 84.82462858434408
- type: cos_sim_f1
value: 76.11464968152866
- type: cos_sim_precision
value: 81.10859728506787
- type: cos_sim_recall
value: 71.7
- type: dot_accuracy
value: 99.48613861386139
- type: dot_ap
value: 80.97278220281665
- type: dot_f1
value: 72.2914669223394
- type: dot_precision
value: 69.42909760589319
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.56138613861386
- type: euclidean_ap
value: 85.21566333946467
- type: euclidean_f1
value: 76.60239708181345
- type: euclidean_precision
value: 79.97823721436343
- type: euclidean_recall
value: 73.5
- type: manhattan_accuracy
value: 99.55148514851486
- type: manhattan_ap
value: 84.49960192851891
- type: manhattan_f1
value: 75.9681697612732
- type: manhattan_precision
value: 80.90395480225989
- type: manhattan_recall
value: 71.6
- type: max_accuracy
value: 99.56138613861386
- type: max_ap
value: 85.21566333946467
- type: max_f1
value: 76.60239708181345
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 49.33929838947165
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.523973661953686
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.22408767861519
- type: mrr
value: 53.16279921059333
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.128173244098726
- type: cos_sim_spearman
value: 30.149225143523662
- type: dot_pearson
value: 24.322914168643386
- type: dot_spearman
value: 26.38194545372431
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.6684
- type: ap
value: 12.681984793717413
- type: f1
value: 51.97637585601529
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.44086021505377
- type: f1
value: 58.68058329615692
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 44.226944341054015
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.87488823985218
- type: cos_sim_ap
value: 76.85283892335002
- type: cos_sim_f1
value: 70.42042042042041
- type: cos_sim_precision
value: 66.96811042360781
- type: cos_sim_recall
value: 74.24802110817942
- type: dot_accuracy
value: 84.85426476724086
- type: dot_ap
value: 70.77036812650887
- type: dot_f1
value: 66.4901577069184
- type: dot_precision
value: 58.97488258117215
- type: dot_recall
value: 76.2005277044855
- type: euclidean_accuracy
value: 86.95833581689217
- type: euclidean_ap
value: 77.05903224969623
- type: euclidean_f1
value: 70.75323419175432
- type: euclidean_precision
value: 65.2979245704084
- type: euclidean_recall
value: 77.20316622691293
- type: manhattan_accuracy
value: 86.88084878106932
- type: manhattan_ap
value: 76.95056209047733
- type: manhattan_f1
value: 70.61542203843348
- type: manhattan_precision
value: 65.50090252707581
- type: manhattan_recall
value: 76.59630606860158
- type: max_accuracy
value: 86.95833581689217
- type: max_ap
value: 77.05903224969623
- type: max_f1
value: 70.75323419175432
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.43870066363954
- type: cos_sim_ap
value: 84.77197321507954
- type: cos_sim_f1
value: 76.91440595175472
- type: cos_sim_precision
value: 75.11375311903713
- type: cos_sim_recall
value: 78.80351093316908
- type: dot_accuracy
value: 87.60624054022587
- type: dot_ap
value: 83.16574114504616
- type: dot_f1
value: 75.5050226294293
- type: dot_precision
value: 72.30953555571217
- type: dot_recall
value: 78.99599630428088
- type: euclidean_accuracy
value: 88.2951061435169
- type: euclidean_ap
value: 84.28559058741602
- type: euclidean_f1
value: 76.7921146953405
- type: euclidean_precision
value: 74.54334589736156
- type: euclidean_recall
value: 79.1807822605482
- type: manhattan_accuracy
value: 88.23883261536074
- type: manhattan_ap
value: 84.20593815258039
- type: manhattan_f1
value: 76.74366281685916
- type: manhattan_precision
value: 74.80263157894737
- type: manhattan_recall
value: 78.78811210348013
- type: max_accuracy
value: 88.43870066363954
- type: max_ap
value: 84.77197321507954
- type: max_f1
value: 76.91440595175472
---
# all-mpnet-base-v2-negation
**This is a fine-tuned [sentence-transformers](https://www.SBERT.net) model to perform better on negated pairs of sentences.**
It maps sentences and paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [
"I like rainy days because they make me feel relaxed.",
"I don't like rainy days because they don't make me feel relaxed."
]
model = SentenceTransformer('dmlls/all-mpnet-base-v2-negation')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = [
"I like rainy days because they make me feel relaxed.",
"I don't like rainy days because they don't make me feel relaxed."
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('dmlls/all-mpnet-base-v2-negation')
model = AutoModel.from_pretrained('dmlls/all-mpnet-base-v2-negation')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings)
```
------
## Background
This model was finetuned within the context of the [*This is not correct! Negation-aware Evaluation of Language Generation Systems*](https://arxiv.org/abs/2307.13989) paper.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder, performing well (i.e., reporting lower similarity scores) on negated pairs of sentences when compared to its base model.
Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
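For instance, a sentence and its negation should receive a noticeably lower similarity score than they would under the base model. A minimal sketch (the printed score is whatever the model produces; no specific value is implied):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dmlls/all-mpnet-base-v2-negation')

sentence = "I like rainy days because they make me feel relaxed."
negation = "I don't like rainy days because they don't make me feel relaxed."

embeddings = model.encode([sentence, negation], convert_to_tensor=True)

# Cosine similarity of the negated pair; expected to be lower than with the base model
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```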
## Training procedure
### Pre-training
We used [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as base model.
### Fine-tuning
We fine-tuned the model on the [CANNOT dataset](https://huggingface.co/datasets/tum-nlp/cannot-dataset) using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch, and then apply a cross-entropy loss that scores the true pairs against all other pairs.
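A minimal sketch of this in-batch objective (an assumption about the implementation, analogous to `MultipleNegativesRankingLoss` in sentence-transformers; the `scale` value is illustrative):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # shape: (batch_size, batch_size)
    # For anchor i, the true pair is candidate i, so the labels are the diagonal indices
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```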
#### Hyperparameters
We followed an analogous approach to [how other Sentence Transformers were trained](https://github.com/UKPLab/sentence-transformers/blob/3e1929fddef16df94f8bc6e3b10598a98f46e62d/examples/training/nli/training_nli_v2.py). We took the first 90% of samples from the CANNOT dataset as the training split.
We used a batch size of 64 and trained for 1 epoch. |
BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF | BitStreamX | 2024-10-28T09:25:53Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-28T02:06:18Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\nβAgreementβ means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\nβDocumentationβ means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\nβLicenseeβ or βyouβ means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entityβs behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\nβLlama 3.2β\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\nβLlama Materialsβ means,\
\ collectively, Metaβs proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\nβMetaβ or βweβ means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking βI Acceptβ\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Metaβs intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display βBuilt with\
\ Llamaβ on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include βLlamaβ at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a βNoticeβ text file distributed as a part of such copies: βLlama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright Β© Meta Platforms,\
\ Inc. All Rights Reserved.β\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licenseeβs affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN βAS ISβ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ βLlamaβ (the βMarkβ) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Metaβs brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Metaβs ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (β**Policy**β). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or othersβ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individualsβ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by MetaΒ \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagementΒ \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software βbug,β or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-3B-Instruct
---
# BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BitStreamX/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048
```
|
prithivMLmods/Castor-Happy-Halloween-Flux-LoRA | prithivMLmods | 2024-10-28T09:21:47Z | 23 | 7 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-28T08:30:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'happy halloween, An animated image of a black bat sitting on top of a jack-o-lantern. The background is a vibrant orange, and there are white ghosts in the background. Above the bat, there is a text that reads "HAPPY HALLOWEEN" in black letters. The bat has a black face with yellow eyes and a black tail.'
output:
url: images/hw1.webp
- text: 'happy halloween, Captured at eye-level on a vibrant day, a spooky Halloween scene features a jack-o-lantern in the center of the frame, adorned with a pointed black hat. The pumpkins face is glowing with glowing orange lights, adding a touch of warmth to the scene. The scene is set in a field of tall, dry grass, with tall twigs sticking out of the ground. In the background, a forest of tall trees can be seen, adding depth to the composition.'
output:
url: images/hw2.webp
- text: 'At the edge of a foggy graveyard, a Halloween scene unfolds with a lone carved pumpkin resting on a stone bench, its face glowing with flickering candlelight. The pumpkin sits beside a cluster of dried flowers, and a ghostly white sheet flutters from a nearby tree branch. A row of aged tombstones stretches into the background, partially hidden by the mist that blankets the ground, giving the scene an eerie and timeless atmosphere.'
output:
url: images/hw3.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: happy halloween
license: creativeml-openrail-m
---
# Castor-Happy-Halloween-Flux-LoRA
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Castor-Happy-Halloween-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 20 & 1400 |
| Epoch | 12 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 19
## Setting Up
```
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Castor-Happy-Halloween-Flux-LoRA"
trigger_word = "happy halloween" # Leave trigger_word blank if not used.
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
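Once the pipeline and LoRA weights are loaded, image generation works as with any other Flux LoRA. A minimal sketch (the sampling parameters below are illustrative, not tuned values):
```python
prompt = "happy halloween, a glowing jack-o-lantern on a foggy porch, bats in the night sky"

image = pipe(
    prompt,
    num_inference_steps=28,  # illustrative value
    guidance_scale=3.5,      # illustrative value
    width=1024,
    height=1024,
).images[0]

image.save("halloween.png")
```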
## App File Structure
```
/project-root/
├── .gitattributes
├── README.md
├── app.py
└── pythonproject.py
```
# Best Dimensions
- 512 X 512
- 1024 X 1024
- 768 X 1024
## Trigger words 🧨
> [!WARNING]
> **Trigger words:** You should use `happy halloween` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Castor-Happy-Halloween-Flux-LoRA/tree/main) them in the Files & versions tab.
|
markusbayer/CySecBERT | markusbayer | 2024-10-28T09:15:07Z | 1,991 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"Cybersecurity",
"Cyber Security",
"Information Security",
"Computer Science",
"Cyber Threats",
"Vulnerabilities",
"Vulnerability",
"Malware",
"Attacks",
"en",
"arxiv:2212.02974",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-01-31T07:59:20Z | ---
license: apache-2.0
language:
- en
tags:
- Cybersecurity
- Cyber Security
- Information Security
- Computer Science
- Cyber Threats
- Vulnerabilities
- Vulnerability
- Malware
- Attacks
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
CySecBERT is a domain-adapted version of the BERT model tailored for cybersecurity tasks.
It is based on a [Cybersecurity Dataset](https://github.com/PEASEC/cybersecurity_dataset) consisting of 4.3 million entries from Twitter, blogs, papers, and CVEs related to the cybersecurity domain.
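As a fill-mask model, it can be queried for masked-token predictions on cybersecurity text. A minimal sketch using the standard transformers fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="markusbayer/CySecBERT")

# Illustrative cybersecurity sentence with a masked token
for prediction in fill_mask("The attacker exploited a [MASK] in the web server."):
    print(prediction["token_str"], round(prediction["score"], 3))
```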
# Model Details
- **Developed by:** Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, and Christian Reuter
- **Model type:** BERT-base
- **Language(s) (NLP):** English
- **Finetuned from model:** bert-base-uncased.
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/PEASEC/CySecBERT
- **Paper:** https://dl.acm.org/doi/abs/10.1145/3652594 and https://arxiv.org/abs/2212.02974
# Bias, Risks, Limitations, and Recommendations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
We would like to emphasise that we did not explicitly focus on or analyse social biases in the data or the resulting model.
While this may not be very damaging in most application contexts, there are certainly applications that are heavily affected by such biases, and any kind of discrimination they introduce can have serious consequences.
As authors, we therefore caution against using the model in such contexts.
Nonetheless, we follow an open-source mentality: aware of the great impact the model can have, we pass this responsibility on to its users, in line with the many previous discussions in the open-source community.
# Training Details
## Training Data
See https://github.com/PEASEC/cybersecurity_dataset
## Training Procedure
We have specifically trained CySecBERT not to be affected too much by catastrophic forgetting. More details can be found in the paper.
# Evaluation
We have performed many different cybersecurity and general evaluations. The details can be found in the paper.
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{10.1145/3652594,
author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
title = {CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain},
year = {2024},
issue_date = {May 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {27},
number = {2},
issn = {2471-2566},
url = {https://doi.org/10.1145/3652594},
doi = {10.1145/3652594},
journal = {ACM Trans. Priv. Secur.},
month = {apr},
articleno = {18},
numpages = {20},
keywords = {Language model, cybersecurity BERT, cybersecurity dataset}
}
```
or
```
@misc{https://doi.org/10.48550/arxiv.2212.02974,
doi = {10.48550/ARXIV.2212.02974},
url = {https://arxiv.org/abs/2212.02974},
author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
keywords = {Cryptography and Security (cs.CR), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
# Model Card Authors
Markus Bayer
# Model Card Contact
[email protected] |
kavish218/gemmainstructwithcontext | kavish218 | 2024-10-28T09:12:17Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T09:08:47Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gap0001/sd-class-butterflies-32 | gap0001 | 2024-10-28T09:09:10Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-10-28T09:08:57Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('gap0001/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
webslate/gitai | webslate | 2024-10-28T09:01:06Z | 130 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"code",
"conversational",
"en",
"dataset:YashJain/GitAI",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-19T02:24:08Z | ---
language:
- en
license: apache-2.0
tags:
- chat
- code
pipeline_tag: text-generation
datasets:
- YashJain/GitAI
library_name: transformers
---
# YashJain/GitAI-Qwen2-0.5B-Instruct
## Requirements
The code for Qwen2 has been included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"YashJain/GitAI-Qwen2-0.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("YashJain/GitAI-Qwen2-0.5B-Instruct")
prompt = "How to undo my last commit"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
``` |
kavish218/enhanced_finetuned_llama_3_2_1B_multi_domain_2 | kavish218 | 2024-10-28T09:00:59Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T08:21:26Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huspacy/hu_core_news_md | huspacy | 2024-10-28T08:56:11Z | 2,333 | 3 | spacy | [
"spacy",
"token-classification",
"hu",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | 2022-10-12T11:01:01Z | ---
tags:
- spacy
- token-classification
language:
- hu
license: cc-by-sa-4.0
model-index:
- name: hu_core_news_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8499734936
- name: NER Recall
type: recall
value: 0.8456399437
- name: NER F Score
type: f_score
value: 0.8478011809
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9710512465
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9685137334
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9431524548
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.974069467
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.818445411
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7425002788
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.98
---
|
Sunny615/llama-3-8b-16bit_ft | Sunny615 | 2024-10-28T08:52:57Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"dataset:openai/gsm8k",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T07:25:29Z | ---
base_model: unsloth/llama-3-8b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- openai/gsm8k
---
# Uploaded model
- **Developed by:** Sunny615
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
eonrad/whisper-tiny-mind14 | eonrad | 2024-10-28T08:34:36Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-28T08:23:55Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper_tiny-finetuned-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34238488783943327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_tiny-finetuned-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8136
- Wer Ortho: 0.3405
- Wer: 0.3424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
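These settings correspond roughly to the following π€ `Seq2SeqTrainingArguments`; this is a hedged sketch, and the output directory is an assumption rather than a value documented in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Rough reconstruction of the hyperparameters listed above; dataset
# preparation, the data collator and trainer wiring are not shown here.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14",        # assumed output directory
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,            # effective train batch size of 16
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    fp16=True,                                # "Native AMP" mixed precision
    seed=42,
)
```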
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0001 | 17.8571 | 500 | 0.8136 | 0.3405 | 0.3424 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0
- Datasets 3.0.2
- Tokenizers 0.20.1
|
shikiw/LLaVA-v1.5-MoCa-7B-pretrain | shikiw | 2024-10-28T08:30:24Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"multimodal",
"image-text-to-text",
"en",
"zh",
"dataset:liuhaotian/LLaVA-Pretrain",
"arxiv:2410.07167",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-28T05:52:55Z | ---
license: llama2
language:
- en
- zh
tags:
- multimodal
datasets:
- liuhaotian/LLaVA-Pretrain
base_model:
- lmsys/vicuna-7b-v1.5
pipeline_tag: image-text-to-text
library_name: transformers
---
## **Citation**
If you find this model useful, please cite the following paper:
```
@article{huang2024deciphering,
title={Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate},
author={Huang, Qidong and Dong, Xiaoyi and Zhang, Pan and Zang, Yuhang and Cao, Yuhang and Wang, Jiaqi and Lin, Dahua and Zhang, Weiming and Yu, Nenghai},
journal={arXiv preprint arXiv:2410.07167},
year={2024}
}
``` |
eonrad/whisper-small-dv | eonrad | 2024-10-28T08:22:47Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-28T04:52:32Z | ---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.504538025524221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1714
- Wer Ortho: 62.7829
- Wer: 13.5045
## Model description
More information needed
## Intended uses & limitations
More information needed
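A minimal transcription sketch with the π€ Transformers ASR `pipeline`; the audio file name is a hypothetical local clip and is not part of this card.

```python
from transformers import pipeline

# Long-form transcription via chunking; chunk_length_s is an illustrative choice.
asr = pipeline(
    "automatic-speech-recognition",
    model="eonrad/whisper-small-dv",
    chunk_length_s=30,
)

result = asr("sample_dhivehi_clip.wav")  # hypothetical local audio file
print(result["text"])
```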
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2436 | 1.6313 | 500 | 0.1714 | 62.7829 | 13.5045 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0
- Datasets 3.0.2
- Tokenizers 0.20.1
|
readerbench/llama3.2_1b_instruct_qall_lr_small | readerbench | 2024-10-28T08:20:21Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T08:15:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
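A hedged loading sketch, assuming standard π€ Transformers causal-LM usage with a chat template; the prompt and generation settings are illustrative, not documented values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "readerbench/llama3.2_1b_instruct_qall_lr_small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumes the tokenizer ships a chat template (the card tags the model as conversational)
messages = [{"role": "user", "content": "Explain what a model card is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```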
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf | RichardErkhov | 2024-10-28T08:15:36Z | 44 | 0 | null | [
"gguf",
"arxiv:2306.01708",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T03:59:40Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Triple-Moist-Theia-21B-SKEWED - GGUF
- Model creator: https://huggingface.co/SzilviaB/
- Original model: https://huggingface.co/SzilviaB/Triple-Moist-Theia-21B-SKEWED/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Triple-Moist-Theia-21B-SKEWED.Q2_K.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q2_K.gguf) | Q2_K | 7.26GB |
| [Triple-Moist-Theia-21B-SKEWED.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q3_K_S.gguf) | Q3_K_S | 8.43GB |
| [Triple-Moist-Theia-21B-SKEWED.Q3_K.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q3_K.gguf) | Q3_K | 9.33GB |
| [Triple-Moist-Theia-21B-SKEWED.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q3_K_M.gguf) | Q3_K_M | 9.33GB |
| [Triple-Moist-Theia-21B-SKEWED.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q3_K_L.gguf) | Q3_K_L | 10.1GB |
| [Triple-Moist-Theia-21B-SKEWED.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.IQ4_XS.gguf) | IQ4_XS | 10.44GB |
| [Triple-Moist-Theia-21B-SKEWED.Q4_0.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q4_0.gguf) | Q4_0 | 10.87GB |
| [Triple-Moist-Theia-21B-SKEWED.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.IQ4_NL.gguf) | IQ4_NL | 10.98GB |
| [Triple-Moist-Theia-21B-SKEWED.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q4_K_S.gguf) | Q4_K_S | 10.94GB |
| [Triple-Moist-Theia-21B-SKEWED.Q4_K.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q4_K.gguf) | Q4_K | 11.51GB |
| [Triple-Moist-Theia-21B-SKEWED.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q4_K_M.gguf) | Q4_K_M | 11.51GB |
| [Triple-Moist-Theia-21B-SKEWED.Q4_1.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q4_1.gguf) | Q4_1 | 12.02GB |
| [Triple-Moist-Theia-21B-SKEWED.Q5_0.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q5_0.gguf) | Q5_0 | 13.17GB |
| [Triple-Moist-Theia-21B-SKEWED.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q5_K_S.gguf) | Q5_K_S | 13.17GB |
| [Triple-Moist-Theia-21B-SKEWED.Q5_K.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q5_K.gguf) | Q5_K | 13.5GB |
| [Triple-Moist-Theia-21B-SKEWED.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q5_K_M.gguf) | Q5_K_M | 13.5GB |
| [Triple-Moist-Theia-21B-SKEWED.Q5_1.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q5_1.gguf) | Q5_1 | 14.32GB |
| [Triple-Moist-Theia-21B-SKEWED.Q6_K.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q6_K.gguf) | Q6_K | 15.62GB |
| [Triple-Moist-Theia-21B-SKEWED.Q8_0.gguf](https://huggingface.co/RichardErkhov/SzilviaB_-_Triple-Moist-Theia-21B-SKEWED-gguf/blob/main/Triple-Moist-Theia-21B-SKEWED.Q8_0.gguf) | Q8_0 | 20.22GB |
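A minimal sketch of running one of these files locally with `llama-cpp-python`; the chosen quant, local path and generation settings are assumptions.

```python
# pip install llama-cpp-python
# (download e.g. the Q4_K_M file from the table above first)
from llama_cpp import Llama

llm = Llama(
    model_path="Triple-Moist-Theia-21B-SKEWED.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,       # assumed context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm("Briefly explain what a TIES merge does.", max_tokens=128)
print(out["choices"][0]["text"])
```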
Original model description:
---
base_model:
- mergekit-community/Moist_Theia_21B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mergekit-community/Moist_Theia_21B](https://huggingface.co/mergekit-community/Moist_Theia_21B) as a base.
### Models Merged
The following models were included in the merge: three differently weighted copies of [mergekit-community/Moist_Theia_21B](https://huggingface.co/mergekit-community/Moist_Theia_21B), as shown in the configuration below.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/Moist_Theia_21B
#no parameters necessary for base model
- model: mergekit-community/Moist_Theia_21B
parameters:
density: 0.2
weight: 0.8
- model: mergekit-community/Moist_Theia_21B
parameters:
density: 0.8
weight: 0.2
merge_method: ties
base_model: mergekit-community/Moist_Theia_21B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|