| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| touhidulislam/BERTweet_retrain_2019_48 | touhidulislam | 2024-11-13T22:45:40Z | 169 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:53:50Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_48
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3895
## Model description
More information needed
## Intended uses & limitations
More information needed
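Until the author fills these sections in, here is a minimal fill-mask sketch for this checkpoint (repo id taken from this card; the model is RoBERTa-architecture, so the mask token is `<mask>`, and any predictions shown are illustrative, not actual model output):
```python
from transformers import pipeline

# Load this card's fine-tuned BERTweet checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="touhidulislam/BERTweet_retrain_2019_48")

# The RoBERTa-style mask token is <mask>.
for pred in fill_mask("I love drinking <mask> in the morning."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```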
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1124 | 1.0 | 32 | 3.8080 |
| 3.7079 | 2.0 | 64 | 3.5741 |
| 3.6609 | 3.0 | 96 | 3.5518 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| creatorchain/Brett_2.0 | creatorchain | 2024-11-13T22:45:18Z | 6 | 1 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us"] | text-to-image | 2024-11-13T22:43:48Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/8vtmS9vIiyfNnwuOnx7up_0c4fd3eca63c4b22892f5abcb57ecc43.jpg
- text: '-'
output:
url: images/VKIyCh9Qay157rZSpjgBr_487c724a582a4f98a68777a818aeceb4.jpg
- text: '-'
output:
url: images/DBzUQOzHbRsL-maEzfLol_7ce89f7bd1904f8882729f876190f71d.jpg
- text: '-'
output:
url: images/iEnL03xY2Q7HZ36BKa6mT_a35da5f25a254dc889c60c412a3e1c95.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: brett meme
license: apache-2.0
---
# Brett_2.0
<Gallery />
## Trigger words
You should use `brett meme` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/creatorchain/Brett_2.0/tree/main) them in the Files & versions tab.
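For programmatic use, a minimal diffusers sketch for loading the LoRA on top of the base model named above (the exact weight filename inside the repo is not listed on this card, so `load_lora_weights` is pointed at the repo id; FLUX.1-dev is gated and requires accepting its license):
```python
import torch
from diffusers import FluxPipeline

# Base model from this card's metadata.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Apply this card's LoRA weights on top of the base model.
pipe.load_lora_weights("creatorchain/Brett_2.0")

# `brett meme` is the trigger phrase from the section above.
image = pipe("brett meme, cartoon character at the beach", num_inference_steps=28).images[0]
image.save("brett.png")
```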
| touhidulislam/BERTweet_retrain_2019_46 | touhidulislam | 2024-11-13T22:44:11Z | 170 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:52:13Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_46
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1217 | 1.0 | 31 | 3.7502 |
| 3.6243 | 2.0 | 62 | 3.5576 |
| 3.7031 | 3.0 | 93 | 3.5290 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| Vikhrmodels/salt-asr_speech_1_wav_1_tts_speech_3_text-10k | Vikhrmodels | 2024-11-13T22:43:19Z | 15 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:openslr/librispeech_asr", "dataset:parler-tts/libritts-r-filtered-speaker-descriptions", "dataset:llm-blender/mix-instruct", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-09T15:00:45Z |
---
library_name: transformers
datasets:
- openslr/librispeech_asr
- parler-tts/libritts-r-filtered-speaker-descriptions
- llm-blender/mix-instruct
language:
- en
base_model:
- meta-llama/Llama-3.2-3B
---
**Inference**:<br>
```python
# Imports assumed for this snippet: torch/torchaudio/transformers are standard;
# SpeechTokenizer, WavTokenizer and AudioSignal are assumed to come from the
# speechtokenizer package, the official WavTokenizer repo, and descript-audiotools.
import torch
import torchaudio
from transformers import AutoModelForCausalLM, AutoTokenizer
from speechtokenizer import SpeechTokenizer
from audiotools import AudioSignal
from decoder.pretrained import WavTokenizer  # import path as laid out in the WavTokenizer repo

device = "cuda"
n_codebooks_tts = 3
n_codebooks_asr = 1
start_audio_token = "<|start_of_audio|>"
end_audio_token = "<|end_of_audio|>"
end_sequence_token = "<|end_of_text|>"
base_model = "Vikhrmodels/salt-asr_speech_1_wav_1_tts_speech_3_text-10k"
def decode_tts(tokens, quantizer, n_codebooks, n_original_tokens, start_audio_token_id, end_audio_token_id):
# find start and end indices of audio tokens
start = torch.nonzero(tokens == start_audio_token_id)
end = torch.nonzero(tokens == end_audio_token_id)
start = start[0, -1] + 1 if len(start) else 0
end = end[0, -1] if len(end) else tokens.shape[-1]
# subtract length of original vocabulary -> tokens in range [0, 1024)
audio_tokens = tokens[start:end] % n_original_tokens
    remainder = audio_tokens.shape[-1] % n_codebooks
    if remainder:
        # pad if the last frame is incomplete
        pad_tokens = torch.zeros(n_codebooks - remainder, dtype=audio_tokens.dtype, device=audio_tokens.device)
        audio_tokens = torch.cat([audio_tokens, pad_tokens], dim=0)
transposed = audio_tokens.view(-1, n_codebooks).t()
codes = transposed.view(n_codebooks, 1, -1).to(device)
audio = quantizer.decode(codes).squeeze(0)
del tokens
del audio_tokens
torch.cuda.empty_cache()
return AudioSignal(audio.detach().cpu().numpy(), quantizer.sample_rate)
def infer_text_to_audio(text, model, tokenizer, quantizer, max_seq_length=1024, top_k=20):
text_tokenized = tokenizer(text, return_tensors="pt")
text_input_tokens = text_tokenized["input_ids"].to(device)
soa = tokenizer(start_audio_token, return_tensors="pt")["input_ids"][:, -1:].to(device)
eoa = tokenizer(end_audio_token, return_tensors="pt")["input_ids"][:, -1:].to(device)
text_tokens = torch.cat([text_input_tokens, soa], dim=1)
attention_mask = torch.ones(text_tokens.size(), device=device)
output_audio_tokens = model.generate(
text_tokens,
attention_mask=attention_mask,
max_new_tokens=max_seq_length,
top_k=top_k,
do_sample=True,
temperature=0.1,
repetition_penalty=1.1,
length_penalty=1.2,
no_repeat_ngram_size=3,
)
audio_signal = decode_tts(output_audio_tokens[0], quantizer, 3, len(tokenizer), soa, eoa)
return audio_signal
def infer_audio_to_text(audio_path, model, tokenizer, quantizer_speech, quantizer_wav, max_seq_length=1024, top_k=20):
audio_data, sample_rate = torchaudio.load(audio_path)
audio = audio_data.view(1, -1).float().to(device)
bandwidth_id = torch.tensor([0])
codes_semantics = quantizer_speech.encode(audio.reshape(1, 1, -1))
raw_semantic_tokens = codes_semantics + len(tokenizer)
raw_semantic_tokens = raw_semantic_tokens[:1].view(1, -1)
_, codes = quantizer_wav.encode_infer(audio, bandwidth_id=bandwidth_id)
raw_acoustic_tokens = codes + len(tokenizer) + 1024
raw_acoustic_tokens = raw_acoustic_tokens.view(1, -1)
audio_tokens = torch.cat([raw_semantic_tokens, raw_acoustic_tokens], dim=1)
soa = tokenizer(start_audio_token, return_tensors="pt")["input_ids"][:, -1:].to(device)
eoa = tokenizer(end_audio_token, return_tensors="pt")["input_ids"][:, -1:].to(device)
audio_tokens = torch.cat([soa, audio_tokens, eoa], dim=1)
tokens = torch.cat([audio_tokens], dim=1)
attention_mask = torch.ones(tokens.size(), device=device)
output_text_tokens = model.generate(
tokens,
attention_mask=attention_mask,
max_new_tokens=max_seq_length,
do_sample=True,
temperature=0.1,
top_p=0.9,
top_k=top_k,
)
output_text_tokens = output_text_tokens.cpu()[0]
output_text_tokens = output_text_tokens[output_text_tokens < tokenizer(start_audio_token)["input_ids"][-1]]
decoded_text = tokenizer.decode(output_text_tokens, skip_special_tokens=True)
return decoded_text
tokenizer = AutoTokenizer.from_pretrained(base_model, cache_dir=".")
model = AutoModelForCausalLM.from_pretrained(
base_model,
cache_dir=".",
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
device_map={"": 0}
)
quantizer_speech = SpeechTokenizer.load_from_checkpoint("speechtokenizer/config.json",
"speechtokenizer/SpeechTokenizer.pt")
quantizer_speech = quantizer_speech.eval().to(device)
codebook_size = quantizer_speech.quantizer.bins
quantizer_wav = WavTokenizer.from_pretrained0802("wavtokenizer/config.yaml",
"wavtokenizer/WavTokenizer_small_600_24k_4096.ckpt")
quantizer_wav = quantizer_wav.to(device)
text = ("Say 'COUNT NUMBERS FROM ONE TO TEN' with a male speaker delivers a very monotone and "
"low-pitched speech with a moderate speed in a setting with almost no noise, "
"creating a clear and quiet recording.")
audio_signal = infer_text_to_audio(text, model, tokenizer, quantizer_speech, top_k=60)
audio_signal.write("output.wav")
audio_path = "./input.wav"
generated_text = infer_audio_to_text(audio_path, model, tokenizer, quantizer_speech, quantizer_wav, top_k=10)
print(generated_text)
```
| touhidulislam/BERTweet_retrain_2019_44 | touhidulislam | 2024-11-13T22:42:40Z | 172 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:50:33Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_44
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1307 | 1.0 | 31 | 3.5410 |
| 3.7678 | 2.0 | 62 | 3.5568 |
| 3.4333 | 3.0 | 93 | 3.3651 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_43 | touhidulislam | 2024-11-13T22:41:56Z | 178 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:49:46Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_43
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0917 | 1.0 | 30 | 3.9364 |
| 3.5183 | 2.0 | 60 | 3.7273 |
| 3.6233 | 3.0 | 90 | 3.5506 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_42 | touhidulislam | 2024-11-13T22:41:12Z | 170 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:49:00Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_42
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.164 | 1.0 | 31 | 3.6931 |
| 3.8293 | 2.0 | 62 | 3.5547 |
| 3.4394 | 3.0 | 93 | 3.5317 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_39 | touhidulislam | 2024-11-13T22:39:01Z | 170 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:46:34Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_39
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9132 | 1.0 | 31 | 3.6158 |
| 3.766 | 2.0 | 62 | 3.5581 |
| 3.7047 | 3.0 | 93 | 3.5166 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_37 | touhidulislam | 2024-11-13T22:37:37Z | 167 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:45:00Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_37
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1218 | 1.0 | 31 | 3.6668 |
| 3.5477 | 2.0 | 62 | 3.6275 |
| 3.6856 | 3.0 | 93 | 3.5190 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_36 | touhidulislam | 2024-11-13T22:36:54Z | 167 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:44:08Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_36
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9699 | 1.0 | 31 | 3.4891 |
| 3.6186 | 2.0 | 62 | 3.7725 |
| 3.6791 | 3.0 | 93 | 3.4153 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_35 | touhidulislam | 2024-11-13T22:36:06Z | 169 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:43:21Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_35
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1384 | 1.0 | 32 | 3.8705 |
| 3.6747 | 2.0 | 64 | 3.5231 |
| 3.6351 | 3.0 | 96 | 3.6341 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_31 | touhidulislam | 2024-11-13T22:33:08Z | 173 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:40:12Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_31
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2455 | 1.0 | 31 | 3.9506 |
| 3.8408 | 2.0 | 62 | 3.5400 |
| 3.6939 | 3.0 | 93 | 3.3484 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_30 | touhidulislam | 2024-11-13T22:32:21Z | 170 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:39:24Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_30
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5594 | 1.0 | 29 | 3.9202 |
| 3.7718 | 2.0 | 58 | 3.5737 |
| 3.7624 | 3.0 | 87 | 3.6234 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_27 | touhidulislam | 2024-11-13T22:30:05Z | 180 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:37:05Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_27
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0443 | 1.0 | 31 | 3.6678 |
| 3.8886 | 2.0 | 62 | 3.7074 |
| 3.6713 | 3.0 | 93 | 3.6293 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_26 | touhidulislam | 2024-11-13T22:29:23Z | 172 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:36:17Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_26
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9719 | 1.0 | 31 | 3.6875 |
| 3.7148 | 2.0 | 62 | 3.7982 |
| 3.8668 | 3.0 | 93 | 3.6558 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| touhidulislam/BERTweet_retrain_2019_22 | touhidulislam | 2024-11-13T22:26:08Z | 171 | 0 | transformers | ["transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/bertweet-base", "base_model:finetune:vinai/bertweet-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-11-09T10:33:02Z |
---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2019_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2019_22
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3951 | 1.0 | 29 | 3.8636 |
| 3.7176 | 2.0 | 58 | 3.5551 |
| 3.6332 | 3.0 | 87 | 3.5288 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| huwhitememes/alejandromayorkas-lora | huwhitememes | 2024-11-13T22:22:00Z | 6 | 0 | diffusers | ["diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2024-11-13T22:13:26Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/alejandromayorkas-lora_002880_00_20241003175338.png
text: A photo of Alejandro Mayorkas, Alejandro Mayorkas, Mayorkas, Secretary Mayorkas,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Alejandro Mayorkas, Alejandro Mayorkas, Mayorkas, Secretary Mayorkas,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# alejandromayorkas-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Alejandro Mayorkas, Alejandro Mayorkas, Mayorkas, Secretary Mayorkas,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
| rpham17/finetuning-sentiment-model-3000-samples | rpham17 | 2024-11-13T22:19:39Z | 104 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-13T21:44:43Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
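Pending details from the author, a minimal inference sketch for this text-classification checkpoint (label names are whatever the Trainer saved; the `LABEL_1` output below is an illustrative default, not verified model output):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rpham17/finetuning-sentiment-model-3000-samples",
)

print(classifier("This movie was surprisingly good!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]  -- illustrative output
```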
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| magnifi/Phi3_intent_v43_1_w_unknown_8_lr_0.002 | magnifi | 2024-11-13T21:53:47Z | 75 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-11-13T21:51:11Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| seongil-dn/gte-base-filtered-neg4-bs96 | seongil-dn | 2024-11-13T21:46:16Z | 6 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:99239", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Alibaba-NLP/gte-multilingual-mlm-base", "base_model:finetune:Alibaba-NLP/gte-multilingual-mlm-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-11-13T21:45:40Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:99239
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-multilingual-mlm-base
widget:
- source_sentence: '''백발 급구''의 주인공 구신의 직업은 무엇인가요?'
sentences:
- 그레고리아 데 헤수스(Gregoria de Jesús, 1875년 5월 9일 - 1943년 3월 15일)는 필리핀의 독립운동가이자 카티푸난의
여성 지도자였다. 필리핀 임시 혁명정부의 제1대 부통령이자 첫 번째 퍼스트 레이디였다.
- <백발 급구>는 모스크바와 페테르부르크의 건축물들에 대한 작가의 폭넓은 이해를 엿볼 수 있는 작품이다. 건축물 사진첩들을 뒤적이며 소일하는
것을 위로 삼아 부정한 아내와의 원만치 못한 결혼 생활을 견뎌가고 있는 주인공 구신은 페테르부르크 출장 중에 렌필름 영화사 앞에서 이류 배우
나타샤와 만나게 된다. 사출 전문가라는 특이한 직업과는 어울리지 않게 페테르부르크의 유명 건축물들을 건설한 예술가들에 대한 이해와 삶의 깊이를
가진 구신의 매력에 빠져든 나타샤는 구신과 보낸 이틀을 잊지 못하게 된다. 모스크바로 돌아온 구신도 나타샤와의 사랑을 통해 삶의 의미를 다시
찾게 되고 모스크바의 생활을 정리하고 페테르부르크의 나타샤에게로 되돌아가게 되면서 작품은 끝을 맺는다.
- '1991년 의족을 하고 눈을 가린 초인은 절대 붕대를 풀어서는 안된다고 어머니는 말한다. 그때 어머니가 아버지에게 가정폭력을 당하자 붕대를
풀고 아버지를 살해한다.
결국 모자는 동반자살을 위해 초인의 목을 조르려고 하자 초인은 붕대를 벗고 달아난다.19년후인 2010년 자동차 폐기장에서 일하는 임규남(고수
분)은 교통 사고로 인해 병원에 입원하게 되고, 그 때문에 일하던 곳에서 해직당한다. 그 이후 일자리를 알아보던 규남은 전당포 일을 맡게 된다.
사장은 자꾸 장부와 금고의 돈이 맞지 않는다며 의문을 제기한다. 어느 날 자동차 폐기장에서 일하던 규남의 친구들이 전당포로 놀러온다. 다 함께
점심을 먹던 중 갑자기 몸이 움직이지 않고, 이상한 기운이 감돈다. 규남은 살짝 정신을 차렸지만 친구들과 사장은 움직이지 않는다. 잠시 후
사람이 들어오고 사장은 초점이 없는 눈으로 금고의 돈을 꺼내 그 사람에게 건넨다. 뭔가 일이 이상하게 흘러간다고 생각한 규남은 정신을 바짝
차려 움직이는데 성공한다. 초인은 규남만 움직이는 것을 보며 놀라움을 감추지 못한다. 규남은 초인을 물리치려 하지만 규남은 조종당하는 자신의
친구들과 사장의 공격을 당하고 결국 사장이 죽게된다. 초인이 물러난 후 정신이 깨어난 규남의 친구들은 규남과 함께 녹화된 CCTV 테잎을 보게
되고 초인에 의해 조종받았다는 사실을 알게 되고 싸우기로 결심한다. 그리고 자신을 제외한 모든 사람들을 눈으로 조종한다는 사실을 깨달은 규남은
전투 끝에 규남을 제압하고 얼굴에 봉지를 씌워 경찰서에 데려간다. 경찰에게 자초지종을 이야기 하고 CCTV 비디오 테이프를 건네주지만 경찰은
규남을 믿지 않는다. 결국 경찰은 봉투를 벗기게 되고 전부 조종당한다. 총격전 끝에 가까스로 살아남은 규남은 경찰의 총을 뺏어가면서 총기탈취
용의자로 몰리게 된다. 한편 규남의 친구들은 눈으로 조종한다는 사실을 깨닫고 각종 무기와 방어구를 갖고 초인을 대하지만 전혀 통하지 않고,
초인에게 끌려간다. 규남은 전당포에서 인질로 붙잡힌 친구들을 찾지만 친구들은 초인에 의해 살해된다. 초인을 쫒던 규남은 옥상에서 초인을 만나
죽이려 하지만 반대편에 사장의 딸 영숙(정은채 분)이 난간위에 올라가 떨어질 위기에 처했다는 것을 보고 잠시 움찔하지만 결국 초인과 같이 뛰어내려
초인은 죽고 규남은 살아남는다. 영숙은 다행히 옥상쪽으로 떨어져 살아남는다. 이 사고로 규남은 전신마비가 되고 스튜어디스가 된 영숙은 그를
챙긴다. 몇 개월후 지하철 선로에 떨어진 유치원생이 위기에 처한다. 유치원생을 넘어 전차가 지나가고, 규남에 의해 구조된다.'
- source_sentence: 산화적 인산화와 ATP 합성의 차이는 무엇인가요?
sentences:
- ATP를 만드는데 사용되는 또 다른 방법은 산화적 인산화를 통해서이며, 이는 세포 호흡 중에 일어난다. 산화적 인산화는 NADH를 NAD로
산화시키는 과정에서 2.5개의 ATP를 생성하고, FADH를 FAD로 산화시키는 과정에서 1.5개의 ATP를 생성한다. 미토콘드리아 내막을
가로질러 양성자의 전기화학적 구배로 저장된 위치 에너지는 ADP와 P(무기인산)로부터 ATP를 생성하는데 필요하고, 이 부분에서 기질수준 인산화와
큰 차이가 난다. 이러한 H의 농도 기울기는 ATP 생성효소에 의해 이용되어, H가 전기화학적인 구배에 의해 ATP 생성효소를 통해서 미토콘드리아의
막 사이 공간에서 미토콘드리아 기질로 확산될 때(화학삼투) 방출되는 자유 에너지를 ATP 생성과 짝짓는다. 역으로, 전자전달은 H를 미토콘드리아
기질에서 막 사이 공간으로 능동수송하는데 필요한 에너지를 제공한다.
- '역사적으로 아디프산은 산화에 쓰이는 여러 지방(FAT)에서 만들어졌다. 현재 아디프산은 사이클로헥세인올과 "KA oil"(케톤-알코올 오일의
약자(Ketone-Alcohol oil), 이하 케알오일)이라 불리는 사이클로헥세온의 혼합물로부터 만들어진다. 이 케알오일은
여러가지를 거쳐 질산과 산화된다. 사이클로헥세인올이 아질산을 내놓으면서 케톤으로 바뀐다:'
- 산화적 인산화는 에너지 방출 반응을 사용하여 에너지 흡수 반응을 진행시킨다. 두 세트의 반응이 짝지어져 있다고 하는데, 이는 두 반응 중 하나가
작동하지 않으면 다른 하나도 작동할 수 없다는 것을 의미한다. NADH와 같은 전자공여체에서부터 산소(O)와 같은 전자수용체에 이르기까지 전자전달계를
통한 전자의 흐름은 에너지를 방출하는 과정이다. 반면에 ATP 합성은 에너지를 흡수하는 과정이다. 전자전달계와 ATP 생성효소는 모두 미토콘드리아
내막에 존재하고, 미토콘드리아 내막을 통과하는 H의 움직임에 의해 전자전달계에서 ATP 생성효소로 에너지가 전달되는데, 이러한 과정을 화학삼투라고
한다. 실제로 이것은 단순한 전기 회로 같으며, H는 전자전달계의 H 펌핑 효소들에 의해 미토콘드리아 기질(N쪽, 상대적으로 H 농도가 낮은
쪽)에서 막 사이 공간(P쪽, 상대적으로 H 농도가 높은 쪽)으로 능동수송된다. 이러한 효소들은 배터리와 같으며, 회로를 통해 전류를 흐르게
하는 일을 수행한다. 양성자(H)의 움직임은 막을 가로질러 전기화학적인 기울기를 만들어 내는데, 이것은 종종 양성자 구동력(proton-motive
force)라고 불린다. 양성자 구동력은 H의 농도 차이(양성자 기울기, ΔpH)로 인한 화학적인 위치 에너지와 H가 상대 이온 없이 막을
지나 이동할 때, 전하의 분리 차이로 인한 전기적인 위치 에너지의 2가지 구성 요소를 가진다.
- source_sentence: 이승만의 독립운동 단체 활동에 대해 자세히 알고 싶습니까?
sentences:
- 한편, 북미정상회담을 준비하던 와중에 틸러슨 국무장관이 경질되는 사건이 일어났다. 더욱이 트위터를 통해 소식을 알렸는데 트럼프와 틸러슨이 자주
이견을 보였고 북한에 상대적으로 유화적인 태도를 보였기 때문인 것으로 알려졌다. 후임에는 마이크 폼페이오 CIA국장이 내정되어 4월 27일
임명되었는데 미국의 대외정책이 더욱 강경해질 것이라는 분석이 나왔다. 또한 북미정상회담 발표 이후 북한은 자제하던 대미 공세를 15일부터 재개했는데
미국이 인권문제를 언급할 것을 사전에 예방하면서 협상력을 높이기 위한 명분 쌓기라는 관측이다.
- 독립협회와 청년이승만(獨立協會와 靑年 李承晩 Independence Association and young Rhee Syng-Man)은 1959년
서울에서 상영된 대한민국의 영화로, 대통령 이승만의 청년기 개화, 계몽운동사를 다루었다. 특히 기울어져 가는 이조 말엽의 국운을 바로잡기 위한
그의 투쟁사를 중심으로 한 전기물이었다.경기도 안양군의 스튜디오에서 제작되었으며 서울 시공관에서 1959년 11월 20일 개봉되었다.1960년
1월 1일자 동아일보의 영화평 보도에 의하면 연기자 5백여 명과 엑스트라 6만 명이 출연하였다 한다. 그밖에 특별히 외국인 연기자들이 초빙되기도
했다.
- 이후 이승만이 영향을 발휘하던 독립운동단체 흥업구락부에서도 가입하여 활동하였는데 1925년 11월 신흥우,송진우 등과 함께 미국의 이승만과
사전 협의를 거친 끝에 태평양문제연구회 조선지회를 조직하고 위원을 맡았다. 1927년 1월에는 비타협적 민족주의자와 사회주의자들이 결합한 좌우연합
독립운동 단체인 신간회 발기인으로 참가했다. 1934년 2월 연희전문학교 부교장을 맡았다. 연희전문학교 부교장 시절 특히 체육에 관심이 깊어,
그의 주선으로 연희전문학교 주최 전국중학교체육대회를 매년 개최하기도 하였다. 1937년 7월 조선체육회 회장을 맡았다.
- source_sentence: 아키야마가 나오를 도와주기로 결심한 이유는 무엇인가요?
sentences:
- 장위(, 1976년 11월 10일 ~ )는 중화인민공화국의 배우이다. 후베이 성 스옌 시 윈 현 출생이며, 본명은 장밍룽()이다. 2006년
11월 중화인민공화국의 유명 감독과 프로듀서 등과 성관계를 가진 것을 찍은 녹음 테이프를 공개하여 논란을 불러일으켰다.
- 출소한 아키야마를 만난 나오는 다짜고짜 도움을 청해보지만, 대답은 당연히 ‘No’. 하지만 절박한 심정의 그녀는 끈질기게 아키야마의 뒤를 쫓는다.
나오가 귀찮아진 아키야마는 “도와 줄 테니 잠시 기다리라”는 거짓말로 그녀를 떼어놓지만, 순진한 나오는 정말로 밤을 세워가며 아키야마를 기다린다.
그런 모습을 멀리서 지켜본 아키야마는 마음을 고쳐먹고 나오를 도와주기로 결심한다.
- '코코야시 마을의 여자 해병 벨메일에 의해 구해졌고, 비가 오고 폭풍이 오는 위급한 상황임에도 불구하고 해맑게 웃고 있었다. 자신과 같이 구조된
여자아이는 바로 ''노지코'', 자신의 양언니이다.(KBS 더빙판에는 ''사비나''로 번역.) 나미는 벨메일의 손에서 코코야시 마을에서 키워졌으며
여전히 행복한 나날을 지내고 있었다. 나미는 바다에 관해 관심이 많았으며 특히 지도를 그리는 실력은 거의 신동에 가까웠다. 그러나 벨메일이
자신의 친엄마가 아니라는 사실을 마음 속에 새겨두고 있었으며, 그것 때문에 벨메일은 순간적인 홧김에 나미를 집 밖으로 쫓아버린다. 벨메일은
나미가 좋아하는 귤 음식을 한다며 노지코에게 나미를 다시 불러오라고 하였다. 노지코는 나미에게 가서 벨메일이 미안하다고 하니 같이 집으로 돌아가자고
하였고 나미는 승낙하였다. 그때 갑자기 코코야시 마을에 아론 일당이 들어왔다. 아론 일당의 선장 아론은 ''지금부터 이 마을을 지배할테니,
어른당 10만 베리, 어린이당 5만 베리를 납부하라''고 명령하였다. 마을 사람들은 어쩔 수 없이 모두 납부하였고 겐조를 비롯한 마을 사람들은
벨메일의 집을 건드리지 않아서 다행이라 생각했지만 불운하게도 벨메일이 요리를 하고 있어 집 밖으로 피어오르는 연기때문에 들키고 말았다.
벨메일은 아론 일당이 자신의 집 앞에 와있다는 것을 짐작하고 아론에게 총을 겨누었으나 어인의 엄청난 힘으로 벨메일은 고폭행 당했다때 나미와
노지코, 겐조가 달려왔고 벨메일은 전재산이 겨우 10만베리라 모두 내주었다. 아론 일당은 나미와 노지코가 벨메일의 자식이 아니라면 그 돈을
겐조가 내고 벨메일을 살려 주기로 하였다. 이때 벨메일은 결국 나미와 노지코를 자기 자식이라 발언해버리고, 돈이 모자란 조건에 아론은 벨메일을
죽이기로 한다. 벨메일은 나미와 노지코에게 ''생존하면 반드시 행복한 일이 많이 있을것이다. 나미, 노지코.
사랑한다."라는 유언을 남기며 아론의 총탄에 목숨을 잃고 겐조 또한 간부에게 베여버려 크게 다치고 만다. 이때 나미가 해도를 그리는 데 엄청난
실력을 갖추고 있는 것을 알아버린 아론은 나미를 강제로 끌고 가서 자신의 측량실에 나미에게 해도를 그리라고 한다. 나미는 마을 사람들이 자신
때문에 죽는 것을 꺼려하여 결국 아론 파크의 문신을 어깨에 새기고 만다. 아론은 ''내 눈앞에 1억 베리를 바치면 너를 포함한 코코야시 마을을
해방시켜주겠다''고 제안했고 나미는 그 제안을 받아들였다. 그 때부터 나미는 코코야시 마을을 사기 위해 해적 전문 도둑이 되어 바다로 떠나서
해적들한테서 돈을 뺐기 시작했다.'
- source_sentence: 안압지의 역사적 이름은 무엇인가요?
sentences:
- 1980년, 안압지에서 발굴된 토기 파편 등으로 신라시대에 이 곳이 월지(月池)라고 불렸다는 사실이 확인되었다. 이는 신라 왕궁인 반월성(半月城)과
가까이 있었기 때문이며, 임해전의 이름도 본디 월지궁이었다고 한다. 조선시대에는 폐허가 된 이곳에 기러기와 오리들이 날아들자 조선의 묵객들이
안압지(雁鴨池)라는 이름을 붙였다. 《삼국사기》에 동궁을 임해전(臨海殿), 즉 바다에 면한 건물이라고 불렀다는 기록이 있으며, 여기에서 안압지는
바다를 상징한다.
- 성은 왕, 본관은 개성이며, 이름은 기록이 남아있지 않아 알 수 없다. 충혜왕, 공민왕 등과 이복 형제간이다. 용산원자의 어머니 조국장공주는
원나라 황족 출신으로, 원 순종의 아들인 위왕 에무게의 딸이다. 1324년(충숙왕 11년)에 충숙왕이 원나라에 있을 때 그와 혼인하였다.
- 안압지라는 명칭은 조선 초기에 간행된 《동국여지승람》과 《동경잡기》등에 나타나고 있다.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-mlm-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-mlm-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base) <!-- at revision b747c5e8eb09e48c24eb3d4e48f80a79a18889ff -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/gte-base-filtered-neg4-bs96")
# Run inference
sentences = [
'안압지의 역사적 이름은 무엇인가요?',
'1980년, 안압지에서 발굴된 토기 파편 등으로 신라시대에 이 곳이 월지(月池)라고 불렸다는 사실이 확인되었다. 이는 신라 왕궁인 반월성(半月城)과 가까이 있었기 때문이며, 임해전의 이름도 본디 월지궁이었다고 한다. 조선시대에는 폐허가 된 이곳에 기러기와 오리들이 날아들자 조선의 묵객들이 안압지(雁鴨池)라는 이름을 붙였다. 《삼국사기》에 동궁을 임해전(臨海殿), 즉 바다에 면한 건물이라고 불렀다는 기록이 있으며, 여기에서 안압지는 바다를 상징한다.',
'안압지라는 명칭은 조선 초기에 간행된 《동국여지승람》과 《동경잡기》등에 나타나고 있다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 99,239 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.0 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 155.17 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 132.62 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>야노 타쿠지의 동생은 어떤 성격을 가지고 있나요?</code> | <code>중반부터 추가된 새로운 전사. 예전 유스케들을 감싸 볼트에게 살해당한 야노 타쿠지의 동생. 강한 파워 파이터 이기도 하며 형의 사후 도로테 박사들에 의해 라이브맨을 지원하는 훈련을 받고 있었다. 타케시라는 동생이 있으나 극중에서 이들 사이의 관계는 뚜렷하지 않은 편이다. 권투가 뛰어나고 조금 무모하고 말투도 난폭하다. 특히 형들의 적인 볼트가 관련되면 냉정할 수 없게 되지만 실력이 따라주지 않는 경우가 많아서 초기에는 멤버 3명의 발목을 잡기도 했다. 하지만 곤란한 사람은 내버려 둘수없는 상냥한 성격이다.</code> | <code>노노의 남동생이며 게임을 좋아한다.</code> |
| <code>야노 타쿠지의 동생은 어떤 성격을 가지고 있나요?</code> | <code>중반부터 추가된 새로운 전사. 예전 유스케들을 감싸 볼트에게 살해당한 야노 타쿠지의 동생. 강한 파워 파이터 이기도 하며 형의 사후 도로테 박사들에 의해 라이브맨을 지원하는 훈련을 받고 있었다. 타케시라는 동생이 있으나 극중에서 이들 사이의 관계는 뚜렷하지 않은 편이다. 권투가 뛰어나고 조금 무모하고 말투도 난폭하다. 특히 형들의 적인 볼트가 관련되면 냉정할 수 없게 되지만 실력이 따라주지 않는 경우가 많아서 초기에는 멤버 3명의 발목을 잡기도 했다. 하지만 곤란한 사람은 내버려 둘수없는 상냥한 성격이다.</code> | <code>야스히코의 여동생.</code> |
| <code>야노 타쿠지의 동생은 어떤 성격을 가지고 있나요?</code> | <code>중반부터 추가된 새로운 전사. 예전 유스케들을 감싸 볼트에게 살해당한 야노 타쿠지의 동생. 강한 파워 파이터 이기도 하며 형의 사후 도로테 박사들에 의해 라이브맨을 지원하는 훈련을 받고 있었다. 타케시라는 동생이 있으나 극중에서 이들 사이의 관계는 뚜렷하지 않은 편이다. 권투가 뛰어나고 조금 무모하고 말투도 난폭하다. 특히 형들의 적인 볼트가 관련되면 냉정할 수 없게 되지만 실력이 따라주지 않는 경우가 많아서 초기에는 멤버 3명의 발목을 잡기도 했다. 하지만 곤란한 사람은 내버려 둘수없는 상냥한 성격이다.</code> | <code>그의 동생인 고기석 또한 만화가이다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
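These parameters map directly onto the loss constructor in sentence-transformers; a sketch of how the training loss would be set up (the `trust_remote_code` flag is assumed because the base model ships custom modeling code, per its `custom_code` tag):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer(
    "Alibaba-NLP/gte-multilingual-mlm-base", trust_remote_code=True
)

# scale=20.0 and cosine similarity, matching the parameters above. With
# (anchor, positive, negative) triplets, the other in-batch positives and
# negatives also serve as additional negatives for each anchor.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```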
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 96
- `learning_rate`: 0.0001
- `adam_epsilon`: 1e-07
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 96
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-07
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0010 | 1 | 6.0761 |
| 0.0019 | 2 | 5.4696 |
| 0.0029 | 3 | 6.0283 |
| 0.0039 | 4 | 5.725 |
| 0.0048 | 5 | 5.6116 |
| 0.0058 | 6 | 5.447 |
| 0.0068 | 7 | 5.4655 |
| 0.0077 | 8 | 5.6126 |
| 0.0087 | 9 | 5.9597 |
| 0.0097 | 10 | 5.6583 |
| 0.0106 | 11 | 5.6846 |
| 0.0116 | 12 | 5.3796 |
| 0.0126 | 13 | 5.9556 |
| 0.0136 | 14 | 5.9142 |
| 0.0145 | 15 | 5.8192 |
| 0.0155 | 16 | 5.4182 |
| 0.0165 | 17 | 5.3768 |
| 0.0174 | 18 | 5.7815 |
| 0.0184 | 19 | 5.5915 |
| 0.0194 | 20 | 5.5445 |
| 0.0203 | 21 | 5.4324 |
| 0.0213 | 22 | 5.365 |
| 0.0223 | 23 | 5.4745 |
| 0.0232 | 24 | 5.4416 |
| 0.0242 | 25 | 5.3282 |
| 0.0252 | 26 | 5.6499 |
| 0.0261 | 27 | 5.537 |
| 0.0271 | 28 | 5.3231 |
| 0.0281 | 29 | 5.5125 |
| 0.0290 | 30 | 5.4939 |
| 0.0300 | 31 | 5.7354 |
| 0.0310 | 32 | 5.2808 |
| 0.0319 | 33 | 5.6719 |
| 0.0329 | 34 | 5.4923 |
| 0.0339 | 35 | 5.2214 |
| 0.0348 | 36 | 5.2656 |
| 0.0358 | 37 | 5.3045 |
| 0.0368 | 38 | 5.6441 |
| 0.0378 | 39 | 5.2787 |
| 0.0387 | 40 | 5.0395 |
| 0.0397 | 41 | 5.398 |
| 0.0407 | 42 | 4.9811 |
| 0.0416 | 43 | 5.6311 |
| 0.0426 | 44 | 5.1735 |
| 0.0436 | 45 | 4.8979 |
| 0.0445 | 46 | 5.0585 |
| 0.0455 | 47 | 4.7773 |
| 0.0465 | 48 | 4.6178 |
| 0.0474 | 49 | 4.896 |
| 0.0484 | 50 | 4.7486 |
| 0.0494 | 51 | 4.6619 |
| 0.0503 | 52 | 4.632 |
| 0.0513 | 53 | 4.7354 |
| 0.0523 | 54 | 4.838 |
| 0.0532 | 55 | 4.694 |
| 0.0542 | 56 | 4.6128 |
| 0.0552 | 57 | 4.4835 |
| 0.0561 | 58 | 4.4826 |
| 0.0571 | 59 | 4.5447 |
| 0.0581 | 60 | 4.3147 |
| 0.0591 | 61 | 4.4533 |
| 0.0600 | 62 | 4.2798 |
| 0.0610 | 63 | 4.1942 |
| 0.0620 | 64 | 4.057 |
| 0.0629 | 65 | 4.2321 |
| 0.0639 | 66 | 3.99 |
| 0.0649 | 67 | 3.8962 |
| 0.0658 | 68 | 3.6657 |
| 0.0668 | 69 | 3.5564 |
| 0.0678 | 70 | 3.5317 |
| 0.0687 | 71 | 3.2773 |
| 0.0697 | 72 | 2.9232 |
| 0.0707 | 73 | 3.2449 |
| 0.0716 | 74 | 3.4219 |
| 0.0726 | 75 | 2.9998 |
| 0.0736 | 76 | 3.0874 |
| 0.0745 | 77 | 3.1482 |
| 0.0755 | 78 | 3.1247 |
| 0.0765 | 79 | 2.8267 |
| 0.0774 | 80 | 2.8905 |
| 0.0784 | 81 | 2.6198 |
| 0.0794 | 82 | 2.5581 |
| 0.0803 | 83 | 2.4113 |
| 0.0813 | 84 | 2.4397 |
| 0.0823 | 85 | 2.1872 |
| 0.0833 | 86 | 2.3872 |
| 0.0842 | 87 | 2.3307 |
| 0.0852 | 88 | 2.0877 |
| 0.0862 | 89 | 2.263 |
| 0.0871 | 90 | 2.3048 |
| 0.0881 | 91 | 2.2513 |
| 0.0891 | 92 | 1.9171 |
| 0.0900 | 93 | 1.8847 |
| 0.0910 | 94 | 1.7235 |
| 0.0920 | 95 | 1.7936 |
| 0.0929 | 96 | 1.5383 |
| 0.0939 | 97 | 1.7893 |
| 0.0949 | 98 | 1.7196 |
| 0.0958 | 99 | 1.3623 |
| 0.0968 | 100 | 1.4125 |
| 0.0978 | 101 | 1.4238 |
| 0.0987 | 102 | 1.3337 |
| 0.0997 | 103 | 1.1182 |
| 0.1007 | 104 | 1.2398 |
| 0.1016 | 105 | 1.2641 |
| 0.1026 | 106 | 1.127 |
| 0.1036 | 107 | 1.0222 |
| 0.1045 | 108 | 0.9893 |
| 0.1055 | 109 | 1.0013 |
| 0.1065 | 110 | 1.0016 |
| 0.1075 | 111 | 0.8696 |
| 0.1084 | 112 | 0.7812 |
| 0.1094 | 113 | 0.8705 |
| 0.1104 | 114 | 0.7513 |
| 0.1113 | 115 | 0.7766 |
| 0.1123 | 116 | 0.7832 |
| 0.1133 | 117 | 0.8242 |
| 0.1142 | 118 | 0.7847 |
| 0.1152 | 119 | 0.6173 |
| 0.1162 | 120 | 0.6534 |
| 0.1171 | 121 | 0.8419 |
| 0.1181 | 122 | 0.643 |
| 0.1191 | 123 | 0.6175 |
| 0.1200 | 124 | 0.6818 |
| 0.1210 | 125 | 0.7784 |
| 0.1220 | 126 | 0.6841 |
| 0.1229 | 127 | 0.5816 |
| 0.1239 | 128 | 0.6703 |
| 0.1249 | 129 | 0.791 |
| 0.1258 | 130 | 0.702 |
| 0.1268 | 131 | 0.7459 |
| 0.1278 | 132 | 0.5366 |
| 0.1288 | 133 | 0.6364 |
| 0.1297 | 134 | 0.5861 |
| 0.1307 | 135 | 0.6055 |
| 0.1317 | 136 | 0.4942 |
| 0.1326 | 137 | 0.7339 |
| 0.1336 | 138 | 0.7158 |
| 0.1346 | 139 | 0.5765 |
| 0.1355 | 140 | 0.5728 |
| 0.1365 | 141 | 0.8744 |
| 0.1375 | 142 | 0.8383 |
| 0.1384 | 143 | 0.6794 |
| 0.1394 | 144 | 0.5059 |
| 0.1404 | 145 | 0.5983 |
| 0.1413 | 146 | 0.4877 |
| 0.1423 | 147 | 0.5052 |
| 0.1433 | 148 | 0.4639 |
| 0.1442 | 149 | 0.8201 |
| 0.1452 | 150 | 0.7705 |
| 0.1462 | 151 | 0.5388 |
| 0.1471 | 152 | 0.4903 |
| 0.1481 | 153 | 0.6167 |
| 0.1491 | 154 | 0.5446 |
| 0.1500 | 155 | 0.4804 |
| 0.1510 | 156 | 0.4164 |
| 0.1520 | 157 | 0.6186 |
| 0.1530 | 158 | 0.626 |
| 0.1539 | 159 | 0.4926 |
| 0.1549 | 160 | 0.3961 |
| 0.1559 | 161 | 0.519 |
| 0.1568 | 162 | 0.5028 |
| 0.1578 | 163 | 0.3303 |
| 0.1588 | 164 | 0.3655 |
| 0.1597 | 165 | 0.5287 |
| 0.1607 | 166 | 0.4638 |
| 0.1617 | 167 | 0.3889 |
| 0.1626 | 168 | 0.2826 |
| 0.1636 | 169 | 0.4772 |
| 0.1646 | 170 | 0.4887 |
| 0.1655 | 171 | 0.3495 |
| 0.1665 | 172 | 0.3662 |
| 0.1675 | 173 | 0.5344 |
| 0.1684 | 174 | 0.5746 |
| 0.1694 | 175 | 0.431 |
| 0.1704 | 176 | 0.4369 |
| 0.1713 | 177 | 0.5007 |
| 0.1723 | 178 | 0.3978 |
| 0.1733 | 179 | 0.2928 |
| 0.1742 | 180 | 0.2547 |
| 0.1752 | 181 | 0.6907 |
| 0.1762 | 182 | 0.4821 |
| 0.1772 | 183 | 0.4012 |
| 0.1781 | 184 | 0.3911 |
| 0.1791 | 185 | 0.6219 |
| 0.1801 | 186 | 0.5665 |
| 0.1810 | 187 | 0.4848 |
| 0.1820 | 188 | 0.3596 |
| 0.1830 | 189 | 0.5269 |
| 0.1839 | 190 | 0.4105 |
| 0.1849 | 191 | 0.4466 |
| 0.1859 | 192 | 0.2562 |
| 0.1868 | 193 | 0.6511 |
| 0.1878 | 194 | 0.4402 |
| 0.1888 | 195 | 0.4975 |
| 0.1897 | 196 | 0.3824 |
| 0.1907 | 197 | 0.4816 |
| 0.1917 | 198 | 0.3311 |
| 0.1926 | 199 | 0.3165 |
| 0.1936 | 200 | 0.2178 |
| 0.1946 | 201 | 0.4455 |
| 0.1955 | 202 | 0.4503 |
| 0.1965 | 203 | 0.2737 |
| 0.1975 | 204 | 0.3218 |
| 0.1985 | 205 | 0.437 |
| 0.1994 | 206 | 0.5546 |
| 0.2004 | 207 | 0.3565 |
| 0.2014 | 208 | 0.3306 |
| 0.2023 | 209 | 0.3767 |
| 0.2033 | 210 | 0.4443 |
| 0.2043 | 211 | 0.329 |
| 0.2052 | 212 | 0.3114 |
| 0.2062 | 213 | 0.41 |
| 0.2072 | 214 | 0.3516 |
| 0.2081 | 215 | 0.2819 |
| 0.2091 | 216 | 0.2407 |
| 0.2101 | 217 | 0.495 |
| 0.2110 | 218 | 0.4937 |
| 0.2120 | 219 | 0.339 |
| 0.2130 | 220 | 0.262 |
| 0.2139 | 221 | 0.4563 |
| 0.2149 | 222 | 0.4161 |
| 0.2159 | 223 | 0.3275 |
| 0.2168 | 224 | 0.2376 |
| 0.2178 | 225 | 0.5993 |
| 0.2188 | 226 | 0.4139 |
| 0.2197 | 227 | 0.38 |
| 0.2207 | 228 | 0.2289 |
| 0.2217 | 229 | 0.316 |
| 0.2227 | 230 | 0.3112 |
| 0.2236 | 231 | 0.2258 |
| 0.2246 | 232 | 0.1842 |
| 0.2256 | 233 | 0.5688 |
| 0.2265 | 234 | 0.4691 |
| 0.2275 | 235 | 0.2783 |
| 0.2285 | 236 | 0.24 |
| 0.2294 | 237 | 0.5951 |
| 0.2304 | 238 | 0.5229 |
| 0.2314 | 239 | 0.2762 |
| 0.2323 | 240 | 0.2738 |
| 0.2333 | 241 | 0.7251 |
| 0.2343 | 242 | 0.6153 |
| 0.2352 | 243 | 0.4335 |
| 0.2362 | 244 | 0.3748 |
| 0.2372 | 245 | 0.4762 |
| 0.2381 | 246 | 0.4231 |
| 0.2391 | 247 | 0.3399 |
| 0.2401 | 248 | 0.3431 |
| 0.2410 | 249 | 0.6332 |
| 0.2420 | 250 | 0.4854 |
| 0.2430 | 251 | 0.3843 |
| 0.2439 | 252 | 0.2856 |
| 0.2449 | 253 | 0.422 |
| 0.2459 | 254 | 0.4584 |
| 0.2469 | 255 | 0.2819 |
| 0.2478 | 256 | 0.2348 |
| 0.2488 | 257 | 0.6342 |
| 0.2498 | 258 | 0.5712 |
| 0.2507 | 259 | 0.4271 |
| 0.2517 | 260 | 0.3668 |
| 0.2527 | 261 | 0.4587 |
| 0.2536 | 262 | 0.398 |
| 0.2546 | 263 | 0.2927 |
| 0.2556 | 264 | 0.1945 |
| 0.2565 | 265 | 0.4647 |
| 0.2575 | 266 | 0.3355 |
| 0.2585 | 267 | 0.2279 |
| 0.2594 | 268 | 0.1717 |
| 0.2604 | 269 | 0.4749 |
| 0.2614 | 270 | 0.354 |
| 0.2623 | 271 | 0.3085 |
| 0.2633 | 272 | 0.1942 |
| 0.2643 | 273 | 0.3184 |
| 0.2652 | 274 | 0.2495 |
| 0.2662 | 275 | 0.1561 |
| 0.2672 | 276 | 0.1227 |
| 0.2682 | 277 | 0.4397 |
| 0.2691 | 278 | 0.3819 |
| 0.2701 | 279 | 0.2299 |
| 0.2711 | 280 | 0.206 |
| 0.2720 | 281 | 0.4568 |
| 0.2730 | 282 | 0.438 |
| 0.2740 | 283 | 0.3627 |
| 0.2749 | 284 | 0.2198 |
| 0.2759 | 285 | 0.4072 |
| 0.2769 | 286 | 0.3747 |
| 0.2778 | 287 | 0.2429 |
| 0.2788 | 288 | 0.1516 |
| 0.2798 | 289 | 0.2968 |
| 0.2807 | 290 | 0.2036 |
| 0.2817 | 291 | 0.1328 |
| 0.2827 | 292 | 0.1259 |
| 0.2836 | 293 | 0.472 |
| 0.2846 | 294 | 0.3646 |
| 0.2856 | 295 | 0.2081 |
| 0.2865 | 296 | 0.2639 |
| 0.2875 | 297 | 0.3555 |
| 0.2885 | 298 | 0.3066 |
| 0.2894 | 299 | 0.2551 |
| 0.2904 | 300 | 0.1732 |
| 0.2914 | 301 | 0.4561 |
| 0.2924 | 302 | 0.3249 |
| 0.2933 | 303 | 0.2603 |
| 0.2943 | 304 | 0.1947 |
| 0.2953 | 305 | 0.3652 |
| 0.2962 | 306 | 0.2871 |
| 0.2972 | 307 | 0.1943 |
| 0.2982 | 308 | 0.1563 |
| 0.2991 | 309 | 0.5157 |
| 0.3001 | 310 | 0.3872 |
| 0.3011 | 311 | 0.3106 |
| 0.3020 | 312 | 0.2399 |
| 0.3030 | 313 | 0.4417 |
| 0.3040 | 314 | 0.3091 |
| 0.3049 | 315 | 0.2586 |
| 0.3059 | 316 | 0.2273 |
| 0.3069 | 317 | 0.4961 |
| 0.3078 | 318 | 0.3 |
| 0.3088 | 319 | 0.2439 |
| 0.3098 | 320 | 0.2437 |
| 0.3107 | 321 | 0.3939 |
| 0.3117 | 322 | 0.3445 |
| 0.3127 | 323 | 0.2056 |
| 0.3136 | 324 | 0.1568 |
| 0.3146 | 325 | 0.326 |
| 0.3156 | 326 | 0.2451 |
| 0.3166 | 327 | 0.154 |
| 0.3175 | 328 | 0.1635 |
| 0.3185 | 329 | 0.2838 |
| 0.3195 | 330 | 0.2521 |
| 0.3204 | 331 | 0.2213 |
| 0.3214 | 332 | 0.1525 |
| 0.3224 | 333 | 0.6077 |
| 0.3233 | 334 | 0.4705 |
| 0.3243 | 335 | 0.3987 |
| 0.3253 | 336 | 0.2981 |
| 0.3262 | 337 | 0.6223 |
| 0.3272 | 338 | 0.4374 |
| 0.3282 | 339 | 0.4085 |
| 0.3291 | 340 | 0.3498 |
| 0.3301 | 341 | 0.54 |
| 0.3311 | 342 | 0.3981 |
| 0.3320 | 343 | 0.3082 |
| 0.3330 | 344 | 0.1925 |
| 0.3340 | 345 | 0.3002 |
| 0.3349 | 346 | 0.311 |
| 0.3359 | 347 | 0.2175 |
| 0.3369 | 348 | 0.2122 |
| 0.3379 | 349 | 0.3598 |
| 0.3388 | 350 | 0.3886 |
| 0.3398 | 351 | 0.2373 |
| 0.3408 | 352 | 0.1875 |
| 0.3417 | 353 | 0.3871 |
| 0.3427 | 354 | 0.336 |
| 0.3437 | 355 | 0.1932 |
| 0.3446 | 356 | 0.133 |
| 0.3456 | 357 | 0.2668 |
| 0.3466 | 358 | 0.1934 |
| 0.3475 | 359 | 0.1639 |
| 0.3485 | 360 | 0.1413 |
| 0.3495 | 361 | 0.3257 |
| 0.3504 | 362 | 0.3719 |
| 0.3514 | 363 | 0.2703 |
| 0.3524 | 364 | 0.1778 |
| 0.3533 | 365 | 0.5176 |
| 0.3543 | 366 | 0.4166 |
| 0.3553 | 367 | 0.2932 |
| 0.3562 | 368 | 0.2162 |
| 0.3572 | 369 | 0.3769 |
| 0.3582 | 370 | 0.3651 |
| 0.3591 | 371 | 0.2789 |
| 0.3601 | 372 | 0.184 |
| 0.3611 | 373 | 0.3129 |
| 0.3621 | 374 | 0.2772 |
| 0.3630 | 375 | 0.1632 |
| 0.3640 | 376 | 0.153 |
| 0.3650 | 377 | 0.3657 |
| 0.3659 | 378 | 0.2425 |
| 0.3669 | 379 | 0.2231 |
| 0.3679 | 380 | 0.2584 |
| 0.3688 | 381 | 0.3426 |
| 0.3698 | 382 | 0.2318 |
| 0.3708 | 383 | 0.2299 |
| 0.3717 | 384 | 0.2259 |
| 0.3727 | 385 | 0.3603 |
| 0.3737 | 386 | 0.3065 |
| 0.3746 | 387 | 0.1952 |
| 0.3756 | 388 | 0.1993 |
| 0.3766 | 389 | 0.2432 |
| 0.3775 | 390 | 0.1803 |
| 0.3785 | 391 | 0.1501 |
| 0.3795 | 392 | 0.1233 |
| 0.3804 | 393 | 0.408 |
| 0.3814 | 394 | 0.3125 |
| 0.3824 | 395 | 0.1763 |
| 0.3833 | 396 | 0.1835 |
| 0.3843 | 397 | 0.2864 |
| 0.3853 | 398 | 0.1949 |
| 0.3863 | 399 | 0.2508 |
| 0.3872 | 400 | 0.1604 |
| 0.3882 | 401 | 0.4943 |
| 0.3892 | 402 | 0.3117 |
| 0.3901 | 403 | 0.1871 |
| 0.3911 | 404 | 0.2269 |
| 0.3921 | 405 | 0.4589 |
| 0.3930 | 406 | 0.4147 |
| 0.3940 | 407 | 0.3166 |
| 0.3950 | 408 | 0.214 |
| 0.3959 | 409 | 0.3653 |
| 0.3969 | 410 | 0.3015 |
| 0.3979 | 411 | 0.2693 |
| 0.3988 | 412 | 0.2255 |
| 0.3998 | 413 | 0.34 |
| 0.4008 | 414 | 0.3124 |
| 0.4017 | 415 | 0.2054 |
| 0.4027 | 416 | 0.143 |
| 0.4037 | 417 | 0.3134 |
| 0.4046 | 418 | 0.2708 |
| 0.4056 | 419 | 0.218 |
| 0.4066 | 420 | 0.1358 |
| 0.4076 | 421 | 0.3048 |
| 0.4085 | 422 | 0.2991 |
| 0.4095 | 423 | 0.1625 |
| 0.4105 | 424 | 0.1424 |
| 0.4114 | 425 | 0.6295 |
| 0.4124 | 426 | 0.4449 |
| 0.4134 | 427 | 0.2376 |
| 0.4143 | 428 | 0.17 |
| 0.4153 | 429 | 0.3693 |
| 0.4163 | 430 | 0.3005 |
| 0.4172 | 431 | 0.2217 |
| 0.4182 | 432 | 0.2106 |
| 0.4192 | 433 | 0.4702 |
| 0.4201 | 434 | 0.3696 |
| 0.4211 | 435 | 0.2559 |
| 0.4221 | 436 | 0.206 |
| 0.4230 | 437 | 0.2921 |
| 0.4240 | 438 | 0.2854 |
| 0.4250 | 439 | 0.1696 |
| 0.4259 | 440 | 0.1717 |
| 0.4269 | 441 | 0.4509 |
| 0.4279 | 442 | 0.3348 |
| 0.4288 | 443 | 0.2641 |
| 0.4298 | 444 | 0.2692 |
| 0.4308 | 445 | 0.3977 |
| 0.4318 | 446 | 0.221 |
| 0.4327 | 447 | 0.185 |
| 0.4337 | 448 | 0.2015 |
| 0.4347 | 449 | 0.3542 |
| 0.4356 | 450 | 0.2652 |
| 0.4366 | 451 | 0.2787 |
| 0.4376 | 452 | 0.1511 |
| 0.4385 | 453 | 0.3545 |
| 0.4395 | 454 | 0.3312 |
| 0.4405 | 455 | 0.2895 |
| 0.4414 | 456 | 0.1381 |
| 0.4424 | 457 | 0.3802 |
| 0.4434 | 458 | 0.3101 |
| 0.4443 | 459 | 0.2186 |
| 0.4453 | 460 | 0.2026 |
| 0.4463 | 461 | 0.4204 |
| 0.4472 | 462 | 0.4106 |
| 0.4482 | 463 | 0.3247 |
| 0.4492 | 464 | 0.2362 |
| 0.4501 | 465 | 0.3277 |
| 0.4511 | 466 | 0.2262 |
| 0.4521 | 467 | 0.1485 |
| 0.4530 | 468 | 0.1806 |
| 0.4540 | 469 | 0.3533 |
| 0.4550 | 470 | 0.318 |
| 0.4560 | 471 | 0.2668 |
| 0.4569 | 472 | 0.2618 |
| 0.4579 | 473 | 0.4159 |
| 0.4589 | 474 | 0.3386 |
| 0.4598 | 475 | 0.2249 |
| 0.4608 | 476 | 0.2795 |
| 0.4618 | 477 | 0.3033 |
| 0.4627 | 478 | 0.3096 |
| 0.4637 | 479 | 0.2442 |
| 0.4647 | 480 | 0.2598 |
| 0.4656 | 481 | 0.3511 |
| 0.4666 | 482 | 0.2941 |
| 0.4676 | 483 | 0.1831 |
| 0.4685 | 484 | 0.1473 |
| 0.4695 | 485 | 0.2603 |
| 0.4705 | 486 | 0.2713 |
| 0.4714 | 487 | 0.178 |
| 0.4724 | 488 | 0.1952 |
| 0.4734 | 489 | 0.2763 |
| 0.4743 | 490 | 0.1591 |
| 0.4753 | 491 | 0.1298 |
| 0.4763 | 492 | 0.0777 |
| 0.4773 | 493 | 0.2729 |
| 0.4782 | 494 | 0.2355 |
| 0.4792 | 495 | 0.1963 |
| 0.4802 | 496 | 0.1306 |
| 0.4811 | 497 | 0.228 |
| 0.4821 | 498 | 0.2404 |
| 0.4831 | 499 | 0.1342 |
| 0.4840 | 500 | 0.193 |
| 0.4850 | 501 | 0.3564 |
| 0.4860 | 502 | 0.2635 |
| 0.4869 | 503 | 0.1732 |
| 0.4879 | 504 | 0.1443 |
| 0.4889 | 505 | 0.31 |
| 0.4898 | 506 | 0.2281 |
| 0.4908 | 507 | 0.1621 |
| 0.4918 | 508 | 0.1507 |
| 0.4927 | 509 | 0.269 |
| 0.4937 | 510 | 0.2455 |
| 0.4947 | 511 | 0.1752 |
| 0.4956 | 512 | 0.1212 |
| 0.4966 | 513 | 0.2895 |
| 0.4976 | 514 | 0.2037 |
| 0.4985 | 515 | 0.1402 |
| 0.4995 | 516 | 0.1659 |
| 0.5005 | 517 | 0.3343 |
| 0.5015 | 518 | 0.236 |
| 0.5024 | 519 | 0.2065 |
| 0.5034 | 520 | 0.1313 |
| 0.5044 | 521 | 0.3198 |
| 0.5053 | 522 | 0.2275 |
| 0.5063 | 523 | 0.1814 |
| 0.5073 | 524 | 0.1572 |
| 0.5082 | 525 | 0.298 |
| 0.5092 | 526 | 0.2303 |
| 0.5102 | 527 | 0.143 |
| 0.5111 | 528 | 0.1499 |
| 0.5121 | 529 | 0.1552 |
| 0.5131 | 530 | 0.1778 |
| 0.5140 | 531 | 0.1637 |
| 0.5150 | 532 | 0.2247 |
| 0.5160 | 533 | 0.264 |
| 0.5169 | 534 | 0.1498 |
| 0.5179 | 535 | 0.1787 |
| 0.5189 | 536 | 0.1682 |
| 0.5198 | 537 | 0.3026 |
| 0.5208 | 538 | 0.2661 |
| 0.5218 | 539 | 0.1063 |
| 0.5227 | 540 | 0.1339 |
| 0.5237 | 541 | 0.205 |
| 0.5247 | 542 | 0.2495 |
| 0.5257 | 543 | 0.1494 |
| 0.5266 | 544 | 0.1265 |
| 0.5276 | 545 | 0.2646 |
| 0.5286 | 546 | 0.2099 |
| 0.5295 | 547 | 0.2223 |
| 0.5305 | 548 | 0.1585 |
| 0.5315 | 549 | 0.3334 |
| 0.5324 | 550 | 0.1909 |
| 0.5334 | 551 | 0.229 |
| 0.5344 | 552 | 0.1434 |
| 0.5353 | 553 | 0.2243 |
| 0.5363 | 554 | 0.1998 |
| 0.5373 | 555 | 0.1558 |
| 0.5382 | 556 | 0.2233 |
| 0.5392 | 557 | 0.2845 |
| 0.5402 | 558 | 0.2471 |
| 0.5411 | 559 | 0.1491 |
| 0.5421 | 560 | 0.1198 |
| 0.5431 | 561 | 0.2872 |
| 0.5440 | 562 | 0.263 |
| 0.5450 | 563 | 0.1803 |
| 0.5460 | 564 | 0.2334 |
| 0.5470 | 565 | 0.2927 |
| 0.5479 | 566 | 0.3225 |
| 0.5489 | 567 | 0.2546 |
| 0.5499 | 568 | 0.1925 |
| 0.5508 | 569 | 0.3453 |
| 0.5518 | 570 | 0.3574 |
| 0.5528 | 571 | 0.2482 |
| 0.5537 | 572 | 0.1775 |
| 0.5547 | 573 | 0.2959 |
| 0.5557 | 574 | 0.2994 |
| 0.5566 | 575 | 0.2631 |
| 0.5576 | 576 | 0.3043 |
| 0.5586 | 577 | 0.2983 |
| 0.5595 | 578 | 0.3466 |
| 0.5605 | 579 | 0.2577 |
| 0.5615 | 580 | 0.2971 |
| 0.5624 | 581 | 0.3473 |
| 0.5634 | 582 | 0.2393 |
| 0.5644 | 583 | 0.2047 |
| 0.5653 | 584 | 0.228 |
| 0.5663 | 585 | 0.3039 |
| 0.5673 | 586 | 0.261 |
| 0.5682 | 587 | 0.2039 |
| 0.5692 | 588 | 0.1687 |
| 0.5702 | 589 | 0.4588 |
| 0.5712 | 590 | 0.3427 |
| 0.5721 | 591 | 0.2351 |
| 0.5731 | 592 | 0.2474 |
| 0.5741 | 593 | 0.339 |
| 0.5750 | 594 | 0.2491 |
| 0.5760 | 595 | 0.2103 |
| 0.5770 | 596 | 0.1619 |
| 0.5779 | 597 | 0.3744 |
| 0.5789 | 598 | 0.2676 |
| 0.5799 | 599 | 0.2709 |
| 0.5808 | 600 | 0.1632 |
| 0.5818 | 601 | 0.3176 |
| 0.5828 | 602 | 0.4045 |
| 0.5837 | 603 | 0.2417 |
| 0.5847 | 604 | 0.2294 |
| 0.5857 | 605 | 0.3201 |
| 0.5866 | 606 | 0.2585 |
| 0.5876 | 607 | 0.2155 |
| 0.5886 | 608 | 0.2254 |
| 0.5895 | 609 | 0.2301 |
| 0.5905 | 610 | 0.2925 |
| 0.5915 | 611 | 0.1517 |
| 0.5924 | 612 | 0.1448 |
| 0.5934 | 613 | 0.2595 |
| 0.5944 | 614 | 0.2984 |
| 0.5954 | 615 | 0.2566 |
| 0.5963 | 616 | 0.319 |
| 0.5973 | 617 | 0.4642 |
| 0.5983 | 618 | 0.4154 |
| 0.5992 | 619 | 0.2348 |
| 0.6002 | 620 | 0.1986 |
| 0.6012 | 621 | 0.4085 |
| 0.6021 | 622 | 0.365 |
| 0.6031 | 623 | 0.3618 |
| 0.6041 | 624 | 0.2765 |
| 0.6050 | 625 | 0.3155 |
| 0.6060 | 626 | 0.3 |
| 0.6070 | 627 | 0.2391 |
| 0.6079 | 628 | 0.3212 |
| 0.6089 | 629 | 0.2718 |
| 0.6099 | 630 | 0.1658 |
| 0.6108 | 631 | 0.2003 |
| 0.6118 | 632 | 0.2386 |
| 0.6128 | 633 | 0.3497 |
| 0.6137 | 634 | 0.401 |
| 0.6147 | 635 | 0.224 |
| 0.6157 | 636 | 0.2143 |
| 0.6167 | 637 | 0.2817 |
| 0.6176 | 638 | 0.27 |
| 0.6186 | 639 | 0.2028 |
| 0.6196 | 640 | 0.1908 |
| 0.6205 | 641 | 0.2203 |
| 0.6215 | 642 | 0.2251 |
| 0.6225 | 643 | 0.1316 |
| 0.6234 | 644 | 0.2055 |
| 0.6244 | 645 | 0.3069 |
| 0.6254 | 646 | 0.2532 |
| 0.6263 | 647 | 0.2085 |
| 0.6273 | 648 | 0.176 |
| 0.6283 | 649 | 0.1893 |
| 0.6292 | 650 | 0.187 |
| 0.6302 | 651 | 0.1931 |
| 0.6312 | 652 | 0.1146 |
| 0.6321 | 653 | 0.2694 |
| 0.6331 | 654 | 0.2045 |
| 0.6341 | 655 | 0.1719 |
| 0.6350 | 656 | 0.1932 |
| 0.6360 | 657 | 0.3392 |
| 0.6370 | 658 | 0.2579 |
| 0.6379 | 659 | 0.1231 |
| 0.6389 | 660 | 0.2016 |
| 0.6399 | 661 | 0.2487 |
| 0.6409 | 662 | 0.2017 |
| 0.6418 | 663 | 0.164 |
| 0.6428 | 664 | 0.1537 |
| 0.6438 | 665 | 0.2535 |
| 0.6447 | 666 | 0.3554 |
| 0.6457 | 667 | 0.2792 |
| 0.6467 | 668 | 0.2299 |
| 0.6476 | 669 | 0.3936 |
| 0.6486 | 670 | 0.2916 |
| 0.6496 | 671 | 0.2188 |
| 0.6505 | 672 | 0.2648 |
| 0.6515 | 673 | 0.2907 |
| 0.6525 | 674 | 0.2079 |
| 0.6534 | 675 | 0.179 |
| 0.6544 | 676 | 0.1183 |
| 0.6554 | 677 | 0.3298 |
| 0.6563 | 678 | 0.2557 |
| 0.6573 | 679 | 0.2164 |
| 0.6583 | 680 | 0.1619 |
| 0.6592 | 681 | 0.2748 |
| 0.6602 | 682 | 0.212 |
| 0.6612 | 683 | 0.1689 |
| 0.6621 | 684 | 0.1625 |
| 0.6631 | 685 | 0.3136 |
| 0.6641 | 686 | 0.3384 |
| 0.6651 | 687 | 0.2052 |
| 0.6660 | 688 | 0.2249 |
| 0.6670 | 689 | 0.249 |
| 0.6680 | 690 | 0.3422 |
| 0.6689 | 691 | 0.228 |
| 0.6699 | 692 | 0.2652 |
| 0.6709 | 693 | 0.2123 |
| 0.6718 | 694 | 0.1951 |
| 0.6728 | 695 | 0.1579 |
| 0.6738 | 696 | 0.1486 |
| 0.6747 | 697 | 0.2718 |
| 0.6757 | 698 | 0.1692 |
| 0.6767 | 699 | 0.1749 |
| 0.6776 | 700 | 0.1648 |
| 0.6786 | 701 | 0.2419 |
| 0.6796 | 702 | 0.1848 |
| 0.6805 | 703 | 0.1995 |
| 0.6815 | 704 | 0.1868 |
| 0.6825 | 705 | 0.2431 |
| 0.6834 | 706 | 0.2886 |
| 0.6844 | 707 | 0.2043 |
| 0.6854 | 708 | 0.2402 |
| 0.6864 | 709 | 0.2462 |
| 0.6873 | 710 | 0.2541 |
| 0.6883 | 711 | 0.189 |
| 0.6893 | 712 | 0.162 |
| 0.6902 | 713 | 0.2729 |
| 0.6912 | 714 | 0.2727 |
| 0.6922 | 715 | 0.1799 |
| 0.6931 | 716 | 0.1735 |
| 0.6941 | 717 | 0.23 |
| 0.6951 | 718 | 0.1824 |
| 0.6960 | 719 | 0.1565 |
| 0.6970 | 720 | 0.1915 |
| 0.6980 | 721 | 0.2603 |
| 0.6989 | 722 | 0.1904 |
| 0.6999 | 723 | 0.1433 |
| 0.7009 | 724 | 0.1984 |
| 0.7018 | 725 | 0.2184 |
| 0.7028 | 726 | 0.1427 |
| 0.7038 | 727 | 0.1356 |
| 0.7047 | 728 | 0.1159 |
| 0.7057 | 729 | 0.276 |
| 0.7067 | 730 | 0.2653 |
| 0.7076 | 731 | 0.1548 |
| 0.7086 | 732 | 0.1489 |
| 0.7096 | 733 | 0.2344 |
| 0.7106 | 734 | 0.253 |
| 0.7115 | 735 | 0.2125 |
| 0.7125 | 736 | 0.1709 |
| 0.7135 | 737 | 0.3855 |
| 0.7144 | 738 | 0.2598 |
| 0.7154 | 739 | 0.2447 |
| 0.7164 | 740 | 0.198 |
| 0.7173 | 741 | 0.4014 |
| 0.7183 | 742 | 0.3409 |
| 0.7193 | 743 | 0.2768 |
| 0.7202 | 744 | 0.2744 |
| 0.7212 | 745 | 0.2719 |
| 0.7222 | 746 | 0.2887 |
| 0.7231 | 747 | 0.2331 |
| 0.7241 | 748 | 0.2412 |
| 0.7251 | 749 | 0.2688 |
| 0.7260 | 750 | 0.3273 |
| 0.7270 | 751 | 0.2119 |
| 0.7280 | 752 | 0.1554 |
| 0.7289 | 753 | 0.2862 |
| 0.7299 | 754 | 0.2208 |
| 0.7309 | 755 | 0.2071 |
| 0.7318 | 756 | 0.1403 |
| 0.7328 | 757 | 0.2807 |
| 0.7338 | 758 | 0.2508 |
| 0.7348 | 759 | 0.2477 |
| 0.7357 | 760 | 0.1387 |
| 0.7367 | 761 | 0.1557 |
| 0.7377 | 762 | 0.2076 |
| 0.7386 | 763 | 0.1346 |
| 0.7396 | 764 | 0.1581 |
| 0.7406 | 765 | 0.2773 |
| 0.7415 | 766 | 0.2387 |
| 0.7425 | 767 | 0.1682 |
| 0.7435 | 768 | 0.1814 |
| 0.7444 | 769 | 0.2244 |
| 0.7454 | 770 | 0.278 |
| 0.7464 | 771 | 0.274 |
| 0.7473 | 772 | 0.2074 |
| 0.7483 | 773 | 0.3504 |
| 0.7493 | 774 | 0.2054 |
| 0.7502 | 775 | 0.198 |
| 0.7512 | 776 | 0.1997 |
| 0.7522 | 777 | 0.2949 |
| 0.7531 | 778 | 0.278 |
| 0.7541 | 779 | 0.2429 |
| 0.7551 | 780 | 0.2305 |
| 0.7561 | 781 | 0.2781 |
| 0.7570 | 782 | 0.2328 |
| 0.7580 | 783 | 0.194 |
| 0.7590 | 784 | 0.1614 |
| 0.7599 | 785 | 0.1778 |
| 0.7609 | 786 | 0.2202 |
| 0.7619 | 787 | 0.1515 |
| 0.7628 | 788 | 0.1178 |
| 0.7638 | 789 | 0.2183 |
| 0.7648 | 790 | 0.1887 |
| 0.7657 | 791 | 0.1505 |
| 0.7667 | 792 | 0.1637 |
| 0.7677 | 793 | 0.2041 |
| 0.7686 | 794 | 0.2123 |
| 0.7696 | 795 | 0.1754 |
| 0.7706 | 796 | 0.1097 |
| 0.7715 | 797 | 0.1998 |
| 0.7725 | 798 | 0.2179 |
| 0.7735 | 799 | 0.1298 |
| 0.7744 | 800 | 0.1464 |
| 0.7754 | 801 | 0.2942 |
| 0.7764 | 802 | 0.1461 |
| 0.7773 | 803 | 0.1513 |
| 0.7783 | 804 | 0.0863 |
| 0.7793 | 805 | 0.2612 |
| 0.7803 | 806 | 0.1898 |
| 0.7812 | 807 | 0.1473 |
| 0.7822 | 808 | 0.1912 |
| 0.7832 | 809 | 0.3559 |
| 0.7841 | 810 | 0.2183 |
| 0.7851 | 811 | 0.2673 |
| 0.7861 | 812 | 0.1637 |
| 0.7870 | 813 | 0.281 |
| 0.7880 | 814 | 0.2632 |
| 0.7890 | 815 | 0.1774 |
| 0.7899 | 816 | 0.1777 |
| 0.7909 | 817 | 0.1481 |
| 0.7919 | 818 | 0.235 |
| 0.7928 | 819 | 0.1498 |
| 0.7938 | 820 | 0.1587 |
| 0.7948 | 821 | 0.3269 |
| 0.7957 | 822 | 0.3765 |
| 0.7967 | 823 | 0.2102 |
| 0.7977 | 824 | 0.1896 |
| 0.7986 | 825 | 0.1723 |
| 0.7996 | 826 | 0.1492 |
| 0.8006 | 827 | 0.1167 |
| 0.8015 | 828 | 0.1479 |
| 0.8025 | 829 | 0.2585 |
| 0.8035 | 830 | 0.234 |
| 0.8045 | 831 | 0.2022 |
| 0.8054 | 832 | 0.1555 |
| 0.8064 | 833 | 0.2906 |
| 0.8074 | 834 | 0.2084 |
| 0.8083 | 835 | 0.1931 |
| 0.8093 | 836 | 0.1421 |
| 0.8103 | 837 | 0.3004 |
| 0.8112 | 838 | 0.1611 |
| 0.8122 | 839 | 0.1267 |
| 0.8132 | 840 | 0.1731 |
| 0.8141 | 841 | 0.2392 |
| 0.8151 | 842 | 0.2635 |
| 0.8161 | 843 | 0.1644 |
| 0.8170 | 844 | 0.167 |
| 0.8180 | 845 | 0.2597 |
| 0.8190 | 846 | 0.2002 |
| 0.8199 | 847 | 0.1326 |
| 0.8209 | 848 | 0.106 |
| 0.8219 | 849 | 0.2799 |
| 0.8228 | 850 | 0.198 |
| 0.8238 | 851 | 0.1743 |
| 0.8248 | 852 | 0.1638 |
| 0.8258 | 853 | 0.1914 |
| 0.8267 | 854 | 0.1987 |
| 0.8277 | 855 | 0.1772 |
| 0.8287 | 856 | 0.1731 |
| 0.8296 | 857 | 0.3019 |
| 0.8306 | 858 | 0.2445 |
| 0.8316 | 859 | 0.2682 |
| 0.8325 | 860 | 0.1793 |
| 0.8335 | 861 | 0.2383 |
| 0.8345 | 862 | 0.2082 |
| 0.8354 | 863 | 0.1818 |
| 0.8364 | 864 | 0.1234 |
| 0.8374 | 865 | 0.1862 |
| 0.8383 | 866 | 0.2367 |
| 0.8393 | 867 | 0.2319 |
| 0.8403 | 868 | 0.16 |
| 0.8412 | 869 | 0.3397 |
| 0.8422 | 870 | 0.324 |
| 0.8432 | 871 | 0.1905 |
| 0.8441 | 872 | 0.1516 |
| 0.8451 | 873 | 0.2282 |
| 0.8461 | 874 | 0.2604 |
| 0.8470 | 875 | 0.129 |
| 0.8480 | 876 | 0.2705 |
| 0.8490 | 877 | 0.2379 |
| 0.8500 | 878 | 0.2433 |
| 0.8509 | 879 | 0.2665 |
| 0.8519 | 880 | 0.1442 |
| 0.8529 | 881 | 0.1955 |
| 0.8538 | 882 | 0.1056 |
| 0.8548 | 883 | 0.1661 |
| 0.8558 | 884 | 0.1385 |
| 0.8567 | 885 | 0.2559 |
| 0.8577 | 886 | 0.3004 |
| 0.8587 | 887 | 0.2068 |
| 0.8596 | 888 | 0.1703 |
| 0.8606 | 889 | 0.1619 |
| 0.8616 | 890 | 0.2356 |
| 0.8625 | 891 | 0.1775 |
| 0.8635 | 892 | 0.1997 |
| 0.8645 | 893 | 0.245 |
| 0.8654 | 894 | 0.1647 |
| 0.8664 | 895 | 0.1303 |
| 0.8674 | 896 | 0.153 |
| 0.8683 | 897 | 0.1475 |
| 0.8693 | 898 | 0.1552 |
| 0.8703 | 899 | 0.1234 |
| 0.8712 | 900 | 0.2111 |
| 0.8722 | 901 | 0.3177 |
| 0.8732 | 902 | 0.3358 |
| 0.8742 | 903 | 0.1544 |
| 0.8751 | 904 | 0.2549 |
| 0.8761 | 905 | 0.3249 |
| 0.8771 | 906 | 0.1918 |
| 0.8780 | 907 | 0.1541 |
| 0.8790 | 908 | 0.1671 |
| 0.8800 | 909 | 0.1929 |
| 0.8809 | 910 | 0.1361 |
| 0.8819 | 911 | 0.1312 |
| 0.8829 | 912 | 0.1366 |
| 0.8838 | 913 | 0.3001 |
| 0.8848 | 914 | 0.1911 |
| 0.8858 | 915 | 0.18 |
| 0.8867 | 916 | 0.1533 |
| 0.8877 | 917 | 0.2777 |
| 0.8887 | 918 | 0.2134 |
| 0.8896 | 919 | 0.1231 |
| 0.8906 | 920 | 0.1581 |
| 0.8916 | 921 | 0.1825 |
| 0.8925 | 922 | 0.1936 |
| 0.8935 | 923 | 0.1673 |
| 0.8945 | 924 | 0.1542 |
| 0.8955 | 925 | 0.1837 |
| 0.8964 | 926 | 0.2207 |
| 0.8974 | 927 | 0.2207 |
| 0.8984 | 928 | 0.1374 |
| 0.8993 | 929 | 0.2619 |
| 0.9003 | 930 | 0.2778 |
| 0.9013 | 931 | 0.1997 |
| 0.9022 | 932 | 0.1156 |
| 0.9032 | 933 | 0.2088 |
| 0.9042 | 934 | 0.2807 |
| 0.9051 | 935 | 0.2751 |
| 0.9061 | 936 | 0.2431 |
| 0.9071 | 937 | 0.2613 |
| 0.9080 | 938 | 0.2697 |
| 0.9090 | 939 | 0.2209 |
| 0.9100 | 940 | 0.2324 |
| 0.9109 | 941 | 0.1753 |
| 0.9119 | 942 | 0.2128 |
| 0.9129 | 943 | 0.1826 |
| 0.9138 | 944 | 0.102 |
| 0.9148 | 945 | 0.176 |
| 0.9158 | 946 | 0.1252 |
| 0.9167 | 947 | 0.1666 |
| 0.9177 | 948 | 0.0822 |
| 0.9187 | 949 | 0.1441 |
| 0.9197 | 950 | 0.1652 |
| 0.9206 | 951 | 0.1324 |
| 0.9216 | 952 | 0.1512 |
| 0.9226 | 953 | 0.2502 |
| 0.9235 | 954 | 0.2076 |
| 0.9245 | 955 | 0.1593 |
| 0.9255 | 956 | 0.1547 |
| 0.9264 | 957 | 0.2741 |
| 0.9274 | 958 | 0.2831 |
| 0.9284 | 959 | 0.1572 |
| 0.9293 | 960 | 0.2008 |
| 0.9303 | 961 | 0.1996 |
| 0.9313 | 962 | 0.1763 |
| 0.9322 | 963 | 0.2071 |
| 0.9332 | 964 | 0.1907 |
| 0.9342 | 965 | 0.3358 |
| 0.9351 | 966 | 0.3179 |
| 0.9361 | 967 | 0.1809 |
| 0.9371 | 968 | 0.1545 |
| 0.9380 | 969 | 0.2253 |
| 0.9390 | 970 | 0.1763 |
| 0.9400 | 971 | 0.1482 |
| 0.9409 | 972 | 0.2351 |
| 0.9419 | 973 | 0.2978 |
| 0.9429 | 974 | 0.3536 |
| 0.9439 | 975 | 0.2114 |
| 0.9448 | 976 | 0.1088 |
| 0.9458 | 977 | 0.1774 |
| 0.9468 | 978 | 0.2082 |
| 0.9477 | 979 | 0.2123 |
| 0.9487 | 980 | 0.1031 |
| 0.9497 | 981 | 0.3155 |
| 0.9506 | 982 | 0.2007 |
| 0.9516 | 983 | 0.2026 |
| 0.9526 | 984 | 0.1902 |
| 0.9535 | 985 | 0.2726 |
| 0.9545 | 986 | 0.3079 |
| 0.9555 | 987 | 0.2452 |
| 0.9564 | 988 | 0.2419 |
| 0.9574 | 989 | 0.336 |
| 0.9584 | 990 | 0.3021 |
| 0.9593 | 991 | 0.2599 |
| 0.9603 | 992 | 0.1757 |
| 0.9613 | 993 | 0.1959 |
| 0.9622 | 994 | 0.1737 |
| 0.9632 | 995 | 0.1216 |
| 0.9642 | 996 | 0.1657 |
| 0.9652 | 997 | 0.2196 |
| 0.9661 | 998 | 0.2473 |
| 0.9671 | 999 | 0.1864 |
| 0.9681 | 1000 | 0.1223 |
| 0.9690 | 1001 | 0.1703 |
| 0.9700 | 1002 | 0.1463 |
| 0.9710 | 1003 | 0.1289 |
| 0.9719 | 1004 | 0.1227 |
| 0.9729 | 1005 | 0.2686 |
| 0.9739 | 1006 | 0.2623 |
| 0.9748 | 1007 | 0.2177 |
| 0.9758 | 1008 | 0.1847 |
| 0.9768 | 1009 | 0.2195 |
| 0.9777 | 1010 | 0.1494 |
| 0.9787 | 1011 | 0.1393 |
| 0.9797 | 1012 | 0.1343 |
| 0.9806 | 1013 | 0.1859 |
| 0.9816 | 1014 | 0.1894 |
| 0.9826 | 1015 | 0.1413 |
| 0.9835 | 1016 | 0.193 |
| 0.9845 | 1017 | 0.2461 |
| 0.9855 | 1018 | 0.1995 |
| 0.9864 | 1019 | 0.1768 |
| 0.9874 | 1020 | 0.1076 |
| 0.9884 | 1021 | 0.1739 |
| 0.9894 | 1022 | 0.1995 |
| 0.9903 | 1023 | 0.1209 |
| 0.9913 | 1024 | 0.2087 |
| 0.9923 | 1025 | 0.2498 |
| 0.9932 | 1026 | 0.1803 |
| 0.9942 | 1027 | 0.1988 |
| 0.9952 | 1028 | 0.1701 |
| 0.9961 | 1029 | 0.1534 |
| 0.9971 | 1030 | 0.187 |
| 0.9981 | 1031 | 0.162 |
| 0.9990 | 1032 | 0.2118 |
| 1.0010 | 1033 | 0.2929 |
| 1.0019 | 1034 | 0.2291 |
| 1.0029 | 1035 | 0.2067 |
| 1.0039 | 1036 | 0.1164 |
| 1.0048 | 1037 | 0.282 |
| 1.0058 | 1038 | 0.2752 |
| 1.0068 | 1039 | 0.174 |
| 1.0077 | 1040 | 0.1497 |
| 1.0087 | 1041 | 0.2671 |
| 1.0097 | 1042 | 0.19 |
| 1.0106 | 1043 | 0.2011 |
| 1.0116 | 1044 | 0.1796 |
| 1.0126 | 1045 | 0.4501 |
| 1.0136 | 1046 | 0.3541 |
| 1.0145 | 1047 | 0.2418 |
| 1.0155 | 1048 | 0.1763 |
| 1.0165 | 1049 | 0.3594 |
| 1.0174 | 1050 | 0.2633 |
| 1.0184 | 1051 | 0.2493 |
| 1.0194 | 1052 | 0.1646 |
| 1.0203 | 1053 | 0.3436 |
| 1.0213 | 1054 | 0.1855 |
| 1.0223 | 1055 | 0.1583 |
| 1.0232 | 1056 | 0.1199 |
| 1.0242 | 1057 | 0.221 |
| 1.0252 | 1058 | 0.2117 |
| 1.0261 | 1059 | 0.1148 |
| 1.0271 | 1060 | 0.0659 |
| 1.0281 | 1061 | 0.2333 |
| 1.0290 | 1062 | 0.1599 |
| 1.0300 | 1063 | 0.1289 |
| 1.0310 | 1064 | 0.1589 |
| 1.0319 | 1065 | 0.1931 |
| 1.0329 | 1066 | 0.103 |
| 1.0339 | 1067 | 0.0816 |
| 1.0348 | 1068 | 0.123 |
| 1.0358 | 1069 | 0.3173 |
| 1.0368 | 1070 | 0.2428 |
| 1.0378 | 1071 | 0.1517 |
| 1.0387 | 1072 | 0.1098 |
| 1.0397 | 1073 | 0.2683 |
| 1.0407 | 1074 | 0.1819 |
| 1.0416 | 1075 | 0.1391 |
| 1.0426 | 1076 | 0.1458 |
| 1.0436 | 1077 | 0.2526 |
| 1.0445 | 1078 | 0.2046 |
| 1.0455 | 1079 | 0.1757 |
| 1.0465 | 1080 | 0.1147 |
| 1.0474 | 1081 | 0.1757 |
| 1.0484 | 1082 | 0.1646 |
| 1.0494 | 1083 | 0.1048 |
| 1.0503 | 1084 | 0.1362 |
| 1.0513 | 1085 | 0.2027 |
| 1.0523 | 1086 | 0.177 |
| 1.0532 | 1087 | 0.1162 |
| 1.0542 | 1088 | 0.0951 |
| 1.0552 | 1089 | 0.1671 |
| 1.0561 | 1090 | 0.1399 |
| 1.0571 | 1091 | 0.1074 |
| 1.0581 | 1092 | 0.0689 |
| 1.0591 | 1093 | 0.2981 |
| 1.0600 | 1094 | 0.2311 |
| 1.0610 | 1095 | 0.2401 |
| 1.0620 | 1096 | 0.1817 |
| 1.0629 | 1097 | 0.2613 |
| 1.0639 | 1098 | 0.299 |
| 1.0649 | 1099 | 0.1697 |
| 1.0658 | 1100 | 0.1452 |
| 1.0668 | 1101 | 0.3065 |
| 1.0678 | 1102 | 0.2141 |
| 1.0687 | 1103 | 0.1252 |
| 1.0697 | 1104 | 0.1234 |
| 1.0707 | 1105 | 0.2461 |
| 1.0716 | 1106 | 0.1347 |
| 1.0726 | 1107 | 0.1144 |
| 1.0736 | 1108 | 0.1377 |
| 1.0745 | 1109 | 0.2411 |
| 1.0755 | 1110 | 0.2086 |
| 1.0765 | 1111 | 0.1262 |
| 1.0774 | 1112 | 0.1295 |
| 1.0784 | 1113 | 0.1562 |
| 1.0794 | 1114 | 0.1432 |
| 1.0803 | 1115 | 0.1153 |
| 1.0813 | 1116 | 0.0903 |
| 1.0823 | 1117 | 0.2769 |
| 1.0833 | 1118 | 0.2323 |
| 1.0842 | 1119 | 0.2161 |
| 1.0852 | 1120 | 0.2143 |
| 1.0862 | 1121 | 0.2071 |
| 1.0871 | 1122 | 0.1218 |
| 1.0881 | 1123 | 0.1081 |
| 1.0891 | 1124 | 0.1384 |
| 1.0900 | 1125 | 0.3029 |
| 1.0910 | 1126 | 0.2298 |
| 1.0920 | 1127 | 0.1164 |
| 1.0929 | 1128 | 0.1005 |
| 1.0939 | 1129 | 0.2394 |
| 1.0949 | 1130 | 0.1573 |
| 1.0958 | 1131 | 0.1618 |
| 1.0968 | 1132 | 0.0811 |
| 1.0978 | 1133 | 0.2848 |
| 1.0987 | 1134 | 0.1559 |
| 1.0997 | 1135 | 0.1678 |
| 1.1007 | 1136 | 0.0844 |
| 1.1016 | 1137 | 0.2772 |
| 1.1026 | 1138 | 0.1883 |
| 1.1036 | 1139 | 0.2094 |
| 1.1045 | 1140 | 0.1637 |
| 1.1055 | 1141 | 0.1903 |
| 1.1065 | 1142 | 0.1275 |
| 1.1075 | 1143 | 0.1122 |
| 1.1084 | 1144 | 0.1134 |
| 1.1094 | 1145 | 0.2118 |
| 1.1104 | 1146 | 0.166 |
| 1.1113 | 1147 | 0.1092 |
| 1.1123 | 1148 | 0.1605 |
| 1.1133 | 1149 | 0.1707 |
| 1.1142 | 1150 | 0.1353 |
| 1.1152 | 1151 | 0.0716 |
| 1.1162 | 1152 | 0.0978 |
| 1.1171 | 1153 | 0.2421 |
| 1.1181 | 1154 | 0.1411 |
| 1.1191 | 1155 | 0.1376 |
| 1.1200 | 1156 | 0.1493 |
| 1.1210 | 1157 | 0.2581 |
| 1.1220 | 1158 | 0.1284 |
| 1.1229 | 1159 | 0.0848 |
| 1.1239 | 1160 | 0.0819 |
| 1.1249 | 1161 | 0.2448 |
| 1.1258 | 1162 | 0.1379 |
| 1.1268 | 1163 | 0.1136 |
| 1.1278 | 1164 | 0.1065 |
| 1.1288 | 1165 | 0.2831 |
| 1.1297 | 1166 | 0.1352 |
| 1.1307 | 1167 | 0.1565 |
| 1.1317 | 1168 | 0.1471 |
| 1.1326 | 1169 | 0.2274 |
| 1.1336 | 1170 | 0.1645 |
| 1.1346 | 1171 | 0.1171 |
| 1.1355 | 1172 | 0.1027 |
| 1.1365 | 1173 | 0.2396 |
| 1.1375 | 1174 | 0.199 |
| 1.1384 | 1175 | 0.1531 |
| 1.1394 | 1176 | 0.0684 |
| 1.1404 | 1177 | 0.1713 |
| 1.1413 | 1178 | 0.1269 |
| 1.1423 | 1179 | 0.1325 |
| 1.1433 | 1180 | 0.1194 |
| 1.1442 | 1181 | 0.1923 |
| 1.1452 | 1182 | 0.1933 |
| 1.1462 | 1183 | 0.1077 |
| 1.1471 | 1184 | 0.1103 |
| 1.1481 | 1185 | 0.305 |
| 1.1491 | 1186 | 0.2384 |
| 1.1500 | 1187 | 0.1546 |
| 1.1510 | 1188 | 0.1281 |
| 1.1520 | 1189 | 0.1747 |
| 1.1530 | 1190 | 0.1437 |
| 1.1539 | 1191 | 0.12 |
| 1.1549 | 1192 | 0.083 |
| 1.1559 | 1193 | 0.1552 |
| 1.1568 | 1194 | 0.1304 |
| 1.1578 | 1195 | 0.076 |
| 1.1588 | 1196 | 0.0767 |
| 1.1597 | 1197 | 0.1429 |
| 1.1607 | 1198 | 0.0799 |
| 1.1617 | 1199 | 0.0871 |
| 1.1626 | 1200 | 0.0636 |
| 1.1636 | 1201 | 0.2394 |
| 1.1646 | 1202 | 0.1423 |
| 1.1655 | 1203 | 0.0919 |
| 1.1665 | 1204 | 0.09 |
| 1.1675 | 1205 | 0.1423 |
| 1.1684 | 1206 | 0.0994 |
| 1.1694 | 1207 | 0.0833 |
| 1.1704 | 1208 | 0.0901 |
| 1.1713 | 1209 | 0.1835 |
| 1.1723 | 1210 | 0.1231 |
| 1.1733 | 1211 | 0.0833 |
| 1.1742 | 1212 | 0.0423 |
| 1.1752 | 1213 | 0.2081 |
| 1.1762 | 1214 | 0.1597 |
| 1.1772 | 1215 | 0.1281 |
| 1.1781 | 1216 | 0.1278 |
| 1.1791 | 1217 | 0.1355 |
| 1.1801 | 1218 | 0.1234 |
| 1.1810 | 1219 | 0.1195 |
| 1.1820 | 1220 | 0.0759 |
| 1.1830 | 1221 | 0.1733 |
| 1.1839 | 1222 | 0.0934 |
| 1.1849 | 1223 | 0.1418 |
| 1.1859 | 1224 | 0.0855 |
| 1.1868 | 1225 | 0.215 |
| 1.1878 | 1226 | 0.1765 |
| 1.1888 | 1227 | 0.0882 |
| 1.1897 | 1228 | 0.1642 |
| 1.1907 | 1229 | 0.1779 |
| 1.1917 | 1230 | 0.1172 |
| 1.1926 | 1231 | 0.0633 |
| 1.1936 | 1232 | 0.0744 |
| 1.1946 | 1233 | 0.2015 |
| 1.1955 | 1234 | 0.1235 |
| 1.1965 | 1235 | 0.0873 |
| 1.1975 | 1236 | 0.1105 |
| 1.1985 | 1237 | 0.161 |
| 1.1994 | 1238 | 0.1019 |
| 1.2004 | 1239 | 0.0768 |
| 1.2014 | 1240 | 0.0723 |
| 1.2023 | 1241 | 0.1006 |
| 1.2033 | 1242 | 0.0855 |
| 1.2043 | 1243 | 0.0797 |
| 1.2052 | 1244 | 0.0508 |
| 1.2062 | 1245 | 0.0761 |
| 1.2072 | 1246 | 0.1045 |
| 1.2081 | 1247 | 0.0651 |
| 1.2091 | 1248 | 0.046 |
| 1.2101 | 1249 | 0.1625 |
| 1.2110 | 1250 | 0.1055 |
| 1.2120 | 1251 | 0.0913 |
| 1.2130 | 1252 | 0.086 |
| 1.2139 | 1253 | 0.0533 |
| 1.2149 | 1254 | 0.0805 |
| 1.2159 | 1255 | 0.0669 |
| 1.2168 | 1256 | 0.0499 |
| 1.2178 | 1257 | 0.1223 |
| 1.2188 | 1258 | 0.1057 |
| 1.2197 | 1259 | 0.0946 |
| 1.2207 | 1260 | 0.0521 |
| 1.2217 | 1261 | 0.0974 |
| 1.2227 | 1262 | 0.1083 |
| 1.2236 | 1263 | 0.0546 |
| 1.2246 | 1264 | 0.0425 |
| 1.2256 | 1265 | 0.1603 |
| 1.2265 | 1266 | 0.1183 |
| 1.2275 | 1267 | 0.0645 |
| 1.2285 | 1268 | 0.0596 |
| 1.2294 | 1269 | 0.1333 |
| 1.2304 | 1270 | 0.0897 |
| 1.2314 | 1271 | 0.0823 |
| 1.2323 | 1272 | 0.0552 |
| 1.2333 | 1273 | 0.1374 |
| 1.2343 | 1274 | 0.096 |
| 1.2352 | 1275 | 0.0659 |
| 1.2362 | 1276 | 0.0505 |
| 1.2372 | 1277 | 0.0929 |
| 1.2381 | 1278 | 0.0855 |
| 1.2391 | 1279 | 0.0538 |
| 1.2401 | 1280 | 0.0513 |
| 1.2410 | 1281 | 0.145 |
| 1.2420 | 1282 | 0.0874 |
| 1.2430 | 1283 | 0.0554 |
| 1.2439 | 1284 | 0.0836 |
| 1.2449 | 1285 | 0.1027 |
| 1.2459 | 1286 | 0.1156 |
| 1.2469 | 1287 | 0.0779 |
| 1.2478 | 1288 | 0.0496 |
| 1.2488 | 1289 | 0.1401 |
| 1.2498 | 1290 | 0.1199 |
| 1.2507 | 1291 | 0.0775 |
| 1.2517 | 1292 | 0.0608 |
| 1.2527 | 1293 | 0.1152 |
| 1.2536 | 1294 | 0.0866 |
| 1.2546 | 1295 | 0.0679 |
| 1.2556 | 1296 | 0.0523 |
| 1.2565 | 1297 | 0.1131 |
| 1.2575 | 1298 | 0.0728 |
| 1.2585 | 1299 | 0.0421 |
| 1.2594 | 1300 | 0.0565 |
| 1.2604 | 1301 | 0.1014 |
| 1.2614 | 1302 | 0.0548 |
| 1.2623 | 1303 | 0.0485 |
| 1.2633 | 1304 | 0.0397 |
| 1.2643 | 1305 | 0.0545 |
| 1.2652 | 1306 | 0.078 |
| 1.2662 | 1307 | 0.0406 |
| 1.2672 | 1308 | 0.023 |
| 1.2682 | 1309 | 0.1545 |
| 1.2691 | 1310 | 0.0712 |
| 1.2701 | 1311 | 0.052 |
| 1.2711 | 1312 | 0.0492 |
| 1.2720 | 1313 | 0.0858 |
| 1.2730 | 1314 | 0.0738 |
| 1.2740 | 1315 | 0.0647 |
| 1.2749 | 1316 | 0.0688 |
| 1.2759 | 1317 | 0.1133 |
| 1.2769 | 1318 | 0.0476 |
| 1.2778 | 1319 | 0.0405 |
| 1.2788 | 1320 | 0.0423 |
| 1.2798 | 1321 | 0.061 |
| 1.2807 | 1322 | 0.0498 |
| 1.2817 | 1323 | 0.0317 |
| 1.2827 | 1324 | 0.0261 |
| 1.2836 | 1325 | 0.12 |
| 1.2846 | 1326 | 0.038 |
| 1.2856 | 1327 | 0.0378 |
| 1.2865 | 1328 | 0.0492 |
| 1.2875 | 1329 | 0.0633 |
| 1.2885 | 1330 | 0.051 |
| 1.2894 | 1331 | 0.0562 |
| 1.2904 | 1332 | 0.0516 |
| 1.2914 | 1333 | 0.084 |
| 1.2924 | 1334 | 0.0488 |
| 1.2933 | 1335 | 0.0425 |
| 1.2943 | 1336 | 0.0281 |
| 1.2953 | 1337 | 0.0628 |
| 1.2962 | 1338 | 0.0527 |
| 1.2972 | 1339 | 0.05 |
| 1.2982 | 1340 | 0.0386 |
| 1.2991 | 1341 | 0.0938 |
| 1.3001 | 1342 | 0.0836 |
| 1.3011 | 1343 | 0.0517 |
| 1.3020 | 1344 | 0.0311 |
| 1.3030 | 1345 | 0.0877 |
| 1.3040 | 1346 | 0.0745 |
| 1.3049 | 1347 | 0.0505 |
| 1.3059 | 1348 | 0.0693 |
| 1.3069 | 1349 | 0.1231 |
| 1.3078 | 1350 | 0.0668 |
| 1.3088 | 1351 | 0.0348 |
| 1.3098 | 1352 | 0.0528 |
| 1.3107 | 1353 | 0.0782 |
| 1.3117 | 1354 | 0.082 |
| 1.3127 | 1355 | 0.0397 |
| 1.3136 | 1356 | 0.0291 |
| 1.3146 | 1357 | 0.0557 |
| 1.3156 | 1358 | 0.0458 |
| 1.3166 | 1359 | 0.0399 |
| 1.3175 | 1360 | 0.0355 |
| 1.3185 | 1361 | 0.0605 |
| 1.3195 | 1362 | 0.0313 |
| 1.3204 | 1363 | 0.0578 |
| 1.3214 | 1364 | 0.0338 |
| 1.3224 | 1365 | 0.1113 |
| 1.3233 | 1366 | 0.0614 |
| 1.3243 | 1367 | 0.0691 |
| 1.3253 | 1368 | 0.051 |
| 1.3262 | 1369 | 0.136 |
| 1.3272 | 1370 | 0.0637 |
| 1.3282 | 1371 | 0.0763 |
| 1.3291 | 1372 | 0.0468 |
| 1.3301 | 1373 | 0.0933 |
| 1.3311 | 1374 | 0.0577 |
| 1.3320 | 1375 | 0.0413 |
| 1.3330 | 1376 | 0.0359 |
| 1.3340 | 1377 | 0.076 |
| 1.3349 | 1378 | 0.0596 |
| 1.3359 | 1379 | 0.0442 |
| 1.3369 | 1380 | 0.0371 |
| 1.3379 | 1381 | 0.1371 |
| 1.3388 | 1382 | 0.0863 |
| 1.3398 | 1383 | 0.0388 |
| 1.3408 | 1384 | 0.0281 |
| 1.3417 | 1385 | 0.0731 |
| 1.3427 | 1386 | 0.0587 |
| 1.3437 | 1387 | 0.0451 |
| 1.3446 | 1388 | 0.0418 |
| 1.3456 | 1389 | 0.0568 |
| 1.3466 | 1390 | 0.0202 |
| 1.3475 | 1391 | 0.0412 |
| 1.3485 | 1392 | 0.0244 |
| 1.3495 | 1393 | 0.0984 |
| 1.3504 | 1394 | 0.0881 |
| 1.3514 | 1395 | 0.0566 |
| 1.3524 | 1396 | 0.05 |
| 1.3533 | 1397 | 0.1549 |
| 1.3543 | 1398 | 0.1042 |
| 1.3553 | 1399 | 0.0565 |
| 1.3562 | 1400 | 0.0293 |
| 1.3572 | 1401 | 0.0585 |
| 1.3582 | 1402 | 0.0342 |
| 1.3591 | 1403 | 0.0408 |
| 1.3601 | 1404 | 0.0269 |
| 1.3611 | 1405 | 0.1212 |
| 1.3621 | 1406 | 0.0568 |
| 1.3630 | 1407 | 0.0338 |
| 1.3640 | 1408 | 0.0378 |
| 1.3650 | 1409 | 0.0868 |
| 1.3659 | 1410 | 0.0545 |
| 1.3669 | 1411 | 0.0284 |
| 1.3679 | 1412 | 0.0559 |
| 1.3688 | 1413 | 0.0659 |
| 1.3698 | 1414 | 0.0518 |
| 1.3708 | 1415 | 0.049 |
| 1.3717 | 1416 | 0.0413 |
| 1.3727 | 1417 | 0.0651 |
| 1.3737 | 1418 | 0.044 |
| 1.3746 | 1419 | 0.0448 |
| 1.3756 | 1420 | 0.0675 |
| 1.3766 | 1421 | 0.0658 |
| 1.3775 | 1422 | 0.0765 |
| 1.3785 | 1423 | 0.0284 |
| 1.3795 | 1424 | 0.0234 |
| 1.3804 | 1425 | 0.088 |
| 1.3814 | 1426 | 0.0707 |
| 1.3824 | 1427 | 0.0494 |
| 1.3833 | 1428 | 0.0322 |
| 1.3843 | 1429 | 0.0496 |
| 1.3853 | 1430 | 0.0611 |
| 1.3863 | 1431 | 0.0479 |
| 1.3872 | 1432 | 0.0387 |
| 1.3882 | 1433 | 0.0742 |
| 1.3892 | 1434 | 0.0475 |
| 1.3901 | 1435 | 0.0299 |
| 1.3911 | 1436 | 0.0857 |
| 1.3921 | 1437 | 0.143 |
| 1.3930 | 1438 | 0.1043 |
| 1.3940 | 1439 | 0.1216 |
| 1.3950 | 1440 | 0.0377 |
| 1.3959 | 1441 | 0.147 |
| 1.3969 | 1442 | 0.0925 |
| 1.3979 | 1443 | 0.0556 |
| 1.3988 | 1444 | 0.0322 |
| 1.3998 | 1445 | 0.1185 |
| 1.4008 | 1446 | 0.0673 |
| 1.4017 | 1447 | 0.049 |
| 1.4027 | 1448 | 0.0342 |
| 1.4037 | 1449 | 0.0501 |
| 1.4046 | 1450 | 0.0606 |
| 1.4056 | 1451 | 0.0406 |
| 1.4066 | 1452 | 0.0498 |
| 1.4076 | 1453 | 0.0598 |
| 1.4085 | 1454 | 0.0416 |
| 1.4095 | 1455 | 0.0258 |
| 1.4105 | 1456 | 0.032 |
| 1.4114 | 1457 | 0.168 |
| 1.4124 | 1458 | 0.0943 |
| 1.4134 | 1459 | 0.0501 |
| 1.4143 | 1460 | 0.0521 |
| 1.4153 | 1461 | 0.0804 |
| 1.4163 | 1462 | 0.0372 |
| 1.4172 | 1463 | 0.0323 |
| 1.4182 | 1464 | 0.0308 |
| 1.4192 | 1465 | 0.0867 |
| 1.4201 | 1466 | 0.0868 |
| 1.4211 | 1467 | 0.0383 |
| 1.4221 | 1468 | 0.0482 |
| 1.4230 | 1469 | 0.0741 |
| 1.4240 | 1470 | 0.0695 |
| 1.4250 | 1471 | 0.0493 |
| 1.4259 | 1472 | 0.0489 |
| 1.4269 | 1473 | 0.0826 |
| 1.4279 | 1474 | 0.0544 |
| 1.4288 | 1475 | 0.0564 |
| 1.4298 | 1476 | 0.046 |
| 1.4308 | 1477 | 0.0672 |
| 1.4318 | 1478 | 0.0395 |
| 1.4327 | 1479 | 0.0499 |
| 1.4337 | 1480 | 0.0369 |
| 1.4347 | 1481 | 0.0839 |
| 1.4356 | 1482 | 0.0597 |
| 1.4366 | 1483 | 0.0476 |
| 1.4376 | 1484 | 0.0557 |
| 1.4385 | 1485 | 0.07 |
| 1.4395 | 1486 | 0.098 |
| 1.4405 | 1487 | 0.0649 |
| 1.4414 | 1488 | 0.0346 |
| 1.4424 | 1489 | 0.0739 |
| 1.4434 | 1490 | 0.0452 |
| 1.4443 | 1491 | 0.0451 |
| 1.4453 | 1492 | 0.035 |
| 1.4463 | 1493 | 0.0749 |
| 1.4472 | 1494 | 0.0704 |
| 1.4482 | 1495 | 0.0486 |
| 1.4492 | 1496 | 0.0474 |
| 1.4501 | 1497 | 0.1025 |
| 1.4511 | 1498 | 0.0402 |
| 1.4521 | 1499 | 0.0479 |
| 1.4530 | 1500 | 0.0263 |
| 1.4540 | 1501 | 0.0555 |
| 1.4550 | 1502 | 0.0655 |
| 1.4560 | 1503 | 0.0779 |
| 1.4569 | 1504 | 0.0453 |
| 1.4579 | 1505 | 0.0816 |
| 1.4589 | 1506 | 0.0539 |
| 1.4598 | 1507 | 0.0511 |
| 1.4608 | 1508 | 0.0645 |
| 1.4618 | 1509 | 0.0602 |
| 1.4627 | 1510 | 0.0507 |
| 1.4637 | 1511 | 0.0478 |
| 1.4647 | 1512 | 0.0456 |
| 1.4656 | 1513 | 0.0494 |
| 1.4666 | 1514 | 0.0764 |
| 1.4676 | 1515 | 0.0231 |
| 1.4685 | 1516 | 0.0312 |
| 1.4695 | 1517 | 0.0627 |
| 1.4705 | 1518 | 0.0561 |
| 1.4714 | 1519 | 0.0426 |
| 1.4724 | 1520 | 0.0407 |
| 1.4734 | 1521 | 0.0642 |
| 1.4743 | 1522 | 0.0263 |
| 1.4753 | 1523 | 0.0284 |
| 1.4763 | 1524 | 0.028 |
| 1.4773 | 1525 | 0.0652 |
| 1.4782 | 1526 | 0.0476 |
| 1.4792 | 1527 | 0.0339 |
| 1.4802 | 1528 | 0.0359 |
| 1.4811 | 1529 | 0.0639 |
| 1.4821 | 1530 | 0.0384 |
| 1.4831 | 1531 | 0.0349 |
| 1.4840 | 1532 | 0.0451 |
| 1.4850 | 1533 | 0.0769 |
| 1.4860 | 1534 | 0.0534 |
| 1.4869 | 1535 | 0.0312 |
| 1.4879 | 1536 | 0.0287 |
| 1.4889 | 1537 | 0.0807 |
| 1.4898 | 1538 | 0.0746 |
| 1.4908 | 1539 | 0.0732 |
| 1.4918 | 1540 | 0.0579 |
| 1.4927 | 1541 | 0.082 |
| 1.4937 | 1542 | 0.0476 |
| 1.4947 | 1543 | 0.0664 |
| 1.4956 | 1544 | 0.0418 |
| 1.4966 | 1545 | 0.0601 |
| 1.4976 | 1546 | 0.0297 |
| 1.4985 | 1547 | 0.025 |
| 1.4995 | 1548 | 0.0364 |
| 1.5005 | 1549 | 0.0697 |
| 1.5015 | 1550 | 0.0482 |
| 1.5024 | 1551 | 0.0578 |
| 1.5034 | 1552 | 0.0229 |
| 1.5044 | 1553 | 0.0445 |
| 1.5053 | 1554 | 0.0419 |
| 1.5063 | 1555 | 0.029 |
| 1.5073 | 1556 | 0.0283 |
| 1.5082 | 1557 | 0.0527 |
| 1.5092 | 1558 | 0.0352 |
| 1.5102 | 1559 | 0.0279 |
| 1.5111 | 1560 | 0.0526 |
| 1.5121 | 1561 | 0.0852 |
| 1.5131 | 1562 | 0.0304 |
| 1.5140 | 1563 | 0.0458 |
| 1.5150 | 1564 | 0.075 |
| 1.5160 | 1565 | 0.0927 |
| 1.5169 | 1566 | 0.0285 |
| 1.5179 | 1567 | 0.0373 |
| 1.5189 | 1568 | 0.037 |
| 1.5198 | 1569 | 0.0571 |
| 1.5208 | 1570 | 0.0577 |
| 1.5218 | 1571 | 0.0316 |
| 1.5227 | 1572 | 0.0316 |
| 1.5237 | 1573 | 0.0609 |
| 1.5247 | 1574 | 0.0495 |
| 1.5257 | 1575 | 0.037 |
| 1.5266 | 1576 | 0.0218 |
| 1.5276 | 1577 | 0.0582 |
| 1.5286 | 1578 | 0.0535 |
| 1.5295 | 1579 | 0.0457 |
| 1.5305 | 1580 | 0.0259 |
| 1.5315 | 1581 | 0.0974 |
| 1.5324 | 1582 | 0.0609 |
| 1.5334 | 1583 | 0.0336 |
| 1.5344 | 1584 | 0.0319 |
| 1.5353 | 1585 | 0.0409 |
| 1.5363 | 1586 | 0.0442 |
| 1.5373 | 1587 | 0.0385 |
| 1.5382 | 1588 | 0.0486 |
| 1.5392 | 1589 | 0.069 |
| 1.5402 | 1590 | 0.0527 |
| 1.5411 | 1591 | 0.029 |
| 1.5421 | 1592 | 0.0222 |
| 1.5431 | 1593 | 0.0808 |
| 1.5440 | 1594 | 0.0558 |
| 1.5450 | 1595 | 0.0459 |
| 1.5460 | 1596 | 0.0383 |
| 1.5470 | 1597 | 0.0581 |
| 1.5479 | 1598 | 0.045 |
| 1.5489 | 1599 | 0.0831 |
| 1.5499 | 1600 | 0.0514 |
| 1.5508 | 1601 | 0.0731 |
| 1.5518 | 1602 | 0.0503 |
| 1.5528 | 1603 | 0.0498 |
| 1.5537 | 1604 | 0.0641 |
| 1.5547 | 1605 | 0.0652 |
| 1.5557 | 1606 | 0.0871 |
| 1.5566 | 1607 | 0.0448 |
| 1.5576 | 1608 | 0.0325 |
| 1.5586 | 1609 | 0.0578 |
| 1.5595 | 1610 | 0.0573 |
| 1.5605 | 1611 | 0.068 |
| 1.5615 | 1612 | 0.0538 |
| 1.5624 | 1613 | 0.0416 |
| 1.5634 | 1614 | 0.0545 |
| 1.5644 | 1615 | 0.0316 |
| 1.5653 | 1616 | 0.0495 |
| 1.5663 | 1617 | 0.1282 |
| 1.5673 | 1618 | 0.0566 |
| 1.5682 | 1619 | 0.0567 |
| 1.5692 | 1620 | 0.0697 |
| 1.5702 | 1621 | 0.1178 |
| 1.5712 | 1622 | 0.0681 |
| 1.5721 | 1623 | 0.0402 |
| 1.5731 | 1624 | 0.0338 |
| 1.5741 | 1625 | 0.0636 |
| 1.5750 | 1626 | 0.0334 |
| 1.5760 | 1627 | 0.0286 |
| 1.5770 | 1628 | 0.0239 |
| 1.5779 | 1629 | 0.0553 |
| 1.5789 | 1630 | 0.0879 |
| 1.5799 | 1631 | 0.0423 |
| 1.5808 | 1632 | 0.0362 |
| 1.5818 | 1633 | 0.0477 |
| 1.5828 | 1634 | 0.0911 |
| 1.5837 | 1635 | 0.0235 |
| 1.5847 | 1636 | 0.0599 |
| 1.5857 | 1637 | 0.0705 |
| 1.5866 | 1638 | 0.0912 |
| 1.5876 | 1639 | 0.0494 |
| 1.5886 | 1640 | 0.0255 |
| 1.5895 | 1641 | 0.0518 |
| 1.5905 | 1642 | 0.0261 |
| 1.5915 | 1643 | 0.0266 |
| 1.5924 | 1644 | 0.0409 |
| 1.5934 | 1645 | 0.0797 |
| 1.5944 | 1646 | 0.0591 |
| 1.5954 | 1647 | 0.0362 |
| 1.5963 | 1648 | 0.0594 |
| 1.5973 | 1649 | 0.0736 |
| 1.5983 | 1650 | 0.0486 |
| 1.5992 | 1651 | 0.0432 |
| 1.6002 | 1652 | 0.0428 |
| 1.6012 | 1653 | 0.0625 |
| 1.6021 | 1654 | 0.1024 |
| 1.6031 | 1655 | 0.068 |
| 1.6041 | 1656 | 0.0764 |
| 1.6050 | 1657 | 0.071 |
| 1.6060 | 1658 | 0.0554 |
| 1.6070 | 1659 | 0.0328 |
| 1.6079 | 1660 | 0.0511 |
| 1.6089 | 1661 | 0.0467 |
| 1.6099 | 1662 | 0.0461 |
| 1.6108 | 1663 | 0.0365 |
| 1.6118 | 1664 | 0.0462 |
| 1.6128 | 1665 | 0.0884 |
| 1.6137 | 1666 | 0.1012 |
| 1.6147 | 1667 | 0.0728 |
| 1.6157 | 1668 | 0.0493 |
| 1.6167 | 1669 | 0.0603 |
| 1.6176 | 1670 | 0.0545 |
| 1.6186 | 1671 | 0.0418 |
| 1.6196 | 1672 | 0.0305 |
| 1.6205 | 1673 | 0.0746 |
| 1.6215 | 1674 | 0.0579 |
| 1.6225 | 1675 | 0.0303 |
| 1.6234 | 1676 | 0.0379 |
| 1.6244 | 1677 | 0.0593 |
| 1.6254 | 1678 | 0.0869 |
| 1.6263 | 1679 | 0.0535 |
| 1.6273 | 1680 | 0.0558 |
| 1.6283 | 1681 | 0.0573 |
| 1.6292 | 1682 | 0.0336 |
| 1.6302 | 1683 | 0.0402 |
| 1.6312 | 1684 | 0.0438 |
| 1.6321 | 1685 | 0.0535 |
| 1.6331 | 1686 | 0.0506 |
| 1.6341 | 1687 | 0.0477 |
| 1.6350 | 1688 | 0.0315 |
| 1.6360 | 1689 | 0.0648 |
| 1.6370 | 1690 | 0.0313 |
| 1.6379 | 1691 | 0.0365 |
| 1.6389 | 1692 | 0.0358 |
| 1.6399 | 1693 | 0.0601 |
| 1.6409 | 1694 | 0.0638 |
| 1.6418 | 1695 | 0.0643 |
| 1.6428 | 1696 | 0.0376 |
| 1.6438 | 1697 | 0.0585 |
| 1.6447 | 1698 | 0.0479 |
| 1.6457 | 1699 | 0.0507 |
| 1.6467 | 1700 | 0.0663 |
| 1.6476 | 1701 | 0.068 |
| 1.6486 | 1702 | 0.0695 |
| 1.6496 | 1703 | 0.0297 |
| 1.6505 | 1704 | 0.0378 |
| 1.6515 | 1705 | 0.0662 |
| 1.6525 | 1706 | 0.0468 |
| 1.6534 | 1707 | 0.0309 |
| 1.6544 | 1708 | 0.024 |
| 1.6554 | 1709 | 0.0728 |
| 1.6563 | 1710 | 0.0337 |
| 1.6573 | 1711 | 0.0492 |
| 1.6583 | 1712 | 0.0419 |
| 1.6592 | 1713 | 0.074 |
| 1.6602 | 1714 | 0.051 |
| 1.6612 | 1715 | 0.0465 |
| 1.6621 | 1716 | 0.0379 |
| 1.6631 | 1717 | 0.0742 |
| 1.6641 | 1718 | 0.0589 |
| 1.6651 | 1719 | 0.0449 |
| 1.6660 | 1720 | 0.047 |
| 1.6670 | 1721 | 0.0575 |
| 1.6680 | 1722 | 0.0971 |
| 1.6689 | 1723 | 0.0417 |
| 1.6699 | 1724 | 0.0633 |
| 1.6709 | 1725 | 0.0392 |
| 1.6718 | 1726 | 0.0484 |
| 1.6728 | 1727 | 0.0428 |
| 1.6738 | 1728 | 0.0582 |
| 1.6747 | 1729 | 0.0261 |
| 1.6757 | 1730 | 0.037 |
| 1.6767 | 1731 | 0.0272 |
| 1.6776 | 1732 | 0.0377 |
| 1.6786 | 1733 | 0.0443 |
| 1.6796 | 1734 | 0.0448 |
| 1.6805 | 1735 | 0.067 |
| 1.6815 | 1736 | 0.0556 |
| 1.6825 | 1737 | 0.0596 |
| 1.6834 | 1738 | 0.0638 |
| 1.6844 | 1739 | 0.0383 |
| 1.6854 | 1740 | 0.044 |
| 1.6864 | 1741 | 0.0514 |
| 1.6873 | 1742 | 0.0421 |
| 1.6883 | 1743 | 0.0476 |
| 1.6893 | 1744 | 0.0503 |
| 1.6902 | 1745 | 0.0764 |
| 1.6912 | 1746 | 0.0579 |
| 1.6922 | 1747 | 0.0468 |
| 1.6931 | 1748 | 0.0302 |
| 1.6941 | 1749 | 0.0619 |
| 1.6951 | 1750 | 0.0551 |
| 1.6960 | 1751 | 0.0294 |
| 1.6970 | 1752 | 0.0533 |
| 1.6980 | 1753 | 0.0355 |
| 1.6989 | 1754 | 0.0192 |
| 1.6999 | 1755 | 0.0191 |
| 1.7009 | 1756 | 0.0494 |
| 1.7018 | 1757 | 0.0655 |
| 1.7028 | 1758 | 0.0363 |
| 1.7038 | 1759 | 0.0312 |
| 1.7047 | 1760 | 0.0392 |
| 1.7057 | 1761 | 0.0454 |
| 1.7067 | 1762 | 0.0601 |
| 1.7076 | 1763 | 0.0318 |
| 1.7086 | 1764 | 0.0481 |
| 1.7096 | 1765 | 0.0827 |
| 1.7106 | 1766 | 0.084 |
| 1.7115 | 1767 | 0.0398 |
| 1.7125 | 1768 | 0.0708 |
| 1.7135 | 1769 | 0.0755 |
| 1.7144 | 1770 | 0.0587 |
| 1.7154 | 1771 | 0.0378 |
| 1.7164 | 1772 | 0.0368 |
| 1.7173 | 1773 | 0.1474 |
| 1.7183 | 1774 | 0.0643 |
| 1.7193 | 1775 | 0.0451 |
| 1.7202 | 1776 | 0.0527 |
| 1.7212 | 1777 | 0.0673 |
| 1.7222 | 1778 | 0.0744 |
| 1.7231 | 1779 | 0.0311 |
| 1.7241 | 1780 | 0.0581 |
| 1.7251 | 1781 | 0.065 |
| 1.7260 | 1782 | 0.0875 |
| 1.7270 | 1783 | 0.0341 |
| 1.7280 | 1784 | 0.0402 |
| 1.7289 | 1785 | 0.0342 |
| 1.7299 | 1786 | 0.0268 |
| 1.7309 | 1787 | 0.0284 |
| 1.7318 | 1788 | 0.0332 |
| 1.7328 | 1789 | 0.0527 |
| 1.7338 | 1790 | 0.0382 |
| 1.7348 | 1791 | 0.0422 |
| 1.7357 | 1792 | 0.0416 |
| 1.7367 | 1793 | 0.0515 |
| 1.7377 | 1794 | 0.0327 |
| 1.7386 | 1795 | 0.0348 |
| 1.7396 | 1796 | 0.0435 |
| 1.7406 | 1797 | 0.0523 |
| 1.7415 | 1798 | 0.0576 |
| 1.7425 | 1799 | 0.0494 |
| 1.7435 | 1800 | 0.0309 |
| 1.7444 | 1801 | 0.0371 |
| 1.7454 | 1802 | 0.0487 |
| 1.7464 | 1803 | 0.0487 |
| 1.7473 | 1804 | 0.0352 |
| 1.7483 | 1805 | 0.0484 |
| 1.7493 | 1806 | 0.0603 |
| 1.7502 | 1807 | 0.0398 |
| 1.7512 | 1808 | 0.0462 |
| 1.7522 | 1809 | 0.0372 |
| 1.7531 | 1810 | 0.043 |
| 1.7541 | 1811 | 0.0423 |
| 1.7551 | 1812 | 0.0345 |
| 1.7561 | 1813 | 0.055 |
| 1.7570 | 1814 | 0.0384 |
| 1.7580 | 1815 | 0.0595 |
| 1.7590 | 1816 | 0.0327 |
| 1.7599 | 1817 | 0.0398 |
| 1.7609 | 1818 | 0.0671 |
| 1.7619 | 1819 | 0.0446 |
| 1.7628 | 1820 | 0.0445 |
| 1.7638 | 1821 | 0.0503 |
| 1.7648 | 1822 | 0.0511 |
| 1.7657 | 1823 | 0.0352 |
| 1.7667 | 1824 | 0.02 |
| 1.7677 | 1825 | 0.0695 |
| 1.7686 | 1826 | 0.0409 |
| 1.7696 | 1827 | 0.0323 |
| 1.7706 | 1828 | 0.0439 |
| 1.7715 | 1829 | 0.0538 |
| 1.7725 | 1830 | 0.0556 |
| 1.7735 | 1831 | 0.03 |
| 1.7744 | 1832 | 0.0547 |
| 1.7754 | 1833 | 0.0771 |
| 1.7764 | 1834 | 0.0271 |
| 1.7773 | 1835 | 0.0375 |
| 1.7783 | 1836 | 0.0299 |
| 1.7793 | 1837 | 0.0481 |
| 1.7803 | 1838 | 0.029 |
| 1.7812 | 1839 | 0.0166 |
| 1.7822 | 1840 | 0.0568 |
| 1.7832 | 1841 | 0.0895 |
| 1.7841 | 1842 | 0.057 |
| 1.7851 | 1843 | 0.0438 |
| 1.7861 | 1844 | 0.0335 |
| 1.7870 | 1845 | 0.0912 |
| 1.7880 | 1846 | 0.0701 |
| 1.7890 | 1847 | 0.0296 |
| 1.7899 | 1848 | 0.0372 |
| 1.7909 | 1849 | 0.0386 |
| 1.7919 | 1850 | 0.0434 |
| 1.7928 | 1851 | 0.0279 |
| 1.7938 | 1852 | 0.0333 |
| 1.7948 | 1853 | 0.07 |
| 1.7957 | 1854 | 0.072 |
| 1.7967 | 1855 | 0.0458 |
| 1.7977 | 1856 | 0.059 |
| 1.7986 | 1857 | 0.0375 |
| 1.7996 | 1858 | 0.0517 |
| 1.8006 | 1859 | 0.0284 |
| 1.8015 | 1860 | 0.0401 |
| 1.8025 | 1861 | 0.0451 |
| 1.8035 | 1862 | 0.0294 |
| 1.8045 | 1863 | 0.0486 |
| 1.8054 | 1864 | 0.0442 |
| 1.8064 | 1865 | 0.0885 |
| 1.8074 | 1866 | 0.0481 |
| 1.8083 | 1867 | 0.046 |
| 1.8093 | 1868 | 0.031 |
| 1.8103 | 1869 | 0.0835 |
| 1.8112 | 1870 | 0.0547 |
| 1.8122 | 1871 | 0.0438 |
| 1.8132 | 1872 | 0.0364 |
| 1.8141 | 1873 | 0.0722 |
| 1.8151 | 1874 | 0.0559 |
| 1.8161 | 1875 | 0.0349 |
| 1.8170 | 1876 | 0.0411 |
| 1.8180 | 1877 | 0.0598 |
| 1.8190 | 1878 | 0.0646 |
| 1.8199 | 1879 | 0.0341 |
| 1.8209 | 1880 | 0.0258 |
| 1.8219 | 1881 | 0.051 |
| 1.8228 | 1882 | 0.0455 |
| 1.8238 | 1883 | 0.0752 |
| 1.8248 | 1884 | 0.04 |
| 1.8258 | 1885 | 0.0323 |
| 1.8267 | 1886 | 0.0343 |
| 1.8277 | 1887 | 0.0462 |
| 1.8287 | 1888 | 0.0635 |
| 1.8296 | 1889 | 0.0656 |
| 1.8306 | 1890 | 0.0461 |
| 1.8316 | 1891 | 0.036 |
| 1.8325 | 1892 | 0.0281 |
| 1.8335 | 1893 | 0.0487 |
| 1.8345 | 1894 | 0.0538 |
| 1.8354 | 1895 | 0.0405 |
| 1.8364 | 1896 | 0.0221 |
| 1.8374 | 1897 | 0.0537 |
| 1.8383 | 1898 | 0.0323 |
| 1.8393 | 1899 | 0.043 |
| 1.8403 | 1900 | 0.0316 |
| 1.8412 | 1901 | 0.0633 |
| 1.8422 | 1902 | 0.0627 |
| 1.8432 | 1903 | 0.0334 |
| 1.8441 | 1904 | 0.038 |
| 1.8451 | 1905 | 0.0733 |
| 1.8461 | 1906 | 0.0575 |
| 1.8470 | 1907 | 0.0298 |
| 1.8480 | 1908 | 0.0602 |
| 1.8490 | 1909 | 0.0762 |
| 1.8500 | 1910 | 0.0528 |
| 1.8509 | 1911 | 0.0582 |
| 1.8519 | 1912 | 0.0384 |
| 1.8529 | 1913 | 0.0405 |
| 1.8538 | 1914 | 0.0292 |
| 1.8548 | 1915 | 0.0337 |
| 1.8558 | 1916 | 0.0257 |
| 1.8567 | 1917 | 0.0551 |
| 1.8577 | 1918 | 0.061 |
| 1.8587 | 1919 | 0.0636 |
| 1.8596 | 1920 | 0.0334 |
| 1.8606 | 1921 | 0.0516 |
| 1.8616 | 1922 | 0.071 |
| 1.8625 | 1923 | 0.0344 |
| 1.8635 | 1924 | 0.0368 |
| 1.8645 | 1925 | 0.0841 |
| 1.8654 | 1926 | 0.0388 |
| 1.8664 | 1927 | 0.0255 |
| 1.8674 | 1928 | 0.0402 |
| 1.8683 | 1929 | 0.0377 |
| 1.8693 | 1930 | 0.0416 |
| 1.8703 | 1931 | 0.0338 |
| 1.8712 | 1932 | 0.0407 |
| 1.8722 | 1933 | 0.0773 |
| 1.8732 | 1934 | 0.0669 |
| 1.8742 | 1935 | 0.0409 |
| 1.8751 | 1936 | 0.0834 |
| 1.8761 | 1937 | 0.057 |
| 1.8771 | 1938 | 0.0486 |
| 1.8780 | 1939 | 0.0472 |
| 1.8790 | 1940 | 0.0439 |
| 1.8800 | 1941 | 0.0312 |
| 1.8809 | 1942 | 0.0304 |
| 1.8819 | 1943 | 0.0398 |
| 1.8829 | 1944 | 0.0399 |
| 1.8838 | 1945 | 0.0736 |
| 1.8848 | 1946 | 0.0331 |
| 1.8858 | 1947 | 0.0351 |
| 1.8867 | 1948 | 0.0333 |
| 1.8877 | 1949 | 0.073 |
| 1.8887 | 1950 | 0.0461 |
| 1.8896 | 1951 | 0.0351 |
| 1.8906 | 1952 | 0.0442 |
| 1.8916 | 1953 | 0.0329 |
| 1.8925 | 1954 | 0.0386 |
| 1.8935 | 1955 | 0.0337 |
| 1.8945 | 1956 | 0.0309 |
| 1.8955 | 1957 | 0.0529 |
| 1.8964 | 1958 | 0.058 |
| 1.8974 | 1959 | 0.0778 |
| 1.8984 | 1960 | 0.0279 |
| 1.8993 | 1961 | 0.0532 |
| 1.9003 | 1962 | 0.0496 |
| 1.9013 | 1963 | 0.0554 |
| 1.9022 | 1964 | 0.0242 |
| 1.9032 | 1965 | 0.0589 |
| 1.9042 | 1966 | 0.0479 |
| 1.9051 | 1967 | 0.0424 |
| 1.9061 | 1968 | 0.0342 |
| 1.9071 | 1969 | 0.0791 |
| 1.9080 | 1970 | 0.0439 |
| 1.9090 | 1971 | 0.0533 |
| 1.9100 | 1972 | 0.0455 |
| 1.9109 | 1973 | 0.0417 |
| 1.9119 | 1974 | 0.0735 |
| 1.9129 | 1975 | 0.0306 |
| 1.9138 | 1976 | 0.0189 |
| 1.9148 | 1977 | 0.0305 |
| 1.9158 | 1978 | 0.0344 |
| 1.9167 | 1979 | 0.0358 |
| 1.9177 | 1980 | 0.0196 |
| 1.9187 | 1981 | 0.0508 |
| 1.9197 | 1982 | 0.0292 |
| 1.9206 | 1983 | 0.0216 |
| 1.9216 | 1984 | 0.0323 |
| 1.9226 | 1985 | 0.0765 |
| 1.9235 | 1986 | 0.0774 |
| 1.9245 | 1987 | 0.0367 |
| 1.9255 | 1988 | 0.0278 |
| 1.9264 | 1989 | 0.041 |
| 1.9274 | 1990 | 0.0481 |
| 1.9284 | 1991 | 0.032 |
| 1.9293 | 1992 | 0.0413 |
| 1.9303 | 1993 | 0.0463 |
| 1.9313 | 1994 | 0.0316 |
| 1.9322 | 1995 | 0.0417 |
| 1.9332 | 1996 | 0.0547 |
| 1.9342 | 1997 | 0.0738 |
| 1.9351 | 1998 | 0.0834 |
| 1.9361 | 1999 | 0.0454 |
| 1.9371 | 2000 | 0.0341 |
| 1.9380 | 2001 | 0.0567 |
| 1.9390 | 2002 | 0.0475 |
| 1.9400 | 2003 | 0.0473 |
| 1.9409 | 2004 | 0.047 |
| 1.9419 | 2005 | 0.0575 |
| 1.9429 | 2006 | 0.0655 |
| 1.9439 | 2007 | 0.0558 |
| 1.9448 | 2008 | 0.031 |
| 1.9458 | 2009 | 0.046 |
| 1.9468 | 2010 | 0.057 |
| 1.9477 | 2011 | 0.0613 |
| 1.9487 | 2012 | 0.0344 |
| 1.9497 | 2013 | 0.0572 |
| 1.9506 | 2014 | 0.03 |
| 1.9516 | 2015 | 0.0305 |
| 1.9526 | 2016 | 0.0328 |
| 1.9535 | 2017 | 0.0599 |
| 1.9545 | 2018 | 0.0626 |
| 1.9555 | 2019 | 0.0406 |
| 1.9564 | 2020 | 0.0366 |
| 1.9574 | 2021 | 0.0832 |
| 1.9584 | 2022 | 0.0404 |
| 1.9593 | 2023 | 0.0362 |
| 1.9603 | 2024 | 0.0289 |
| 1.9613 | 2025 | 0.0395 |
| 1.9622 | 2026 | 0.0278 |
| 1.9632 | 2027 | 0.0363 |
| 1.9642 | 2028 | 0.0335 |
| 1.9652 | 2029 | 0.0394 |
| 1.9661 | 2030 | 0.0533 |
| 1.9671 | 2031 | 0.0244 |
| 1.9681 | 2032 | 0.0226 |
| 1.9690 | 2033 | 0.0502 |
| 1.9700 | 2034 | 0.0405 |
| 1.9710 | 2035 | 0.0211 |
| 1.9719 | 2036 | 0.0237 |
| 1.9729 | 2037 | 0.061 |
| 1.9739 | 2038 | 0.0346 |
| 1.9748 | 2039 | 0.0327 |
| 1.9758 | 2040 | 0.0448 |
| 1.9768 | 2041 | 0.0607 |
| 1.9777 | 2042 | 0.07 |
| 1.9787 | 2043 | 0.0422 |
| 1.9797 | 2044 | 0.0441 |
| 1.9806 | 2045 | 0.0452 |
| 1.9816 | 2046 | 0.0282 |
| 1.9826 | 2047 | 0.0445 |
| 1.9835 | 2048 | 0.0721 |
| 1.9845 | 2049 | 0.0442 |
| 1.9855 | 2050 | 0.056 |
| 1.9864 | 2051 | 0.0302 |
| 1.9874 | 2052 | 0.0412 |
| 1.9884 | 2053 | 0.0468 |
| 1.9894 | 2054 | 0.0417 |
| 1.9903 | 2055 | 0.0315 |
| 1.9913 | 2056 | 0.0492 |
| 1.9923 | 2057 | 0.0589 |
| 1.9932 | 2058 | 0.0827 |
| 1.9942 | 2059 | 0.0659 |
| 1.9952 | 2060 | 0.045 |
| 1.9961 | 2061 | 0.0364 |
| 1.9971 | 2062 | 0.042 |
| 1.9981 | 2063 | 0.0425 |
| 1.9990 | 2064 | 0.0818 |
| 2.0010 | 2065 | 0.0754 |
| 2.0019 | 2066 | 0.0576 |
| 2.0029 | 2067 | 0.0407 |
| 2.0039 | 2068 | 0.0228 |
| 2.0048 | 2069 | 0.0618 |
| 2.0058 | 2070 | 0.0549 |
| 2.0068 | 2071 | 0.0443 |
| 2.0077 | 2072 | 0.0611 |
| 2.0087 | 2073 | 0.0602 |
| 2.0097 | 2074 | 0.0586 |
| 2.0106 | 2075 | 0.0422 |
| 2.0116 | 2076 | 0.0535 |
| 2.0126 | 2077 | 0.0921 |
| 2.0136 | 2078 | 0.0767 |
| 2.0145 | 2079 | 0.0302 |
| 2.0155 | 2080 | 0.0256 |
| 2.0165 | 2081 | 0.0911 |
| 2.0174 | 2082 | 0.0464 |
| 2.0184 | 2083 | 0.0486 |
| 2.0194 | 2084 | 0.0506 |
| 2.0203 | 2085 | 0.086 |
| 2.0213 | 2086 | 0.0733 |
| 2.0223 | 2087 | 0.0356 |
| 2.0232 | 2088 | 0.0327 |
| 2.0242 | 2089 | 0.0501 |
| 2.0252 | 2090 | 0.0488 |
| 2.0261 | 2091 | 0.0296 |
| 2.0271 | 2092 | 0.0271 |
| 2.0281 | 2093 | 0.0738 |
| 2.0290 | 2094 | 0.0226 |
| 2.0300 | 2095 | 0.0357 |
| 2.0310 | 2096 | 0.021 |
| 2.0319 | 2097 | 0.0365 |
| 2.0329 | 2098 | 0.0224 |
| 2.0339 | 2099 | 0.0176 |
| 2.0348 | 2100 | 0.0331 |
| 2.0358 | 2101 | 0.0496 |
| 2.0368 | 2102 | 0.0559 |
| 2.0378 | 2103 | 0.0368 |
| 2.0387 | 2104 | 0.0326 |
| 2.0397 | 2105 | 0.0602 |
| 2.0407 | 2106 | 0.0395 |
| 2.0416 | 2107 | 0.0319 |
| 2.0426 | 2108 | 0.033 |
| 2.0436 | 2109 | 0.0726 |
| 2.0445 | 2110 | 0.0404 |
| 2.0455 | 2111 | 0.0497 |
| 2.0465 | 2112 | 0.0478 |
| 2.0474 | 2113 | 0.0635 |
| 2.0484 | 2114 | 0.0259 |
| 2.0494 | 2115 | 0.0159 |
| 2.0503 | 2116 | 0.0287 |
| 2.0513 | 2117 | 0.0574 |
| 2.0523 | 2118 | 0.0345 |
| 2.0532 | 2119 | 0.0181 |
| 2.0542 | 2120 | 0.0255 |
| 2.0552 | 2121 | 0.0551 |
| 2.0561 | 2122 | 0.0398 |
| 2.0571 | 2123 | 0.0226 |
| 2.0581 | 2124 | 0.021 |
| 2.0591 | 2125 | 0.0827 |
| 2.0600 | 2126 | 0.0603 |
| 2.0610 | 2127 | 0.0414 |
| 2.0620 | 2128 | 0.0595 |
| 2.0629 | 2129 | 0.0945 |
| 2.0639 | 2130 | 0.064 |
| 2.0649 | 2131 | 0.0436 |
| 2.0658 | 2132 | 0.0254 |
| 2.0668 | 2133 | 0.0802 |
| 2.0678 | 2134 | 0.0389 |
| 2.0687 | 2135 | 0.0377 |
| 2.0697 | 2136 | 0.0283 |
| 2.0707 | 2137 | 0.0725 |
| 2.0716 | 2138 | 0.0494 |
| 2.0726 | 2139 | 0.0417 |
| 2.0736 | 2140 | 0.0276 |
| 2.0745 | 2141 | 0.0638 |
| 2.0755 | 2142 | 0.0467 |
| 2.0765 | 2143 | 0.0352 |
| 2.0774 | 2144 | 0.0347 |
| 2.0784 | 2145 | 0.0414 |
| 2.0794 | 2146 | 0.0329 |
| 2.0803 | 2147 | 0.014 |
| 2.0813 | 2148 | 0.0141 |
| 2.0823 | 2149 | 0.0692 |
| 2.0833 | 2150 | 0.0456 |
| 2.0842 | 2151 | 0.0362 |
| 2.0852 | 2152 | 0.0325 |
| 2.0862 | 2153 | 0.0318 |
| 2.0871 | 2154 | 0.0169 |
| 2.0881 | 2155 | 0.0216 |
| 2.0891 | 2156 | 0.024 |
| 2.0900 | 2157 | 0.0673 |
| 2.0910 | 2158 | 0.0435 |
| 2.0920 | 2159 | 0.0398 |
| 2.0929 | 2160 | 0.0233 |
| 2.0939 | 2161 | 0.0642 |
| 2.0949 | 2162 | 0.0326 |
| 2.0958 | 2163 | 0.0311 |
| 2.0968 | 2164 | 0.0226 |
| 2.0978 | 2165 | 0.0466 |
| 2.0987 | 2166 | 0.022 |
| 2.0997 | 2167 | 0.0311 |
| 2.1007 | 2168 | 0.0205 |
| 2.1016 | 2169 | 0.0598 |
| 2.1026 | 2170 | 0.0474 |
| 2.1036 | 2171 | 0.0513 |
| 2.1045 | 2172 | 0.0603 |
| 2.1055 | 2173 | 0.0385 |
| 2.1065 | 2174 | 0.035 |
| 2.1075 | 2175 | 0.0282 |
| 2.1084 | 2176 | 0.0225 |
| 2.1094 | 2177 | 0.0586 |
| 2.1104 | 2178 | 0.0613 |
| 2.1113 | 2179 | 0.0415 |
| 2.1123 | 2180 | 0.0366 |
| 2.1133 | 2181 | 0.0716 |
| 2.1142 | 2182 | 0.0315 |
| 2.1152 | 2183 | 0.0192 |
| 2.1162 | 2184 | 0.0294 |
| 2.1171 | 2185 | 0.0531 |
| 2.1181 | 2186 | 0.0358 |
| 2.1191 | 2187 | 0.0246 |
| 2.1200 | 2188 | 0.0377 |
| 2.1210 | 2189 | 0.0838 |
| 2.1220 | 2190 | 0.0751 |
| 2.1229 | 2191 | 0.0284 |
| 2.1239 | 2192 | 0.0217 |
| 2.1249 | 2193 | 0.0726 |
| 2.1258 | 2194 | 0.0222 |
| 2.1268 | 2195 | 0.0317 |
| 2.1278 | 2196 | 0.0236 |
| 2.1288 | 2197 | 0.0953 |
| 2.1297 | 2198 | 0.0425 |
| 2.1307 | 2199 | 0.042 |
| 2.1317 | 2200 | 0.0288 |
| 2.1326 | 2201 | 0.0646 |
| 2.1336 | 2202 | 0.0578 |
| 2.1346 | 2203 | 0.0245 |
| 2.1355 | 2204 | 0.0319 |
| 2.1365 | 2205 | 0.0605 |
| 2.1375 | 2206 | 0.0488 |
| 2.1384 | 2207 | 0.0387 |
| 2.1394 | 2208 | 0.0236 |
| 2.1404 | 2209 | 0.0538 |
| 2.1413 | 2210 | 0.0357 |
| 2.1423 | 2211 | 0.0355 |
| 2.1433 | 2212 | 0.0279 |
| 2.1442 | 2213 | 0.0378 |
| 2.1452 | 2214 | 0.0324 |
| 2.1462 | 2215 | 0.0238 |
| 2.1471 | 2216 | 0.0174 |
| 2.1481 | 2217 | 0.1084 |
| 2.1491 | 2218 | 0.0533 |
| 2.1500 | 2219 | 0.0496 |
| 2.1510 | 2220 | 0.0406 |
| 2.1520 | 2221 | 0.1079 |
| 2.1530 | 2222 | 0.0305 |
| 2.1539 | 2223 | 0.0301 |
| 2.1549 | 2224 | 0.0277 |
| 2.1559 | 2225 | 0.1052 |
| 2.1568 | 2226 | 0.049 |
| 2.1578 | 2227 | 0.0139 |
| 2.1588 | 2228 | 0.0223 |
| 2.1597 | 2229 | 0.034 |
| 2.1607 | 2230 | 0.0206 |
| 2.1617 | 2231 | 0.0255 |
| 2.1626 | 2232 | 0.0228 |
| 2.1636 | 2233 | 0.0456 |
| 2.1646 | 2234 | 0.0378 |
| 2.1655 | 2235 | 0.0199 |
| 2.1665 | 2236 | 0.02 |
| 2.1675 | 2237 | 0.0459 |
| 2.1684 | 2238 | 0.064 |
| 2.1694 | 2239 | 0.0346 |
| 2.1704 | 2240 | 0.0271 |
| 2.1713 | 2241 | 0.0472 |
| 2.1723 | 2242 | 0.0294 |
| 2.1733 | 2243 | 0.0278 |
| 2.1742 | 2244 | 0.0181 |
| 2.1752 | 2245 | 0.0466 |
| 2.1762 | 2246 | 0.0363 |
| 2.1772 | 2247 | 0.0276 |
| 2.1781 | 2248 | 0.028 |
| 2.1791 | 2249 | 0.0565 |
| 2.1801 | 2250 | 0.0399 |
| 2.1810 | 2251 | 0.0301 |
| 2.1820 | 2252 | 0.0278 |
| 2.1830 | 2253 | 0.0562 |
| 2.1839 | 2254 | 0.0402 |
| 2.1849 | 2255 | 0.0328 |
| 2.1859 | 2256 | 0.0228 |
| 2.1868 | 2257 | 0.0762 |
| 2.1878 | 2258 | 0.0567 |
| 2.1888 | 2259 | 0.03 |
| 2.1897 | 2260 | 0.0525 |
| 2.1907 | 2261 | 0.063 |
| 2.1917 | 2262 | 0.0351 |
| 2.1926 | 2263 | 0.0176 |
| 2.1936 | 2264 | 0.0156 |
| 2.1946 | 2265 | 0.0574 |
| 2.1955 | 2266 | 0.0302 |
| 2.1965 | 2267 | 0.0205 |
| 2.1975 | 2268 | 0.021 |
| 2.1985 | 2269 | 0.0713 |
| 2.1994 | 2270 | 0.0265 |
| 2.2004 | 2271 | 0.0218 |
| 2.2014 | 2272 | 0.0183 |
| 2.2023 | 2273 | 0.0318 |
| 2.2033 | 2274 | 0.0325 |
| 2.2043 | 2275 | 0.0194 |
| 2.2052 | 2276 | 0.0144 |
| 2.2062 | 2277 | 0.0331 |
| 2.2072 | 2278 | 0.0312 |
| 2.2081 | 2279 | 0.0198 |
| 2.2091 | 2280 | 0.0163 |
| 2.2101 | 2281 | 0.0636 |
| 2.2110 | 2282 | 0.0301 |
| 2.2120 | 2283 | 0.0282 |
| 2.2130 | 2284 | 0.027 |
| 2.2139 | 2285 | 0.0214 |
| 2.2149 | 2286 | 0.0306 |
| 2.2159 | 2287 | 0.0179 |
| 2.2168 | 2288 | 0.0156 |
| 2.2178 | 2289 | 0.038 |
| 2.2188 | 2290 | 0.0366 |
| 2.2197 | 2291 | 0.0233 |
| 2.2207 | 2292 | 0.0144 |
| 2.2217 | 2293 | 0.0483 |
| 2.2227 | 2294 | 0.0295 |
| 2.2236 | 2295 | 0.0184 |
| 2.2246 | 2296 | 0.0173 |
| 2.2256 | 2297 | 0.0535 |
| 2.2265 | 2298 | 0.0438 |
| 2.2275 | 2299 | 0.0211 |
| 2.2285 | 2300 | 0.0201 |
| 2.2294 | 2301 | 0.0826 |
| 2.2304 | 2302 | 0.031 |
| 2.2314 | 2303 | 0.0191 |
| 2.2323 | 2304 | 0.0128 |
| 2.2333 | 2305 | 0.0639 |
| 2.2343 | 2306 | 0.051 |
| 2.2352 | 2307 | 0.0226 |
| 2.2362 | 2308 | 0.0195 |
| 2.2372 | 2309 | 0.0457 |
| 2.2381 | 2310 | 0.0218 |
| 2.2391 | 2311 | 0.0251 |
| 2.2401 | 2312 | 0.0159 |
| 2.2410 | 2313 | 0.0425 |
| 2.2420 | 2314 | 0.0358 |
| 2.2430 | 2315 | 0.021 |
| 2.2439 | 2316 | 0.0176 |
| 2.2449 | 2317 | 0.0304 |
| 2.2459 | 2318 | 0.042 |
| 2.2469 | 2319 | 0.0221 |
| 2.2478 | 2320 | 0.014 |
| 2.2488 | 2321 | 0.055 |
| 2.2498 | 2322 | 0.0497 |
| 2.2507 | 2323 | 0.025 |
| 2.2517 | 2324 | 0.015 |
| 2.2527 | 2325 | 0.0361 |
| 2.2536 | 2326 | 0.0275 |
| 2.2546 | 2327 | 0.0223 |
| 2.2556 | 2328 | 0.0257 |
| 2.2565 | 2329 | 0.0339 |
| 2.2575 | 2330 | 0.0212 |
| 2.2585 | 2331 | 0.0134 |
| 2.2594 | 2332 | 0.016 |
| 2.2604 | 2333 | 0.0281 |
| 2.2614 | 2334 | 0.018 |
| 2.2623 | 2335 | 0.0136 |
| 2.2633 | 2336 | 0.0113 |
| 2.2643 | 2337 | 0.0189 |
| 2.2652 | 2338 | 0.0142 |
| 2.2662 | 2339 | 0.0097 |
| 2.2672 | 2340 | 0.0069 |
| 2.2682 | 2341 | 0.0771 |
| 2.2691 | 2342 | 0.0221 |
| 2.2701 | 2343 | 0.0174 |
| 2.2711 | 2344 | 0.0149 |
| 2.2720 | 2345 | 0.0424 |
| 2.2730 | 2346 | 0.0258 |
| 2.2740 | 2347 | 0.0259 |
| 2.2749 | 2348 | 0.0212 |
| 2.2759 | 2349 | 0.0376 |
| 2.2769 | 2350 | 0.0159 |
| 2.2778 | 2351 | 0.0173 |
| 2.2788 | 2352 | 0.0133 |
| 2.2798 | 2353 | 0.0196 |
| 2.2807 | 2354 | 0.0178 |
| 2.2817 | 2355 | 0.0164 |
| 2.2827 | 2356 | 0.0103 |
| 2.2836 | 2357 | 0.0612 |
| 2.2846 | 2358 | 0.0159 |
| 2.2856 | 2359 | 0.0181 |
| 2.2865 | 2360 | 0.0153 |
| 2.2875 | 2361 | 0.0298 |
| 2.2885 | 2362 | 0.0197 |
| 2.2894 | 2363 | 0.0202 |
| 2.2904 | 2364 | 0.0192 |
| 2.2914 | 2365 | 0.0347 |
| 2.2924 | 2366 | 0.0148 |
| 2.2933 | 2367 | 0.0122 |
| 2.2943 | 2368 | 0.0161 |
| 2.2953 | 2369 | 0.0253 |
| 2.2962 | 2370 | 0.0206 |
| 2.2972 | 2371 | 0.0187 |
| 2.2982 | 2372 | 0.0194 |
| 2.2991 | 2373 | 0.0372 |
| 2.3001 | 2374 | 0.0211 |
| 2.3011 | 2375 | 0.0187 |
| 2.3020 | 2376 | 0.0123 |
| 2.3030 | 2377 | 0.0362 |
| 2.3040 | 2378 | 0.0317 |
| 2.3049 | 2379 | 0.019 |
| 2.3059 | 2380 | 0.0336 |
| 2.3069 | 2381 | 0.0287 |
| 2.3078 | 2382 | 0.0204 |
| 2.3088 | 2383 | 0.0138 |
| 2.3098 | 2384 | 0.0214 |
| 2.3107 | 2385 | 0.0462 |
| 2.3117 | 2386 | 0.0259 |
| 2.3127 | 2387 | 0.0144 |
| 2.3136 | 2388 | 0.018 |
| 2.3146 | 2389 | 0.019 |
| 2.3156 | 2390 | 0.0174 |
| 2.3166 | 2391 | 0.0173 |
| 2.3175 | 2392 | 0.009 |
| 2.3185 | 2393 | 0.0216 |
| 2.3195 | 2394 | 0.0131 |
| 2.3204 | 2395 | 0.0153 |
| 2.3214 | 2396 | 0.0117 |
| 2.3224 | 2397 | 0.0407 |
| 2.3233 | 2398 | 0.0264 |
| 2.3243 | 2399 | 0.0267 |
| 2.3253 | 2400 | 0.0194 |
| 2.3262 | 2401 | 0.0459 |
| 2.3272 | 2402 | 0.0249 |
| 2.3282 | 2403 | 0.0188 |
| 2.3291 | 2404 | 0.0148 |
| 2.3301 | 2405 | 0.0353 |
| 2.3311 | 2406 | 0.0187 |
| 2.3320 | 2407 | 0.0178 |
| 2.3330 | 2408 | 0.0093 |
| 2.3340 | 2409 | 0.0252 |
| 2.3349 | 2410 | 0.025 |
| 2.3359 | 2411 | 0.0156 |
| 2.3369 | 2412 | 0.0125 |
| 2.3379 | 2413 | 0.0512 |
| 2.3388 | 2414 | 0.0252 |
| 2.3398 | 2415 | 0.0147 |
| 2.3408 | 2416 | 0.0101 |
| 2.3417 | 2417 | 0.0295 |
| 2.3427 | 2418 | 0.0162 |
| 2.3437 | 2419 | 0.0165 |
| 2.3446 | 2420 | 0.0148 |
| 2.3456 | 2421 | 0.0334 |
| 2.3466 | 2422 | 0.0206 |
| 2.3475 | 2423 | 0.015 |
| 2.3485 | 2424 | 0.0101 |
| 2.3495 | 2425 | 0.0337 |
| 2.3504 | 2426 | 0.0311 |
| 2.3514 | 2427 | 0.0147 |
| 2.3524 | 2428 | 0.0207 |
| 2.3533 | 2429 | 0.0681 |
| 2.3543 | 2430 | 0.0341 |
| 2.3553 | 2431 | 0.0175 |
| 2.3562 | 2432 | 0.0162 |
| 2.3572 | 2433 | 0.0213 |
| 2.3582 | 2434 | 0.0131 |
| 2.3591 | 2435 | 0.013 |
| 2.3601 | 2436 | 0.0131 |
| 2.3611 | 2437 | 0.0368 |
| 2.3621 | 2438 | 0.0137 |
| 2.3630 | 2439 | 0.0135 |
| 2.3640 | 2440 | 0.0174 |
| 2.3650 | 2441 | 0.0437 |
| 2.3659 | 2442 | 0.0211 |
| 2.3669 | 2443 | 0.0075 |
| 2.3679 | 2444 | 0.0167 |
| 2.3688 | 2445 | 0.0247 |
| 2.3698 | 2446 | 0.0228 |
| 2.3708 | 2447 | 0.0171 |
| 2.3717 | 2448 | 0.0171 |
| 2.3727 | 2449 | 0.0278 |
| 2.3737 | 2450 | 0.0161 |
| 2.3746 | 2451 | 0.0189 |
| 2.3756 | 2452 | 0.0205 |
| 2.3766 | 2453 | 0.0249 |
| 2.3775 | 2454 | 0.0301 |
| 2.3785 | 2455 | 0.0131 |
| 2.3795 | 2456 | 0.0103 |
| 2.3804 | 2457 | 0.0389 |
| 2.3814 | 2458 | 0.0259 |
| 2.3824 | 2459 | 0.0135 |
| 2.3833 | 2460 | 0.0125 |
| 2.3843 | 2461 | 0.0159 |
| 2.3853 | 2462 | 0.0187 |
| 2.3863 | 2463 | 0.0181 |
| 2.3872 | 2464 | 0.0118 |
| 2.3882 | 2465 | 0.0343 |
| 2.3892 | 2466 | 0.0213 |
| 2.3901 | 2467 | 0.0133 |
| 2.3911 | 2468 | 0.0162 |
| 2.3921 | 2469 | 0.0407 |
| 2.3930 | 2470 | 0.0308 |
| 2.3940 | 2471 | 0.0272 |
| 2.3950 | 2472 | 0.0207 |
| 2.3959 | 2473 | 0.0528 |
| 2.3969 | 2474 | 0.0205 |
| 2.3979 | 2475 | 0.0152 |
| 2.3988 | 2476 | 0.0107 |
| 2.3998 | 2477 | 0.0286 |
| 2.4008 | 2478 | 0.0188 |
| 2.4017 | 2479 | 0.0136 |
| 2.4027 | 2480 | 0.013 |
| 2.4037 | 2481 | 0.0262 |
| 2.4046 | 2482 | 0.0157 |
| 2.4056 | 2483 | 0.0154 |
| 2.4066 | 2484 | 0.0211 |
| 2.4076 | 2485 | 0.0228 |
| 2.4085 | 2486 | 0.0242 |
| 2.4095 | 2487 | 0.0089 |
| 2.4105 | 2488 | 0.0085 |
| 2.4114 | 2489 | 0.0541 |
| 2.4124 | 2490 | 0.0409 |
| 2.4134 | 2491 | 0.0271 |
| 2.4143 | 2492 | 0.0232 |
| 2.4153 | 2493 | 0.0321 |
| 2.4163 | 2494 | 0.0178 |
| 2.4172 | 2495 | 0.0254 |
| 2.4182 | 2496 | 0.0114 |
| 2.4192 | 2497 | 0.0301 |
| 2.4201 | 2498 | 0.0531 |
| 2.4211 | 2499 | 0.0207 |
| 2.4221 | 2500 | 0.0287 |
| 2.4230 | 2501 | 0.0555 |
| 2.4240 | 2502 | 0.0196 |
| 2.4250 | 2503 | 0.0277 |
| 2.4259 | 2504 | 0.0157 |
| 2.4269 | 2505 | 0.0428 |
| 2.4279 | 2506 | 0.0173 |
| 2.4288 | 2507 | 0.013 |
| 2.4298 | 2508 | 0.0131 |
| 2.4308 | 2509 | 0.0235 |
| 2.4318 | 2510 | 0.013 |
| 2.4327 | 2511 | 0.0129 |
| 2.4337 | 2512 | 0.0385 |
| 2.4347 | 2513 | 0.0398 |
| 2.4356 | 2514 | 0.0252 |
| 2.4366 | 2515 | 0.018 |
| 2.4376 | 2516 | 0.0165 |
| 2.4385 | 2517 | 0.0291 |
| 2.4395 | 2518 | 0.0318 |
| 2.4405 | 2519 | 0.019 |
| 2.4414 | 2520 | 0.0133 |
| 2.4424 | 2521 | 0.0364 |
| 2.4434 | 2522 | 0.0164 |
| 2.4443 | 2523 | 0.0129 |
| 2.4453 | 2524 | 0.011 |
| 2.4463 | 2525 | 0.0417 |
| 2.4472 | 2526 | 0.0238 |
| 2.4482 | 2527 | 0.016 |
| 2.4492 | 2528 | 0.0154 |
| 2.4501 | 2529 | 0.042 |
| 2.4511 | 2530 | 0.0103 |
| 2.4521 | 2531 | 0.0117 |
| 2.4530 | 2532 | 0.0136 |
| 2.4540 | 2533 | 0.03 |
| 2.4550 | 2534 | 0.0268 |
| 2.4560 | 2535 | 0.0297 |
| 2.4569 | 2536 | 0.0165 |
| 2.4579 | 2537 | 0.0427 |
| 2.4589 | 2538 | 0.02 |
| 2.4598 | 2539 | 0.0212 |
| 2.4608 | 2540 | 0.0275 |
| 2.4618 | 2541 | 0.0392 |
| 2.4627 | 2542 | 0.0176 |
| 2.4637 | 2543 | 0.0145 |
| 2.4647 | 2544 | 0.027 |
| 2.4656 | 2545 | 0.0351 |
| 2.4666 | 2546 | 0.0221 |
| 2.4676 | 2547 | 0.0106 |
| 2.4685 | 2548 | 0.0084 |
| 2.4695 | 2549 | 0.0272 |
| 2.4705 | 2550 | 0.0168 |
| 2.4714 | 2551 | 0.0263 |
| 2.4724 | 2552 | 0.0222 |
| 2.4734 | 2553 | 0.0222 |
| 2.4743 | 2554 | 0.0135 |
| 2.4753 | 2555 | 0.0238 |
| 2.4763 | 2556 | 0.0268 |
| 2.4773 | 2557 | 0.0191 |
| 2.4782 | 2558 | 0.0132 |
| 2.4792 | 2559 | 0.0101 |
| 2.4802 | 2560 | 0.0128 |
| 2.4811 | 2561 | 0.0258 |
| 2.4821 | 2562 | 0.0143 |
| 2.4831 | 2563 | 0.0155 |
| 2.4840 | 2564 | 0.0193 |
| 2.4850 | 2565 | 0.0338 |
| 2.4860 | 2566 | 0.0236 |
| 2.4869 | 2567 | 0.0105 |
| 2.4879 | 2568 | 0.0079 |
| 2.4889 | 2569 | 0.0376 |
| 2.4898 | 2570 | 0.0247 |
| 2.4908 | 2571 | 0.0121 |
| 2.4918 | 2572 | 0.0159 |
| 2.4927 | 2573 | 0.0341 |
| 2.4937 | 2574 | 0.0223 |
| 2.4947 | 2575 | 0.0276 |
| 2.4956 | 2576 | 0.0097 |
| 2.4966 | 2577 | 0.0265 |
| 2.4976 | 2578 | 0.0181 |
| 2.4985 | 2579 | 0.008 |
| 2.4995 | 2580 | 0.0172 |
| 2.5005 | 2581 | 0.0336 |
| 2.5015 | 2582 | 0.0111 |
| 2.5024 | 2583 | 0.014 |
| 2.5034 | 2584 | 0.0135 |
| 2.5044 | 2585 | 0.0188 |
| 2.5053 | 2586 | 0.0439 |
| 2.5063 | 2587 | 0.023 |
| 2.5073 | 2588 | 0.0108 |
| 2.5082 | 2589 | 0.0278 |
| 2.5092 | 2590 | 0.0173 |
| 2.5102 | 2591 | 0.0106 |
| 2.5111 | 2592 | 0.0157 |
| 2.5121 | 2593 | 0.0207 |
| 2.5131 | 2594 | 0.0157 |
| 2.5140 | 2595 | 0.0131 |
| 2.5150 | 2596 | 0.0195 |
| 2.5160 | 2597 | 0.028 |
| 2.5169 | 2598 | 0.0224 |
| 2.5179 | 2599 | 0.0157 |
| 2.5189 | 2600 | 0.0099 |
| 2.5198 | 2601 | 0.0324 |
| 2.5208 | 2602 | 0.0139 |
| 2.5218 | 2603 | 0.0114 |
| 2.5227 | 2604 | 0.0157 |
| 2.5237 | 2605 | 0.0363 |
| 2.5247 | 2606 | 0.0321 |
| 2.5257 | 2607 | 0.0174 |
| 2.5266 | 2608 | 0.0145 |
| 2.5276 | 2609 | 0.0253 |
| 2.5286 | 2610 | 0.0161 |
| 2.5295 | 2611 | 0.0276 |
| 2.5305 | 2612 | 0.019 |
| 2.5315 | 2613 | 0.0437 |
| 2.5324 | 2614 | 0.0203 |
| 2.5334 | 2615 | 0.0236 |
| 2.5344 | 2616 | 0.0113 |
| 2.5353 | 2617 | 0.0168 |
| 2.5363 | 2618 | 0.0267 |
| 2.5373 | 2619 | 0.0171 |
| 2.5382 | 2620 | 0.0266 |
| 2.5392 | 2621 | 0.0334 |
| 2.5402 | 2622 | 0.0227 |
| 2.5411 | 2623 | 0.0185 |
| 2.5421 | 2624 | 0.016 |
| 2.5431 | 2625 | 0.0323 |
| 2.5440 | 2626 | 0.0238 |
| 2.5450 | 2627 | 0.036 |
| 2.5460 | 2628 | 0.0238 |
| 2.5470 | 2629 | 0.0173 |
| 2.5479 | 2630 | 0.0253 |
| 2.5489 | 2631 | 0.0297 |
| 2.5499 | 2632 | 0.0378 |
| 2.5508 | 2633 | 0.0282 |
| 2.5518 | 2634 | 0.0255 |
| 2.5528 | 2635 | 0.0233 |
| 2.5537 | 2636 | 0.0208 |
| 2.5547 | 2637 | 0.0246 |
| 2.5557 | 2638 | 0.0271 |
| 2.5566 | 2639 | 0.0208 |
| 2.5576 | 2640 | 0.028 |
| 2.5586 | 2641 | 0.0184 |
| 2.5595 | 2642 | 0.0238 |
| 2.5605 | 2643 | 0.0229 |
| 2.5615 | 2644 | 0.0177 |
| 2.5624 | 2645 | 0.0161 |
| 2.5634 | 2646 | 0.0215 |
| 2.5644 | 2647 | 0.0191 |
| 2.5653 | 2648 | 0.0261 |
| 2.5663 | 2649 | 0.0266 |
| 2.5673 | 2650 | 0.0248 |
| 2.5682 | 2651 | 0.0314 |
| 2.5692 | 2652 | 0.0508 |
| 2.5702 | 2653 | 0.0422 |
| 2.5712 | 2654 | 0.0268 |
| 2.5721 | 2655 | 0.0148 |
| 2.5731 | 2656 | 0.0179 |
| 2.5741 | 2657 | 0.0219 |
| 2.5750 | 2658 | 0.0093 |
| 2.5760 | 2659 | 0.0189 |
| 2.5770 | 2660 | 0.0157 |
| 2.5779 | 2661 | 0.0313 |
| 2.5789 | 2662 | 0.0199 |
| 2.5799 | 2663 | 0.0151 |
| 2.5808 | 2664 | 0.0297 |
| 2.5818 | 2665 | 0.0391 |
| 2.5828 | 2666 | 0.0469 |
| 2.5837 | 2667 | 0.02 |
| 2.5847 | 2668 | 0.021 |
| 2.5857 | 2669 | 0.0272 |
| 2.5866 | 2670 | 0.0271 |
| 2.5876 | 2671 | 0.0319 |
| 2.5886 | 2672 | 0.0181 |
| 2.5895 | 2673 | 0.0181 |
| 2.5905 | 2674 | 0.0228 |
| 2.5915 | 2675 | 0.0125 |
| 2.5924 | 2676 | 0.0184 |
| 2.5934 | 2677 | 0.0272 |
| 2.5944 | 2678 | 0.0189 |
| 2.5954 | 2679 | 0.0228 |
| 2.5963 | 2680 | 0.0132 |
| 2.5973 | 2681 | 0.0255 |
| 2.5983 | 2682 | 0.0186 |
| 2.5992 | 2683 | 0.0127 |
| 2.6002 | 2684 | 0.0113 |
| 2.6012 | 2685 | 0.0285 |
| 2.6021 | 2686 | 0.0361 |
| 2.6031 | 2687 | 0.0293 |
| 2.6041 | 2688 | 0.0238 |
| 2.6050 | 2689 | 0.0317 |
| 2.6060 | 2690 | 0.0328 |
| 2.6070 | 2691 | 0.0173 |
| 2.6079 | 2692 | 0.029 |
| 2.6089 | 2693 | 0.0167 |
| 2.6099 | 2694 | 0.0129 |
| 2.6108 | 2695 | 0.0357 |
| 2.6118 | 2696 | 0.0168 |
| 2.6128 | 2697 | 0.0382 |
| 2.6137 | 2698 | 0.0345 |
| 2.6147 | 2699 | 0.0246 |
| 2.6157 | 2700 | 0.0274 |
| 2.6167 | 2701 | 0.0411 |
| 2.6176 | 2702 | 0.0179 |
| 2.6186 | 2703 | 0.0385 |
| 2.6196 | 2704 | 0.0125 |
| 2.6205 | 2705 | 0.0397 |
| 2.6215 | 2706 | 0.0529 |
| 2.6225 | 2707 | 0.0109 |
| 2.6234 | 2708 | 0.0213 |
| 2.6244 | 2709 | 0.0311 |
| 2.6254 | 2710 | 0.042 |
| 2.6263 | 2711 | 0.0213 |
| 2.6273 | 2712 | 0.0256 |
| 2.6283 | 2713 | 0.019 |
| 2.6292 | 2714 | 0.0118 |
| 2.6302 | 2715 | 0.02 |
| 2.6312 | 2716 | 0.0227 |
| 2.6321 | 2717 | 0.0236 |
| 2.6331 | 2718 | 0.0169 |
| 2.6341 | 2719 | 0.0154 |
| 2.6350 | 2720 | 0.0108 |
| 2.6360 | 2721 | 0.0198 |
| 2.6370 | 2722 | 0.0184 |
| 2.6379 | 2723 | 0.0139 |
| 2.6389 | 2724 | 0.0118 |
| 2.6399 | 2725 | 0.0363 |
| 2.6409 | 2726 | 0.0293 |
| 2.6418 | 2727 | 0.0223 |
| 2.6428 | 2728 | 0.0142 |
| 2.6438 | 2729 | 0.0202 |
| 2.6447 | 2730 | 0.0268 |
| 2.6457 | 2731 | 0.0183 |
| 2.6467 | 2732 | 0.0182 |
| 2.6476 | 2733 | 0.0329 |
| 2.6486 | 2734 | 0.0285 |
| 2.6496 | 2735 | 0.0127 |
| 2.6505 | 2736 | 0.0318 |
| 2.6515 | 2737 | 0.0258 |
| 2.6525 | 2738 | 0.0209 |
| 2.6534 | 2739 | 0.0192 |
| 2.6544 | 2740 | 0.0153 |
| 2.6554 | 2741 | 0.0373 |
| 2.6563 | 2742 | 0.0287 |
| 2.6573 | 2743 | 0.02 |
| 2.6583 | 2744 | 0.016 |
| 2.6592 | 2745 | 0.0323 |
| 2.6602 | 2746 | 0.0275 |
| 2.6612 | 2747 | 0.0119 |
| 2.6621 | 2748 | 0.0135 |
| 2.6631 | 2749 | 0.0322 |
| 2.6641 | 2750 | 0.0206 |
| 2.6651 | 2751 | 0.0167 |
| 2.6660 | 2752 | 0.0228 |
| 2.6670 | 2753 | 0.0267 |
| 2.6680 | 2754 | 0.0293 |
| 2.6689 | 2755 | 0.0206 |
| 2.6699 | 2756 | 0.0247 |
| 2.6709 | 2757 | 0.0226 |
| 2.6718 | 2758 | 0.0214 |
| 2.6728 | 2759 | 0.0268 |
| 2.6738 | 2760 | 0.0236 |
| 2.6747 | 2761 | 0.022 |
| 2.6757 | 2762 | 0.0168 |
| 2.6767 | 2763 | 0.0085 |
| 2.6776 | 2764 | 0.0154 |
| 2.6786 | 2765 | 0.0217 |
| 2.6796 | 2766 | 0.0159 |
| 2.6805 | 2767 | 0.0279 |
| 2.6815 | 2768 | 0.0203 |
| 2.6825 | 2769 | 0.0334 |
| 2.6834 | 2770 | 0.0388 |
| 2.6844 | 2771 | 0.0304 |
| 2.6854 | 2772 | 0.022 |
| 2.6864 | 2773 | 0.033 |
| 2.6873 | 2774 | 0.0276 |
| 2.6883 | 2775 | 0.0148 |
| 2.6893 | 2776 | 0.0132 |
| 2.6902 | 2777 | 0.037 |
| 2.6912 | 2778 | 0.0253 |
| 2.6922 | 2779 | 0.0147 |
| 2.6931 | 2780 | 0.0162 |
| 2.6941 | 2781 | 0.0178 |
| 2.6951 | 2782 | 0.0203 |
| 2.6960 | 2783 | 0.0158 |
| 2.6970 | 2784 | 0.0275 |
| 2.6980 | 2785 | 0.0223 |
| 2.6989 | 2786 | 0.0194 |
| 2.6999 | 2787 | 0.0108 |
| 2.7009 | 2788 | 0.0145 |
| 2.7018 | 2789 | 0.0393 |
| 2.7028 | 2790 | 0.0137 |
| 2.7038 | 2791 | 0.0202 |
| 2.7047 | 2792 | 0.0186 |
| 2.7057 | 2793 | 0.0248 |
| 2.7067 | 2794 | 0.0208 |
| 2.7076 | 2795 | 0.0148 |
| 2.7086 | 2796 | 0.0235 |
| 2.7096 | 2797 | 0.0331 |
| 2.7106 | 2798 | 0.0423 |
| 2.7115 | 2799 | 0.0224 |
| 2.7125 | 2800 | 0.031 |
| 2.7135 | 2801 | 0.0276 |
| 2.7144 | 2802 | 0.027 |
| 2.7154 | 2803 | 0.0217 |
| 2.7164 | 2804 | 0.0147 |
| 2.7173 | 2805 | 0.0664 |
| 2.7183 | 2806 | 0.034 |
| 2.7193 | 2807 | 0.0164 |
| 2.7202 | 2808 | 0.0397 |
| 2.7212 | 2809 | 0.0264 |
| 2.7222 | 2810 | 0.0342 |
| 2.7231 | 2811 | 0.0197 |
| 2.7241 | 2812 | 0.0206 |
| 2.7251 | 2813 | 0.0367 |
| 2.7260 | 2814 | 0.0376 |
| 2.7270 | 2815 | 0.0213 |
| 2.7280 | 2816 | 0.0198 |
| 2.7289 | 2817 | 0.0299 |
| 2.7299 | 2818 | 0.0149 |
| 2.7309 | 2819 | 0.0121 |
| 2.7318 | 2820 | 0.0189 |
| 2.7328 | 2821 | 0.0281 |
| 2.7338 | 2822 | 0.0169 |
| 2.7348 | 2823 | 0.0327 |
| 2.7357 | 2824 | 0.0168 |
| 2.7367 | 2825 | 0.0261 |
| 2.7377 | 2826 | 0.0157 |
| 2.7386 | 2827 | 0.0112 |
| 2.7396 | 2828 | 0.0355 |
| 2.7406 | 2829 | 0.0277 |
| 2.7415 | 2830 | 0.0167 |
| 2.7425 | 2831 | 0.026 |
| 2.7435 | 2832 | 0.0155 |
| 2.7444 | 2833 | 0.0138 |
| 2.7454 | 2834 | 0.0261 |
| 2.7464 | 2835 | 0.0285 |
| 2.7473 | 2836 | 0.016 |
| 2.7483 | 2837 | 0.0289 |
| 2.7493 | 2838 | 0.0275 |
| 2.7502 | 2839 | 0.0218 |
| 2.7512 | 2840 | 0.0273 |
| 2.7522 | 2841 | 0.015 |
| 2.7531 | 2842 | 0.0196 |
| 2.7541 | 2843 | 0.0183 |
| 2.7551 | 2844 | 0.0135 |
| 2.7561 | 2845 | 0.0286 |
| 2.7570 | 2846 | 0.0265 |
| 2.7580 | 2847 | 0.0216 |
| 2.7590 | 2848 | 0.0195 |
| 2.7599 | 2849 | 0.0307 |
| 2.7609 | 2850 | 0.0315 |
| 2.7619 | 2851 | 0.0178 |
| 2.7628 | 2852 | 0.0194 |
| 2.7638 | 2853 | 0.0202 |
| 2.7648 | 2854 | 0.0156 |
| 2.7657 | 2855 | 0.0121 |
| 2.7667 | 2856 | 0.0124 |
| 2.7677 | 2857 | 0.0219 |
| 2.7686 | 2858 | 0.0159 |
| 2.7696 | 2859 | 0.0253 |
| 2.7706 | 2860 | 0.0156 |
| 2.7715 | 2861 | 0.0271 |
| 2.7725 | 2862 | 0.0232 |
| 2.7735 | 2863 | 0.0129 |
| 2.7744 | 2864 | 0.0233 |
| 2.7754 | 2865 | 0.0252 |
| 2.7764 | 2866 | 0.0144 |
| 2.7773 | 2867 | 0.0214 |
| 2.7783 | 2868 | 0.0246 |
| 2.7793 | 2869 | 0.0189 |
| 2.7803 | 2870 | 0.019 |
| 2.7812 | 2871 | 0.0158 |
| 2.7822 | 2872 | 0.0243 |
| 2.7832 | 2873 | 0.0491 |
| 2.7841 | 2874 | 0.022 |
| 2.7851 | 2875 | 0.0186 |
| 2.7861 | 2876 | 0.0265 |
| 2.7870 | 2877 | 0.0213 |
| 2.7880 | 2878 | 0.0263 |
| 2.7890 | 2879 | 0.0121 |
| 2.7899 | 2880 | 0.0188 |
| 2.7909 | 2881 | 0.015 |
| 2.7919 | 2882 | 0.0148 |
| 2.7928 | 2883 | 0.011 |
| 2.7938 | 2884 | 0.0104 |
| 2.7948 | 2885 | 0.0278 |
| 2.7957 | 2886 | 0.017 |
| 2.7967 | 2887 | 0.0223 |
| 2.7977 | 2888 | 0.0268 |
| 2.7986 | 2889 | 0.0313 |
| 2.7996 | 2890 | 0.0221 |
| 2.8006 | 2891 | 0.0102 |
| 2.8015 | 2892 | 0.013 |
| 2.8025 | 2893 | 0.0255 |
| 2.8035 | 2894 | 0.0188 |
| 2.8045 | 2895 | 0.0178 |
| 2.8054 | 2896 | 0.032 |
| 2.8064 | 2897 | 0.0578 |
| 2.8074 | 2898 | 0.0174 |
| 2.8083 | 2899 | 0.0152 |
| 2.8093 | 2900 | 0.0102 |
| 2.8103 | 2901 | 0.0416 |
| 2.8112 | 2902 | 0.0299 |
| 2.8122 | 2903 | 0.0139 |
| 2.8132 | 2904 | 0.0219 |
| 2.8141 | 2905 | 0.0262 |
| 2.8151 | 2906 | 0.0401 |
| 2.8161 | 2907 | 0.0175 |
| 2.8170 | 2908 | 0.0239 |
| 2.8180 | 2909 | 0.0311 |
| 2.8190 | 2910 | 0.0232 |
| 2.8199 | 2911 | 0.017 |
| 2.8209 | 2912 | 0.0139 |
| 2.8219 | 2913 | 0.0207 |
| 2.8228 | 2914 | 0.0195 |
| 2.8238 | 2915 | 0.019 |
| 2.8248 | 2916 | 0.0137 |
| 2.8258 | 2917 | 0.0188 |
| 2.8267 | 2918 | 0.0167 |
| 2.8277 | 2919 | 0.0151 |
| 2.8287 | 2920 | 0.0394 |
| 2.8296 | 2921 | 0.0272 |
| 2.8306 | 2922 | 0.0299 |
| 2.8316 | 2923 | 0.0149 |
| 2.8325 | 2924 | 0.0207 |
| 2.8335 | 2925 | 0.0293 |
| 2.8345 | 2926 | 0.0222 |
| 2.8354 | 2927 | 0.0345 |
| 2.8364 | 2928 | 0.0244 |
| 2.8374 | 2929 | 0.0224 |
| 2.8383 | 2930 | 0.0206 |
| 2.8393 | 2931 | 0.0143 |
| 2.8403 | 2932 | 0.0184 |
| 2.8412 | 2933 | 0.051 |
| 2.8422 | 2934 | 0.0338 |
| 2.8432 | 2935 | 0.0122 |
| 2.8441 | 2936 | 0.0171 |
| 2.8451 | 2937 | 0.0224 |
| 2.8461 | 2938 | 0.0279 |
| 2.8470 | 2939 | 0.0149 |
| 2.8480 | 2940 | 0.028 |
| 2.8490 | 2941 | 0.0204 |
| 2.8500 | 2942 | 0.024 |
| 2.8509 | 2943 | 0.0206 |
| 2.8519 | 2944 | 0.0177 |
| 2.8529 | 2945 | 0.0241 |
| 2.8538 | 2946 | 0.0142 |
| 2.8548 | 2947 | 0.019 |
| 2.8558 | 2948 | 0.015 |
| 2.8567 | 2949 | 0.0235 |
| 2.8577 | 2950 | 0.024 |
| 2.8587 | 2951 | 0.0286 |
| 2.8596 | 2952 | 0.0156 |
| 2.8606 | 2953 | 0.0205 |
| 2.8616 | 2954 | 0.0445 |
| 2.8625 | 2955 | 0.0139 |
| 2.8635 | 2956 | 0.0279 |
| 2.8645 | 2957 | 0.0277 |
| 2.8654 | 2958 | 0.0152 |
| 2.8664 | 2959 | 0.0179 |
| 2.8674 | 2960 | 0.0237 |
| 2.8683 | 2961 | 0.0162 |
| 2.8693 | 2962 | 0.0253 |
| 2.8703 | 2963 | 0.0206 |
| 2.8712 | 2964 | 0.0279 |
| 2.8722 | 2965 | 0.0232 |
| 2.8732 | 2966 | 0.0354 |
| 2.8742 | 2967 | 0.0205 |
| 2.8751 | 2968 | 0.0265 |
| 2.8761 | 2969 | 0.0322 |
| 2.8771 | 2970 | 0.0244 |
| 2.8780 | 2971 | 0.0166 |
| 2.8790 | 2972 | 0.0209 |
| 2.8800 | 2973 | 0.0149 |
| 2.8809 | 2974 | 0.0117 |
| 2.8819 | 2975 | 0.0162 |
| 2.8829 | 2976 | 0.0273 |
| 2.8838 | 2977 | 0.0402 |
| 2.8848 | 2978 | 0.0138 |
| 2.8858 | 2979 | 0.025 |
| 2.8867 | 2980 | 0.0133 |
| 2.8877 | 2981 | 0.039 |
| 2.8887 | 2982 | 0.0226 |
| 2.8896 | 2983 | 0.0148 |
| 2.8906 | 2984 | 0.0244 |
| 2.8916 | 2985 | 0.0198 |
| 2.8925 | 2986 | 0.0166 |
| 2.8935 | 2987 | 0.0162 |
| 2.8945 | 2988 | 0.0161 |
| 2.8955 | 2989 | 0.0326 |
| 2.8964 | 2990 | 0.024 |
| 2.8974 | 2991 | 0.0286 |
| 2.8984 | 2992 | 0.019 |
| 2.8993 | 2993 | 0.0252 |
| 2.9003 | 2994 | 0.026 |
| 2.9013 | 2995 | 0.0195 |
| 2.9022 | 2996 | 0.013 |
| 2.9032 | 2997 | 0.03 |
| 2.9042 | 2998 | 0.0208 |
| 2.9051 | 2999 | 0.0258 |
| 2.9061 | 3000 | 0.0195 |
| 2.9071 | 3001 | 0.0446 |
| 2.9080 | 3002 | 0.0249 |
| 2.9090 | 3003 | 0.0322 |
| 2.9100 | 3004 | 0.0209 |
| 2.9109 | 3005 | 0.0394 |
| 2.9119 | 3006 | 0.041 |
| 2.9129 | 3007 | 0.0235 |
| 2.9138 | 3008 | 0.0139 |
| 2.9148 | 3009 | 0.0147 |
| 2.9158 | 3010 | 0.0126 |
| 2.9167 | 3011 | 0.0254 |
| 2.9177 | 3012 | 0.0097 |
| 2.9187 | 3013 | 0.0234 |
| 2.9197 | 3014 | 0.0118 |
| 2.9206 | 3015 | 0.0154 |
| 2.9216 | 3016 | 0.0209 |
| 2.9226 | 3017 | 0.0332 |
| 2.9235 | 3018 | 0.0289 |
| 2.9245 | 3019 | 0.0151 |
| 2.9255 | 3020 | 0.0205 |
| 2.9264 | 3021 | 0.0323 |
| 2.9274 | 3022 | 0.021 |
| 2.9284 | 3023 | 0.0114 |
| 2.9293 | 3024 | 0.0312 |
| 2.9303 | 3025 | 0.0186 |
| 2.9313 | 3026 | 0.0243 |
| 2.9322 | 3027 | 0.0261 |
| 2.9332 | 3028 | 0.0203 |
| 2.9342 | 3029 | 0.0323 |
| 2.9351 | 3030 | 0.0466 |
| 2.9361 | 3031 | 0.0217 |
| 2.9371 | 3032 | 0.0193 |
| 2.9380 | 3033 | 0.0233 |
| 2.9390 | 3034 | 0.021 |
| 2.9400 | 3035 | 0.0221 |
| 2.9409 | 3036 | 0.0203 |
| 2.9419 | 3037 | 0.0315 |
| 2.9429 | 3038 | 0.0366 |
| 2.9439 | 3039 | 0.0177 |
| 2.9448 | 3040 | 0.0159 |
| 2.9458 | 3041 | 0.0194 |
| 2.9468 | 3042 | 0.0287 |
| 2.9477 | 3043 | 0.0188 |
| 2.9487 | 3044 | 0.0155 |
| 2.9497 | 3045 | 0.0206 |
| 2.9506 | 3046 | 0.0111 |
| 2.9516 | 3047 | 0.0109 |
| 2.9526 | 3048 | 0.0261 |
| 2.9535 | 3049 | 0.0274 |
| 2.9545 | 3050 | 0.03 |
| 2.9555 | 3051 | 0.0191 |
| 2.9564 | 3052 | 0.0463 |
| 2.9574 | 3053 | 0.0417 |
| 2.9584 | 3054 | 0.0369 |
| 2.9593 | 3055 | 0.0263 |
| 2.9603 | 3056 | 0.0195 |
| 2.9613 | 3057 | 0.0201 |
| 2.9622 | 3058 | 0.0159 |
| 2.9632 | 3059 | 0.0193 |
| 2.9642 | 3060 | 0.0279 |
| 2.9652 | 3061 | 0.0187 |
| 2.9661 | 3062 | 0.0367 |
| 2.9671 | 3063 | 0.012 |
| 2.9681 | 3064 | 0.0208 |
| 2.9690 | 3065 | 0.0314 |
| 2.9700 | 3066 | 0.0197 |
| 2.9710 | 3067 | 0.0137 |
| 2.9719 | 3068 | 0.0119 |
| 2.9729 | 3069 | 0.0312 |
| 2.9739 | 3070 | 0.0329 |
| 2.9748 | 3071 | 0.0161 |
| 2.9758 | 3072 | 0.0195 |
| 2.9768 | 3073 | 0.0331 |
| 2.9777 | 3074 | 0.0289 |
| 2.9787 | 3075 | 0.0179 |
| 2.9797 | 3076 | 0.0215 |
| 2.9806 | 3077 | 0.0307 |
| 2.9816 | 3078 | 0.0296 |
| 2.9826 | 3079 | 0.0159 |
| 2.9835 | 3080 | 0.0243 |
| 2.9845 | 3081 | 0.0226 |
| 2.9855 | 3082 | 0.0199 |
| 2.9864 | 3083 | 0.013 |
| 2.9874 | 3084 | 0.0129 |
| 2.9884 | 3085 | 0.0301 |
| 2.9894 | 3086 | 0.0251 |
| 2.9903 | 3087 | 0.0127 |
| 2.9913 | 3088 | 0.0247 |
| 2.9923 | 3089 | 0.0344 |
| 2.9932 | 3090 | 0.0307 |
| 2.9942 | 3091 | 0.0243 |
| 2.9952 | 3092 | 0.0222 |
| 2.9961 | 3093 | 0.0227 |
| 2.9971 | 3094 | 0.0287 |
| 2.9981 | 3095 | 0.0315 |
| 2.9990 | 3096 | 0.0348 |
| 3.0010 | 3097 | 0.0447 |
| 3.0019 | 3098 | 0.0214 |
| 3.0029 | 3099 | 0.0243 |
| 3.0039 | 3100 | 0.0113 |
| 3.0048 | 3101 | 0.0428 |
| 3.0058 | 3102 | 0.0275 |
| 3.0068 | 3103 | 0.022 |
| 3.0077 | 3104 | 0.0402 |
| 3.0087 | 3105 | 0.035 |
| 3.0097 | 3106 | 0.0248 |
| 3.0106 | 3107 | 0.0193 |
| 3.0116 | 3108 | 0.0255 |
| 3.0126 | 3109 | 0.047 |
| 3.0136 | 3110 | 0.0275 |
| 3.0145 | 3111 | 0.0223 |
| 3.0155 | 3112 | 0.0147 |
| 3.0165 | 3113 | 0.0554 |
| 3.0174 | 3114 | 0.0187 |
| 3.0184 | 3115 | 0.0259 |
| 3.0194 | 3116 | 0.0191 |
| 3.0203 | 3117 | 0.0658 |
| 3.0213 | 3118 | 0.023 |
| 3.0223 | 3119 | 0.0195 |
| 3.0232 | 3120 | 0.0151 |
| 3.0242 | 3121 | 0.0246 |
| 3.0252 | 3122 | 0.0196 |
| 3.0261 | 3123 | 0.0126 |
| 3.0271 | 3124 | 0.0101 |
| 3.0281 | 3125 | 0.0273 |
| 3.0290 | 3126 | 0.0123 |
| 3.0300 | 3127 | 0.0141 |
| 3.0310 | 3128 | 0.0129 |
| 3.0319 | 3129 | 0.0413 |
| 3.0329 | 3130 | 0.0144 |
| 3.0339 | 3131 | 0.0105 |
| 3.0348 | 3132 | 0.0157 |
| 3.0358 | 3133 | 0.0308 |
| 3.0368 | 3134 | 0.0259 |
| 3.0378 | 3135 | 0.011 |
| 3.0387 | 3136 | 0.0093 |
| 3.0397 | 3137 | 0.0257 |
| 3.0407 | 3138 | 0.0165 |
| 3.0416 | 3139 | 0.0182 |
| 3.0426 | 3140 | 0.016 |
| 3.0436 | 3141 | 0.0379 |
| 3.0445 | 3142 | 0.0135 |
| 3.0455 | 3143 | 0.0255 |
| 3.0465 | 3144 | 0.0129 |
| 3.0474 | 3145 | 0.0364 |
| 3.0484 | 3146 | 0.0185 |
| 3.0494 | 3147 | 0.0175 |
| 3.0503 | 3148 | 0.0145 |
| 3.0513 | 3149 | 0.0541 |
| 3.0523 | 3150 | 0.0187 |
| 3.0532 | 3151 | 0.0149 |
| 3.0542 | 3152 | 0.0138 |
| 3.0552 | 3153 | 0.0262 |
| 3.0561 | 3154 | 0.0223 |
| 3.0571 | 3155 | 0.0115 |
| 3.0581 | 3156 | 0.0122 |
| 3.0591 | 3157 | 0.0332 |
| 3.0600 | 3158 | 0.0241 |
| 3.0610 | 3159 | 0.0328 |
| 3.0620 | 3160 | 0.0293 |
| 3.0629 | 3161 | 0.0463 |
| 3.0639 | 3162 | 0.0289 |
| 3.0649 | 3163 | 0.0393 |
| 3.0658 | 3164 | 0.0114 |
| 3.0668 | 3165 | 0.0418 |
| 3.0678 | 3166 | 0.0244 |
| 3.0687 | 3167 | 0.0113 |
| 3.0697 | 3168 | 0.0123 |
| 3.0707 | 3169 | 0.0316 |
| 3.0716 | 3170 | 0.0139 |
| 3.0726 | 3171 | 0.0169 |
| 3.0736 | 3172 | 0.0101 |
| 3.0745 | 3173 | 0.0246 |
| 3.0755 | 3174 | 0.0261 |
| 3.0765 | 3175 | 0.0145 |
| 3.0774 | 3176 | 0.013 |
| 3.0784 | 3177 | 0.0216 |
| 3.0794 | 3178 | 0.0231 |
| 3.0803 | 3179 | 0.0142 |
| 3.0813 | 3180 | 0.0124 |
| 3.0823 | 3181 | 0.0352 |
| 3.0833 | 3182 | 0.0187 |
| 3.0842 | 3183 | 0.0189 |
| 3.0852 | 3184 | 0.0189 |
| 3.0862 | 3185 | 0.0152 |
| 3.0871 | 3186 | 0.0096 |
| 3.0881 | 3187 | 0.0097 |
| 3.0891 | 3188 | 0.0099 |
| 3.0900 | 3189 | 0.0302 |
| 3.0910 | 3190 | 0.0264 |
| 3.0920 | 3191 | 0.0114 |
| 3.0929 | 3192 | 0.0098 |
| 3.0939 | 3193 | 0.0409 |
| 3.0949 | 3194 | 0.0219 |
| 3.0958 | 3195 | 0.0207 |
| 3.0968 | 3196 | 0.0243 |
| 3.0978 | 3197 | 0.0411 |
| 3.0987 | 3198 | 0.0149 |
| 3.0997 | 3199 | 0.0159 |
| 3.1007 | 3200 | 0.0099 |
| 3.1016 | 3201 | 0.03 |
| 3.1026 | 3202 | 0.0297 |
| 3.1036 | 3203 | 0.023 |
| 3.1045 | 3204 | 0.0264 |
| 3.1055 | 3205 | 0.021 |
| 3.1065 | 3206 | 0.0212 |
| 3.1075 | 3207 | 0.0121 |
| 3.1084 | 3208 | 0.0141 |
| 3.1094 | 3209 | 0.028 |
| 3.1104 | 3210 | 0.0227 |
| 3.1113 | 3211 | 0.0144 |
| 3.1123 | 3212 | 0.024 |
| 3.1133 | 3213 | 0.0341 |
| 3.1142 | 3214 | 0.0146 |
| 3.1152 | 3215 | 0.0113 |
| 3.1162 | 3216 | 0.0129 |
| 3.1171 | 3217 | 0.0271 |
| 3.1181 | 3218 | 0.0178 |
| 3.1191 | 3219 | 0.0195 |
| 3.1200 | 3220 | 0.0147 |
| 3.1210 | 3221 | 0.0403 |
| 3.1220 | 3222 | 0.0416 |
| 3.1229 | 3223 | 0.0113 |
| 3.1239 | 3224 | 0.0122 |
| 3.1249 | 3225 | 0.0385 |
| 3.1258 | 3226 | 0.0091 |
| 3.1268 | 3227 | 0.0119 |
| 3.1278 | 3228 | 0.0154 |
| 3.1288 | 3229 | 0.0432 |
| 3.1297 | 3230 | 0.0223 |
| 3.1307 | 3231 | 0.0301 |
| 3.1317 | 3232 | 0.0247 |
| 3.1326 | 3233 | 0.0266 |
| 3.1336 | 3234 | 0.0184 |
| 3.1346 | 3235 | 0.0085 |
| 3.1355 | 3236 | 0.0222 |
| 3.1365 | 3237 | 0.032 |
| 3.1375 | 3238 | 0.0356 |
| 3.1384 | 3239 | 0.0179 |
| 3.1394 | 3240 | 0.016 |
| 3.1404 | 3241 | 0.0405 |
| 3.1413 | 3242 | 0.0217 |
| 3.1423 | 3243 | 0.0145 |
| 3.1433 | 3244 | 0.0219 |
| 3.1442 | 3245 | 0.0285 |
| 3.1452 | 3246 | 0.0201 |
| 3.1462 | 3247 | 0.0162 |
| 3.1471 | 3248 | 0.013 |
| 3.1481 | 3249 | 0.0818 |
| 3.1491 | 3250 | 0.0269 |
| 3.1500 | 3251 | 0.0176 |
| 3.1510 | 3252 | 0.0262 |
| 3.1520 | 3253 | 0.0313 |
| 3.1530 | 3254 | 0.0199 |
| 3.1539 | 3255 | 0.0133 |
| 3.1549 | 3256 | 0.0132 |
| 3.1559 | 3257 | 0.0334 |
| 3.1568 | 3258 | 0.0186 |
| 3.1578 | 3259 | 0.0083 |
| 3.1588 | 3260 | 0.0135 |
| 3.1597 | 3261 | 0.0138 |
| 3.1607 | 3262 | 0.0113 |
| 3.1617 | 3263 | 0.023 |
| 3.1626 | 3264 | 0.0101 |
| 3.1636 | 3265 | 0.0271 |
| 3.1646 | 3266 | 0.0229 |
| 3.1655 | 3267 | 0.0153 |
| 3.1665 | 3268 | 0.0134 |
| 3.1675 | 3269 | 0.0348 |
| 3.1684 | 3270 | 0.0185 |
| 3.1694 | 3271 | 0.0182 |
| 3.1704 | 3272 | 0.0161 |
| 3.1713 | 3273 | 0.0287 |
| 3.1723 | 3274 | 0.0153 |
| 3.1733 | 3275 | 0.0109 |
| 3.1742 | 3276 | 0.0082 |
| 3.1752 | 3277 | 0.0383 |
| 3.1762 | 3278 | 0.0218 |
| 3.1772 | 3279 | 0.023 |
| 3.1781 | 3280 | 0.0144 |
| 3.1791 | 3281 | 0.0383 |
| 3.1801 | 3282 | 0.0246 |
| 3.1810 | 3283 | 0.0222 |
| 3.1820 | 3284 | 0.0142 |
| 3.1830 | 3285 | 0.0254 |
| 3.1839 | 3286 | 0.0216 |
| 3.1849 | 3287 | 0.014 |
| 3.1859 | 3288 | 0.0134 |
| 3.1868 | 3289 | 0.0549 |
| 3.1878 | 3290 | 0.0257 |
| 3.1888 | 3291 | 0.0127 |
| 3.1897 | 3292 | 0.0297 |
| 3.1907 | 3293 | 0.0375 |
| 3.1917 | 3294 | 0.0158 |
| 3.1926 | 3295 | 0.0098 |
| 3.1936 | 3296 | 0.0102 |
| 3.1946 | 3297 | 0.04 |
| 3.1955 | 3298 | 0.0141 |
| 3.1965 | 3299 | 0.0124 |
| 3.1975 | 3300 | 0.0104 |
| 3.1985 | 3301 | 0.0356 |
| 3.1994 | 3302 | 0.0135 |
| 3.2004 | 3303 | 0.0116 |
| 3.2014 | 3304 | 0.0121 |
| 3.2023 | 3305 | 0.0288 |
| 3.2033 | 3306 | 0.0186 |
| 3.2043 | 3307 | 0.0087 |
| 3.2052 | 3308 | 0.009 |
| 3.2062 | 3309 | 0.0238 |
| 3.2072 | 3310 | 0.026 |
| 3.2081 | 3311 | 0.0124 |
| 3.2091 | 3312 | 0.0087 |
| 3.2101 | 3313 | 0.0243 |
| 3.2110 | 3314 | 0.0173 |
| 3.2120 | 3315 | 0.0267 |
| 3.2130 | 3316 | 0.018 |
| 3.2139 | 3317 | 0.0154 |
| 3.2149 | 3318 | 0.0133 |
| 3.2159 | 3319 | 0.0158 |
| 3.2168 | 3320 | 0.021 |
| 3.2178 | 3321 | 0.0257 |
| 3.2188 | 3322 | 0.0249 |
| 3.2197 | 3323 | 0.0118 |
| 3.2207 | 3324 | 0.0124 |
| 3.2217 | 3325 | 0.0243 |
| 3.2227 | 3326 | 0.0145 |
| 3.2236 | 3327 | 0.0093 |
| 3.2246 | 3328 | 0.0115 |
| 3.2256 | 3329 | 0.0317 |
| 3.2265 | 3330 | 0.0451 |
| 3.2275 | 3331 | 0.0139 |
| 3.2285 | 3332 | 0.0149 |
| 3.2294 | 3333 | 0.029 |
| 3.2304 | 3334 | 0.0281 |
| 3.2314 | 3335 | 0.0111 |
| 3.2323 | 3336 | 0.0091 |
| 3.2333 | 3337 | 0.0518 |
| 3.2343 | 3338 | 0.0287 |
| 3.2352 | 3339 | 0.019 |
| 3.2362 | 3340 | 0.0126 |
| 3.2372 | 3341 | 0.0315 |
| 3.2381 | 3342 | 0.0256 |
| 3.2391 | 3343 | 0.0115 |
| 3.2401 | 3344 | 0.016 |
| 3.2410 | 3345 | 0.024 |
| 3.2420 | 3346 | 0.0176 |
| 3.2430 | 3347 | 0.0126 |
| 3.2439 | 3348 | 0.0154 |
| 3.2449 | 3349 | 0.0184 |
| 3.2459 | 3350 | 0.0193 |
| 3.2469 | 3351 | 0.0127 |
| 3.2478 | 3352 | 0.0086 |
| 3.2488 | 3353 | 0.0337 |
| 3.2498 | 3354 | 0.0241 |
| 3.2507 | 3355 | 0.0148 |
| 3.2517 | 3356 | 0.013 |
| 3.2527 | 3357 | 0.0255 |
| 3.2536 | 3358 | 0.0101 |
| 3.2546 | 3359 | 0.0088 |
| 3.2556 | 3360 | 0.0122 |
| 3.2565 | 3361 | 0.0168 |
| 3.2575 | 3362 | 0.0123 |
| 3.2585 | 3363 | 0.0086 |
| 3.2594 | 3364 | 0.014 |
| 3.2604 | 3365 | 0.0222 |
| 3.2614 | 3366 | 0.0108 |
| 3.2623 | 3367 | 0.0114 |
| 3.2633 | 3368 | 0.0091 |
| 3.2643 | 3369 | 0.0129 |
| 3.2652 | 3370 | 0.0215 |
| 3.2662 | 3371 | 0.0065 |
| 3.2672 | 3372 | 0.0066 |
| 3.2682 | 3373 | 0.0464 |
| 3.2691 | 3374 | 0.014 |
| 3.2701 | 3375 | 0.0134 |
| 3.2711 | 3376 | 0.0151 |
| 3.2720 | 3377 | 0.0183 |
| 3.2730 | 3378 | 0.0125 |
| 3.2740 | 3379 | 0.013 |
| 3.2749 | 3380 | 0.0098 |
| 3.2759 | 3381 | 0.0227 |
| 3.2769 | 3382 | 0.0161 |
| 3.2778 | 3383 | 0.0109 |
| 3.2788 | 3384 | 0.0093 |
| 3.2798 | 3385 | 0.021 |
| 3.2807 | 3386 | 0.0065 |
| 3.2817 | 3387 | 0.0071 |
| 3.2827 | 3388 | 0.005 |
| 3.2836 | 3389 | 0.0315 |
| 3.2846 | 3390 | 0.0167 |
| 3.2856 | 3391 | 0.0132 |
| 3.2865 | 3392 | 0.0139 |
| 3.2875 | 3393 | 0.0276 |
| 3.2885 | 3394 | 0.0105 |
| 3.2894 | 3395 | 0.0112 |
| 3.2904 | 3396 | 0.0123 |
| 3.2914 | 3397 | 0.0319 |
| 3.2924 | 3398 | 0.015 |
| 3.2933 | 3399 | 0.0105 |
| 3.2943 | 3400 | 0.0096 |
| 3.2953 | 3401 | 0.0237 |
| 3.2962 | 3402 | 0.0132 |
| 3.2972 | 3403 | 0.0074 |
| 3.2982 | 3404 | 0.0078 |
| 3.2991 | 3405 | 0.0408 |
| 3.3001 | 3406 | 0.0149 |
| 3.3011 | 3407 | 0.0134 |
| 3.3020 | 3408 | 0.0099 |
| 3.3030 | 3409 | 0.0158 |
| 3.3040 | 3410 | 0.0124 |
| 3.3049 | 3411 | 0.0138 |
| 3.3059 | 3412 | 0.0151 |
| 3.3069 | 3413 | 0.0138 |
| 3.3078 | 3414 | 0.0135 |
| 3.3088 | 3415 | 0.0086 |
| 3.3098 | 3416 | 0.0134 |
| 3.3107 | 3417 | 0.0359 |
| 3.3117 | 3418 | 0.0117 |
| 3.3127 | 3419 | 0.0132 |
| 3.3136 | 3420 | 0.0116 |
| 3.3146 | 3421 | 0.015 |
| 3.3156 | 3422 | 0.0103 |
| 3.3166 | 3423 | 0.0139 |
| 3.3175 | 3424 | 0.0123 |
| 3.3185 | 3425 | 0.0152 |
| 3.3195 | 3426 | 0.0094 |
| 3.3204 | 3427 | 0.0155 |
| 3.3214 | 3428 | 0.0119 |
| 3.3224 | 3429 | 0.0292 |
| 3.3233 | 3430 | 0.0251 |
| 3.3243 | 3431 | 0.0126 |
| 3.3253 | 3432 | 0.0111 |
| 3.3262 | 3433 | 0.0274 |
| 3.3272 | 3434 | 0.0147 |
| 3.3282 | 3435 | 0.0141 |
| 3.3291 | 3436 | 0.0121 |
| 3.3301 | 3437 | 0.0375 |
| 3.3311 | 3438 | 0.0131 |
| 3.3320 | 3439 | 0.0135 |
| 3.3330 | 3440 | 0.0139 |
| 3.3340 | 3441 | 0.0141 |
| 3.3349 | 3442 | 0.0121 |
| 3.3359 | 3443 | 0.0084 |
| 3.3369 | 3444 | 0.0105 |
| 3.3379 | 3445 | 0.0456 |
| 3.3388 | 3446 | 0.0136 |
| 3.3398 | 3447 | 0.0098 |
| 3.3408 | 3448 | 0.0085 |
| 3.3417 | 3449 | 0.0281 |
| 3.3427 | 3450 | 0.0108 |
| 3.3437 | 3451 | 0.02 |
| 3.3446 | 3452 | 0.0098 |
| 3.3456 | 3453 | 0.0218 |
| 3.3466 | 3454 | 0.0123 |
| 3.3475 | 3455 | 0.0114 |
| 3.3485 | 3456 | 0.0089 |
| 3.3495 | 3457 | 0.0261 |
| 3.3504 | 3458 | 0.0184 |
| 3.3514 | 3459 | 0.0112 |
| 3.3524 | 3460 | 0.0148 |
| 3.3533 | 3461 | 0.0394 |
| 3.3543 | 3462 | 0.0222 |
| 3.3553 | 3463 | 0.0121 |
| 3.3562 | 3464 | 0.0149 |
| 3.3572 | 3465 | 0.0176 |
| 3.3582 | 3466 | 0.0086 |
| 3.3591 | 3467 | 0.0111 |
| 3.3601 | 3468 | 0.0079 |
| 3.3611 | 3469 | 0.0272 |
| 3.3621 | 3470 | 0.0126 |
| 3.3630 | 3471 | 0.0098 |
| 3.3640 | 3472 | 0.0134 |
| 3.3650 | 3473 | 0.0248 |
| 3.3659 | 3474 | 0.0156 |
| 3.3669 | 3475 | 0.0099 |
| 3.3679 | 3476 | 0.0118 |
| 3.3688 | 3477 | 0.0218 |
| 3.3698 | 3478 | 0.0145 |
| 3.3708 | 3479 | 0.0146 |
| 3.3717 | 3480 | 0.0116 |
| 3.3727 | 3481 | 0.0225 |
| 3.3737 | 3482 | 0.012 |
| 3.3746 | 3483 | 0.0078 |
| 3.3756 | 3484 | 0.0178 |
| 3.3766 | 3485 | 0.0207 |
| 3.3775 | 3486 | 0.0149 |
| 3.3785 | 3487 | 0.01 |
| 3.3795 | 3488 | 0.0071 |
| 3.3804 | 3489 | 0.017 |
| 3.3814 | 3490 | 0.014 |
| 3.3824 | 3491 | 0.0103 |
| 3.3833 | 3492 | 0.0095 |
| 3.3843 | 3493 | 0.0105 |
| 3.3853 | 3494 | 0.0115 |
| 3.3863 | 3495 | 0.0114 |
| 3.3872 | 3496 | 0.0082 |
| 3.3882 | 3497 | 0.0181 |
| 3.3892 | 3498 | 0.013 |
| 3.3901 | 3499 | 0.0063 |
| 3.3911 | 3500 | 0.0125 |
| 3.3921 | 3501 | 0.0283 |
| 3.3930 | 3502 | 0.0277 |
| 3.3940 | 3503 | 0.0243 |
| 3.3950 | 3504 | 0.0154 |
| 3.3959 | 3505 | 0.0299 |
| 3.3969 | 3506 | 0.0159 |
| 3.3979 | 3507 | 0.0173 |
| 3.3988 | 3508 | 0.0074 |
| 3.3998 | 3509 | 0.0305 |
| 3.4008 | 3510 | 0.015 |
| 3.4017 | 3511 | 0.0129 |
| 3.4027 | 3512 | 0.0087 |
| 3.4037 | 3513 | 0.0178 |
| 3.4046 | 3514 | 0.0084 |
| 3.4056 | 3515 | 0.0159 |
| 3.4066 | 3516 | 0.0123 |
| 3.4076 | 3517 | 0.0159 |
| 3.4085 | 3518 | 0.0115 |
| 3.4095 | 3519 | 0.009 |
| 3.4105 | 3520 | 0.0104 |
| 3.4114 | 3521 | 0.0264 |
| 3.4124 | 3522 | 0.0179 |
| 3.4134 | 3523 | 0.0118 |
| 3.4143 | 3524 | 0.019 |
| 3.4153 | 3525 | 0.0225 |
| 3.4163 | 3526 | 0.0105 |
| 3.4172 | 3527 | 0.0133 |
| 3.4182 | 3528 | 0.01 |
| 3.4192 | 3529 | 0.0211 |
| 3.4201 | 3530 | 0.0203 |
| 3.4211 | 3531 | 0.0157 |
| 3.4221 | 3532 | 0.0122 |
| 3.4230 | 3533 | 0.0205 |
| 3.4240 | 3534 | 0.0202 |
| 3.4250 | 3535 | 0.013 |
| 3.4259 | 3536 | 0.0113 |
| 3.4269 | 3537 | 0.0226 |
| 3.4279 | 3538 | 0.0125 |
| 3.4288 | 3539 | 0.009 |
| 3.4298 | 3540 | 0.0109 |
| 3.4308 | 3541 | 0.0204 |
| 3.4318 | 3542 | 0.0124 |
| 3.4327 | 3543 | 0.0131 |
| 3.4337 | 3544 | 0.0145 |
| 3.4347 | 3545 | 0.0202 |
| 3.4356 | 3546 | 0.0335 |
| 3.4366 | 3547 | 0.0094 |
| 3.4376 | 3548 | 0.0114 |
| 3.4385 | 3549 | 0.0183 |
| 3.4395 | 3550 | 0.0172 |
| 3.4405 | 3551 | 0.0107 |
| 3.4414 | 3552 | 0.0125 |
| 3.4424 | 3553 | 0.0262 |
| 3.4434 | 3554 | 0.0147 |
| 3.4443 | 3555 | 0.0095 |
| 3.4453 | 3556 | 0.0131 |
| 3.4463 | 3557 | 0.0232 |
| 3.4472 | 3558 | 0.0197 |
| 3.4482 | 3559 | 0.01 |
| 3.4492 | 3560 | 0.0104 |
| 3.4501 | 3561 | 0.0435 |
| 3.4511 | 3562 | 0.0101 |
| 3.4521 | 3563 | 0.0086 |
| 3.4530 | 3564 | 0.012 |
| 3.4540 | 3565 | 0.0268 |
| 3.4550 | 3566 | 0.0118 |
| 3.4560 | 3567 | 0.0163 |
| 3.4569 | 3568 | 0.0143 |
| 3.4579 | 3569 | 0.0282 |
| 3.4589 | 3570 | 0.0189 |
| 3.4598 | 3571 | 0.0174 |
| 3.4608 | 3572 | 0.0147 |
| 3.4618 | 3573 | 0.0259 |
| 3.4627 | 3574 | 0.0122 |
| 3.4637 | 3575 | 0.0158 |
| 3.4647 | 3576 | 0.0305 |
| 3.4656 | 3577 | 0.02 |
| 3.4666 | 3578 | 0.0122 |
| 3.4676 | 3579 | 0.0108 |
| 3.4685 | 3580 | 0.0118 |
| 3.4695 | 3581 | 0.0157 |
| 3.4705 | 3582 | 0.013 |
| 3.4714 | 3583 | 0.0145 |
| 3.4724 | 3584 | 0.0123 |
| 3.4734 | 3585 | 0.0133 |
| 3.4743 | 3586 | 0.0127 |
| 3.4753 | 3587 | 0.0082 |
| 3.4763 | 3588 | 0.008 |
| 3.4773 | 3589 | 0.0162 |
| 3.4782 | 3590 | 0.0105 |
| 3.4792 | 3591 | 0.011 |
| 3.4802 | 3592 | 0.0094 |
| 3.4811 | 3593 | 0.0139 |
| 3.4821 | 3594 | 0.0092 |
| 3.4831 | 3595 | 0.0085 |
| 3.4840 | 3596 | 0.0144 |
| 3.4850 | 3597 | 0.0214 |
| 3.4860 | 3598 | 0.0145 |
| 3.4869 | 3599 | 0.0086 |
| 3.4879 | 3600 | 0.0084 |
| 3.4889 | 3601 | 0.0191 |
| 3.4898 | 3602 | 0.0149 |
| 3.4908 | 3603 | 0.0107 |
| 3.4918 | 3604 | 0.0093 |
| 3.4927 | 3605 | 0.0247 |
| 3.4937 | 3606 | 0.0187 |
| 3.4947 | 3607 | 0.0161 |
| 3.4956 | 3608 | 0.01 |
| 3.4966 | 3609 | 0.0226 |
| 3.4976 | 3610 | 0.0107 |
| 3.4985 | 3611 | 0.0074 |
| 3.4995 | 3612 | 0.0079 |
| 3.5005 | 3613 | 0.0215 |
| 3.5015 | 3614 | 0.0074 |
| 3.5024 | 3615 | 0.011 |
| 3.5034 | 3616 | 0.0084 |
| 3.5044 | 3617 | 0.0145 |
| 3.5053 | 3618 | 0.0144 |
| 3.5063 | 3619 | 0.0146 |
| 3.5073 | 3620 | 0.0195 |
| 3.5082 | 3621 | 0.0234 |
| 3.5092 | 3622 | 0.0128 |
| 3.5102 | 3623 | 0.0077 |
| 3.5111 | 3624 | 0.0127 |
| 3.5121 | 3625 | 0.0129 |
| 3.5131 | 3626 | 0.0136 |
| 3.5140 | 3627 | 0.0094 |
| 3.5150 | 3628 | 0.0158 |
| 3.5160 | 3629 | 0.0223 |
| 3.5169 | 3630 | 0.0121 |
| 3.5179 | 3631 | 0.0095 |
| 3.5189 | 3632 | 0.0103 |
| 3.5198 | 3633 | 0.0207 |
| 3.5208 | 3634 | 0.0129 |
| 3.5218 | 3635 | 0.0088 |
| 3.5227 | 3636 | 0.0119 |
| 3.5237 | 3637 | 0.0219 |
| 3.5247 | 3638 | 0.0119 |
| 3.5257 | 3639 | 0.0122 |
| 3.5266 | 3640 | 0.009 |
| 3.5276 | 3641 | 0.0221 |
| 3.5286 | 3642 | 0.0103 |
| 3.5295 | 3643 | 0.0158 |
| 3.5305 | 3644 | 0.01 |
| 3.5315 | 3645 | 0.0323 |
| 3.5324 | 3646 | 0.018 |
| 3.5334 | 3647 | 0.0149 |
| 3.5344 | 3648 | 0.013 |
| 3.5353 | 3649 | 0.0162 |
| 3.5363 | 3650 | 0.0213 |
| 3.5373 | 3651 | 0.0103 |
| 3.5382 | 3652 | 0.014 |
| 3.5392 | 3653 | 0.0167 |
| 3.5402 | 3654 | 0.0243 |
| 3.5411 | 3655 | 0.0093 |
| 3.5421 | 3656 | 0.0083 |
| 3.5431 | 3657 | 0.0191 |
| 3.5440 | 3658 | 0.0229 |
| 3.5450 | 3659 | 0.0208 |
| 3.5460 | 3660 | 0.0159 |
| 3.5470 | 3661 | 0.0176 |
| 3.5479 | 3662 | 0.0113 |
| 3.5489 | 3663 | 0.013 |
| 3.5499 | 3664 | 0.0106 |
| 3.5508 | 3665 | 0.0245 |
| 3.5518 | 3666 | 0.0103 |
| 3.5528 | 3667 | 0.0178 |
| 3.5537 | 3668 | 0.0158 |
| 3.5547 | 3669 | 0.0124 |
| 3.5557 | 3670 | 0.0231 |
| 3.5566 | 3671 | 0.0192 |
| 3.5576 | 3672 | 0.0144 |
| 3.5586 | 3673 | 0.0176 |
| 3.5595 | 3674 | 0.013 |
| 3.5605 | 3675 | 0.0114 |
| 3.5615 | 3676 | 0.0128 |
| 3.5624 | 3677 | 0.0121 |
| 3.5634 | 3678 | 0.013 |
| 3.5644 | 3679 | 0.0097 |
| 3.5653 | 3680 | 0.0138 |
| 3.5663 | 3681 | 0.0146 |
| 3.5673 | 3682 | 0.0261 |
| 3.5682 | 3683 | 0.019 |
| 3.5692 | 3684 | 0.063 |
| 3.5702 | 3685 | 0.0207 |
| 3.5712 | 3686 | 0.0141 |
| 3.5721 | 3687 | 0.0102 |
| 3.5731 | 3688 | 0.0143 |
| 3.5741 | 3689 | 0.0128 |
| 3.5750 | 3690 | 0.0067 |
| 3.5760 | 3691 | 0.0148 |
| 3.5770 | 3692 | 0.0086 |
| 3.5779 | 3693 | 0.0178 |
| 3.5789 | 3694 | 0.0135 |
| 3.5799 | 3695 | 0.0129 |
| 3.5808 | 3696 | 0.0148 |
| 3.5818 | 3697 | 0.0225 |
| 3.5828 | 3698 | 0.0188 |
| 3.5837 | 3699 | 0.0116 |
| 3.5847 | 3700 | 0.0114 |
| 3.5857 | 3701 | 0.0226 |
| 3.5866 | 3702 | 0.0173 |
| 3.5876 | 3703 | 0.012 |
| 3.5886 | 3704 | 0.011 |
| 3.5895 | 3705 | 0.0136 |
| 3.5905 | 3706 | 0.0172 |
| 3.5915 | 3707 | 0.0132 |
| 3.5924 | 3708 | 0.0168 |
| 3.5934 | 3709 | 0.0161 |
| 3.5944 | 3710 | 0.0112 |
| 3.5954 | 3711 | 0.0119 |
| 3.5963 | 3712 | 0.0126 |
| 3.5973 | 3713 | 0.0194 |
| 3.5983 | 3714 | 0.0196 |
| 3.5992 | 3715 | 0.0104 |
| 3.6002 | 3716 | 0.009 |
| 3.6012 | 3717 | 0.0205 |
| 3.6021 | 3718 | 0.0171 |
| 3.6031 | 3719 | 0.0288 |
| 3.6041 | 3720 | 0.0213 |
| 3.6050 | 3721 | 0.0135 |
| 3.6060 | 3722 | 0.0196 |
| 3.6070 | 3723 | 0.0098 |
| 3.6079 | 3724 | 0.0165 |
| 3.6089 | 3725 | 0.0201 |
| 3.6099 | 3726 | 0.0104 |
| 3.6108 | 3727 | 0.0195 |
| 3.6118 | 3728 | 0.0161 |
| 3.6128 | 3729 | 0.0291 |
| 3.6137 | 3730 | 0.0255 |
| 3.6147 | 3731 | 0.0237 |
| 3.6157 | 3732 | 0.012 |
| 3.6167 | 3733 | 0.0233 |
| 3.6176 | 3734 | 0.0172 |
| 3.6186 | 3735 | 0.0334 |
| 3.6196 | 3736 | 0.0086 |
| 3.6205 | 3737 | 0.0189 |
| 3.6215 | 3738 | 0.0189 |
| 3.6225 | 3739 | 0.0096 |
| 3.6234 | 3740 | 0.0134 |
| 3.6244 | 3741 | 0.0203 |
| 3.6254 | 3742 | 0.0167 |
| 3.6263 | 3743 | 0.0153 |
| 3.6273 | 3744 | 0.0155 |
| 3.6283 | 3745 | 0.0124 |
| 3.6292 | 3746 | 0.0108 |
| 3.6302 | 3747 | 0.0151 |
| 3.6312 | 3748 | 0.0213 |
| 3.6321 | 3749 | 0.0131 |
| 3.6331 | 3750 | 0.0165 |
| 3.6341 | 3751 | 0.011 |
| 3.6350 | 3752 | 0.009 |
| 3.6360 | 3753 | 0.0157 |
| 3.6370 | 3754 | 0.0164 |
| 3.6379 | 3755 | 0.0074 |
| 3.6389 | 3756 | 0.0107 |
| 3.6399 | 3757 | 0.0192 |
| 3.6409 | 3758 | 0.0165 |
| 3.6418 | 3759 | 0.0158 |
| 3.6428 | 3760 | 0.0141 |
| 3.6438 | 3761 | 0.014 |
| 3.6447 | 3762 | 0.0123 |
| 3.6457 | 3763 | 0.0108 |
| 3.6467 | 3764 | 0.0164 |
| 3.6476 | 3765 | 0.0382 |
| 3.6486 | 3766 | 0.0174 |
| 3.6496 | 3767 | 0.0092 |
| 3.6505 | 3768 | 0.0164 |
| 3.6515 | 3769 | 0.0195 |
| 3.6525 | 3770 | 0.0133 |
| 3.6534 | 3771 | 0.0097 |
| 3.6544 | 3772 | 0.0199 |
| 3.6554 | 3773 | 0.0249 |
| 3.6563 | 3774 | 0.0129 |
| 3.6573 | 3775 | 0.0178 |
| 3.6583 | 3776 | 0.0126 |
| 3.6592 | 3777 | 0.019 |
| 3.6602 | 3778 | 0.0124 |
| 3.6612 | 3779 | 0.0074 |
| 3.6621 | 3780 | 0.0082 |
| 3.6631 | 3781 | 0.032 |
| 3.6641 | 3782 | 0.0202 |
| 3.6651 | 3783 | 0.012 |
| 3.6660 | 3784 | 0.0199 |
| 3.6670 | 3785 | 0.014 |
| 3.6680 | 3786 | 0.0204 |
| 3.6689 | 3787 | 0.0124 |
| 3.6699 | 3788 | 0.0181 |
| 3.6709 | 3789 | 0.0126 |
| 3.6718 | 3790 | 0.0138 |
| 3.6728 | 3791 | 0.0143 |
| 3.6738 | 3792 | 0.0159 |
| 3.6747 | 3793 | 0.0117 |
| 3.6757 | 3794 | 0.0156 |
| 3.6767 | 3795 | 0.0073 |
| 3.6776 | 3796 | 0.0126 |
| 3.6786 | 3797 | 0.0105 |
| 3.6796 | 3798 | 0.0124 |
| 3.6805 | 3799 | 0.0153 |
| 3.6815 | 3800 | 0.0172 |
| 3.6825 | 3801 | 0.0187 |
| 3.6834 | 3802 | 0.0229 |
| 3.6844 | 3803 | 0.0111 |
| 3.6854 | 3804 | 0.0183 |
| 3.6864 | 3805 | 0.0204 |
| 3.6873 | 3806 | 0.0161 |
| 3.6883 | 3807 | 0.0115 |
| 3.6893 | 3808 | 0.0116 |
| 3.6902 | 3809 | 0.0343 |
| 3.6912 | 3810 | 0.0144 |
| 3.6922 | 3811 | 0.0084 |
| 3.6931 | 3812 | 0.0096 |
| 3.6941 | 3813 | 0.0247 |
| 3.6951 | 3814 | 0.0147 |
| 3.6960 | 3815 | 0.0106 |
| 3.6970 | 3816 | 0.0121 |
| 3.6980 | 3817 | 0.0197 |
| 3.6989 | 3818 | 0.009 |
| 3.6999 | 3819 | 0.0121 |
| 3.7009 | 3820 | 0.0146 |
| 3.7018 | 3821 | 0.0227 |
| 3.7028 | 3822 | 0.007 |
| 3.7038 | 3823 | 0.0095 |
| 3.7047 | 3824 | 0.0207 |
| 3.7057 | 3825 | 0.0154 |
| 3.7067 | 3826 | 0.0239 |
| 3.7076 | 3827 | 0.0121 |
| 3.7086 | 3828 | 0.0303 |
| 3.7096 | 3829 | 0.0215 |
| 3.7106 | 3830 | 0.0249 |
| 3.7115 | 3831 | 0.0112 |
| 3.7125 | 3832 | 0.0182 |
| 3.7135 | 3833 | 0.0207 |
| 3.7144 | 3834 | 0.015 |
| 3.7154 | 3835 | 0.0158 |
| 3.7164 | 3836 | 0.0124 |
| 3.7173 | 3837 | 0.0401 |
| 3.7183 | 3838 | 0.0275 |
| 3.7193 | 3839 | 0.0121 |
| 3.7202 | 3840 | 0.0137 |
| 3.7212 | 3841 | 0.0152 |
| 3.7222 | 3842 | 0.0181 |
| 3.7231 | 3843 | 0.0135 |
| 3.7241 | 3844 | 0.0183 |
| 3.7251 | 3845 | 0.0316 |
| 3.7260 | 3846 | 0.016 |
| 3.7270 | 3847 | 0.0116 |
| 3.7280 | 3848 | 0.0101 |
| 3.7289 | 3849 | 0.0143 |
| 3.7299 | 3850 | 0.0166 |
| 3.7309 | 3851 | 0.0103 |
| 3.7318 | 3852 | 0.0082 |
| 3.7328 | 3853 | 0.0188 |
| 3.7338 | 3854 | 0.0199 |
| 3.7348 | 3855 | 0.0196 |
| 3.7357 | 3856 | 0.0132 |
| 3.7367 | 3857 | 0.0457 |
| 3.7377 | 3858 | 0.0092 |
| 3.7386 | 3859 | 0.0117 |
| 3.7396 | 3860 | 0.0226 |
| 3.7406 | 3861 | 0.0136 |
| 3.7415 | 3862 | 0.0133 |
| 3.7425 | 3863 | 0.016 |
| 3.7435 | 3864 | 0.0108 |
| 3.7444 | 3865 | 0.0129 |
| 3.7454 | 3866 | 0.0224 |
| 3.7464 | 3867 | 0.0176 |
| 3.7473 | 3868 | 0.0124 |
| 3.7483 | 3869 | 0.0222 |
| 3.7493 | 3870 | 0.016 |
| 3.7502 | 3871 | 0.0135 |
| 3.7512 | 3872 | 0.0262 |
| 3.7522 | 3873 | 0.0172 |
| 3.7531 | 3874 | 0.017 |
| 3.7541 | 3875 | 0.0153 |
| 3.7551 | 3876 | 0.0146 |
| 3.7561 | 3877 | 0.0588 |
| 3.7570 | 3878 | 0.0224 |
| 3.7580 | 3879 | 0.0263 |
| 3.7590 | 3880 | 0.0149 |
| 3.7599 | 3881 | 0.0239 |
| 3.7609 | 3882 | 0.015 |
| 3.7619 | 3883 | 0.0156 |
| 3.7628 | 3884 | 0.0109 |
| 3.7638 | 3885 | 0.0182 |
| 3.7648 | 3886 | 0.0136 |
| 3.7657 | 3887 | 0.011 |
| 3.7667 | 3888 | 0.0092 |
| 3.7677 | 3889 | 0.0244 |
| 3.7686 | 3890 | 0.0147 |
| 3.7696 | 3891 | 0.0173 |
| 3.7706 | 3892 | 0.014 |
| 3.7715 | 3893 | 0.0183 |
| 3.7725 | 3894 | 0.0129 |
| 3.7735 | 3895 | 0.0096 |
| 3.7744 | 3896 | 0.0239 |
| 3.7754 | 3897 | 0.0208 |
| 3.7764 | 3898 | 0.0101 |
| 3.7773 | 3899 | 0.011 |
| 3.7783 | 3900 | 0.0186 |
| 3.7793 | 3901 | 0.0132 |
| 3.7803 | 3902 | 0.0084 |
| 3.7812 | 3903 | 0.0089 |
| 3.7822 | 3904 | 0.0196 |
| 3.7832 | 3905 | 0.025 |
| 3.7841 | 3906 | 0.0244 |
| 3.7851 | 3907 | 0.0218 |
| 3.7861 | 3908 | 0.0144 |
| 3.7870 | 3909 | 0.0255 |
| 3.7880 | 3910 | 0.0202 |
| 3.7890 | 3911 | 0.0106 |
| 3.7899 | 3912 | 0.0171 |
| 3.7909 | 3913 | 0.016 |
| 3.7919 | 3914 | 0.0139 |
| 3.7928 | 3915 | 0.0094 |
| 3.7938 | 3916 | 0.0105 |
| 3.7948 | 3917 | 0.018 |
| 3.7957 | 3918 | 0.0255 |
| 3.7967 | 3919 | 0.0168 |
| 3.7977 | 3920 | 0.0199 |
| 3.7986 | 3921 | 0.0165 |
| 3.7996 | 3922 | 0.0151 |
| 3.8006 | 3923 | 0.0073 |
| 3.8015 | 3924 | 0.0113 |
| 3.8025 | 3925 | 0.0171 |
| 3.8035 | 3926 | 0.013 |
| 3.8045 | 3927 | 0.0201 |
| 3.8054 | 3928 | 0.0176 |
| 3.8064 | 3929 | 0.0319 |
| 3.8074 | 3930 | 0.0139 |
| 3.8083 | 3931 | 0.0118 |
| 3.8093 | 3932 | 0.009 |
| 3.8103 | 3933 | 0.0238 |
| 3.8112 | 3934 | 0.0238 |
| 3.8122 | 3935 | 0.0102 |
| 3.8132 | 3936 | 0.0171 |
| 3.8141 | 3937 | 0.0293 |
| 3.8151 | 3938 | 0.0163 |
| 3.8161 | 3939 | 0.0183 |
| 3.8170 | 3940 | 0.0194 |
| 3.8180 | 3941 | 0.0146 |
| 3.8190 | 3942 | 0.027 |
| 3.8199 | 3943 | 0.015 |
| 3.8209 | 3944 | 0.013 |
| 3.8219 | 3945 | 0.0142 |
| 3.8228 | 3946 | 0.0078 |
| 3.8238 | 3947 | 0.0184 |
| 3.8248 | 3948 | 0.0091 |
| 3.8258 | 3949 | 0.0128 |
| 3.8267 | 3950 | 0.0133 |
| 3.8277 | 3951 | 0.0166 |
| 3.8287 | 3952 | 0.0326 |
| 3.8296 | 3953 | 0.0284 |
| 3.8306 | 3954 | 0.0227 |
| 3.8316 | 3955 | 0.0122 |
| 3.8325 | 3956 | 0.0174 |
| 3.8335 | 3957 | 0.0175 |
| 3.8345 | 3958 | 0.0215 |
| 3.8354 | 3959 | 0.0238 |
| 3.8364 | 3960 | 0.0164 |
| 3.8374 | 3961 | 0.0196 |
| 3.8383 | 3962 | 0.0109 |
| 3.8393 | 3963 | 0.0107 |
| 3.8403 | 3964 | 0.0106 |
| 3.8412 | 3965 | 0.0275 |
| 3.8422 | 3966 | 0.0397 |
| 3.8432 | 3967 | 0.012 |
| 3.8441 | 3968 | 0.0163 |
| 3.8451 | 3969 | 0.021 |
| 3.8461 | 3970 | 0.0184 |
| 3.8470 | 3971 | 0.0118 |
| 3.8480 | 3972 | 0.0223 |
| 3.8490 | 3973 | 0.0193 |
| 3.8500 | 3974 | 0.0262 |
| 3.8509 | 3975 | 0.0126 |
| 3.8519 | 3976 | 0.0196 |
| 3.8529 | 3977 | 0.0231 |
| 3.8538 | 3978 | 0.0082 |
| 3.8548 | 3979 | 0.0115 |
| 3.8558 | 3980 | 0.0083 |
| 3.8567 | 3981 | 0.0236 |
| 3.8577 | 3982 | 0.0146 |
| 3.8587 | 3983 | 0.0326 |
| 3.8596 | 3984 | 0.0122 |
| 3.8606 | 3985 | 0.0143 |
| 3.8616 | 3986 | 0.0226 |
| 3.8625 | 3987 | 0.0141 |
| 3.8635 | 3988 | 0.0186 |
| 3.8645 | 3989 | 0.0238 |
| 3.8654 | 3990 | 0.0094 |
| 3.8664 | 3991 | 0.0123 |
| 3.8674 | 3992 | 0.0148 |
| 3.8683 | 3993 | 0.0158 |
| 3.8693 | 3994 | 0.0149 |
| 3.8703 | 3995 | 0.0149 |
| 3.8712 | 3996 | 0.019 |
| 3.8722 | 3997 | 0.0162 |
| 3.8732 | 3998 | 0.0407 |
| 3.8742 | 3999 | 0.0116 |
| 3.8751 | 4000 | 0.0216 |
| 3.8761 | 4001 | 0.0203 |
| 3.8771 | 4002 | 0.0218 |
| 3.8780 | 4003 | 0.0167 |
| 3.8790 | 4004 | 0.0162 |
| 3.8800 | 4005 | 0.0156 |
| 3.8809 | 4006 | 0.0099 |
| 3.8819 | 4007 | 0.0124 |
| 3.8829 | 4008 | 0.0141 |
| 3.8838 | 4009 | 0.0247 |
| 3.8848 | 4010 | 0.0089 |
| 3.8858 | 4011 | 0.0192 |
| 3.8867 | 4012 | 0.0091 |
| 3.8877 | 4013 | 0.0284 |
| 3.8887 | 4014 | 0.0212 |
| 3.8896 | 4015 | 0.0127 |
| 3.8906 | 4016 | 0.0127 |
| 3.8916 | 4017 | 0.0198 |
| 3.8925 | 4018 | 0.0138 |
| 3.8935 | 4019 | 0.0135 |
| 3.8945 | 4020 | 0.014 |
| 3.8955 | 4021 | 0.0259 |
| 3.8964 | 4022 | 0.0192 |
| 3.8974 | 4023 | 0.0167 |
| 3.8984 | 4024 | 0.015 |
| 3.8993 | 4025 | 0.0229 |
| 3.9003 | 4026 | 0.0127 |
| 3.9013 | 4027 | 0.0105 |
| 3.9022 | 4028 | 0.0083 |
| 3.9032 | 4029 | 0.0234 |
| 3.9042 | 4030 | 0.0172 |
| 3.9051 | 4031 | 0.0207 |
| 3.9061 | 4032 | 0.014 |
| 3.9071 | 4033 | 0.0349 |
| 3.9080 | 4034 | 0.0151 |
| 3.9090 | 4035 | 0.0179 |
| 3.9100 | 4036 | 0.0158 |
| 3.9109 | 4037 | 0.0228 |
| 3.9119 | 4038 | 0.0227 |
| 3.9129 | 4039 | 0.0106 |
| 3.9138 | 4040 | 0.015 |
| 3.9148 | 4041 | 0.0131 |
| 3.9158 | 4042 | 0.0142 |
| 3.9167 | 4043 | 0.0173 |
| 3.9177 | 4044 | 0.007 |
| 3.9187 | 4045 | 0.0178 |
| 3.9197 | 4046 | 0.0137 |
| 3.9206 | 4047 | 0.0082 |
| 3.9216 | 4048 | 0.0122 |
| 3.9226 | 4049 | 0.0348 |
| 3.9235 | 4050 | 0.0131 |
| 3.9245 | 4051 | 0.0126 |
| 3.9255 | 4052 | 0.0109 |
| 3.9264 | 4053 | 0.0188 |
| 3.9274 | 4054 | 0.0167 |
| 3.9284 | 4055 | 0.0088 |
| 3.9293 | 4056 | 0.0107 |
| 3.9303 | 4057 | 0.0125 |
| 3.9313 | 4058 | 0.0131 |
| 3.9322 | 4059 | 0.0143 |
| 3.9332 | 4060 | 0.018 |
| 3.9342 | 4061 | 0.0324 |
| 3.9351 | 4062 | 0.0411 |
| 3.9361 | 4063 | 0.0181 |
| 3.9371 | 4064 | 0.0155 |
| 3.9380 | 4065 | 0.0359 |
| 3.9390 | 4066 | 0.0151 |
| 3.9400 | 4067 | 0.013 |
| 3.9409 | 4068 | 0.016 |
| 3.9419 | 4069 | 0.0228 |
| 3.9429 | 4070 | 0.0251 |
| 3.9439 | 4071 | 0.0208 |
| 3.9448 | 4072 | 0.0086 |
| 3.9458 | 4073 | 0.0146 |
| 3.9468 | 4074 | 0.0163 |
| 3.9477 | 4075 | 0.0177 |
| 3.9487 | 4076 | 0.0146 |
| 3.9497 | 4077 | 0.0157 |
| 3.9506 | 4078 | 0.0106 |
| 3.9516 | 4079 | 0.0094 |
| 3.9526 | 4080 | 0.0113 |
| 3.9535 | 4081 | 0.0204 |
| 3.9545 | 4082 | 0.0179 |
| 3.9555 | 4083 | 0.0098 |
| 3.9564 | 4084 | 0.0205 |
| 3.9574 | 4085 | 0.0664 |
| 3.9584 | 4086 | 0.0192 |
| 3.9593 | 4087 | 0.0201 |
| 3.9603 | 4088 | 0.0147 |
| 3.9613 | 4089 | 0.0166 |
| 3.9622 | 4090 | 0.0086 |
| 3.9632 | 4091 | 0.0165 |
| 3.9642 | 4092 | 0.0178 |
| 3.9652 | 4093 | 0.0168 |
| 3.9661 | 4094 | 0.0176 |
| 3.9671 | 4095 | 0.0115 |
| 3.9681 | 4096 | 0.0107 |
| 3.9690 | 4097 | 0.0271 |
| 3.9700 | 4098 | 0.0192 |
| 3.9710 | 4099 | 0.0129 |
| 3.9719 | 4100 | 0.0121 |
| 3.9729 | 4101 | 0.0194 |
| 3.9739 | 4102 | 0.0162 |
| 3.9748 | 4103 | 0.0156 |
| 3.9758 | 4104 | 0.0177 |
| 3.9768 | 4105 | 0.0227 |
| 3.9777 | 4106 | 0.0187 |
| 3.9787 | 4107 | 0.023 |
| 3.9797 | 4108 | 0.0133 |
| 3.9806 | 4109 | 0.0271 |
| 3.9816 | 4110 | 0.0193 |
| 3.9826 | 4111 | 0.0191 |
| 3.9835 | 4112 | 0.029 |
| 3.9845 | 4113 | 0.0157 |
| 3.9855 | 4114 | 0.0168 |
| 3.9864 | 4115 | 0.0068 |
| 3.9874 | 4116 | 0.011 |
| 3.9884 | 4117 | 0.0258 |
| 3.9894 | 4118 | 0.0187 |
| 3.9903 | 4119 | 0.0088 |
| 3.9913 | 4120 | 0.0195 |
| 3.9923 | 4121 | 0.0265 |
| 3.9932 | 4122 | 0.0244 |
| 3.9942 | 4123 | 0.0278 |
| 3.9952 | 4124 | 0.0146 |
| 3.9961 | 4125 | 0.0125 |
| 3.9971 | 4126 | 0.0203 |
| 3.9981 | 4127 | 0.0271 |
| 3.9990 | 4128 | 0.0328 |
| 4.0010 | 4129 | 0.0361 |
| 4.0019 | 4130 | 0.018 |
| 4.0029 | 4131 | 0.013 |
| 4.0039 | 4132 | 0.0116 |
| 4.0048 | 4133 | 0.0218 |
| 4.0058 | 4134 | 0.0179 |
| 4.0068 | 4135 | 0.0191 |
| 4.0077 | 4136 | 0.0281 |
| 4.0087 | 4137 | 0.0199 |
| 4.0097 | 4138 | 0.0234 |
| 4.0106 | 4139 | 0.011 |
| 4.0116 | 4140 | 0.0134 |
| 4.0126 | 4141 | 0.0324 |
| 4.0136 | 4142 | 0.0268 |
| 4.0145 | 4143 | 0.0142 |
| 4.0155 | 4144 | 0.0117 |
| 4.0165 | 4145 | 0.0485 |
| 4.0174 | 4146 | 0.0137 |
| 4.0184 | 4147 | 0.0217 |
| 4.0194 | 4148 | 0.0178 |
| 4.0203 | 4149 | 0.0344 |
| 4.0213 | 4150 | 0.0219 |
| 4.0223 | 4151 | 0.0136 |
| 4.0232 | 4152 | 0.0114 |
| 4.0242 | 4153 | 0.0152 |
| 4.0252 | 4154 | 0.0225 |
| 4.0261 | 4155 | 0.0185 |
| 4.0271 | 4156 | 0.0073 |
| 4.0281 | 4157 | 0.032 |
| 4.0290 | 4158 | 0.0084 |
| 4.0300 | 4159 | 0.0135 |
| 4.0310 | 4160 | 0.0188 |
| 4.0319 | 4161 | 0.0228 |
| 4.0329 | 4162 | 0.0104 |
| 4.0339 | 4163 | 0.0099 |
| 4.0348 | 4164 | 0.0134 |
| 4.0358 | 4165 | 0.025 |
| 4.0368 | 4166 | 0.025 |
| 4.0378 | 4167 | 0.013 |
| 4.0387 | 4168 | 0.0104 |
| 4.0397 | 4169 | 0.0191 |
| 4.0407 | 4170 | 0.0165 |
| 4.0416 | 4171 | 0.012 |
| 4.0426 | 4172 | 0.016 |
| 4.0436 | 4173 | 0.0322 |
| 4.0445 | 4174 | 0.0125 |
| 4.0455 | 4175 | 0.0181 |
| 4.0465 | 4176 | 0.0098 |
| 4.0474 | 4177 | 0.0187 |
| 4.0484 | 4178 | 0.0145 |
| 4.0494 | 4179 | 0.0083 |
| 4.0503 | 4180 | 0.0147 |
| 4.0513 | 4181 | 0.0274 |
| 4.0523 | 4182 | 0.0139 |
| 4.0532 | 4183 | 0.0141 |
| 4.0542 | 4184 | 0.0109 |
| 4.0552 | 4185 | 0.0243 |
| 4.0561 | 4186 | 0.0179 |
| 4.0571 | 4187 | 0.009 |
| 4.0581 | 4188 | 0.0126 |
| 4.0591 | 4189 | 0.0252 |
| 4.0600 | 4190 | 0.0233 |
| 4.0610 | 4191 | 0.0205 |
| 4.0620 | 4192 | 0.0153 |
| 4.0629 | 4193 | 0.033 |
| 4.0639 | 4194 | 0.0255 |
| 4.0649 | 4195 | 0.0197 |
| 4.0658 | 4196 | 0.0144 |
| 4.0668 | 4197 | 0.0378 |
| 4.0678 | 4198 | 0.0182 |
| 4.0687 | 4199 | 0.0176 |
| 4.0697 | 4200 | 0.0131 |
| 4.0707 | 4201 | 0.0277 |
| 4.0716 | 4202 | 0.0185 |
| 4.0726 | 4203 | 0.0133 |
| 4.0736 | 4204 | 0.008 |
| 4.0745 | 4205 | 0.0143 |
| 4.0755 | 4206 | 0.0156 |
| 4.0765 | 4207 | 0.0156 |
| 4.0774 | 4208 | 0.0125 |
| 4.0784 | 4209 | 0.0188 |
| 4.0794 | 4210 | 0.0275 |
| 4.0803 | 4211 | 0.0081 |
| 4.0813 | 4212 | 0.0099 |
| 4.0823 | 4213 | 0.0263 |
| 4.0833 | 4214 | 0.0209 |
| 4.0842 | 4215 | 0.0176 |
| 4.0852 | 4216 | 0.011 |
| 4.0862 | 4217 | 0.0132 |
| 4.0871 | 4218 | 0.0092 |
| 4.0881 | 4219 | 0.0082 |
| 4.0891 | 4220 | 0.0121 |
| 4.0900 | 4221 | 0.0222 |
| 4.0910 | 4222 | 0.0155 |
| 4.0920 | 4223 | 0.0105 |
| 4.0929 | 4224 | 0.0103 |
| 4.0939 | 4225 | 0.0212 |
| 4.0949 | 4226 | 0.0117 |
| 4.0958 | 4227 | 0.0135 |
| 4.0968 | 4228 | 0.0164 |
| 4.0978 | 4229 | 0.0466 |
| 4.0987 | 4230 | 0.0099 |
| 4.0997 | 4231 | 0.0146 |
| 4.1007 | 4232 | 0.0086 |
| 4.1016 | 4233 | 0.025 |
| 4.1026 | 4234 | 0.0192 |
| 4.1036 | 4235 | 0.018 |
| 4.1045 | 4236 | 0.0157 |
| 4.1055 | 4237 | 0.0212 |
| 4.1065 | 4238 | 0.0127 |
| 4.1075 | 4239 | 0.0102 |
| 4.1084 | 4240 | 0.0071 |
| 4.1094 | 4241 | 0.0284 |
| 4.1104 | 4242 | 0.0174 |
| 4.1113 | 4243 | 0.0142 |
| 4.1123 | 4244 | 0.0184 |
| 4.1133 | 4245 | 0.0297 |
| 4.1142 | 4246 | 0.0197 |
| 4.1152 | 4247 | 0.0076 |
| 4.1162 | 4248 | 0.0125 |
| 4.1171 | 4249 | 0.0238 |
| 4.1181 | 4250 | 0.0195 |
| 4.1191 | 4251 | 0.0136 |
| 4.1200 | 4252 | 0.0123 |
| 4.1210 | 4253 | 0.0275 |
| 4.1220 | 4254 | 0.0217 |
| 4.1229 | 4255 | 0.0183 |
| 4.1239 | 4256 | 0.0083 |
| 4.1249 | 4257 | 0.0377 |
| 4.1258 | 4258 | 0.0096 |
| 4.1268 | 4259 | 0.009 |
| 4.1278 | 4260 | 0.0146 |
| 4.1288 | 4261 | 0.0318 |
| 4.1297 | 4262 | 0.0191 |
| 4.1307 | 4263 | 0.0191 |
| 4.1317 | 4264 | 0.0145 |
| 4.1326 | 4265 | 0.0295 |
| 4.1336 | 4266 | 0.0254 |
| 4.1346 | 4267 | 0.0096 |
| 4.1355 | 4268 | 0.0121 |
| 4.1365 | 4269 | 0.0286 |
| 4.1375 | 4270 | 0.0246 |
| 4.1384 | 4271 | 0.0151 |
| 4.1394 | 4272 | 0.0127 |
| 4.1404 | 4273 | 0.0264 |
| 4.1413 | 4274 | 0.0175 |
| 4.1423 | 4275 | 0.0118 |
| 4.1433 | 4276 | 0.0123 |
| 4.1442 | 4277 | 0.0252 |
| 4.1452 | 4278 | 0.0151 |
| 4.1462 | 4279 | 0.0125 |
| 4.1471 | 4280 | 0.0072 |
| 4.1481 | 4281 | 0.0575 |
| 4.1491 | 4282 | 0.0186 |
| 4.1500 | 4283 | 0.0118 |
| 4.1510 | 4284 | 0.0163 |
| 4.1520 | 4285 | 0.0354 |
| 4.1530 | 4286 | 0.0199 |
| 4.1539 | 4287 | 0.0125 |
| 4.1549 | 4288 | 0.0124 |
| 4.1559 | 4289 | 0.0332 |
| 4.1568 | 4290 | 0.014 |
| 4.1578 | 4291 | 0.0084 |
| 4.1588 | 4292 | 0.0093 |
| 4.1597 | 4293 | 0.0175 |
| 4.1607 | 4294 | 0.011 |
| 4.1617 | 4295 | 0.0108 |
| 4.1626 | 4296 | 0.0101 |
| 4.1636 | 4297 | 0.0121 |
| 4.1646 | 4298 | 0.0163 |
| 4.1655 | 4299 | 0.0098 |
| 4.1665 | 4300 | 0.0098 |
| 4.1675 | 4301 | 0.038 |
| 4.1684 | 4302 | 0.0133 |
| 4.1694 | 4303 | 0.0151 |
| 4.1704 | 4304 | 0.0111 |
| 4.1713 | 4305 | 0.0289 |
| 4.1723 | 4306 | 0.0146 |
| 4.1733 | 4307 | 0.0101 |
| 4.1742 | 4308 | 0.0082 |
| 4.1752 | 4309 | 0.0271 |
| 4.1762 | 4310 | 0.0142 |
| 4.1772 | 4311 | 0.0217 |
| 4.1781 | 4312 | 0.0185 |
| 4.1791 | 4313 | 0.0219 |
| 4.1801 | 4314 | 0.0135 |
| 4.1810 | 4315 | 0.0159 |
| 4.1820 | 4316 | 0.0109 |
| 4.1830 | 4317 | 0.0313 |
| 4.1839 | 4318 | 0.0148 |
| 4.1849 | 4319 | 0.0126 |
| 4.1859 | 4320 | 0.0098 |
| 4.1868 | 4321 | 0.0492 |
| 4.1878 | 4322 | 0.0253 |
| 4.1888 | 4323 | 0.0136 |
| 4.1897 | 4324 | 0.0241 |
| 4.1907 | 4325 | 0.0293 |
| 4.1917 | 4326 | 0.0095 |
| 4.1926 | 4327 | 0.0112 |
| 4.1936 | 4328 | 0.012 |
| 4.1946 | 4329 | 0.0292 |
| 4.1955 | 4330 | 0.0149 |
| 4.1965 | 4331 | 0.0145 |
| 4.1975 | 4332 | 0.0149 |
| 4.1985 | 4333 | 0.0157 |
| 4.1994 | 4334 | 0.0182 |
| 4.2004 | 4335 | 0.0115 |
| 4.2014 | 4336 | 0.0091 |
| 4.2023 | 4337 | 0.0175 |
| 4.2033 | 4338 | 0.0148 |
| 4.2043 | 4339 | 0.0107 |
| 4.2052 | 4340 | 0.0113 |
| 4.2062 | 4341 | 0.0307 |
| 4.2072 | 4342 | 0.0153 |
| 4.2081 | 4343 | 0.0072 |
| 4.2091 | 4344 | 0.0078 |
| 4.2101 | 4345 | 0.0208 |
| 4.2110 | 4346 | 0.0205 |
| 4.2120 | 4347 | 0.0145 |
| 4.2130 | 4348 | 0.0159 |
| 4.2139 | 4349 | 0.0162 |
| 4.2149 | 4350 | 0.0149 |
| 4.2159 | 4351 | 0.0146 |
| 4.2168 | 4352 | 0.011 |
| 4.2178 | 4353 | 0.0171 |
| 4.2188 | 4354 | 0.0186 |
| 4.2197 | 4355 | 0.0109 |
| 4.2207 | 4356 | 0.0099 |
| 4.2217 | 4357 | 0.0216 |
| 4.2227 | 4358 | 0.014 |
| 4.2236 | 4359 | 0.015 |
| 4.2246 | 4360 | 0.0107 |
| 4.2256 | 4361 | 0.0328 |
| 4.2265 | 4362 | 0.0401 |
| 4.2275 | 4363 | 0.0105 |
| 4.2285 | 4364 | 0.0144 |
| 4.2294 | 4365 | 0.0333 |
| 4.2304 | 4366 | 0.0164 |
| 4.2314 | 4367 | 0.0107 |
| 4.2323 | 4368 | 0.0092 |
| 4.2333 | 4369 | 0.0201 |
| 4.2343 | 4370 | 0.0212 |
| 4.2352 | 4371 | 0.0165 |
| 4.2362 | 4372 | 0.0143 |
| 4.2372 | 4373 | 0.0351 |
| 4.2381 | 4374 | 0.0158 |
| 4.2391 | 4375 | 0.0072 |
| 4.2401 | 4376 | 0.01 |
| 4.2410 | 4377 | 0.0208 |
| 4.2420 | 4378 | 0.0154 |
| 4.2430 | 4379 | 0.0165 |
| 4.2439 | 4380 | 0.0135 |
| 4.2449 | 4381 | 0.0139 |
| 4.2459 | 4382 | 0.0177 |
| 4.2469 | 4383 | 0.0141 |
| 4.2478 | 4384 | 0.0096 |
| 4.2488 | 4385 | 0.0303 |
| 4.2498 | 4386 | 0.0242 |
| 4.2507 | 4387 | 0.013 |
| 4.2517 | 4388 | 0.0136 |
| 4.2527 | 4389 | 0.0261 |
| 4.2536 | 4390 | 0.0135 |
| 4.2546 | 4391 | 0.0108 |
| 4.2556 | 4392 | 0.0138 |
| 4.2565 | 4393 | 0.0144 |
| 4.2575 | 4394 | 0.0104 |
| 4.2585 | 4395 | 0.0077 |
| 4.2594 | 4396 | 0.0107 |
| 4.2604 | 4397 | 0.0195 |
| 4.2614 | 4398 | 0.0073 |
| 4.2623 | 4399 | 0.0096 |
| 4.2633 | 4400 | 0.0106 |
| 4.2643 | 4401 | 0.0132 |
| 4.2652 | 4402 | 0.0193 |
| 4.2662 | 4403 | 0.0054 |
| 4.2672 | 4404 | 0.0087 |
| 4.2682 | 4405 | 0.0353 |
| 4.2691 | 4406 | 0.0177 |
| 4.2701 | 4407 | 0.0122 |
| 4.2711 | 4408 | 0.011 |
| 4.2720 | 4409 | 0.0173 |
| 4.2730 | 4410 | 0.0116 |
| 4.2740 | 4411 | 0.0144 |
| 4.2749 | 4412 | 0.0082 |
| 4.2759 | 4413 | 0.0312 |
| 4.2769 | 4414 | 0.0124 |
| 4.2778 | 4415 | 0.0118 |
| 4.2788 | 4416 | 0.008 |
| 4.2798 | 4417 | 0.0118 |
| 4.2807 | 4418 | 0.0077 |
| 4.2817 | 4419 | 0.0113 |
| 4.2827 | 4420 | 0.0042 |
| 4.2836 | 4421 | 0.0258 |
| 4.2846 | 4422 | 0.0066 |
| 4.2856 | 4423 | 0.0146 |
| 4.2865 | 4424 | 0.0088 |
| 4.2875 | 4425 | 0.0229 |
| 4.2885 | 4426 | 0.0078 |
| 4.2894 | 4427 | 0.0102 |
| 4.2904 | 4428 | 0.0071 |
| 4.2914 | 4429 | 0.0212 |
| 4.2924 | 4430 | 0.0095 |
| 4.2933 | 4431 | 0.0138 |
| 4.2943 | 4432 | 0.0122 |
| 4.2953 | 4433 | 0.0125 |
| 4.2962 | 4434 | 0.014 |
| 4.2972 | 4435 | 0.0172 |
| 4.2982 | 4436 | 0.0065 |
| 4.2991 | 4437 | 0.0314 |
| 4.3001 | 4438 | 0.0116 |
| 4.3011 | 4439 | 0.0134 |
| 4.3020 | 4440 | 0.0096 |
| 4.3030 | 4441 | 0.0144 |
| 4.3040 | 4442 | 0.0096 |
| 4.3049 | 4443 | 0.014 |
| 4.3059 | 4444 | 0.013 |
| 4.3069 | 4445 | 0.019 |
| 4.3078 | 4446 | 0.009 |
| 4.3088 | 4447 | 0.0094 |
| 4.3098 | 4448 | 0.0101 |
| 4.3107 | 4449 | 0.0343 |
| 4.3117 | 4450 | 0.009 |
| 4.3127 | 4451 | 0.0082 |
| 4.3136 | 4452 | 0.0125 |
| 4.3146 | 4453 | 0.0142 |
| 4.3156 | 4454 | 0.0101 |
| 4.3166 | 4455 | 0.0092 |
| 4.3175 | 4456 | 0.0066 |
| 4.3185 | 4457 | 0.0127 |
| 4.3195 | 4458 | 0.0075 |
| 4.3204 | 4459 | 0.0124 |
| 4.3214 | 4460 | 0.0084 |
| 4.3224 | 4461 | 0.0274 |
| 4.3233 | 4462 | 0.0218 |
| 4.3243 | 4463 | 0.0157 |
| 4.3253 | 4464 | 0.0178 |
| 4.3262 | 4465 | 0.0238 |
| 4.3272 | 4466 | 0.0134 |
| 4.3282 | 4467 | 0.0196 |
| 4.3291 | 4468 | 0.0088 |
| 4.3301 | 4469 | 0.0214 |
| 4.3311 | 4470 | 0.0103 |
| 4.3320 | 4471 | 0.0097 |
| 4.3330 | 4472 | 0.0091 |
| 4.3340 | 4473 | 0.0141 |
| 4.3349 | 4474 | 0.0124 |
| 4.3359 | 4475 | 0.0101 |
| 4.3369 | 4476 | 0.0077 |
| 4.3379 | 4477 | 0.0308 |
| 4.3388 | 4478 | 0.0142 |
| 4.3398 | 4479 | 0.0085 |
| 4.3408 | 4480 | 0.0089 |
| 4.3417 | 4481 | 0.0215 |
| 4.3427 | 4482 | 0.0147 |
| 4.3437 | 4483 | 0.0102 |
| 4.3446 | 4484 | 0.0067 |
| 4.3456 | 4485 | 0.0228 |
| 4.3466 | 4486 | 0.0083 |
| 4.3475 | 4487 | 0.0094 |
| 4.3485 | 4488 | 0.0075 |
| 4.3495 | 4489 | 0.024 |
| 4.3504 | 4490 | 0.0164 |
| 4.3514 | 4491 | 0.0098 |
| 4.3524 | 4492 | 0.0107 |
| 4.3533 | 4493 | 0.0319 |
| 4.3543 | 4494 | 0.0162 |
| 4.3553 | 4495 | 0.01 |
| 4.3562 | 4496 | 0.012 |
| 4.3572 | 4497 | 0.0141 |
| 4.3582 | 4498 | 0.0068 |
| 4.3591 | 4499 | 0.01 |
| 4.3601 | 4500 | 0.009 |
| 4.3611 | 4501 | 0.0165 |
| 4.3621 | 4502 | 0.0077 |
| 4.3630 | 4503 | 0.0084 |
| 4.3640 | 4504 | 0.0115 |
| 4.3650 | 4505 | 0.0236 |
| 4.3659 | 4506 | 0.0138 |
| 4.3669 | 4507 | 0.0077 |
| 4.3679 | 4508 | 0.011 |
| 4.3688 | 4509 | 0.0175 |
| 4.3698 | 4510 | 0.0114 |
| 4.3708 | 4511 | 0.0149 |
| 4.3717 | 4512 | 0.0129 |
| 4.3727 | 4513 | 0.0209 |
| 4.3737 | 4514 | 0.0154 |
| 4.3746 | 4515 | 0.0074 |
| 4.3756 | 4516 | 0.0097 |
| 4.3766 | 4517 | 0.0235 |
| 4.3775 | 4518 | 0.0138 |
| 4.3785 | 4519 | 0.0054 |
| 4.3795 | 4520 | 0.0083 |
| 4.3804 | 4521 | 0.0206 |
| 4.3814 | 4522 | 0.0124 |
| 4.3824 | 4523 | 0.0094 |
| 4.3833 | 4524 | 0.0099 |
| 4.3843 | 4525 | 0.0108 |
| 4.3853 | 4526 | 0.01 |
| 4.3863 | 4527 | 0.0108 |
| 4.3872 | 4528 | 0.005 |
| 4.3882 | 4529 | 0.0158 |
| 4.3892 | 4530 | 0.012 |
| 4.3901 | 4531 | 0.0067 |
| 4.3911 | 4532 | 0.0109 |
| 4.3921 | 4533 | 0.0283 |
| 4.3930 | 4534 | 0.0212 |
| 4.3940 | 4535 | 0.0179 |
| 4.3950 | 4536 | 0.0113 |
| 4.3959 | 4537 | 0.0214 |
| 4.3969 | 4538 | 0.0124 |
| 4.3979 | 4539 | 0.0202 |
| 4.3988 | 4540 | 0.007 |
| 4.3998 | 4541 | 0.0205 |
| 4.4008 | 4542 | 0.0137 |
| 4.4017 | 4543 | 0.011 |
| 4.4027 | 4544 | 0.0117 |
| 4.4037 | 4545 | 0.0183 |
| 4.4046 | 4546 | 0.0067 |
| 4.4056 | 4547 | 0.0134 |
| 4.4066 | 4548 | 0.0165 |
| 4.4076 | 4549 | 0.0141 |
| 4.4085 | 4550 | 0.0096 |
| 4.4095 | 4551 | 0.0072 |
| 4.4105 | 4552 | 0.0092 |
| 4.4114 | 4553 | 0.033 |
| 4.4124 | 4554 | 0.0313 |
| 4.4134 | 4555 | 0.0106 |
| 4.4143 | 4556 | 0.0128 |
| 4.4153 | 4557 | 0.0165 |
| 4.4163 | 4558 | 0.0117 |
| 4.4172 | 4559 | 0.0101 |
| 4.4182 | 4560 | 0.0072 |
| 4.4192 | 4561 | 0.0145 |
| 4.4201 | 4562 | 0.0192 |
| 4.4211 | 4563 | 0.0153 |
| 4.4221 | 4564 | 0.0226 |
| 4.4230 | 4565 | 0.0211 |
| 4.4240 | 4566 | 0.0182 |
| 4.4250 | 4567 | 0.0098 |
| 4.4259 | 4568 | 0.0081 |
| 4.4269 | 4569 | 0.0373 |
| 4.4279 | 4570 | 0.0124 |
| 4.4288 | 4571 | 0.0108 |
| 4.4298 | 4572 | 0.012 |
| 4.4308 | 4573 | 0.0156 |
| 4.4318 | 4574 | 0.0167 |
| 4.4327 | 4575 | 0.0142 |
| 4.4337 | 4576 | 0.0129 |
| 4.4347 | 4577 | 0.0205 |
| 4.4356 | 4578 | 0.0206 |
| 4.4366 | 4579 | 0.0119 |
| 4.4376 | 4580 | 0.0086 |
| 4.4385 | 4581 | 0.0178 |
| 4.4395 | 4582 | 0.0137 |
| 4.4405 | 4583 | 0.0177 |
| 4.4414 | 4584 | 0.009 |
| 4.4424 | 4585 | 0.0235 |
| 4.4434 | 4586 | 0.0199 |
| 4.4443 | 4587 | 0.0081 |
| 4.4453 | 4588 | 0.0076 |
| 4.4463 | 4589 | 0.0137 |
| 4.4472 | 4590 | 0.0176 |
| 4.4482 | 4591 | 0.0093 |
| 4.4492 | 4592 | 0.0117 |
| 4.4501 | 4593 | 0.0351 |
| 4.4511 | 4594 | 0.0101 |
| 4.4521 | 4595 | 0.0076 |
| 4.4530 | 4596 | 0.0102 |
| 4.4540 | 4597 | 0.0188 |
| 4.4550 | 4598 | 0.0125 |
| 4.4560 | 4599 | 0.0109 |
| 4.4569 | 4600 | 0.0165 |
| 4.4579 | 4601 | 0.0235 |
| 4.4589 | 4602 | 0.013 |
| 4.4598 | 4603 | 0.0236 |
| 4.4608 | 4604 | 0.0184 |
| 4.4618 | 4605 | 0.0236 |
| 4.4627 | 4606 | 0.0074 |
| 4.4637 | 4607 | 0.0099 |
| 4.4647 | 4608 | 0.0152 |
| 4.4656 | 4609 | 0.0195 |
| 4.4666 | 4610 | 0.0107 |
| 4.4676 | 4611 | 0.0071 |
| 4.4685 | 4612 | 0.0092 |
| 4.4695 | 4613 | 0.0101 |
| 4.4705 | 4614 | 0.0129 |
| 4.4714 | 4615 | 0.0184 |
| 4.4724 | 4616 | 0.0101 |
| 4.4734 | 4617 | 0.0095 |
| 4.4743 | 4618 | 0.0096 |
| 4.4753 | 4619 | 0.0105 |
| 4.4763 | 4620 | 0.0072 |
| 4.4773 | 4621 | 0.0188 |
| 4.4782 | 4622 | 0.0086 |
| 4.4792 | 4623 | 0.0107 |
| 4.4802 | 4624 | 0.0088 |
| 4.4811 | 4625 | 0.0125 |
| 4.4821 | 4626 | 0.0069 |
| 4.4831 | 4627 | 0.0078 |
| 4.4840 | 4628 | 0.0151 |
| 4.4850 | 4629 | 0.0201 |
| 4.4860 | 4630 | 0.0114 |
| 4.4869 | 4631 | 0.0068 |
| 4.4879 | 4632 | 0.0051 |
| 4.4889 | 4633 | 0.0197 |
| 4.4898 | 4634 | 0.0152 |
| 4.4908 | 4635 | 0.0081 |
| 4.4918 | 4636 | 0.0061 |
| 4.4927 | 4637 | 0.0168 |
| 4.4937 | 4638 | 0.0128 |
| 4.4947 | 4639 | 0.0189 |
| 4.4956 | 4640 | 0.009 |
| 4.4966 | 4641 | 0.0217 |
| 4.4976 | 4642 | 0.0156 |
| 4.4985 | 4643 | 0.0061 |
| 4.4995 | 4644 | 0.0152 |
| 4.5005 | 4645 | 0.0243 |
| 4.5015 | 4646 | 0.0126 |
| 4.5024 | 4647 | 0.01 |
| 4.5034 | 4648 | 0.0053 |
| 4.5044 | 4649 | 0.0132 |
| 4.5053 | 4650 | 0.013 |
| 4.5063 | 4651 | 0.0154 |
| 4.5073 | 4652 | 0.0115 |
| 4.5082 | 4653 | 0.0112 |
| 4.5092 | 4654 | 0.0099 |
| 4.5102 | 4655 | 0.0081 |
| 4.5111 | 4656 | 0.0107 |
| 4.5121 | 4657 | 0.0121 |
| 4.5131 | 4658 | 0.0091 |
| 4.5140 | 4659 | 0.0101 |
| 4.5150 | 4660 | 0.0102 |
| 4.5160 | 4661 | 0.0176 |
| 4.5169 | 4662 | 0.0111 |
| 4.5179 | 4663 | 0.0104 |
| 4.5189 | 4664 | 0.0067 |
| 4.5198 | 4665 | 0.0207 |
| 4.5208 | 4666 | 0.0094 |
| 4.5218 | 4667 | 0.0062 |
| 4.5227 | 4668 | 0.0092 |
| 4.5237 | 4669 | 0.0173 |
| 4.5247 | 4670 | 0.0243 |
| 4.5257 | 4671 | 0.0123 |
| 4.5266 | 4672 | 0.0097 |
| 4.5276 | 4673 | 0.0144 |
| 4.5286 | 4674 | 0.0117 |
| 4.5295 | 4675 | 0.0183 |
| 4.5305 | 4676 | 0.0103 |
| 4.5315 | 4677 | 0.0274 |
| 4.5324 | 4678 | 0.0101 |
| 4.5334 | 4679 | 0.0111 |
| 4.5344 | 4680 | 0.0096 |
| 4.5353 | 4681 | 0.0159 |
| 4.5363 | 4682 | 0.0212 |
| 4.5373 | 4683 | 0.0131 |
| 4.5382 | 4684 | 0.012 |
| 4.5392 | 4685 | 0.0185 |
| 4.5402 | 4686 | 0.0177 |
| 4.5411 | 4687 | 0.0083 |
| 4.5421 | 4688 | 0.0102 |
| 4.5431 | 4689 | 0.0178 |
| 4.5440 | 4690 | 0.0203 |
| 4.5450 | 4691 | 0.0144 |
| 4.5460 | 4692 | 0.014 |
| 4.5470 | 4693 | 0.0161 |
| 4.5479 | 4694 | 0.0108 |
| 4.5489 | 4695 | 0.0145 |
| 4.5499 | 4696 | 0.0108 |
| 4.5508 | 4697 | 0.0179 |
| 4.5518 | 4698 | 0.0106 |
| 4.5528 | 4699 | 0.0139 |
| 4.5537 | 4700 | 0.0154 |
| 4.5547 | 4701 | 0.0148 |
| 4.5557 | 4702 | 0.0161 |
| 4.5566 | 4703 | 0.0133 |
| 4.5576 | 4704 | 0.0088 |
| 4.5586 | 4705 | 0.0147 |
| 4.5595 | 4706 | 0.0114 |
| 4.5605 | 4707 | 0.0096 |
| 4.5615 | 4708 | 0.0107 |
| 4.5624 | 4709 | 0.0176 |
| 4.5634 | 4710 | 0.0139 |
| 4.5644 | 4711 | 0.011 |
| 4.5653 | 4712 | 0.0185 |
| 4.5663 | 4713 | 0.0208 |
| 4.5673 | 4714 | 0.012 |
| 4.5682 | 4715 | 0.014 |
| 4.5692 | 4716 | 0.0256 |
| 4.5702 | 4717 | 0.0238 |
| 4.5712 | 4718 | 0.0154 |
| 4.5721 | 4719 | 0.008 |
| 4.5731 | 4720 | 0.0138 |
| 4.5741 | 4721 | 0.0118 |
| 4.5750 | 4722 | 0.0056 |
| 4.5760 | 4723 | 0.0094 |
| 4.5770 | 4724 | 0.0073 |
| 4.5779 | 4725 | 0.0159 |
| 4.5789 | 4726 | 0.0141 |
| 4.5799 | 4727 | 0.0126 |
| 4.5808 | 4728 | 0.0115 |
| 4.5818 | 4729 | 0.0177 |
| 4.5828 | 4730 | 0.0202 |
| 4.5837 | 4731 | 0.0119 |
| 4.5847 | 4732 | 0.0197 |
| 4.5857 | 4733 | 0.0209 |
| 4.5866 | 4734 | 0.0135 |
| 4.5876 | 4735 | 0.0139 |
| 4.5886 | 4736 | 0.0096 |
| 4.5895 | 4737 | 0.012 |
| 4.5905 | 4738 | 0.0141 |
| 4.5915 | 4739 | 0.0098 |
| 4.5924 | 4740 | 0.0094 |
| 4.5934 | 4741 | 0.0178 |
| 4.5944 | 4742 | 0.0155 |
| 4.5954 | 4743 | 0.0131 |
| 4.5963 | 4744 | 0.0106 |
| 4.5973 | 4745 | 0.0172 |
| 4.5983 | 4746 | 0.0184 |
| 4.5992 | 4747 | 0.0101 |
| 4.6002 | 4748 | 0.0077 |
| 4.6012 | 4749 | 0.0239 |
| 4.6021 | 4750 | 0.0221 |
| 4.6031 | 4751 | 0.0137 |
| 4.6041 | 4752 | 0.0204 |
| 4.6050 | 4753 | 0.0161 |
| 4.6060 | 4754 | 0.0141 |
| 4.6070 | 4755 | 0.0126 |
| 4.6079 | 4756 | 0.0133 |
| 4.6089 | 4757 | 0.0096 |
| 4.6099 | 4758 | 0.0095 |
| 4.6108 | 4759 | 0.0159 |
| 4.6118 | 4760 | 0.0148 |
| 4.6128 | 4761 | 0.0192 |
| 4.6137 | 4762 | 0.0198 |
| 4.6147 | 4763 | 0.0233 |
| 4.6157 | 4764 | 0.0174 |
| 4.6167 | 4765 | 0.0238 |
| 4.6176 | 4766 | 0.0124 |
| 4.6186 | 4767 | 0.0216 |
| 4.6196 | 4768 | 0.0087 |
| 4.6205 | 4769 | 0.0176 |
| 4.6215 | 4770 | 0.0288 |
| 4.6225 | 4771 | 0.0069 |
| 4.6234 | 4772 | 0.0115 |
| 4.6244 | 4773 | 0.0205 |
| 4.6254 | 4774 | 0.0211 |
| 4.6263 | 4775 | 0.0112 |
| 4.6273 | 4776 | 0.0132 |
| 4.6283 | 4777 | 0.0088 |
| 4.6292 | 4778 | 0.0073 |
| 4.6302 | 4779 | 0.0144 |
| 4.6312 | 4780 | 0.0146 |
| 4.6321 | 4781 | 0.0119 |
| 4.6331 | 4782 | 0.0116 |
| 4.6341 | 4783 | 0.0098 |
| 4.6350 | 4784 | 0.0075 |
| 4.6360 | 4785 | 0.0161 |
| 4.6370 | 4786 | 0.0131 |
| 4.6379 | 4787 | 0.0094 |
| 4.6389 | 4788 | 0.0074 |
| 4.6399 | 4789 | 0.0197 |
| 4.6409 | 4790 | 0.0126 |
| 4.6418 | 4791 | 0.0134 |
| 4.6428 | 4792 | 0.01 |
| 4.6438 | 4793 | 0.0108 |
| 4.6447 | 4794 | 0.013 |
| 4.6457 | 4795 | 0.0112 |
| 4.6467 | 4796 | 0.012 |
| 4.6476 | 4797 | 0.0203 |
| 4.6486 | 4798 | 0.026 |
| 4.6496 | 4799 | 0.008 |
| 4.6505 | 4800 | 0.0151 |
| 4.6515 | 4801 | 0.0205 |
| 4.6525 | 4802 | 0.0132 |
| 4.6534 | 4803 | 0.0133 |
| 4.6544 | 4804 | 0.0137 |
| 4.6554 | 4805 | 0.0246 |
| 4.6563 | 4806 | 0.0136 |
| 4.6573 | 4807 | 0.0098 |
| 4.6583 | 4808 | 0.0142 |
| 4.6592 | 4809 | 0.0129 |
| 4.6602 | 4810 | 0.0114 |
| 4.6612 | 4811 | 0.0113 |
| 4.6621 | 4812 | 0.0078 |
| 4.6631 | 4813 | 0.0185 |
| 4.6641 | 4814 | 0.0185 |
| 4.6651 | 4815 | 0.0105 |
| 4.6660 | 4816 | 0.0152 |
| 4.6670 | 4817 | 0.0189 |
| 4.6680 | 4818 | 0.024 |
| 4.6689 | 4819 | 0.0099 |
| 4.6699 | 4820 | 0.012 |
| 4.6709 | 4821 | 0.0103 |
| 4.6718 | 4822 | 0.0148 |
| 4.6728 | 4823 | 0.0149 |
| 4.6738 | 4824 | 0.0163 |
| 4.6747 | 4825 | 0.0116 |
| 4.6757 | 4826 | 0.0144 |
| 4.6767 | 4827 | 0.0065 |
| 4.6776 | 4828 | 0.0109 |
| 4.6786 | 4829 | 0.0108 |
| 4.6796 | 4830 | 0.01 |
| 4.6805 | 4831 | 0.0108 |
| 4.6815 | 4832 | 0.0116 |
| 4.6825 | 4833 | 0.0224 |
| 4.6834 | 4834 | 0.0181 |
| 4.6844 | 4835 | 0.015 |
| 4.6854 | 4836 | 0.0159 |
| 4.6864 | 4837 | 0.0209 |
| 4.6873 | 4838 | 0.0172 |
| 4.6883 | 4839 | 0.0095 |
| 4.6893 | 4840 | 0.0118 |
| 4.6902 | 4841 | 0.032 |
| 4.6912 | 4842 | 0.0106 |
| 4.6922 | 4843 | 0.0089 |
| 4.6931 | 4844 | 0.015 |
| 4.6941 | 4845 | 0.0126 |
| 4.6951 | 4846 | 0.0201 |
| 4.6960 | 4847 | 0.0103 |
| 4.6970 | 4848 | 0.0226 |
| 4.6980 | 4849 | 0.0112 |
| 4.6989 | 4850 | 0.0102 |
| 4.6999 | 4851 | 0.008 |
| 4.7009 | 4852 | 0.0134 |
| 4.7018 | 4853 | 0.0163 |
| 4.7028 | 4854 | 0.012 |
| 4.7038 | 4855 | 0.0094 |
| 4.7047 | 4856 | 0.0259 |
| 4.7057 | 4857 | 0.0273 |
| 4.7067 | 4858 | 0.0145 |
| 4.7076 | 4859 | 0.0124 |
| 4.7086 | 4860 | 0.0261 |
| 4.7096 | 4861 | 0.0212 |
| 4.7106 | 4862 | 0.0268 |
| 4.7115 | 4863 | 0.0158 |
| 4.7125 | 4864 | 0.0181 |
| 4.7135 | 4865 | 0.0246 |
| 4.7144 | 4866 | 0.0219 |
| 4.7154 | 4867 | 0.0129 |
| 4.7164 | 4868 | 0.0124 |
| 4.7173 | 4869 | 0.0395 |
| 4.7183 | 4870 | 0.0195 |
| 4.7193 | 4871 | 0.0206 |
| 4.7202 | 4872 | 0.0148 |
| 4.7212 | 4873 | 0.0152 |
| 4.7222 | 4874 | 0.0166 |
| 4.7231 | 4875 | 0.0096 |
| 4.7241 | 4876 | 0.0171 |
| 4.7251 | 4877 | 0.0183 |
| 4.7260 | 4878 | 0.0186 |
| 4.7270 | 4879 | 0.0114 |
| 4.7280 | 4880 | 0.011 |
| 4.7289 | 4881 | 0.0175 |
| 4.7299 | 4882 | 0.0134 |
| 4.7309 | 4883 | 0.0081 |
| 4.7318 | 4884 | 0.0076 |
| 4.7328 | 4885 | 0.0191 |
| 4.7338 | 4886 | 0.0129 |
| 4.7348 | 4887 | 0.0235 |
| 4.7357 | 4888 | 0.0116 |
| 4.7367 | 4889 | 0.0177 |
| 4.7377 | 4890 | 0.0078 |
| 4.7386 | 4891 | 0.0078 |
| 4.7396 | 4892 | 0.0196 |
| 4.7406 | 4893 | 0.017 |
| 4.7415 | 4894 | 0.017 |
| 4.7425 | 4895 | 0.0136 |
| 4.7435 | 4896 | 0.0129 |
| 4.7444 | 4897 | 0.0121 |
| 4.7454 | 4898 | 0.0174 |
| 4.7464 | 4899 | 0.0181 |
| 4.7473 | 4900 | 0.0102 |
| 4.7483 | 4901 | 0.0222 |
| 4.7493 | 4902 | 0.0163 |
| 4.7502 | 4903 | 0.0137 |
| 4.7512 | 4904 | 0.0187 |
| 4.7522 | 4905 | 0.0102 |
| 4.7531 | 4906 | 0.0084 |
| 4.7541 | 4907 | 0.0115 |
| 4.7551 | 4908 | 0.0106 |
| 4.7561 | 4909 | 0.018 |
| 4.7570 | 4910 | 0.0155 |
| 4.7580 | 4911 | 0.0149 |
| 4.7590 | 4912 | 0.0135 |
| 4.7599 | 4913 | 0.0145 |
| 4.7609 | 4914 | 0.0131 |
| 4.7619 | 4915 | 0.0116 |
| 4.7628 | 4916 | 0.0154 |
| 4.7638 | 4917 | 0.018 |
| 4.7648 | 4918 | 0.0142 |
| 4.7657 | 4919 | 0.0139 |
| 4.7667 | 4920 | 0.011 |
| 4.7677 | 4921 | 0.0204 |
| 4.7686 | 4922 | 0.0117 |
| 4.7696 | 4923 | 0.0147 |
| 4.7706 | 4924 | 0.0116 |
| 4.7715 | 4925 | 0.027 |
| 4.7725 | 4926 | 0.014 |
| 4.7735 | 4927 | 0.0092 |
| 4.7744 | 4928 | 0.026 |
| 4.7754 | 4929 | 0.014 |
| 4.7764 | 4930 | 0.0095 |
| 4.7773 | 4931 | 0.0194 |
| 4.7783 | 4932 | 0.0134 |
| 4.7793 | 4933 | 0.0163 |
| 4.7803 | 4934 | 0.0094 |
| 4.7812 | 4935 | 0.008 |
| 4.7822 | 4936 | 0.0196 |
| 4.7832 | 4937 | 0.0231 |
| 4.7841 | 4938 | 0.015 |
| 4.7851 | 4939 | 0.0151 |
| 4.7861 | 4940 | 0.0149 |
| 4.7870 | 4941 | 0.012 |
| 4.7880 | 4942 | 0.0315 |
| 4.7890 | 4943 | 0.0132 |
| 4.7899 | 4944 | 0.0101 |
| 4.7909 | 4945 | 0.0107 |
| 4.7919 | 4946 | 0.0099 |
| 4.7928 | 4947 | 0.0076 |
| 4.7938 | 4948 | 0.0107 |
| 4.7948 | 4949 | 0.0216 |
| 4.7957 | 4950 | 0.0211 |
| 4.7967 | 4951 | 0.0127 |
| 4.7977 | 4952 | 0.0118 |
| 4.7986 | 4953 | 0.0162 |
| 4.7996 | 4954 | 0.0113 |
| 4.8006 | 4955 | 0.0085 |
| 4.8015 | 4956 | 0.0113 |
| 4.8025 | 4957 | 0.0217 |
| 4.8035 | 4958 | 0.012 |
| 4.8045 | 4959 | 0.0106 |
| 4.8054 | 4960 | 0.019 |
| 4.8064 | 4961 | 0.0275 |
| 4.8074 | 4962 | 0.0142 |
| 4.8083 | 4963 | 0.0155 |
| 4.8093 | 4964 | 0.0077 |
| 4.8103 | 4965 | 0.0245 |
| 4.8112 | 4966 | 0.0232 |
| 4.8122 | 4967 | 0.0107 |
| 4.8132 | 4968 | 0.0144 |
| 4.8141 | 4969 | 0.0207 |
| 4.8151 | 4970 | 0.0173 |
| 4.8161 | 4971 | 0.0145 |
| 4.8170 | 4972 | 0.0188 |
| 4.8180 | 4973 | 0.021 |
| 4.8190 | 4974 | 0.0219 |
| 4.8199 | 4975 | 0.0098 |
| 4.8209 | 4976 | 0.0101 |
| 4.8219 | 4977 | 0.0111 |
| 4.8228 | 4978 | 0.0091 |
| 4.8238 | 4979 | 0.0191 |
| 4.8248 | 4980 | 0.0086 |
| 4.8258 | 4981 | 0.0136 |
| 4.8267 | 4982 | 0.0132 |
| 4.8277 | 4983 | 0.0109 |
| 4.8287 | 4984 | 0.018 |
| 4.8296 | 4985 | 0.0222 |
| 4.8306 | 4986 | 0.0261 |
| 4.8316 | 4987 | 0.0139 |
| 4.8325 | 4988 | 0.0152 |
| 4.8335 | 4989 | 0.0245 |
| 4.8345 | 4990 | 0.0133 |
| 4.8354 | 4991 | 0.0137 |
| 4.8364 | 4992 | 0.0136 |
| 4.8374 | 4993 | 0.0154 |
| 4.8383 | 4994 | 0.0121 |
| 4.8393 | 4995 | 0.011 |
| 4.8403 | 4996 | 0.0111 |
| 4.8412 | 4997 | 0.019 |
| 4.8422 | 4998 | 0.0319 |
| 4.8432 | 4999 | 0.0128 |
| 4.8441 | 5000 | 0.0168 |
| 4.8451 | 5001 | 0.0181 |
| 4.8461 | 5002 | 0.0199 |
| 4.8470 | 5003 | 0.0093 |
| 4.8480 | 5004 | 0.0185 |
| 4.8490 | 5005 | 0.016 |
| 4.8500 | 5006 | 0.0204 |
| 4.8509 | 5007 | 0.0169 |
| 4.8519 | 5008 | 0.0152 |
| 4.8529 | 5009 | 0.015 |
| 4.8538 | 5010 | 0.0116 |
| 4.8548 | 5011 | 0.0108 |
| 4.8558 | 5012 | 0.0096 |
| 4.8567 | 5013 | 0.016 |
| 4.8577 | 5014 | 0.0172 |
| 4.8587 | 5015 | 0.0184 |
| 4.8596 | 5016 | 0.0096 |
| 4.8606 | 5017 | 0.0133 |
| 4.8616 | 5018 | 0.014 |
| 4.8625 | 5019 | 0.0082 |
| 4.8635 | 5020 | 0.0173 |
| 4.8645 | 5021 | 0.0201 |
| 4.8654 | 5022 | 0.0136 |
| 4.8664 | 5023 | 0.0119 |
| 4.8674 | 5024 | 0.0146 |
| 4.8683 | 5025 | 0.0144 |
| 4.8693 | 5026 | 0.0126 |
| 4.8703 | 5027 | 0.0149 |
| 4.8712 | 5028 | 0.0161 |
| 4.8722 | 5029 | 0.0169 |
| 4.8732 | 5030 | 0.0225 |
| 4.8742 | 5031 | 0.013 |
| 4.8751 | 5032 | 0.0217 |
| 4.8761 | 5033 | 0.023 |
| 4.8771 | 5034 | 0.016 |
| 4.8780 | 5035 | 0.0119 |
| 4.8790 | 5036 | 0.0093 |
| 4.8800 | 5037 | 0.0101 |
| 4.8809 | 5038 | 0.0156 |
| 4.8819 | 5039 | 0.0133 |
| 4.8829 | 5040 | 0.0123 |
| 4.8838 | 5041 | 0.0279 |
| 4.8848 | 5042 | 0.0113 |
| 4.8858 | 5043 | 0.0155 |
| 4.8867 | 5044 | 0.0091 |
| 4.8877 | 5045 | 0.0274 |
| 4.8887 | 5046 | 0.0205 |
| 4.8896 | 5047 | 0.0111 |
| 4.8906 | 5048 | 0.0138 |
| 4.8916 | 5049 | 0.0173 |
| 4.8925 | 5050 | 0.0194 |
| 4.8935 | 5051 | 0.0133 |
| 4.8945 | 5052 | 0.0118 |
| 4.8955 | 5053 | 0.0231 |
| 4.8964 | 5054 | 0.0167 |
| 4.8974 | 5055 | 0.0152 |
| 4.8984 | 5056 | 0.0144 |
| 4.8993 | 5057 | 0.0159 |
| 4.9003 | 5058 | 0.0232 |
| 4.9013 | 5059 | 0.012 |
| 4.9022 | 5060 | 0.0075 |
| 4.9032 | 5061 | 0.0192 |
| 4.9042 | 5062 | 0.0164 |
| 4.9051 | 5063 | 0.0141 |
| 4.9061 | 5064 | 0.0099 |
| 4.9071 | 5065 | 0.0296 |
| 4.9080 | 5066 | 0.0143 |
| 4.9090 | 5067 | 0.0222 |
| 4.9100 | 5068 | 0.017 |
| 4.9109 | 5069 | 0.0197 |
| 4.9119 | 5070 | 0.0296 |
| 4.9129 | 5071 | 0.0121 |
| 4.9138 | 5072 | 0.008 |
| 4.9148 | 5073 | 0.0099 |
| 4.9158 | 5074 | 0.0134 |
| 4.9167 | 5075 | 0.0098 |
| 4.9177 | 5076 | 0.0061 |
| 4.9187 | 5077 | 0.0153 |
| 4.9197 | 5078 | 0.0091 |
| 4.9206 | 5079 | 0.0106 |
| 4.9216 | 5080 | 0.0088 |
| 4.9226 | 5081 | 0.0232 |
| 4.9235 | 5082 | 0.0176 |
| 4.9245 | 5083 | 0.0115 |
| 4.9255 | 5084 | 0.0142 |
| 4.9264 | 5085 | 0.0142 |
| 4.9274 | 5086 | 0.0127 |
| 4.9284 | 5087 | 0.0102 |
| 4.9293 | 5088 | 0.0167 |
| 4.9303 | 5089 | 0.0176 |
| 4.9313 | 5090 | 0.0118 |
| 4.9322 | 5091 | 0.016 |
| 4.9332 | 5092 | 0.0178 |
| 4.9342 | 5093 | 0.0228 |
| 4.9351 | 5094 | 0.0378 |
| 4.9361 | 5095 | 0.012 |
| 4.9371 | 5096 | 0.0108 |
| 4.9380 | 5097 | 0.0264 |
| 4.9390 | 5098 | 0.0152 |
| 4.9400 | 5099 | 0.0199 |
| 4.9409 | 5100 | 0.0139 |
| 4.9419 | 5101 | 0.0184 |
| 4.9429 | 5102 | 0.0191 |
| 4.9439 | 5103 | 0.0114 |
| 4.9448 | 5104 | 0.0085 |
| 4.9458 | 5105 | 0.0178 |
| 4.9468 | 5106 | 0.0197 |
| 4.9477 | 5107 | 0.0194 |
| 4.9487 | 5108 | 0.0164 |
| 4.9497 | 5109 | 0.0104 |
| 4.9506 | 5110 | 0.0072 |
| 4.9516 | 5111 | 0.0108 |
| 4.9526 | 5112 | 0.0113 |
| 4.9535 | 5113 | 0.0209 |
| 4.9545 | 5114 | 0.0167 |
| 4.9555 | 5115 | 0.0115 |
| 4.9564 | 5116 | 0.0132 |
| 4.9574 | 5117 | 0.0453 |
| 4.9584 | 5118 | 0.0234 |
| 4.9593 | 5119 | 0.0195 |
| 4.9603 | 5120 | 0.0249 |
| 4.9613 | 5121 | 0.0148 |
| 4.9622 | 5122 | 0.0082 |
| 4.9632 | 5123 | 0.0122 |
| 4.9642 | 5124 | 0.0247 |
| 4.9652 | 5125 | 0.0273 |
| 4.9661 | 5126 | 0.0188 |
| 4.9671 | 5127 | 0.008 |
| 4.9681 | 5128 | 0.0107 |
| 4.9690 | 5129 | 0.025 |
| 4.9700 | 5130 | 0.0183 |
| 4.9710 | 5131 | 0.0091 |
| 4.9719 | 5132 | 0.01 |
| 4.9729 | 5133 | 0.0201 |
| 4.9739 | 5134 | 0.0197 |
| 4.9748 | 5135 | 0.0115 |
| 4.9758 | 5136 | 0.0146 |
| 4.9768 | 5137 | 0.0197 |
| 4.9777 | 5138 | 0.0145 |
| 4.9787 | 5139 | 0.012 |
| 4.9797 | 5140 | 0.0205 |
| 4.9806 | 5141 | 0.0142 |
| 4.9816 | 5142 | 0.0232 |
| 4.9826 | 5143 | 0.016 |
| 4.9835 | 5144 | 0.0189 |
| 4.9845 | 5145 | 0.0131 |
| 4.9855 | 5146 | 0.0187 |
| 4.9864 | 5147 | 0.0081 |
| 4.9874 | 5148 | 0.0132 |
| 4.9884 | 5149 | 0.0132 |
| 4.9894 | 5150 | 0.0154 |
| 4.9903 | 5151 | 0.0096 |
| 4.9913 | 5152 | 0.0188 |
| 4.9923 | 5153 | 0.0184 |
| 4.9932 | 5154 | 0.0222 |
| 4.9942 | 5155 | 0.0327 |
| 4.9952 | 5156 | 0.0127 |
| 4.9961 | 5157 | 0.02 |
| 4.9971 | 5158 | 0.0126 |
| 4.9981 | 5159 | 0.0253 |
| 4.9990 | 5160 | 0.0334 |
</details>
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
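For context, here is a minimal Sentence Transformers sketch of how the loss cited above is typically wired up; the base model, example pairs, and hyperparameters are illustrative only and are not this card's actual training setup:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative (anchor, positive) pairs; with this loss, the other
# positives in the batch serve as in-batch negatives.
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
    InputExample(texts=["How do plants make food?", "Plants produce food through photosynthesis."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder base model
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```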
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF
|
mradermacher
| 2024-11-13T21:41:10Z | 10 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:migtissera/Tess-v2.5-Gemma-2-27B-alpha",
"base_model:quantized:migtissera/Tess-v2.5-Gemma-2-27B-alpha",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-13T17:13:56Z |
---
base_model: migtissera/Tess-v2.5-Gemma-2-27B-alpha
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/migtissera/Tess-v2.5-Gemma-2-27B-alpha
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
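As an alternative to a CLI setup, the sketch below pulls one of the quants listed in the table further down straight from this repo via the llama-cpp-python bindings; the chosen quant file, context size, and prompt are illustrative:
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Illustrative pick: the Q4_K_M imatrix quant from the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF",
    filename="Tess-v2.5-Gemma-2-27B-alpha.i1-Q4_K_M.gguf",
    n_ctx=2048,       # context window; raise if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)
out = llm("Explain imatrix quantization briefly.", max_tokens=64)
print(out["choices"][0]["text"])
```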
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ1_S.gguf) | i1-IQ1_S | 6.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ1_M.gguf) | i1-IQ1_M | 6.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ2_S.gguf) | i1-IQ2_S | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ2_M.gguf) | i1-IQ2_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q2_K.gguf) | i1-Q2_K | 10.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q4_0.gguf) | i1-Q4_0 | 15.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q5_K_S.gguf) | i1-Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Gemma-2-27B-alpha-i1-GGUF/resolve/main/Tess-v2.5-Gemma-2-27B-alpha.i1-Q6_K.gguf) | i1-Q6_K | 22.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jschoormans/detr-finetuned-ES-Las-v0
|
jschoormans
| 2024-11-13T21:31:36Z | 190 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-11-13T09:31:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF
|
zhangfeng026
| 2024-11-13T21:22:25Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-13T21:20:47Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-32b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zhangfeng026/Qwen2.5-Coder-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-32b-instruct-q4_k_m.gguf -c 2048
```
|
jacobhoffmann/TestGen_v2.2-Llama-3.1-8B-lr3e-05_epochs1
|
jacobhoffmann
| 2024-11-13T20:54:54Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T20:49:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shuyuej/Ministral-8B-Instruct-2410-GPTQ
|
shuyuej
| 2024-11-13T20:54:17Z | 69 | 1 | null |
[
"safetensors",
"mistral",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | 2024-11-13T20:45:42Z |
---
license: apache-2.0
---
# The Quantized Ministral 8B Instruct 2410 Model
Original Base Model: `mistralai/Ministral-8B-Instruct-2410`.<br>
Link: [https://huggingface.co/mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410)
## Quantization Configuration
```
"quantization_config": {
"bits": 4,
"checkpoint_format": "gptq",
"damp_percent": 0.01,
"desc_act": true,
"group_size": 128,
"model_file_base_name": null,
"model_name_or_path": null,
"quant_method": "gptq",
"static_groups": false,
"sym": true,
"true_sequential": true
},
```
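A minimal sketch of loading this checkpoint with 🤗 Transformers; it assumes a GPTQ backend (e.g., optimum/auto-gptq) is installed, and the prompt and generation settings are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuyuej/Ministral-8B-Instruct-2410-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ settings above are picked up automatically from the repo's config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```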
## Source Code
Source code: [https://github.com/vkola-lab/medpodgpt/tree/main/quantization](https://github.com/vkola-lab/medpodgpt/tree/main/quantization).
|
ClaudioItaly/Qwen-Density
|
ClaudioItaly
| 2024-11-13T20:48:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T20:44:12Z |
---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-Coder-7B-Instruct
parameters:
density: 1.3
weight: 1.5
- model: Qwen/Qwen2.5-Coder-7B-Instruct
parameters:
density: 1.3
weight: 1.5
- model: Qwen/Qwen2.5-Coder-7B-Instruct
parameters:
density: 1.3
weight: 1.5
merge_method: ties
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
parameters:
normalize: true
int8_mask: false
dtype: bfloat16
tokenizer_source: union
```
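This configuration is consumed by mergekit's `mergekit-yaml` CLI; below is a minimal sketch of invoking it from Python, assuming mergekit is installed and the YAML above is saved as `config.yaml` (the output directory name is illustrative):
```python
import subprocess

# Assumes `pip install mergekit` and that the YAML above is saved as config.yaml.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model", "--cuda"],
    check=True,  # raise if the merge fails
)
```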
|
mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF
|
mradermacher
| 2024-11-13T20:48:10Z | 73 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"finetune",
"synthetic data",
"text-generation-inference",
"conversational",
"en",
"dataset:ajibawa-2023/OpenHermes-2.5-Code-290k",
"dataset:teknium/OpenHermes-2.5",
"base_model:ajibawa-2023/OpenHermes-2.5-Code-290k-13B",
"base_model:quantized:ajibawa-2023/OpenHermes-2.5-Code-290k-13B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T16:42:41Z |
---
base_model: ajibawa-2023/OpenHermes-2.5-Code-290k-13B
datasets:
- ajibawa-2023/OpenHermes-2.5-Code-290k
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- finetune
- synthetic data
- text-generation-inference
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ajibawa-2023/OpenHermes-2.5-Code-290k-13B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF
|
mradermacher
| 2024-11-13T20:48:10Z | 179 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"finetune",
"synthetic data",
"text-generation-inference",
"conversational",
"en",
"dataset:ajibawa-2023/OpenHermes-2.5-Code-290k",
"dataset:teknium/OpenHermes-2.5",
"base_model:ajibawa-2023/OpenHermes-2.5-Code-290k-13B",
"base_model:quantized:ajibawa-2023/OpenHermes-2.5-Code-290k-13B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-13T18:42:57Z |
---
base_model: ajibawa-2023/OpenHermes-2.5-Code-290k-13B
datasets:
- ajibawa-2023/OpenHermes-2.5-Code-290k
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- finetune
- synthetic data
- text-generation-inference
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ajibawa-2023/OpenHermes-2.5-Code-290k-13B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Code-290k-13B-i1-GGUF/resolve/main/OpenHermes-2.5-Code-290k-13B.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
braindao/iq-code-evmind-0.5b-instruct-v0.2411.0-150
|
braindao
| 2024-11-13T20:44:19Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T20:42:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
braindao/iq-code-evmind-0.5b-instruct-v0.2411.0-50
|
braindao
| 2024-11-13T20:41:14Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T20:40:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
martinsinnona/plotqa_simple_6
|
martinsinnona
| 2024-11-13T20:40:40Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-13T19:51:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProdeusUnity/Prismatic-12b
|
ProdeusUnity
| 2024-11-13T20:34:11Z | 23 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T12:34:31Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Prismatic 12b v0.0
*The sparkling courage I longed for, what I got is small... My tears are surely the prism of tomorrow... Say "Hello!" to the ideal future, let's go see them~*
Listen to the song on YouTube: https://www.youtube.com/watch?v=v3I6EVlyPx4
A one-off merge for a friend, though it came out rather well; I like it, so give it a try.
Merged models:
- mistralai/Mistral-Nemo-Base-2407
- inflatebot/MN-12b-Mag-Mell-R1
- nbeerbower/Mistral-Nemo-Prism-12B-v5
License for this model: Apache 2.0
Format: Mistral Tekken or ChatML
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the TIES merge method, with mistralai_Mistral-Nemo-Base-2407 as the base.
### Models Merged
The following models were included in the merge:
- /inflatebot_MN-12B-Mag-Mell-R1
- /nbeerbower_Mistral-Nemo-Prism-12B-v5
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /inflatebot_MN-12B-Mag-Mell-R1
    parameters:
      weight: 0.3
      density: 0.5
  - model: /nbeerbower_Mistral-Nemo-Prism-12B-v5
    parameters:
      weight: 0.4
      density: 0.75
base_model: /mistralai_Mistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  normalize: true
  lambda: 1
merge_method: ties
dtype: bfloat16
```
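The card does not say how the config was run, but as a hedged sketch: a config like the one above is typically saved to a file and passed to mergekit's `mergekit-yaml` entry point. The file and output names below are hypothetical, and the local model paths in the YAML must exist on your machine:
```python
import subprocess

# Hypothetical file/output names; assumes mergekit is installed (pip install mergekit)
# and that the model paths referenced in merge-config.yaml exist locally.
# Drop "--cuda" on CPU-only machines.
subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./Prismatic-12b", "--cuda"],
    check=True,
)
```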
|
jimmymeister/whisper-large-v3-turbo-german-ct2
|
jimmymeister
| 2024-11-13T20:26:18Z | 41 | 2 |
transformers
|
[
"transformers",
"automatic-speech-recognition",
"de",
"dataset:flozi00/asr-german-mixed",
"dataset:flozi00/asr-german-mixed-evals",
"arxiv:2409.03137",
"base_model:primeline/whisper-large-v3-german",
"base_model:finetune:primeline/whisper-large-v3-german",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-13T18:17:35Z |
---
license: apache-2.0
language:
- de
library_name: transformers
pipeline_tag: automatic-speech-recognition
model-index:
  - name: whisper-large-v3-turbo-german by Florian Zimmermeister @primeLine
    results:
      - task:
          type: automatic-speech-recognition
          name: Speech Recognition
        dataset:
          name: German ASR Data-Mix
          type: flozi00/asr-german-mixed
        metrics:
          - type: wer
            value: 2.628 %
            name: Test WER
datasets:
- flozi00/asr-german-mixed
- flozi00/asr-german-mixed-evals
base_model:
- primeline/whisper-large-v3-german
---
## Important note:
This model is just a CTranslate2 conversion, for use in CTranslate2-compatible frameworks such as faster-whisper.
For any questions about the fine-tuning method or the dataset used, please refer to the original repo [primeline/whisper-large-v3-turbo-german](https://huggingface.co/primeline/whisper-large-v3-turbo-german).
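Because this repository holds the CTranslate2 weights, the most direct way to run it is through [faster-whisper](https://github.com/SYSTRAN/faster-whisper). A minimal sketch follows (the audio path is illustrative; the transformers example further down refers to the original, non-CT2 weights):
```python
from faster_whisper import WhisperModel

# Load the CTranslate2 conversion from this repo (downloaded automatically from the Hub).
model = WhisperModel(
    "jimmymeister/whisper-large-v3-turbo-german-ct2",
    device="cuda",           # or "cpu"
    compute_type="float16",  # e.g. "int8" for CPU-only machines
)

# Transcribe a local German audio file.
segments, info = model.transcribe("audio.mp3", beam_size=5, language="de")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```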
### Summary
This model card provides information about a model based on Whisper Large v3 that has been fine-tuned for speech recognition in German. Whisper is a powerful speech recognition platform developed by OpenAI. This model has been specially optimized for processing and recognizing German speech.
### Applications
This model can be used in various application areas, including:
- Transcription of spoken German language
- Voice commands and voice control
- Automatic subtitling for German videos
- Voice-based search queries in German
- Dictation functions in word processing programs
## Model family
| Model | Parameters | link |
|----------------------------------|------------|--------------------------------------------------------------|
| Whisper large v3 german | 1.54B | [link](https://huggingface.co/primeline/whisper-large-v3-german) |
| Whisper large v3 turbo german | 809M | [link](https://huggingface.co/primeline/whisper-large-v3-turbo-german) |
| Distil-whisper large v3 german | 756M | [link](https://huggingface.co/primeline/distil-whisper-large-v3-german) |
| tiny whisper | 37.8M | [link](https://huggingface.co/primeline/whisper-tiny-german) |
## Evaluations - Word error rate
| Dataset | openai-whisper-large-v3-turbo | openai-whisper-large-v3 | primeline-whisper-large-v3-german | nyrahealth-CrisperWhisper (large)| primeline-whisper-large-v3-turbo-german |
|-------------------------------------|-------------------------------|-------------------------|-----------------------------------|---------------------------|-----------------------------------------|
| Tuda-De | 8.300 | 7.884 | 7.711 | **5.148** | 6.441 |
| common_voice_19_0 | 3.849 | 3.484 | 3.215 | **1.927** | 3.200 |
| multilingual librispeech | 3.203 | 2.832 | 2.129 | 2.815 | **2.070** |
| All | 3.649 | 3.279 | 2.734 | 2.662 | **2.628** |
The data and code for evaluations are available [here](https://huggingface.co/datasets/flozi00/asr-german-mixed-evals)
### Training data
The training data for this model includes a large amount of spoken German from various sources. The data was carefully selected and processed to optimize recognition performance.
### Training process
The training of the model was performed with the following hyperparameters:
- Batch size: 12288
- Epochs: 3
- Learning rate: 1e-6
- Data augmentation: No
- Optimizer: [Ademamix](https://arxiv.org/abs/2409.03137)
### How to use
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "primeline/whisper-large-v3-turbo-german"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,
    batch_size=16,
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```
## [About us](https://primeline-ai.com/en/)
[](https://primeline-ai.com/en/)
Your partner for AI infrastructure in Germany <br>
Experience the powerful AI infrastructure that drives your ambitions in Deep Learning, Machine Learning & High-Performance Computing. Optimized for AI training and inference.
Model author: [Florian Zimmermeister](https://huggingface.co/flozi00)
|
OzgurEnt/Llama-3.1-8B-Instruct-Qutaiba-LinuxGeneral
|
OzgurEnt
| 2024-11-13T20:09:58Z | 55 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-13T19:42:31Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Qutaiba Ashqar
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
desdesmond/trained_with_six_emotions
|
desdesmond
| 2024-11-13T20:00:41Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ykacer/bert-base-cased-imdb-sequence-classification",
"base_model:finetune:ykacer/bert-base-cased-imdb-sequence-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T13:34:31Z |
---
library_name: transformers
license: apache-2.0
base_model: ykacer/bert-base-cased-imdb-sequence-classification
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
  - name: trained_with_six_emotions
    results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_with_six_emotions
This model is a fine-tuned version of [ykacer/bert-base-cased-imdb-sequence-classification](https://huggingface.co/ykacer/bert-base-cased-imdb-sequence-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2450
- Accuracy: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.659 | 1.0 | 800 | 0.2002 | 0.9341 |
| 0.1547 | 2.0 | 1600 | 0.1646 | 0.9322 |
| 0.1207 | 3.0 | 2400 | 0.2115 | 0.9281 |
| 0.0839 | 4.0 | 3200 | 0.2177 | 0.9278 |
| 0.0505 | 5.0 | 4000 | 0.2450 | 0.9284 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ashishkgpian/biobert_icd9_classifier_2
|
ashishkgpian
| 2024-11-13T19:48:28Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T19:47:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
griffio/vit-large-patch16-224-dungeon-geo-morphs-011
|
griffio
| 2024-11-13T19:44:57Z | 191 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224",
"base_model:finetune:google/vit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-13T19:32:52Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
  - name: vit-large-patch16-224-dungeon-geo-morphs-011
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: validation
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-dungeon-geo-morphs-011
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0008 | 6.5714 | 10 | 0.1917 | 0.9444 |
| 0.0 | 13.2857 | 20 | 0.1489 | 0.9444 |
| 0.0 | 19.8571 | 30 | 0.1657 | 0.9444 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
NickyP27/IranianHistoryLlama3.2-1B-Instruct
|
NickyP27
| 2024-11-13T19:40:25Z | 92 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-08T16:09:31Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Flora_7B-i1-GGUF
|
mradermacher
| 2024-11-13T19:33:42Z | 61 | 0 |
transformers
|
[
"transformers",
"gguf",
"finetune",
"en",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"base_model:ResplendentAI/Flora_7B",
"base_model:quantized:ResplendentAI/Flora_7B",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-13T16:44:30Z |
---
base_model: ResplendentAI/Flora_7B
datasets:
- ResplendentAI/Synthetic_Soul_1k
language:
- en
library_name: transformers
license: cc-by-sa-4.0
quantized_by: mradermacher
tags:
- finetune
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ResplendentAI/Flora_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Flora_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
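To load one of these files from Python instead of the llama.cpp CLI, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) follows; the filename is one of the quants listed below and is assumed to have been downloaded locally (e.g. via `huggingface_hub.hf_hub_download`):
```python
from llama_cpp import Llama

# Point at whichever quant you downloaded from the table below.
llm = Llama(model_path="Flora_7B.i1-Q4_K_M.gguf", n_ctx=4096)

output = llm("Write a short poem about flowers.", max_tokens=128, temperature=0.8)
print(output["choices"][0]["text"])
```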
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Flora_7B-i1-GGUF/resolve/main/Flora_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
griffio/vit-large-patch16-224-dungeon-geo-morphs-010
|
griffio
| 2024-11-13T19:31:40Z | 192 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224",
"base_model:finetune:google/vit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-13T19:27:28Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
  - name: vit-large-patch16-224-dungeon-geo-morphs-010
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: validation
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-dungeon-geo-morphs-010
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1230
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9545 | 6.5714 | 10 | 0.3644 | 0.9444 |
| 0.2033 | 13.2857 | 20 | 0.1559 | 0.9444 |
| 0.0472 | 19.8571 | 30 | 0.1230 | 0.9444 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
TeunS/Geert
|
TeunS
| 2024-11-13T19:22:14Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma2",
"text-generation",
"unsloth",
"gemma",
"mlx",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b-it",
"base_model:quantized:unsloth/gemma-2-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:08:35Z |
---
base_model: unsloth/gemma-2-2b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma2
- gemma
- mlx
---
# TeunS/Geert
The Model [TeunS/Geert](https://huggingface.co/TeunS/Geert) was converted to MLX format from [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) using mlx-lm version **0.19.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("TeunS/Geert")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Cloyne/PhoRerank_law
|
Cloyne
| 2024-11-13T19:15:14Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T19:14:44Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Affluendo/midnattmugs
|
Affluendo
| 2024-11-13T19:00:41Z | 10 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-13T18:17:53Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MIDNATTMUGS
---
# Midnattmugs
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MIDNATTMUGS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Affluendo/midnattmugs', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hpcgroup/hpc-coder-v2-1.3b
|
hpcgroup
| 2024-11-13T19:00:20Z | 99 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"hpc",
"parallel",
"axonn",
"en",
"dataset:hpcgroup/hpc-instruct",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-09T20:37:28Z |
---
library_name: transformers
tags:
- code
- hpc
- parallel
- axonn
datasets:
- hpcgroup/hpc-instruct
- ise-uiuc/Magicoder-OSS-Instruct-75K
- nickrosh/Evol-Instruct-Code-80k-v1
language:
- en
pipeline_tag: text-generation
---
# HPC-Coder-v2
The HPC-Coder-v2-1.3b model is an HPC code LLM fine-tuned on an instruction dataset tailored to common HPC topics such as parallelism, optimization, and accelerator porting.
This version is a fine-tuning of the [Deepseek Coder 1.3b](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) model.
It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
We utilized the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.
[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b), [HPC-Coder-v2-6.7b](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b), and [HPC-Coder-v2-16b](https://huggingface.co/hpcgroup/hpc-coder-v2-16b) are the most capable open-source LLMs for parallel and HPC code generation.
HPC-Coder-v2-16b is currently the best performing open-source LLM on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
HPC-Coder-v2-6.7b is not far behind the 16b in terms of performance.
## Using HPC-Coder-v2
The model is provided as a standard huggingface model with safetensor weights.
It can be used with [transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines), [vllm](https://github.com/vllm-project/vllm), or any other standard model inference framework.
HPC-Coder-v2 is an instruct model and prompts need to be formatted as instructions for best results.
It was trained with the following instruct template:
```md
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
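As a minimal sketch of that template in use with a transformers pipeline (the instruction text and generation settings here are illustrative, not from the model authors):
```python
from transformers import pipeline

# Wrap a task description in the instruct template the model was trained with.
def make_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

generator = pipeline("text-generation", model="hpcgroup/hpc-coder-v2-1.3b")
prompt = make_prompt("Write an OpenMP parallel for loop that sums an array of doubles.")
result = generator(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])
```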
## Quantized Models
4 and 8 bit quantized weights are available in the GGUF format for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
The 4 bit model requires ~0.8 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b-Q4_K_S-GGUF).
The 8 bit model requires ~1.4 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b-Q8_0-GGUF).
Further information on how to use them with llama.cpp can be found in [its documentation](https://github.com/ggerganov/llama.cpp).
|
sunkripto/task-15-Qwen-Qwen1.5-1.8B
|
sunkripto
| 2024-11-13T18:58:32Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-11-12T13:24:23Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
hpcgroup/hpc-coder-v2-6.7b
|
hpcgroup
| 2024-11-13T18:58:01Z | 36 | 7 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"hpc",
"parallel",
"axonn",
"en",
"dataset:hpcgroup/hpc-instruct",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-06T01:52:10Z |
---
library_name: transformers
tags:
- code
- hpc
- parallel
- axonn
datasets:
- hpcgroup/hpc-instruct
- ise-uiuc/Magicoder-OSS-Instruct-75K
- nickrosh/Evol-Instruct-Code-80k-v1
language:
- en
pipeline_tag: text-generation
---
# HPC-Coder-v2
The HPC-Coder-v2-6.7b model is an HPC code LLM fine-tuned on an instruction dataset catered to common HPC topics such as parallelism, optimization, accelerator porting, etc.
This version is a fine-tuning of the [Deepseek Coder 6.7b](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) model.
It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
We utilized the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.
[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b), [HPC-Coder-v2-6.7b](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b), and [HPC-Coder-v2-16b](https://huggingface.co/hpcgroup/hpc-coder-v2-16b) are the most capable open-source LLMs for parallel and HPC code generation.
HPC-Coder-v2-16b is currently the best performing open-source LLM on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
HPC-Coder-v2-6.7b is not far behind the 16b in terms of performance.
## Using HPC-Coder-v2
The model is provided as a standard Hugging Face model with safetensors weights.
It can be used with [transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines), [vllm](https://github.com/vllm-project/vllm), or any other standard model inference framework.
HPC-Coder-v2 is an instruct model and prompts need to be formatted as instructions for best results.
It was trained with the following instruct template:
```md
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
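For higher-throughput serving, a hedged sketch of offline batch inference with vLLM using the same template (sampling values are illustrative, not tuned):
```python
# Sketch: offline batch inference with vLLM; sampling values are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="hpcgroup/hpc-coder-v2-6.7b")
params = SamplingParams(temperature=0.2, max_tokens=512)

instruction = "Rewrite this serial reduction to use MPI_Allreduce."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    f"{instruction}\n"
    "### Response:\n"
)

for output in llm.generate([prompt], params):
    print(output.outputs[0].text)
```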
## Quantized Models
4-bit and 8-bit quantized weights are available in the GGUF format for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
The 4 bit model requires ~3.8 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b-Q4_K_S-GGUF).
The 8 bit model requires ~7.1 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b-Q8_0-GGUF).
Further information on how to use them with llama.cpp can be found in [its documentation](https://github.com/ggerganov/llama.cpp).
|
hpcgroup/hpc-coder-v2-16b
|
hpcgroup
| 2024-11-13T18:55:39Z | 11 | 12 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"code",
"hpc",
"parallel",
"axonn",
"conversational",
"custom_code",
"en",
"dataset:hpcgroup/hpc-instruct",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"base_model:finetune:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-02T22:22:49Z |
---
library_name: transformers
datasets:
- hpcgroup/hpc-instruct
- ise-uiuc/Magicoder-OSS-Instruct-75K
- nickrosh/Evol-Instruct-Code-80k-v1
language:
- en
base_model:
- deepseek-ai/DeepSeek-Coder-V2-Lite-Base
tags:
- code
- hpc
- parallel
- axonn
pipeline_tag: text-generation
---
# HPC-Coder-v2
The HPC-Coder-v2-16b model is an HPC code LLM fine-tuned on an instruction dataset catered to common HPC topics such as parallelism, optimization, accelerator porting, etc.
This version is a fine-tuning of the [Deepseek Coder V2 lite base](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) model.
It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
We utilized the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.
[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b), [HPC-Coder-v2-6.7b](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b), and [HPC-Coder-v2-16b](https://huggingface.co/hpcgroup/hpc-coder-v2-16b) are the most capable open-source LLMs for parallel and HPC code generation.
HPC-Coder-v2-16b is currently the best performing open-source LLM on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
HPC-Coder-v2-6.7b is not far behind the 16b in terms of performance.
## Using HPC-Coder-v2
The model is provided as a standard Hugging Face model with safetensors weights.
It can be used with [transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines), [vllm](https://github.com/vllm-project/vllm), or any other standard model inference framework.
HPC-Coder-v2 is an instruct model and prompts need to be formatted as instructions for best results.
It was trained with the following instruct template:
```md
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
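As a rough sketch, the model can also be loaded directly with `transformers`; `trust_remote_code=True` is needed because the DeepSeek-V2 architecture ships custom modeling code, and the dtype/device settings below are assumptions for a large-memory GPU:
```python
# Sketch: direct loading with transformers. dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hpcgroup/hpc-coder-v2-16b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "Add an MPI halo exchange to a 1D stencil computation.\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```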
|
lenML/aya-expanse-8b-abliterated
|
lenML
| 2024-11-13T18:48:37Z | 127 | 4 |
transformers
|
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"gguf",
"CohereForAI",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-expanse-8b",
"base_model:finetune:CohereForAI/aya-expanse-8b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-11-13T09:32:05Z |
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
base_model:
- CohereForAI/aya-expanse-8b
tags:
- gguf
- CohereForAI
---
# Model Card for aya-expanse-8b-abliterated
This is an uncensored version of [aya-expanse-8b](https://huggingface.co/CohereForAI/aya-expanse-8b) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
# Limitations
Currently, according to my `lenml-reject-eval` tests, this version of the model lowers the rejection score from `0.91` to `0.50`, which is still a fairly high score (fully uncensored models can currently score as low as 0.05 on the reject eval).
This model will continue to be updated.
|
saliq5/PR_BERT
|
saliq5
| 2024-11-13T18:44:42Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T18:44:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/Prismatic-12b-GGUF
|
bartowski
| 2024-11-13T18:43:54Z | 29 | 2 | null |
[
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:ProdeusUnity/Prismatic-12b",
"base_model:quantized:ProdeusUnity/Prismatic-12b",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T18:04:22Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- mergekit
- merge
base_model: ProdeusUnity/Prismatic-12b
---
## Llamacpp imatrix Quantizations of Prismatic-12b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4058">b4058</a> for quantization.
Original model: https://huggingface.co/ProdeusUnity/Prismatic-12b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Prismatic-12b-f16.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-f16.gguf) | f16 | 24.50GB | false | Full F16 weights. |
| [Prismatic-12b-Q8_0.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Prismatic-12b-Q6_K_L.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Prismatic-12b-Q6_K.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. |
| [Prismatic-12b-Q5_K_L.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Prismatic-12b-Q5_K_M.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. |
| [Prismatic-12b-Q5_K_S.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. |
| [Prismatic-12b-Q4_K_L.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Prismatic-12b-Q4_K_M.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for most use cases, *recommended*. |
| [Prismatic-12b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Prismatic-12b-Q4_K_S.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Prismatic-12b-Q4_0.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_0.gguf) | Q4_0 | 7.09GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Prismatic-12b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_0_8_8.gguf) | Q4_0_8_8 | 7.07GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Prismatic-12b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_0_4_8.gguf) | Q4_0_4_8 | 7.07GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Prismatic-12b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q4_0_4_4.gguf) | Q4_0_4_4 | 7.07GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Prismatic-12b-IQ4_XS.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Prismatic-12b-Q3_K_L.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. |
| [Prismatic-12b-Q3_K_M.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. |
| [Prismatic-12b-IQ3_M.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Prismatic-12b-Q3_K_S.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. |
| [Prismatic-12b-Q2_K_L.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Prismatic-12b-IQ3_XS.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Prismatic-12b-Q2_K.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. |
| [Prismatic-12b-IQ2_M.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Prismatic-12b-IQ2_S.gguf](https://huggingface.co/bartowski/Prismatic-12b-GGUF/blob/main/Prismatic-12b-IQ2_S.gguf) | IQ2_S | 4.14GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embedding and output weights quantized to Q8_0 instead of their usual default.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Prismatic-12b-GGUF --include "Prismatic-12b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Prismatic-12b-GGUF --include "Prismatic-12b-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Prismatic-12b-Q8_0) or download them all in place (./).
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
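To make the sizing rule above concrete, here's a toy helper (file sizes copied from the table on this page; the 2GB headroom is a rule-of-thumb assumption, not a benchmark):
```python
# Toy helper for the "file size 1-2GB smaller than your VRAM" rule above.
# Sizes (GB) are copied from the download table; the headroom is an assumption.
quants = {
    "Q6_K_L": 10.38, "Q6_K": 10.06, "Q5_K_M": 8.73, "Q4_K_M": 7.48,
    "IQ4_XS": 6.74, "Q3_K_M": 6.08, "IQ3_M": 5.72, "Q2_K": 4.79,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0) -> str:
    fitting = {k: v for k, v in quants.items() if v <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else "offload to RAM or pick a smaller model"

print(pick_quant(12.0))  # a 12GB card with 2GB headroom -> "Q5_K_M"
```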
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
naresh810/DBERT_DFARS
|
naresh810
| 2024-11-13T18:43:14Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-11-13T18:42:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf
|
RichardErkhov
| 2024-11-13T18:16:00Z | 5 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-11-13T16:47:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SmolLM2-1.7B - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/SmolLM2-1.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SmolLM2-1.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q2_K.gguf) | Q2_K | 0.63GB |
| [SmolLM2-1.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q3_K_S.gguf) | Q3_K_S | 0.72GB |
| [SmolLM2-1.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q3_K.gguf) | Q3_K | 0.8GB |
| [SmolLM2-1.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q3_K_M.gguf) | Q3_K_M | 0.8GB |
| [SmolLM2-1.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q3_K_L.gguf) | Q3_K_L | 0.87GB |
| [SmolLM2-1.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.IQ4_XS.gguf) | IQ4_XS | 0.88GB |
| [SmolLM2-1.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q4_0.gguf) | Q4_0 | 0.92GB |
| [SmolLM2-1.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.IQ4_NL.gguf) | IQ4_NL | 0.93GB |
| [SmolLM2-1.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q4_K_S.gguf) | Q4_K_S | 0.93GB |
| [SmolLM2-1.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q4_K.gguf) | Q4_K | 0.98GB |
| [SmolLM2-1.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q4_K_M.gguf) | Q4_K_M | 0.98GB |
| [SmolLM2-1.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q4_1.gguf) | Q4_1 | 1.02GB |
| [SmolLM2-1.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q5_0.gguf) | Q5_0 | 1.11GB |
| [SmolLM2-1.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.11GB |
| [SmolLM2-1.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q5_K.gguf) | Q5_K | 1.14GB |
| [SmolLM2-1.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.14GB |
| [SmolLM2-1.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q5_1.gguf) | Q5_1 | 1.2GB |
| [SmolLM2-1.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q6_K.gguf) | Q6_K | 1.31GB |
| [SmolLM2-1.7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM2-1.7B-gguf/blob/main/SmolLM2-1.7B.Q8_0.gguf) | Q8_0 | 1.7GB |
Original model description:
---
base_model: HuggingFaceTB/SmolLM2-1.7B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-1.7B
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
# SmolLM2

|
cc12954/stock_trained_roberta
|
cc12954
| 2024-11-13T18:12:14Z | 5 | 0 | null |
[
"pytorch",
"safetensors",
"roberta",
"text-classification",
"region:us"
] |
text-classification
| 2024-11-13T17:16:40Z |
---
tags:
- text-classification
library_name: transformers
---
# Model Information
This model is fine-tuned for sentiment analysis using a RoBERTa architecture.
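A hedged usage sketch (the label names it returns depend on the fine-tuning setup and are not documented here):
```python
# Sketch: sentiment inference with the transformers pipeline.
# The returned label names are not documented on this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="cc12954/stock_trained_roberta")
print(classifier("Shares rallied after the company beat earnings expectations."))
```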
|
camidenecken/RoBERTa-RM1-v1-4-rm-v30
|
camidenecken
| 2024-11-13T18:10:14Z | 177 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T18:09:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aryangp/text_summarization_aryangp_uiet
|
Aryangp
| 2024-11-13T18:06:41Z | 115 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-13T17:53:51Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_summarization_aryangp_uiet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_summarization_aryangp_uiet
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7634
- Rouge1: 0.1255
- Rouge2: 0.0385
- Rougel: 0.1066
- Rougelsum: 0.1063
- Gen Len: 19.0
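A hedged usage sketch for this checkpoint; the `summarize: ` prefix follows the usual T5 convention and is an assumption, since the card does not document the preprocessing:
```python
# Sketch: summarization with this checkpoint; the task prefix is an assumption.
from transformers import pipeline

summarizer = pipeline("summarization", model="Aryangp/text_summarization_aryangp_uiet")
article = "Your input text goes here."
print(summarizer("summarize: " + article, max_length=60, min_length=10))
```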
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.9020 | 0.1279 | 0.0385 | 0.1095 | 0.1093 | 19.0 |
| No log | 2.0 | 124 | 2.7634 | 0.1255 | 0.0385 | 0.1066 | 0.1063 | 19.0 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
camidenecken/RoBERTa-RM1-v1-4-rm-v28
|
camidenecken
| 2024-11-13T18:06:37Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T18:06:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/datagemma-rag-27b-it-GGUF
|
mradermacher
| 2024-11-13T18:06:06Z | 16 | 0 |
transformers
|
[
"transformers",
"gguf",
"conversational",
"en",
"base_model:google/datagemma-rag-27b-it",
"base_model:quantized:google/datagemma-rag-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T00:07:31Z |
---
base_model: google/datagemma-rag-27b-it
extra_gated_button_content: Acknowledge license
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/google/datagemma-rag-27b-it
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/datagemma-rag-27b-it-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
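For the single-file quants below, one possible fetch path is the `huggingface_hub` Python API (the filename is copied from the table; any of the listed quants works the same way):
```python
# Sketch: downloading one quant file with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/datagemma-rag-27b-it-GGUF",
    filename="datagemma-rag-27b-it.Q4_K_S.gguf",
)
print(path)  # local cache path, ready to hand to llama.cpp
```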
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q2_K.gguf) | Q2_K | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q5_K_S.gguf) | Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q5_K_M.gguf) | Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q6_K.gguf) | Q6_K | 22.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/datagemma-rag-27b-it-GGUF/resolve/main/datagemma-rag-27b-it.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
camidenecken/RoBERTa-RM1-v1-4-rm-v27
|
camidenecken
| 2024-11-13T18:05:06Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T18:04:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wuriyanto/ner-bert-indonesian-v1
|
wuriyanto
| 2024-11-13T18:04:11Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"indonesian",
"id",
"arxiv:1810.04805",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-13T17:35:14Z |
---
license: mit
base_model:
- google-bert/bert-base-multilingual-uncased
tags:
- ner
- indonesian
- bert
language:
- id
library_name: transformers
---
# ner-bert-indonesian-v1
### Model Description
**ner-bert-indonesian-v1** is a fine-tuned version of **google-bert/bert-base-multilingual-uncased** for **named entity recognition (NER)** in **Indonesian**. **In version 1**, the model is quite good at recognizing the following 4 entity types:
- O: others (entities not yet recognized by the model) - Lainnya
- Person - Orang
- Organisation - Organisasi
- Place - Tempat/Lokasi
### Usage
Using **pipelines**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained('wuriyanto/ner-bert-indonesian-v1')
model = AutoModelForTokenClassification.from_pretrained('wuriyanto/ner-bert-indonesian-v1')
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "OpenAI adalah laboratorium penelitan kecerdasan buatan yang terdiri atas perusahaan waralaba OpenAI LP dan perusahaan induk nirlabanya, OpenAI Inc. Para pendirinya (sam altman) terdorong oleh ketakutan mereka akan kemungkinan bahwa kecerdasan buatan dapat mengancam keberadaan manusia, perusahaan ini ada di amerika serikat. PT. Indodana , salah satu perusahann di Indonesia mulai mengadopsi teknologi ini."
ner_results = nlp(example)
for n in ner_results:
    print(n)
```
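If you prefer to let the pipeline merge subword pieces itself, the transformers NER pipeline also accepts an `aggregation_strategy` argument. A minimal sketch, assuming the standard pipeline API (the sample sentence is illustrative):
```python
from transformers import pipeline

# "simple" groups consecutive subword tokens into whole entity spans
nlp_grouped = pipeline(
    "ner",
    model="wuriyanto/ner-bert-indonesian-v1",
    aggregation_strategy="simple",
)

for entity in nlp_grouped("OpenAI didirikan oleh Sam Altman di Amerika Serikat."):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```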
Using a **custom parser**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

id_to_label = {0: 'O', 1: 'Place', 2: 'Organisation', 3: 'Person'}

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('wuriyanto/ner-bert-indonesian-v1')
model = AutoModelForTokenClassification.from_pretrained('wuriyanto/ner-bert-indonesian-v1')

def tokenize_input(sentence):
    tokenized_input = tokenizer(sentence, return_tensors="pt", padding=True, truncation=True)
    return tokenized_input

def predict_ner(sentence):
    inputs = tokenize_input(sentence)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    predictions = torch.argmax(logits, dim=2)

    # Convert predictions and tokens back to readable format
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    predicted_labels = [id_to_label[p.item()] for p in predictions[0]]

    # Merge subwords and filter out special tokens
    merged_tokens, merged_labels = [], []
    current_token, current_label = "", None
    for token, label in zip(tokens, predicted_labels):
        print(token, ' ', label)
        # Skip special tokens and punctuation (like [CLS], [SEP], commas, and periods)
        if token in ["[CLS]", "[SEP]"] or (label == "O" and token in [",", "."]):
            continue
        if token.startswith("##"):
            current_token += token[2:]
            if current_label == 'O':
                current_label = label
        else:
            if current_token:
                merged_tokens.append(current_token)
                merged_labels.append(current_label)
            current_token = token
            current_label = label
    if current_token:
        merged_tokens.append(current_token)
        merged_labels.append(current_label)

    results = list(zip(merged_tokens, merged_labels))
    return results

sentence = "OpenAI adalah laboratorium penelitan kecerdasan buatan yang terdiri atas perusahaan waralaba OpenAI LP dan perusahaan induk nirlabanya, OpenAI Inc. Para pendirinya (sam altman) terdorong oleh ketakutan mereka akan kemungkinan bahwa kecerdasan buatan dapat mengancam keberadaan manusia, perusahaan ini ada di amerika serikat. PT. Indodana , salah satu perusahann di Indonesia mulai mengadopsi teknologi ini."
results = predict_ner(sentence)
print(results)

for token, label in results:
    print(f"{token}: {label}")
```
### Dataset and citation info
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
* The DEE NER dataset: Ika Alfina, Ruli Manurung, and Mohamad Ivan Fanany, ["DBpedia Entities Expansion in Automatically Building Dataset for Indonesian NER"](https://ieeexplore.ieee.org/document/7872784), in Proceedings of the 8th International Conference on Advanced Computer Science and Information Systems 2016 (ICACSIS 2016).
* The MDEE and Singgalang NER dataset: Ika Alfina, Septiviana Savitri, and Mohamad Ivan Fanany, ["Modified DBpedia Entities Expansion for Tagging Automatically NER Dataset"](https://ieeexplore.ieee.org/document/8355036), in Proceedings of the 9th International Conference on Advanced Computer Science and Information Systems 2017 (ICACSIS 2017).
* The Gold Standard: Andry Luthfi, Bayu Distiawan, and Ruli Manurung, ["Building an Indonesian named entity recognizer using Wikipedia and DBPedia"](https://ieeexplore.ieee.org/document/6973520), in Proceedings of the 2014 International Conference on Asian Language Processing (IALP 2014).
|
prithivMLmods/Qwen2.5-Coder-3B-Instruct-GGUF
|
prithivMLmods
| 2024-11-13T18:03:50Z | 254 | 9 |
transformers
|
[
"transformers",
"gguf",
"Llama",
"Qwen2.5",
"Coder",
"F16",
"16-bit",
"Q4",
"Q5",
"Q8",
"Llama-cpp",
"Ollama",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-13T06:49:36Z |
---
license: creativeml-openrail-m
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- Llama
- Qwen2.5
- Coder
- F16
- 16-bit
- Q4
- Q5
- Q8
- Llama-cpp
- Ollama
---
## Qwen2.5-Coder-3B-Instruct Model Files
| File Name | Size | Description | Upload Status |
|--------------------------------------------|----------|--------------------------|---------------------|
| `.gitattributes` | 1.81 kB | Attributes file | Uploaded |
| `Qwen2.5-Coder-3B-Instruct.F16.gguf` | 6.18 GB | FP16 model file | Uploaded (LFS) |
| `Qwen2.5-Coder-3B-Instruct.Q4_K_M.gguf` | 1.93 GB | Quantized Q4 model | Uploaded (LFS) |
| `Qwen2.5-Coder-3B-Instruct.Q5_K_M.gguf` | 2.22 GB | Quantized Q5 model | Uploaded (LFS) |
| `Qwen2.5-Coder-3B-Instruct.Q8_0.gguf` | 3.29 GB | Quantized Q8 model | Uploaded (LFS) |
| `README.md` | 42 Bytes | Readme file | Initial commit |
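These GGUF files can also be loaded directly from Python. Below is a minimal sketch, assuming the `llama-cpp-python` and `huggingface_hub` packages are installed (the file name is taken from the table above):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant from this repository
model_path = hf_hub_download(
    repo_id="prithivMLmods/Qwen2.5-Coder-3B-Instruct-GGUF",
    filename="Qwen2.5-Coder-3B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```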
# Run with Ollama 🦙
## Overview
Ollama is a powerful tool that allows you to run machine learning models effortlessly. This guide will help you download, install, and run your own GGUF models in just a few minutes.
## Table of Contents
- [Download and Install Ollama](#download-and-install-ollama)
- [Steps to Run GGUF Models](#steps-to-run-gguf-models)
- [1. Create the Model File](#1-create-the-model-file)
- [2. Add the Template Command](#2-add-the-template-command)
- [3. Create and Patch the Model](#3-create-and-patch-the-model)
- [Running the Model](#running-the-model)
- [Sample Usage](#sample-usage)
## Download and Install Ollama🦙
To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.
## Steps to Run GGUF Models
### 1. Create the Model File
First, create a model file and name it appropriately. For example, you can name your model file `metallama`.
### 2. Add the Template Command
In your model file, include a `FROM` line that specifies the GGUF file you want to use. For instance, to use the FP16 file from this repository:
```bash
FROM Qwen2.5-Coder-3B-Instruct.F16.gguf
```
Ensure that the GGUF file is in the same directory as your model file.
### 3. Create and Patch the Model
Open your terminal and run the following command to create and patch your model:
```bash
ollama create metallama -f ./metallama
```
Once the process is successful, you will see a confirmation message.
To verify that the model was created successfully, you can list all models with:
```bash
ollama list
```
Make sure that `metallama` appears in the list of models.
---
## Running the Model
To run your newly created model, use the following command in your terminal:
```bash
ollama run metallama
```
### Sample Usage
In the command prompt, you can execute:
```bash
D:\>ollama run metallama
```
You can interact with the model like this:
```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
---
## Conclusion
With these simple steps, you can easily download, install, and run your own models using Ollama. Whether you're exploring the capabilities of Qwen2.5 Coder or building your own custom models, Ollama makes the process accessible and efficient.
|
mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF
|
mradermacher
| 2024-11-13T18:03:08Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:rombodawg/Rombos-Coder-V2.5-Qwen-14b",
"base_model:quantized:rombodawg/Rombos-Coder-V2.5-Qwen-14b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-13T03:10:57Z |
---
base_model: rombodawg/Rombos-Coder-V2.5-Qwen-14b
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B/blob/main/LICENSE
no_imatrix: nan detected in blk.47.attn_q.weight
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-14b
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
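As a quick start, here is a minimal sketch of loading one of the single-file quants below with `llama-cpp-python` (the file name is taken from the table; the API is assumed from llama-cpp-python):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant; multi-part files would need concatenation first
path = hf_hub_download(
    repo_id="mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF",
    filename="Rombos-Coder-V2.5-Qwen-14b.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
print(llm("def fibonacci(n):", max_tokens=128)["choices"][0]["text"])
```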
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-14b-GGUF/resolve/main/Rombos-Coder-V2.5-Qwen-14b.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
camidenecken/RoBERTa-RM1-v1-4-rm-v25
|
camidenecken
| 2024-11-13T18:02:07Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T18:01:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Adriano2024/bert-finetuned-ner
|
Adriano2024
| 2024-11-13T18:01:20Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-13T16:25:03Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9331789612967251
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.9412746079412746
- name: Accuracy
type: accuracy
value: 0.9864308000235474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9332
- Recall: 0.9495
- F1: 0.9413
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
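As a starting point, here is a minimal inference sketch using the standard transformers token-classification pipeline (the repo id is assumed from this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Adriano2024/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- subword pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```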
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0746 | 1.0 | 1756 | 0.0685 | 0.9035 | 0.9359 | 0.9194 | 0.9804 |
| 0.0356 | 2.0 | 3512 | 0.0676 | 0.9345 | 0.9483 | 0.9414 | 0.9853 |
| 0.0223 | 3.0 | 5268 | 0.0616 | 0.9332 | 0.9495 | 0.9413 | 0.9864 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
camidenecken/RoBERTa-RM1-v1-4-rm-v22
|
camidenecken
| 2024-11-13T17:57:48Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T17:57:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rombodawg/Rombos-Coder-V2.5-Qwen-32b
|
rombodawg
| 2024-11-13T17:55:41Z | 184 | 11 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T08:07:36Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Rombos-Coder-V2.5-Qwen-32b

Rombos-Coder-V2.5-Qwen-32b is a continuously finetuned version of Qwen2.5-Coder-32B-Instruct. I merged the instruct model with the base model using the *Ties* merge method, as demonstrated in my "Continuous Finetuning" method (linked below).
https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
This version of the model shows higher performance than the original instruct and base models.
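A minimal generation sketch with transformers, assuming the standard Qwen2.5 chat template shipped with the tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Rombos-Coder-V2.5-Qwen-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a binary search function in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```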
Quants: (Coming soon)
GGUF:
- https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-32b-GGUF
- https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-32b-i1-GGUF
EXL2:
Benchmarks: (Coming soon)
|
rombodawg/Rombos-Coder-V2.5-Qwen-14b
|
rombodawg
| 2024-11-13T17:55:04Z | 57 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T07:52:21Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Rombos-Coder-V2.5-Qwen-14b

Rombos-Coder-V2.5-Qwen-14b is a continuously finetuned version of Qwen2.5-Coder-14B-Instruct. I merged the instruct model with the base model using the *Ties* merge method, as demonstrated in my "Continuous Finetuning" method (linked below).
https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
This version of the model shows higher performance than the original instruct and base models.
Quants: (Coming soon)
GGUF:
- https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-14b-GGUF
EXL2:
Benchmarks: (Coming soon)
|
camidenecken/RoBERTa-RM1-v1-4-rm-v21
|
camidenecken
| 2024-11-13T17:54:49Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T17:54:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rombodawg/Rombos-Coder-V2.5-Qwen-7b
|
rombodawg
| 2024-11-13T17:54:15Z | 79 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-28T04:55:50Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Rombos-Coder-V2.5-Qwen-7b

Rombos-Coder-V2.5-Qwen-7b is a continuously finetuned version of Qwen2.5-Coder-7B-Instruct. I merged the instruct model with the base model using the *Ties* merge method, as demonstrated in my "Continuous Finetuning" method (linked below).
https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
This version of the model shows higher performance than the original instruct and base models.
Quants: (Coming soon)
GGUF:
- https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-7b-GGUF
- https://huggingface.co/mradermacher/Rombos-Coder-V2.5-Qwen-7b-i1-GGUF
EXL2:
Benchmarks: (Coming soon)
|
camidenecken/RoBERTa-RM1-v1-4-rm-v19
|
camidenecken
| 2024-11-13T17:52:04Z | 179 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T17:51:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
reasonwang/ToolGen-Llama-3-8B-Tool-Memorization
|
reasonwang
| 2024-11-13T17:51:46Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-07T14:49:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
reasonwang/ToolGen-Llama-3-8B-Tool-Retriever
|
reasonwang
| 2024-11-13T17:51:19Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-07T14:39:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aurazboev/ISAllama-3.1-8b-tuned
|
aurazboev
| 2024-11-13T17:47:27Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T22:26:48Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aurazboev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
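As a quick way to try the checkpoint, here is a minimal inference sketch. It assumes the repo hosts merged weights (not just LoRA adapters) and that the tokenizer ships Llama 3.1's chat template; adjust if only adapters were pushed.
```python
# Minimal sketch, assuming merged weights and a chat template in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurazboev/ISAllama-3.1-8b-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what this model was fine-tuned for."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```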
|
georgeprethesh/test2
|
georgeprethesh
| 2024-11-13T17:46:47Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-13T17:46:44Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anjana
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# test2
<Gallery />
## Model description
## Trigger words
You should use `anjana` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/georgeprethesh/test2/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
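For completeness, below is a hedged sketch of loading the LoRA on top of FLUX.1-dev with diffusers; the adapter filename resolution and the memory requirements (a large GPU, or CPU offloading) are assumptions.
```python
# Sketch only: loads this LoRA onto FLUX.1-dev; assumes the adapter file
# sits at the repo root and that enough GPU memory is available.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("georgeprethesh/test2")
pipe.to("cuda")

image = pipe(
    "anjana, portrait photo",  # `anjana` is the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("anjana.png")
```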
|
camidenecken/RoBERTa-RM1-v1-4-rm-v16
|
camidenecken
| 2024-11-13T17:46:35Z | 179 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T17:45:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AyadSarah/fine_tuned_clip
|
AyadSarah
| 2024-11-13T17:46:33Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-11-13T17:46:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jacobhoffmann/TestGen_v2.2-Llama-3.1-8B-lr2e-05_epochs1
|
jacobhoffmann
| 2024-11-13T17:41:54Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T17:37:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zaanind/gpt2_finetune_films
|
zaanind
| 2024-11-13T17:33:24Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T07:15:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
karteek/my_awesome_eli5_clm-model
|
karteek
| 2024-11-13T17:30:17Z | 177 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T17:09:56Z |
---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9504 | 1.0 | 1308 | 3.8461 |
| 3.8509 | 2.0 | 2616 | 3.8372 |
| 3.8114 | 3.0 | 3924 | 3.8365 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
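As a hedged usage sketch (prompt borrowed from the ELI5 course example; generation settings are assumptions), the checkpoint can be tried with a text-generation pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="karteek/my_awesome_eli5_clm-model")
result = generator(
    "Somatic hypermutation allows the immune system to",
    max_new_tokens=50,
)
print(result[0]["generated_text"])
```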
|
ahmadsy/ppo-SnowballTarget
|
ahmadsy
| 2024-11-13T17:28:00Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-11-13T16:56:03Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ahmadsy/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
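### Push a retrained agent to the Hub
If you resume training, the updated agent can be pushed back to the Hub. A sketch; the run id and results directory are assumptions:
```bash
mlagents-push-to-hf --run-id="SnowballTarget1" \
  --local-dir="./results/SnowballTarget1" \
  --repo-id="ahmadsy/ppo-SnowballTarget" \
  --commit-message="Retrained agent"
```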
|
galsenai/whisper-large-v3-wo
|
galsenai
| 2024-11-13T17:26:45Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-13T17:25:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/internlm2_5-20b-chat-GGUF
|
mradermacher
| 2024-11-13T17:19:10Z | 60 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:internlm/internlm2_5-20b-chat",
"base_model:quantized:internlm/internlm2_5-20b-chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T07:26:56Z |
---
base_model: internlm/internlm2_5-20b-chat
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/internlm/internlm2_5-20b-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/internlm2_5-20b-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
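As a hedged example (the quant choice is arbitrary, and the binary name is assumed from recent llama.cpp builds; older builds use `./main`), a single-file quant can be fetched and run like this:
```bash
# Download one quant and run it with llama.cpp.
huggingface-cli download mradermacher/internlm2_5-20b-chat-GGUF \
  internlm2_5-20b-chat.Q4_K_M.gguf --local-dir .
./llama-cli -m internlm2_5-20b-chat.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```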
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q2_K.gguf) | Q2_K | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q3_K_S.gguf) | Q3_K_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q3_K_M.gguf) | Q3_K_M | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q3_K_L.gguf) | Q3_K_L | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q5_K_S.gguf) | Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q5_K_M.gguf) | Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q6_K.gguf) | Q6_K | 16.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-20b-chat-GGUF/resolve/main/internlm2_5-20b-chat.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ouassimMegrad/MKDT
|
ouassimMegrad
| 2024-11-13T17:17:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:paulml/NeuralOmniWestBeaglake-7B",
"base_model:merge:paulml/NeuralOmniWestBeaglake-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:merge:paulml/OmniBeagleSquaredMBX-v3-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T17:05:29Z |
---
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: paulml/OmniBeagleSquaredMBX-v3-7B
        layer_range: [0, 32]
      - model: paulml/NeuralOmniWestBeaglake-7B
        layer_range: [0, 32]
merge_method: slerp # This should not be indented under 'sources'
base_model: paulml/NeuralOmniWestBeaglake-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
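The merge can in principle be reproduced with the mergekit CLI (a sketch; the config filename and output path are assumptions):
```bash
pip install mergekit
# Save the YAML above as config.yaml, then:
mergekit-yaml config.yaml ./merged-model --cuda
```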
|
EmpSurak/The-Next-Generation
|
EmpSurak
| 2024-11-13T17:16:05Z | 15 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:gpl-3.0",
"region:us"
] |
text-to-image
| 2024-11-13T12:34:00Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
star trek, the next generation, john de lancie as klingon warrior, q, solo,
big grin, open mouth, sharp long teeth, fangs, armor, starship bridge
output:
url: images/2024-11-13_11-58-50_4147.png
- text: >-
star trek, the next generation, john de lancie, q, closeup portrait, solo,
happy, red starfleet uniform, starship
parameters:
negative_prompt: (epaulettes)
output:
url: images/2024-11-13_12-14-25_4698.png
- text: star trek, the next generation, starship bridge, ferengi, solo, bald
parameters:
negative_prompt: (hair)
output:
url: images/2024-11-13_12-31-59_5786.png
- text: star trek, the next generation, beverly crusher, medical emergency, klingon patient
output:
url: images/2024-11-13_17-09-54_8849.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: star trek, the next generation, tng
license: gpl-3.0
---
# TNG
<Gallery />
## Model description
A general LoRA whose goal is to create Star Trek: The Next Generation-style pictures. It is currently limited to season 1.
## Trigger words
You should use `star trek` and `the next generation` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/EmpSurak/The-Next-Generation/tree/main) them in the Files & versions tab.
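A hedged diffusers sketch for applying the LoRA to the SDXL base model (precision setting and prompt are assumptions, the latter taken from the widget examples above):
```python
# Sketch only: loads this LoRA onto SDXL base 1.0.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("EmpSurak/The-Next-Generation")

prompt = "star trek, the next generation, starship bridge, ferengi, solo, bald"
image = pipe(prompt, negative_prompt="(hair)").images[0]
image.save("tng.png")
```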
|
gauneg/deberta-v3-base-absa-ate-sentiment
|
gauneg
| 2024-11-13T17:12:18Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"en",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T12:41:01Z |
---
license: mit
language:
- en
base_model:
- microsoft/deberta-v3-base
pipeline_tag: token-classification
library_name: transformers
---
# Training
This model is designed for token classification: it extracts aspect terms and predicts the sentiment polarity associated with each extracted term.
The extracted aspect terms are the span(s) of the input text on which a sentiment is expressed.
## Datasets
This model has been trained on the following datasets:
1. Aspect Based Sentiment Analysis SemEval Shared Tasks ([2014](https://aclanthology.org/S14-2004/), [2015](https://aclanthology.org/S15-2082/), [2016](https://aclanthology.org/S16-1002/))
2. Multi-Aspect Multi-Sentiment [MAMS](https://aclanthology.org/D19-1654/)
# Use
* Making end-to-end inference with a pipeline
```python
from transformers import pipeline
ate_sent_pipeline = pipeline(
    task='ner',
    aggregation_strategy='simple',
    model="gauneg/deberta-v3-base-absa-ate-sentiment",
)
text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
ate_sent_pipeline(text_input)
```
Expected output
```bash
[{'entity_group': 'pos',  # sentiment polarity
  'score': 0.87505656,
  'word': 'food',  # aspect term
  'start': 25,
  'end': 30},
 {'entity_group': 'neg',  # sentiment polarity
  'score': 0.4558051,
  'word': 'service',  # aspect term
  'start': 55,
  'end': 63}]
```
# OR
* Making token-level inferences with Auto classes
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "gauneg/deberta-v3-base-absa-ate-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# the sequence of labels used during training
labels = {"B-neu": 1, "I-neu": 2, "O": 0, "B-neg": 3, "B-con": 4, "I-pos": 5, "B-pos": 6, "I-con": 7, "I-neg": 8, "X": -100}
id2lab = {idx: lab for lab, idx in labels.items()}
lab2id = {lab: idx for lab, idx in labels.items()}

model = AutoModelForTokenClassification.from_pretrained(
    model_id, num_labels=len(labels), id2label=id2lab, label2id=lab2id
)

# making one prediction at a time (inputs should be padded/batched and truncated for efficiency)
text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
tok_inputs = tokenizer(text_input, return_tensors="pt")

y_pred = model(**tok_inputs)  # predicting the logits

# selecting the most favoured label for each token from the logits
y_pred_fin = y_pred.logits.argmax(dim=-1)[0]

# the first and last tokens are [CLS] and [SEP], so they are removed before decoding
decoded_pred = [id2lab[logx.item()] for logx in y_pred_fin[1:-1]]

# displaying the input tokens with their predictions, skipping the [CLS] and [SEP] tokens
decoded_toks = tok_inputs['input_ids'][0][1:-1]
tok_levl_pred = list(zip(tokenizer.convert_ids_to_tokens(decoded_toks), decoded_pred))
```
Expected output
```bash
[('▁Been', 'O'),
('▁here', 'O'),
('▁a', 'O'),
('▁few', 'O'),
('▁times', 'O'),
('▁and', 'O'),
('▁food', 'B-pos'),
('▁has', 'O'),
('▁always', 'O'),
('▁been', 'O'),
('▁good', 'O'),
('▁but', 'O'),
('▁service', 'B-neg'),
('▁really', 'O'),
('▁suffers', 'O'),
('▁when', 'O'),
('▁it', 'O'),
('▁gets', 'O'),
('▁crowded', 'O'),
('.', 'O')]
```
# Evaluation on Benchmark Test Datasets
The first evaluation is for the token-extraction task alone, without considering the polarity of the extracted tokens. The tokens expected to be extracted are the aspect-term tokens on which sentiments are expressed (scores are expressed as micro-averages of the B-I-O labels).
# ATE (Aspect Term Extraction Only)
| Test Dataset | Base Model | Fine-tuned Model | Precision | Recall | F1 Score |
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|hotel reviews (SemEval 2015)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|71.16|73.92|71.6|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|70.92|72.28|71.07|
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|64.05|79.69|70.0|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|66.29|72.78|68.92|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|70.58|61.52|64.21|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|66.38|50.62|54.31|
|laptop reviews (SemEval 2014)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|70.82|48.97|52.08|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|73.61|46.38|49.87|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|MAMS-ATE (2019)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|81.07|79.66|80.35|
|MAMS-ATE (2019)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|79.91|78.95|79.39|
|MAMS-ATE (2019)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|74.46|84.5|78.75|
|MAMS-ATE (2019)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|77.8|79.81|78.75|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|88.59|87.0|87.45|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|92.26|82.95|86.57|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|93.07|81.95|86.32|
|restaurant reviews (SemEval 2014)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|92.94|81.71|86.01|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2015)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|72.91|75.4|72.74|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|70.54|77.48|72.63|
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|68.32|79.84|72.28|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|71.94|74.75|71.84|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|70.22|75.83|71.84|
|restaurant reviews (SemEval 2016)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|71.54|73.38|71.2|
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|71.35|72.78|70.85|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|66.68|77.97|70.79|
# Aspect Sentiment Evaluation
This evaluation considers the token-extraction task together with the polarity of the extracted tokens. The model must extract the aspect-term tokens on which sentiments are expressed and predict the polarity of those sentiments (scores are expressed as macro-averages).
| Test Dataset | Base Model | Fine-tuned Model | Precision | Recall | F1 Score |
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|51.92|65.55|54.94|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|54.62|53.65|54.08|
|hotel reviews (SemEval 2015)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|55.43|56.53|54.03|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|52.88|55.19|53.85|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|44.25|41.55|42.81|
|laptop reviews (SemEval 2014)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|46.15|33.23|37.09|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|41.7|34.38|36.93|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|44.98|31.87|35.67|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|MAMS-ATE (2019)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|72.06|72.98|72.49|
|MAMS-ATE (2019)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|72.97|71.63|72.26|
|MAMS-ATE (2019)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|69.34|73.3|71.07|
|MAMS-ATE (2019)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|65.74|75.11|69.77|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|61.15|58.46|59.74|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|60.13|56.81|58.13|
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|56.79|59.3|57.93|
|restaurant reviews (SemEval 2014)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|58.99|54.76|56.45|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|53.89|55.7|54.11|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|54.36|55.38|53.6|
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|51.67|56.58|53.29|
|restaurant reviews (SemEval 2015)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|54.55|53.68|53.12|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|53.7|60.49|55.05|
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-base|[gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|52.31|54.58|52.33|
|restaurant reviews (SemEval 2016)|(this) microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|52.07|54.58|52.15|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|49.07|56.5|51.25|
|
jacobhoffmann/TestGen_v2.2-Llama-3.1-8B-lr1e-05_epochs3
|
jacobhoffmann
| 2024-11-13T17:11:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T17:06:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gauneg/roberta-base-absa-ate-sentiment
|
gauneg
| 2024-11-13T17:11:24Z | 223 | 1 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-02T23:20:47Z |
---
language:
- en
license: apache-2.0
base_model:
- FacebookAI/roberta-base
pipeline_tag: token-classification
library_name: transformers
---
# Training
This model is designed for token classification: it extracts aspect terms and predicts the sentiment polarity expressed toward each extracted term.
The extracted aspect terms are the span(s) of the input text on which a sentiment is being expressed.
## Datasets
This model has been trained on the following datasets:
1. Aspect Based Sentiment Analysis SemEval Shared Tasks ([2014](https://aclanthology.org/S14-2004/), [2015](https://aclanthology.org/S15-2082/), [2016](https://aclanthology.org/S16-1002/))
2. Multi-Aspect Multi-Sentiment [MAMS](https://aclanthology.org/D19-1654/)
# Use
* Using the pipeline directly for end-to-end inference:
```python
from transformers import pipeline
ate_sent_pipeline = pipeline(task='ner',
aggregation_strategy='simple',
model="gauneg/roberta-base-absa-ate-sentiment")
text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
ate_sent_pipeline(text_input)
```
* pipeline output:
```bash
[{'entity_group': 'pos', #sentiment polarity
'score': 0.8447307,
'word': ' food', # aspect term
'start': 26,
'end': 30},
{'entity_group': 'neg', #sentiment polarity
'score': 0.81927896,
'word': ' service', #aspect term
'start': 56,
'end': 63}]
```
# OR
* Making token-level inferences with Auto classes
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_id = "gauneg/roberta-base-absa-ate-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# the sequence of labels used during training
labels = {"B-neu": 1, "I-neu": 2, "O": 0, "B-neg": 3, "B-con": 4, "I-pos": 5, "B-pos": 6, "I-con": 7, "I-neg": 8, "X": -100}
id2lab = {idx: lab for lab, idx in labels.items()}
lab2id = {lab: idx for lab, idx in labels.items()}
model = AutoModelForTokenClassification.from_pretrained(model_id,
num_labels=len(labels), id2label=id2lab, label2id=lab2id)
# making one prediction at a time (should be padded/batched and truncated for efficiency)
text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
tok_inputs = tokenizer(text_input, return_tensors="pt")
y_pred = model(**tok_inputs) # predicting the logits
# since the first and last tokens are special (<s> and </s>),
# they have to be removed before decoding the labels predicted for them
y_pred_fin = y_pred.logits.argmax(dim=-1)[0][1:-1] # selecting the most favoured labels for each token from the logits
decoded_pred = [id2lab[logx.item()] for logx in y_pred_fin]
## displaying the input tokens alongside their predictions, skipping the <s> and </s> tokens at the beginning and end
decoded_toks = tok_inputs['input_ids'][0][1:-1]
tok_level_pred = list(zip(tokenizer.convert_ids_to_tokens(decoded_toks), decoded_pred))
```
* resulting contents of the `tok_level_pred` variable:
```bash
[('Be', 'O'),
('en', 'O'),
('Ġhere', 'O'),
('Ġa', 'O'),
('Ġfew', 'O'),
('Ġtimes', 'O'),
('Ġand', 'O'),
('Ġfood', 'B-pos'),
('Ġhas', 'O'),
('Ġalways', 'O'),
('Ġbeen', 'O'),
('Ġgood', 'O'),
('Ġbut', 'O'),
('Ġservice', 'B-neg'),
('Ġreally', 'O'),
('Ġsuffers', 'O'),
('Ġwhen', 'O'),
('Ġit', 'O'),
('Ġgets', 'O'),
('Ġcrowded', 'O'),
('.', 'O')]
```
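* To use the predictions downstream, the token-level B-I-O tags usually have to be merged into aspect-term spans. The helper below is an illustrative sketch (not part of the original training code) that groups consecutive `B-`/`I-` tags from `tok_level_pred` into `(aspect_term, polarity)` pairs:
```python
def bio_to_spans(tok_level_pred):
    """Group consecutive B-/I- tags into (aspect_term, polarity) pairs."""
    spans, current, polarity = [], [], None

    def flush():
        if current:
            # 'Ġ' marks a word boundary in the roberta BPE vocabulary
            spans.append(("".join(current).replace("Ġ", " ").strip(), polarity))

    for token, tag in tok_level_pred:
        if tag.startswith("B-"):
            flush()
            current, polarity = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            flush()
            current, polarity = [], None
    flush()
    return spans

print(bio_to_spans(tok_level_pred))  # [('food', 'pos'), ('service', 'neg')]
```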
# Evaluation on Benchmark Test Datasets
The first evaluation covers the token-extraction task alone, ignoring the polarity of the extracted tokens. The tokens expected to be extracted are the aspect-term tokens
on which sentiments are expressed. (Scores are micro-averages over the B-I-O labels.)
# ATE (Aspect Term Extraction Only)
| Test Dataset | Base Model | Fine-tuned Model | Precision | Recall | F1 Score |
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|71.16|73.92|71.6|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|70.92|72.28|71.07|
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|64.05|79.69|70.0|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|66.29|72.78|68.92|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|70.58|61.52|64.21|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|66.38|50.62|54.31|
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|70.82|48.97|52.08|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|73.61|46.38|49.87|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|MAMS-ATE (2019)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|81.07|79.66|80.35|
|MAMS-ATE (2019)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|79.91|78.95|79.39|
|MAMS-ATE (2019)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|74.46|84.5|78.75|
|MAMS-ATE (2019)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|77.8|79.81|78.75|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|88.59|87.0|87.45|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|92.26|82.95|86.57|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|93.07|81.95|86.32|
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|92.94|81.71|86.01|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|72.91|75.4|72.74|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|70.54|77.48|72.63|
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|68.32|79.84|72.28|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|71.94|74.75|71.84|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|70.22|75.83|71.84|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|71.54|73.38|71.2|
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|71.35|72.78|70.85|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|66.68|77.97|70.79|
# Aspect Sentiment Evaluation
This evaluation covers the token-extraction task together with the polarity of the extracted tokens: models must extract the aspect-term tokens
on which sentiments are expressed and predict the polarity of those sentiments. (Scores are macro-averages.)
| Test Dataset | Base Model | Fine-tuned Model | Precision | Recall | F1 Score |
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|51.92|65.55|54.94|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|54.62|53.65|54.08|
|hotel reviews (SemEval 2015)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|55.43|56.53|54.03|
|hotel reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|52.88|55.19|53.85|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|44.25|41.55|42.81|
|laptop reviews (SemEval 2014)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|46.15|33.23|37.09|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|41.7|34.38|36.93|
|laptop reviews (SemEval 2014)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|44.98|31.87|35.67|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|MAMS-ATE (2019)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|72.06|72.98|72.49|
|MAMS-ATE (2019)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|72.97|71.63|72.26|
|MAMS-ATE (2019)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|69.34|73.3|71.07|
|MAMS-ATE (2019)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|65.74|75.11|69.77|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|61.15|58.46|59.74|
|restaurant reviews (SemEval 2014)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|60.13|56.81|58.13|
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|56.79|59.3|57.93|
|restaurant reviews (SemEval 2014)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|58.99|54.76|56.45|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|53.89|55.7|54.11|
|restaurant reviews (SemEval 2015)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|54.36|55.38|53.6|
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|51.67|56.58|53.29|
|restaurant reviews (SemEval 2015)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|54.55|53.68|53.12|
| ------------ | ---------- | ---------------- | --------- | ------ | -------- |
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-large|[gauneg/roberta-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/roberta-large-absa-ate-sentiment-lora-adapter)|53.7|60.49|55.05|
|restaurant reviews (SemEval 2016)|FacebookAI/roberta-base|(this) [gauneg/roberta-base-absa-ate-sentiment](https://huggingface.co/gauneg/roberta-base-absa-ate-sentiment)|52.31|54.58|52.33|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-base|[gauneg/deberta-v3-base-absa-ate-sentiment](https://huggingface.co/gauneg/deberta-v3-base-absa-ate-sentiment)|52.07|54.58|52.15|
|restaurant reviews (SemEval 2016)|microsoft/deberta-v3-large|[gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter](https://huggingface.co/gauneg/deberta-v3-large-absa-ate-sentiment-lora-adapter)|49.07|56.5|51.25|
|
SatyaShodhaka/florence-crosswalk2-crop-ft
|
SatyaShodhaka
| 2024-11-13T17:08:29Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-11-13T17:06:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahul77/qwen2-7b-instruct-markdown
|
rahul77
| 2024-11-13T16:54:56Z | 14 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-10-27T12:47:25Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2-7b-instruct-markdown
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-7b-instruct-markdown
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on an unknown dataset.
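This repository contains a PEFT adapter rather than full model weights, so usage presumably means attaching the adapter to the base model. A minimal sketch under that assumption:
```python
# Sketch; assumes a standard LoRA adapter layout on top of the base model.
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rahul77/qwen2-7b-instruct-markdown")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
```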
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
rh-rad-ai-roadshow/granite-3-parasol-instruct
|
rh-rad-ai-roadshow
| 2024-11-13T16:54:10Z | 5 | 0 | null |
[
"safetensors",
"granite",
"license:apache-2.0",
"region:us"
] | null | 2024-11-13T16:45:22Z |
---
license: apache-2.0
---
|
GGarri/whisper_finetuned_ver241113_1
|
GGarri
| 2024-11-13T16:47:34Z | 79 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ko",
"dataset:GGarri/241113_newdata",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-13T09:46:47Z |
---
library_name: transformers
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- GGarri/241113_newdata
metrics:
- wer
model-index:
- name: Whisper Small ko
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: customdata
type: GGarri/241113_newdata
metrics:
- name: Wer
type: wer
value: 0.8156606851549755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ko
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the customdata dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0498
- Cer: 1.1070
- Wer: 0.8157
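A minimal inference sketch (not part of the original card; the audio file name is hypothetical):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="GGarri/whisper_finetuned_ver241113_1",
)
print(asr("korean_sample.wav")["text"])  # hypothetical input recording
```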
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 1.1429 | 1.5625 | 100 | 0.8829 | 14.7984 | 14.5304 |
| 0.3401 | 3.125 | 200 | 0.2637 | 2.0625 | 1.7828 |
| 0.0413 | 4.6875 | 300 | 0.0599 | 1.5498 | 1.3167 |
| 0.0163 | 6.25 | 400 | 0.0462 | 1.2818 | 0.9904 |
| 0.0127 | 7.8125 | 500 | 0.0517 | 1.5265 | 1.1885 |
| 0.0065 | 9.375 | 600 | 0.0402 | 1.5031 | 1.0487 |
| 0.0028 | 10.9375 | 700 | 0.0396 | 1.7012 | 1.3167 |
| 0.001 | 12.5 | 800 | 0.0406 | 1.5148 | 1.1186 |
| 0.0004 | 14.0625 | 900 | 0.0405 | 1.4216 | 1.0371 |
| 0.0005 | 15.625 | 1000 | 0.0424 | 1.5847 | 1.1885 |
| 0.0001 | 17.1875 | 1100 | 0.0425 | 1.2701 | 0.9788 |
| 0.0001 | 18.75 | 1200 | 0.0429 | 1.3051 | 1.0137 |
| 0.0001 | 20.3125 | 1300 | 0.0432 | 1.2701 | 0.9788 |
| 0.0001 | 21.875 | 1400 | 0.0436 | 1.2818 | 0.9904 |
| 0.0001 | 23.4375 | 1500 | 0.0439 | 1.2934 | 1.0021 |
| 0.0001 | 25.0 | 1600 | 0.0441 | 1.2934 | 1.0021 |
| 0.0001 | 26.5625 | 1700 | 0.0443 | 1.2934 | 1.0021 |
| 0.0001 | 28.125 | 1800 | 0.0446 | 1.2934 | 1.0021 |
| 0.0001 | 29.6875 | 1900 | 0.0448 | 1.2818 | 0.9904 |
| 0.0001 | 31.25 | 2000 | 0.0449 | 1.2002 | 0.9089 |
| 0.0001 | 32.8125 | 2100 | 0.0454 | 1.2002 | 0.9089 |
| 0.0001 | 34.375 | 2200 | 0.0458 | 1.2002 | 0.9089 |
| 0.0 | 35.9375 | 2300 | 0.0461 | 1.2002 | 0.9089 |
| 0.0 | 37.5 | 2400 | 0.0463 | 1.1769 | 0.8856 |
| 0.0 | 39.0625 | 2500 | 0.0465 | 1.1769 | 0.8856 |
| 0.0 | 40.625 | 2600 | 0.0467 | 1.1536 | 0.8623 |
| 0.0 | 42.1875 | 2700 | 0.0469 | 1.1303 | 0.8390 |
| 0.0 | 43.75 | 2800 | 0.0471 | 1.1536 | 0.8623 |
| 0.0 | 45.3125 | 2900 | 0.0473 | 1.1536 | 0.8623 |
| 0.0 | 46.875 | 3000 | 0.0474 | 1.1536 | 0.8623 |
| 0.0 | 48.4375 | 3100 | 0.0476 | 1.1536 | 0.8623 |
| 0.0 | 50.0 | 3200 | 0.0477 | 1.1303 | 0.8390 |
| 0.0 | 51.5625 | 3300 | 0.0478 | 1.1419 | 0.8506 |
| 0.0 | 53.125 | 3400 | 0.0479 | 1.1186 | 0.8273 |
| 0.0 | 54.6875 | 3500 | 0.0481 | 1.1186 | 0.8273 |
| 0.0 | 56.25 | 3600 | 0.0482 | 1.1186 | 0.8273 |
| 0.0 | 57.8125 | 3700 | 0.0483 | 1.1186 | 0.8273 |
| 0.0 | 59.375 | 3800 | 0.0484 | 1.1070 | 0.8157 |
| 0.0 | 60.9375 | 3900 | 0.0485 | 1.1070 | 0.8157 |
| 0.0 | 62.5 | 4000 | 0.0487 | 1.1070 | 0.8157 |
| 0.0 | 64.0625 | 4100 | 0.0490 | 1.1070 | 0.8157 |
| 0.0 | 65.625 | 4200 | 0.0492 | 1.1070 | 0.8157 |
| 0.0 | 67.1875 | 4300 | 0.0494 | 1.1070 | 0.8157 |
| 0.0 | 68.75 | 4400 | 0.0495 | 1.1070 | 0.8157 |
| 0.0 | 70.3125 | 4500 | 0.0496 | 1.1070 | 0.8157 |
| 0.0 | 71.875 | 4600 | 0.0497 | 1.1070 | 0.8157 |
| 0.0 | 73.4375 | 4700 | 0.0497 | 1.1070 | 0.8157 |
| 0.0 | 75.0 | 4800 | 0.0497 | 1.1070 | 0.8157 |
| 0.0 | 76.5625 | 4900 | 0.0498 | 1.1070 | 0.8157 |
| 0.0 | 78.125 | 5000 | 0.0498 | 1.1070 | 0.8157 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.4.0
- Datasets 2.18.0
- Tokenizers 0.20.3
|
mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF
|
mradermacher
| 2024-11-13T16:47:21Z | 194 | 0 |
transformers
|
[
"transformers",
"gguf",
"juanako",
"UNA",
"cybertron",
"fbl",
"en",
"dataset:fblgit/tree-of-knowledge",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"base_model:quantized:fblgit/una-cybertron-7b-v2-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-13T14:00:46Z |
---
base_model: fblgit/una-cybertron-7b-v2-bf16
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- allenai/ultrafeedback_binarized_cleaned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- juanako
- UNA
- cybertron
- fbl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
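As a hedged sketch (not from the original card), one common route is to download a single quant and load it with `llama-cpp-python`:
```python
# Sketch; assumes llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF",
    filename="una-cybertron-7b-v2-bf16.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is a GGUF file? A:", max_tokens=64)["choices"][0]["text"])
```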
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF/resolve/main/una-cybertron-7b-v2-bf16.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
griffio/vit-large-patch16-224-dungeon-geo-morphs-006
|
griffio
| 2024-11-13T16:47:11Z | 193 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224",
"base_model:finetune:google/vit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-13T16:41:09Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-large-patch16-224-dungeon-geo-morphs-006
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9722222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-dungeon-geo-morphs-006
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1481
- Accuracy: 0.9722
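A minimal inference sketch (not part of the original card; the input image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="griffio/vit-large-patch16-224-dungeon-geo-morphs-006",
)
print(classifier("dungeon_tile.png"))  # hypothetical input image
```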
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0005 | 6.5714 | 10 | 0.1356 | 0.9722 |
| 0.0 | 13.2857 | 20 | 0.1541 | 0.9722 |
| 0.0 | 19.8571 | 30 | 0.1481 | 0.9722 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/una-cybertron-7b-v2-bf16-GGUF
|
mradermacher
| 2024-11-13T16:47:10Z | 25 | 0 |
transformers
|
[
"transformers",
"gguf",
"juanako",
"UNA",
"cybertron",
"fbl",
"en",
"dataset:fblgit/tree-of-knowledge",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"base_model:quantized:fblgit/una-cybertron-7b-v2-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T07:29:38Z |
---
base_model: fblgit/una-cybertron-7b-v2-bf16
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- allenai/ultrafeedback_binarized_cleaned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- juanako
- UNA
- cybertron
- fbl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
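As a hedged sketch (not from the original card), recent versions of `llama-cpp-python` can also fetch a quant straight from the Hub:
```python
from llama_cpp import Llama

# Sketch; Llama.from_pretrained resolves the file via huggingface_hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/una-cybertron-7b-v2-bf16-GGUF",
    filename="*Q4_K_M.gguf",  # glob matching the "fast, recommended" quant below
)
print(llm("Q: What is a GGUF file? A:", max_tokens=48)["choices"][0]["text"])
```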
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/una-cybertron-7b-v2-bf16-GGUF/resolve/main/una-cybertron-7b-v2-bf16.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zainajabroh/image_emotion_classification_project_4
|
Zainajabroh
| 2024-11-13T16:44:56Z | 192 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224-in21k",
"base_model:finetune:google/vit-large-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-13T16:41:32Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_emotion_classification_project_4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.51875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_emotion_classification_project_4
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9052
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6977 | 1.0 | 640 | 1.5713 | 0.325 |
| 1.7006 | 2.0 | 1280 | 1.4543 | 0.4562 |
| 1.6725 | 3.0 | 1920 | 1.6124 | 0.4625 |
| 1.2312 | 4.0 | 2560 | 1.6711 | 0.5 |
| 0.6097 | 5.0 | 3200 | 1.8838 | 0.5312 |
| 1.264 | 6.0 | 3840 | 2.0933 | 0.4875 |
| 2.4064 | 7.0 | 4480 | 2.0628 | 0.5188 |
| 2.0741 | 8.0 | 5120 | 2.6505 | 0.4625 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
rohiths24/Llama-3.2-1B-Instruct-Finetuned
|
rohiths24
| 2024-11-13T16:43:54Z | 158 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T16:42:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ebinna/single_cls_mamba2-130m_redo
|
ebinna
| 2024-11-13T16:39:37Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-11-13T15:36:29Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: single_cls_mamba2-130m_redo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# single_cls_mamba2-130m_redo
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Accuracy: 0.969
- Micro Precision: 0.969
- Micro Recall: 0.969
- Micro F1: 0.969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro Precision | Micro Recall | Micro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|
| 0.1705 | 1.0 | 2500 | 0.1675 | 0.963 | 0.963 | 0.963 | 0.963 |
| 0.031 | 2.0 | 5000 | 0.1862 | 0.97 | 0.97 | 0.97 | 0.97 |
| 0.0185 | 3.0 | 7500 | 0.1889 | 0.972 | 0.972 | 0.972 | 0.972 |
| 0.0029 | 4.0 | 10000 | 0.2020 | 0.968 | 0.968 | 0.968 | 0.968 |
| 0.0002 | 5.0 | 12500 | 0.2062 | 0.969 | 0.969 | 0.969 | 0.969 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.1.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cuongdev/7nguoi-5000
|
cuongdev
| 2024-11-13T16:39:05Z | 35 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-13T16:33:22Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### 7nguoi-5000 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
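A minimal generation sketch (not part of the original card; the repository tags indicate a standard `StableDiffusionPipeline` layout, and the prompt is hypothetical):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/7nguoi-5000", torch_dtype=torch.float16
).to("cuda")
image = pipe("photo of the 7nguoi-5000 concept, portrait").images[0]  # hypothetical prompt
image.save("sample.png")
```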
Sample pictures of this concept:
|
griffio/vit-large-patch16-224-dungeon-geo-morphs-005
|
griffio
| 2024-11-13T16:38:41Z | 188 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224",
"base_model:finetune:google/vit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-13T16:33:31Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-large-patch16-224-dungeon-geo-morphs-005
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: dungeon-geo-morphs
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9722222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-dungeon-geo-morphs-005
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
- Accuracy: 0.9722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.582 | 6.5714 | 10 | 0.1846 | 0.9722 |
| 0.0293 | 13.2857 | 20 | 0.0766 | 0.9722 |
| 0.0021 | 19.8571 | 30 | 0.0719 | 0.9722 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
deepnet111/sn9-3b-006
|
deepnet111
| 2024-11-13T16:36:51Z | 215 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-13T15:41:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
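Pending the missing details, a minimal sketch based only on the repository tags (`llama`, `text-generation`) might look like this; the prompt and sampling parameters are illustrative assumptions, not documented values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepnet111/sn9-3b-006"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; generation settings are assumptions, not documented values.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```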
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V5
|
LBK95
| 2024-11-13T16:27:56Z | 15 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-11-13T09:49:22Z |
---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- dpo
- generated_from_trainer
library_name: peft
model-index:
- name: Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0059
- Rewards/chosen: -1.9822
- Rewards/rejected: -2.2494
- Rewards/accuracies: 0.6000
- Rewards/margins: 0.2673
- Logps/rejected: -163.8624
- Logps/chosen: -165.7420
- Logits/rejected: -0.1662
- Logits/chosen: -0.1805
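Because this repository stores a PEFT adapter rather than full weights, inference requires loading the gated `meta-llama/Llama-2-7b-hf` base model first and attaching the adapter, roughly as in the sketch below (access to the base weights is assumed).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; Hub access must be granted
adapter_id = "LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the DPO-trained PEFT adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```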
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
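These settings correspond approximately to a `trl` `DPOConfig` like the sketch below, assuming a `trl` release where `DPOConfig` is available; this is a hedged reconstruction and `output_dir` is a placeholder.

```python
from trl import DPOConfig

# Hedged mapping of the hyperparameters above; assumes trl provides DPOConfig.
dpo_args = DPOConfig(
    output_dir="Llama-2-7b-hf-DPO-LookAhead-5",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # effective train batch size: 4
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    seed=42,
)
```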
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7395 | 0.3010 | 73 | 0.6468 | 0.0134 | -0.0847 | 0.9000 | 0.0981 | -142.2149 | -145.7866 | 0.3794 | 0.3670 |
| 0.7285 | 0.6021 | 146 | 0.6128 | 0.0518 | -0.1414 | 0.7000 | 0.1932 | -142.7814 | -145.4018 | 0.3432 | 0.3316 |
| 0.5488 | 0.9031 | 219 | 0.5896 | 0.0505 | -0.2094 | 0.8000 | 0.2599 | -143.4620 | -145.4151 | 0.3212 | 0.3092 |
| 0.4181 | 1.2041 | 292 | 0.7451 | -0.5895 | -1.0121 | 0.7000 | 0.4226 | -151.4888 | -151.8154 | 0.2582 | 0.2463 |
| 0.6666        | 1.5052 | 365  | 0.6292          | -0.4920        | -0.8706          | 0.5000             | 0.3786          | -150.0739      | -150.8403    | 0.2068          | 0.1950        |
| 0.5649 | 1.8062 | 438 | 0.6652 | -0.6961 | -1.0296 | 0.6000 | 0.3335 | -151.6640 | -152.8809 | 0.1043 | 0.0914 |
| 0.3129 | 2.1072 | 511 | 0.8072 | -1.2644 | -1.5342 | 0.6000 | 0.2698 | -156.7100 | -158.5638 | 0.0071 | -0.0060 |
| 0.0785 | 2.4082 | 584 | 1.0289 | -2.0249 | -2.2745 | 0.6000 | 0.2496 | -164.1127 | -166.1691 | -0.1558 | -0.1700 |
| 0.1698 | 2.7093 | 657 | 1.0059 | -1.9822 | -2.2494 | 0.6000 | 0.2673 | -163.8624 | -165.7420 | -0.1662 | -0.1805 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|