---
library_name: transformers
tags:
  - generated_from_trainer
metrics:
  - accuracy
datasets:
  - Norod78/HebrewLyricsDataet
language:
  - he
base_model:
  - avichr/hebEMO_fear
pipeline_tag: text-classification
model-index:
  - name: hebrew_lyrics_to_singer_classifer_small_dataset
    results: []
---

# hebrew_lyrics_to_singer_classifer_small_dataset

This model is a fine-tuned version of avichr/hebEMO_fear on the Norod78/HebrewLyricsDataet dataset. It achieves the following results on the evaluation set:

- Loss: 0.0080
- Accuracy: 0.9987

## Model description

This model classifies the singer of a Hebrew song from a snippet of its lyrics. It was trained on a limited 10-singer subset of Hebrew lyrics extracted from Norod78/HebrewLyricsDataet, with the following singer-to-label mapping: `{'诪转讬 讻住驻讬': 0, '转讬住诇诐': 1, '拽讜专讬谉 讗诇讗诇': 2, '讗讬驻讛 讛讬诇讚': 3, '讛诪讻砖驻讜转': 4, '谞讜砖讗讬 讛诪讙讘注转': 5, '专讬拽讬 讙诇': 6, '讝拽谞讬 爪驻转': 7, '讘专讬 住讞专讜祝': 8, '讗讛讜讚 讘谞讗讬': 9}`

## Example use

```python
from transformers import pipeline

model_path = "yaryar78/hebrew_lyrics_to_singer_classifer_small_dataset"

# Load the fine-tuned classifier as a text-classification pipeline
pipe = pipeline("text-classification", model=model_path)

# Singer names covered by the model; the example reverses the list before building the id maps
singers_list = ['诪转讬 讻住驻讬', '转讬住诇诐', '拽讜专讬谉 讗诇讗诇', '讗讬驻讛 讛讬诇讚', '讛诪讻砖驻讜转', '谞讜砖讗讬 讛诪讙讘注转', '专讬拽讬 讙诇', '讝拽谞讬 爪驻转', '讘专讬 住讞专讜祝', '讗讛讜讚 讘谞讗讬']
singers_list = singers_list[::-1]

label2id = {s: i for i, s in enumerate(singers_list)}
id2label = {i: s for s, i in label2id.items()}

# A short Hebrew lyrics snippet to classify
prompt = "爪讬驻讜专讬诐 诪住转讜讘讘讜转 砖诪讞 驻讛 讜砖诪讞 砖诐"

prediction = pipe(prompt)
label_str = prediction[0]["label"]  # the pipeline returns a generic "LABEL_<n>" string
print(label_str)

# Map the numeric label id back to a singer name
label_id = int(label_str.replace("LABEL_", ""))
predicted_singer = id2label[label_id]
print(predicted_singer)
```


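Note that the checkpoint's config may only expose generic `LABEL_<n>` names, which is why the example rebuilds the singer names by hand. A minimal sketch for inspecting what the config actually stores, using the standard `AutoConfig` API (the generic names are an assumption, not confirmed by the card):

```python
from transformers import AutoConfig

# Inspect the label mapping stored in the uploaded config.
# Assumption: this checkpoint stores only generic "LABEL_<n>" names,
# so the human-readable singer names still come from singers_list above.
config = AutoConfig.from_pretrained("yaryar78/hebrew_lyrics_to_singer_classifer_small_dataset")
print(config.id2label)  # e.g. {0: "LABEL_0", 1: "LABEL_1", ...}
```
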
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

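The training script is not included in the card; the block below is a minimal sketch of how the hyperparameters above could be expressed as `transformers.TrainingArguments`. The `output_dir` value is a placeholder, and `optim="adamw_torch"` with its default betas/epsilon corresponds to the optimizer line above.

```python
from transformers import TrainingArguments

# Minimal sketch only: the listed hyperparameters mapped onto TrainingArguments.
# output_dir is an assumed placeholder; the actual training script is not published.
training_args = TrainingArguments(
    output_dir="hebrew_lyrics_to_singer_classifer_small_dataset",
    learning_rate=1e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW (torch), betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```
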
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7106        | 1.0   | 5393  | 0.1266          | 0.9643   |
| 0.2078        | 2.0   | 10786 | 0.0621          | 0.9861   |
| 0.1516        | 3.0   | 16179 | 0.0246          | 0.9945   |
| 0.1018        | 4.0   | 21572 | 0.0114          | 0.9978   |
| 0.0554        | 5.0   | 26965 | 0.0080          | 0.9987   |

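The accuracy values above come from the Trainer's evaluation loop. The metric code is not part of the card; a typical `compute_metrics` for an accuracy-only setup (an assumed sketch using the `evaluate` library, not the author's actual script) looks like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

# Assumed sketch of a compute_metrics function passed to Trainer;
# it converts logits to class predictions and reports accuracy.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```
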
### Framework versions

- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0