Agnuxo committed on
Commit
b66a533
verified
1 Parent(s): 94f867a

Upload README.md with huggingface_hub

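The commit message references the `huggingface_hub` client; a hedged sketch of what such an upload typically looks like with that library's `HfApi.upload_file` (the function name, repo id example, and token setup are assumptions, not taken from this commit):

```python
from huggingface_hub import HfApi


def upload_readme(repo_id: str, readme_path: str = "README.md") -> None:
    """Push a local README.md to a Hub model repo.

    Assumes a write token is already configured (e.g. via `huggingface-cli login`).
    """
    api = HfApi()
    api.upload_file(
        path_or_fileobj=readme_path,
        path_in_repo="README.md",
        repo_id=repo_id,
        commit_message="Upload README.md with huggingface_hub",
    )


# Example (not run here):
# upload_readme("Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit")
```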
Files changed (1)
  1. README.md +8 -30
README.md CHANGED
@@ -1,35 +1,13 @@
 
- ---
- base_model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit
- language: ['en', 'es']
- license: apache-2.0
- tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
- datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
- library_name: adapter-transformers
- metrics:
 
- ---
 
- # Uploaded model
 
- [<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- - **Developed by:** Agnuxo(https://github.com/Agnuxo1)
- - **License:** apache-2.0
- - **Finetuned from model :** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
 
- This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
-
-
- ## Benchmark Results
-
- This model has been fine-tuned for various tasks and evaluated on the following benchmarks:
-
-
- Model Size: 3,821,079,552 parameters
- Required Memory: 14.23 GB
-
- For more details, visit my [GitHub](https://github.com/Agnuxo1).
-
- Thanks for your interest in this model!
 
 
+ # Fine-tuned model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit
 
+ This model has been fine-tuned for the glue (sst2) task and evaluated with the following results:
 
+ - **Accuracy**: 0.5011
+ - **Number of parameters**: 3,722,585,088
+ - **Required memory**: 13.87 GB
 
+ You can find more details on my [GitHub](https://github.com/Agnuxo1).
 
+ Thanks for your interest in this model!
+
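The memory figures on both sides of this diff are consistent with holding the weights at full (fp32) precision, i.e. 4 bytes per parameter; a minimal sketch of the arithmetic (the function name is mine, not from the README):

```python
def fp32_memory_gib(num_params: int) -> float:
    """Approximate memory needed to hold a model's weights in fp32 (4 bytes each)."""
    return num_params * 4 / 2**30  # bytes -> GiB


# Figures quoted in the diff:
print(round(fp32_memory_gib(3_722_585_088), 2))  # new README: 13.87
print(round(fp32_memory_gib(3_821_079_552), 2))  # old README: 14.23
```

Loading in fp16/bf16 or a quantized GGUF format would need correspondingly less.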