# **PyThagorean-3B**

PyThagorean [Python + Math] is a Python- and mathematics-focused model designed to solve mathematical problems using Python libraries and code. It is built on the LLaMA architecture and has been fine-tuned on 1.5 million entries. The model is available in several parameter sizes, including 10B, 3B, and 1B (Tiny). These instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agent-based retrieval and summarization tasks. PyThagorean is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions employ supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.

# **Use with transformers**

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes together with the `generate()` function.
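A minimal sketch of the `pipeline` route is shown below. The repository id is a placeholder (substitute the actual checkpoint path), and the dtype and device settings are common defaults rather than requirements:

```python
import torch
from transformers import pipeline

# Placeholder repository id; replace with the actual PyThagorean checkpoint path.
model_id = "your-org/PyThagorean-3B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; use what your hardware supports
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a math assistant that solves problems with Python."},
    {"role": "user", "content": "Solve for x: 3x + 7 = 25. Show your work in Python."},
]

outputs = pipe(messages, max_new_tokens=256)
# The pipeline returns the full chat history; the last entry is the model's reply.
print(outputs[0]["generated_text"][-1])
```

Equivalently, you can load the model with the Auto classes and call `generate()` directly; the sketch below uses the same placeholder repository id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with the actual PyThagorean checkpoint path.
model_id = "your-org/PyThagorean-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Compute the hypotenuse of a right triangle with legs 3 and 4 using Python."},
]

# Format the chat with the model's template and append the assistant prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```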