prithivMLmods committed (verified)
Commit 0bca6d3
1 Parent(s): 3672bab

Update README.md

Files changed (1):
  1. README.md +1 -2
README.md CHANGED

```diff
@@ -16,11 +16,10 @@ tags:
 
 ![python.gif](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/4nYxcbXSfu2Q0fIXul41e.gif)
 
-# **PyThagorean-10B**
+# **PyThagorean-3B**
 
 PyThagorean [Python + Math] is a Python- and mathematics-focused model designed to solve mathematical problems using Python libraries and code. It has been fine-tuned on 1.5 million entries and is built on LLaMA's architecture. The model is available in several parameter sizes: 10B, 3B, and 1B (Tiny). These instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agent-based retrieval and summarization. PyThagorean is an auto-regressive language model built on an optimized transformer architecture; the tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.
 
-
 # **Use with transformers**
 
 Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function.
```
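For reference, here is a minimal sketch of the `pipeline` route the README describes. The Hub repository id `prithivMLmods/PyThagorean-3B` is an assumption inferred from the commit author and model name; substitute the actual id if it differs.

```python
import torch
from transformers import pipeline

# Repository id is an assumption based on the commit author and model name.
model_id = "prithivMLmods/PyThagorean-3B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # halves memory on GPU; use float32 on CPU
    device_map="auto",           # place weights on available devices
)

messages = [
    {"role": "system", "content": "You solve math problems by writing Python."},
    {"role": "user", "content": "Compute the hypotenuse of a right triangle with legs 3 and 4."},
]

# The pipeline applies the model's chat template to the message list and
# returns the conversation with the assistant's reply appended.
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])
```

The same inference can be run with the Auto classes instead: build the prompt with `AutoTokenizer.apply_chat_template(...)` and decode the output of `AutoModelForCausalLM.generate(...)`.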