legolasyiu committed
Commit 98fd191 · verified · 1 Parent(s): 3916224

Update index.html

Files changed (1)
  1. index.html +26 -23
index.html CHANGED
@@ -8,50 +8,53 @@
 </head>
 <body>
 <div class="card">
-<h1>Welcome to your static Space!</h1>
-# SuperTransformer
-
-Suptertransformer that auto loads Huggingface models
-
-# Introduction
-This is a single line transformer for easy to load models from Huggingface. It is not to replace Huggingface Transformer process. It simplifies it and speed up the loading the process of the HuggingFace models
-
-# Usage
-SuperTransformers download the model locally. The super class uses AutoTokenizer and AutoModelForCausalLM.from_pretrained.
-
-# Installation
-``` bash
-pip install bitsandbytes>=0.39.0
-pip install --upgrade accelerate transformers
-```
-# How to run
-```python
+<h1>Welcome to Super Transformer</h1>
+<h1>SuperTransformer</h1>
+
+<p>Suptertransformer that auto loads Huggingface models </p>
+<h2>Introduction</h2>
+<p>This is a single line transformer for easy to load models from Huggingface. It is not to replace Huggingface Transformer process. It simplifies it and speed up the loading the process of the HuggingFace models</p>
+
+<h2>Usage</h2>
+<p>SuperTransformers download the model locally. The super class uses AutoTokenizer and AutoModelForCausalLM.from_pretrained.</p>
+
+<h2>Installation</h2>
+<code>
+pip install bitsandbytes>=0.39.0
+pip install --upgrade accelerate transformers
+</code>
+
+<h2>How to run</h2>
+<code>
 python SuperTransformer.py
-```
-
-# Example of usage:
-
-```python
+</code>
+
+<h2>Example of usage:</h2>
+
+<code>
 # Load SuperTransformer Class, (1) Loads Huggingface model, (2) System Prompt (3) Text/prompt (4)Max tokens
 SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2","You are a highly knowledgeable assistant with expertise in chemistry and physics. <reasoning>","What is the area of a circle, radius=16, reason step by step", 2026)
 # 8-bit quantization
 SuperTransformers.HuggingFaceTransformer8bit()
 # or 4-bit quantization
 SuperTransformers.HuggingFaceTransformer4bit()
-```
-
-## Returns model and tokenizer
-```python
+</code>
+
+<h2>Returns model and tokenizer</h2>
+<code>
 SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
 model, tokenizer = HuggingfaceTransfomer() #returns the model and tokenizer
-```
-## returns pipline as higher helper
-```python
+</code>
+
+<h2>returns pipline as higher helper</h2>
+<code>
 SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
 pipe = HuggingfacePipeline() #returns the pipeline only
 output = pipe(self.text, max_new_tokens=self.max_new_tokens) # Limit output length to save memory
 # Print the generated output
 print(output)
 </p>
 </div>
 </body>
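The README text in this diff describes a `SuperTransformers` wrapper class but the diff does not include its source. Below is a minimal sketch of what such a wrapper might look like, assuming only what the usage examples show: the constructor takes a model id, system prompt, text, and max-token count, and the method names `HuggingFaceTransformer8bit`, `HuggingFaceTransformer4bit`, and `HuggingfacePipeline` come from those examples. The quantization path uses `BitsAndBytesConfig` from `transformers` (which is why the README installs `bitsandbytes>=0.39.0`); everything else here is an assumption, not the project's actual implementation.

```python
class SuperTransformers:
    """Hypothetical sketch of the wrapper described in the README diff.

    Only the constructor arguments and method names are taken from the
    README's usage examples; the bodies are assumptions.
    """

    def __init__(self, model_id, system_prompt="", text="", max_new_tokens=512):
        self.model_id = model_id
        self.system_prompt = system_prompt
        self.text = text
        self.max_new_tokens = max_new_tokens

    def _load(self, quantization_config=None):
        # Imported lazily so the class can be defined without transformers installed.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained(self.model_id)
        model = AutoModelForCausalLM.from_pretrained(
            self.model_id,
            device_map="auto",
            quantization_config=quantization_config,
        )
        return model, tokenizer

    def HuggingFaceTransformer8bit(self):
        # 8-bit quantization via bitsandbytes (hence bitsandbytes>=0.39.0).
        from transformers import BitsAndBytesConfig

        return self._load(BitsAndBytesConfig(load_in_8bit=True))

    def HuggingFaceTransformer4bit(self):
        # 4-bit quantization via bitsandbytes.
        from transformers import BitsAndBytesConfig

        return self._load(BitsAndBytesConfig(load_in_4bit=True))

    def HuggingfacePipeline(self):
        # Higher-level helper: returns a ready-to-use text-generation pipeline.
        from transformers import pipeline

        return pipeline("text-generation", model=self.model_id)
```

With this shape, the README's example maps directly onto the class: construct once with the model id, system prompt, prompt text, and token budget, then pick the 8-bit or 4-bit loader, or grab the pipeline helper and call it with `max_new_tokens` to cap output length.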