legolasyiu committed on
Commit 9dd43c7 · verified · 1 Parent(s): 98fd191

Update index.html

Files changed (1)
  1. index.html +44 -46
index.html CHANGED
@@ -9,52 +9,50 @@
  <body>
  <div class="card">
  <h1>Welcome to Super Transformer</h1>
- <h1>SuperTransformer</h1>
-
- <p>Suptertransformer that auto loads Huggingface models </p>
- <h2>Introduction</h2>
- <p>This is a single line transformer for easy to load models from Huggingface. It is not to replace Huggingface Transformer process. It simplifies it and speed up the loading the process of the HuggingFace models</p>
-
-
- <h2>Usage</h2>
- <p>SuperTransformers download the model locally. The super class uses AutoTokenizer and AutoModelForCausalLM.from_pretrained.</p>
-
- <h2>Installation</h2>
- <code>
- pip install bitsandbytes>=0.39.0
- pip install --upgrade accelerate transformers
- </code>
-
- <h2>How to run</h2>
- <code>
- python SuperTransformer.py
- </code>
-
- <h2>Example of usage:</h2>
-
- <code>
- # Load SuperTransformer Class, (1) Loads Huggingface model, (2) System Prompt (3) Text/prompt (4)Max tokens
- SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2","You are a highly knowledgeable assistant with expertise in chemistry and physics. <reasoning>","What is the area of a circle, radius=16, reason step by step", 2026)
- # 8-bit quantization
- SuperTransformers.HuggingFaceTransformer8bit()
- # or 4-bit quantization
- SuperTransformers.HuggingFaceTransformer4bit()
- </code>
-
- <h2>Returns model and tokenizer</h2>
- <code>
- SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
- model, tokenizer = HuggingfaceTransfomer() #returns the model and tokenizer
- </code>
-
- <h2>returns pipline as higher helper</h2>
- <code>
- SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
- pipe = HuggingfacePipeline() #returns the pipeline only
- output = pipe(self.text, max_new_tokens=self.max_new_tokens) # Limit output length to save memory
- # Print the generated output
- print(output)
- </code>
  </p>
  </div>
  </body>
 
  <body>
  <div class="card">
  <h1>Welcome to Super Transformer</h1>
+ <p>SuperTransformer auto-loads Hugging Face models.</p>
+ <h2>Introduction</h2>
+ <p>SuperTransformer is a single-line wrapper that makes Hugging Face models easy to load. It is not meant to replace the Hugging Face Transformers workflow; it simplifies it and speeds up loading Hugging Face models.</p>
+
+ <h2>Usage</h2>
+ <p>SuperTransformers downloads the model locally. The class uses AutoTokenizer and AutoModelForCausalLM.from_pretrained.</p>
+
+ <h2>Installation</h2>
+ <code>
+ pip install "bitsandbytes>=0.39.0"
+ pip install --upgrade accelerate transformers
+ </code>
+
+ <h2>How to run</h2>
+ <code>
+ python SuperTransformer.py
+ </code>
+
+ <h2>Example of usage</h2>
+
+ <code>
+ # Instantiate the SuperTransformers class: (1) Hugging Face model ID, (2) system prompt, (3) text/prompt, (4) max tokens
+ SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2","You are a highly knowledgeable assistant with expertise in chemistry and physics. <reasoning>","What is the area of a circle, radius=16, reason step by step", 2026)
+ # 8-bit quantization
+ SuperTransformers.HuggingFaceTransformer8bit()
+ # or 4-bit quantization
+ SuperTransformers.HuggingFaceTransformer4bit()
+ </code>
+
+ <h2>Returning the model and tokenizer</h2>
+ <code>
+ SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
+ model, tokenizer = SuperTransformers.HuggingfaceTransfomer()  # returns the model and tokenizer
+ </code>
+
+ <h2>Returning a pipeline as a higher-level helper</h2>
+ <code>
+ SuperTransformers = SuperTransformers("EpistemeAI/ReasoningCore-3B-RE1-V2")
+ pipe = SuperTransformers.HuggingfacePipeline()  # returns the pipeline only
+ output = pipe(text, max_new_tokens=max_new_tokens)  # limit output length to save memory
+ # Print the generated output
+ print(output)
+ </code>
  </p>
  </div>
  </body>
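Judging by the Usage section in the diff, the 8-bit and 4-bit loaders are thin wrappers over AutoTokenizer, AutoModelForCausalLM, and bitsandbytes quantization. A minimal sketch of such a wrapper, assuming a hypothetical class name (SimpleLoader) and method names; this is not the actual SuperTransformer source:

```python
# Hypothetical sketch of a one-line model loader in the spirit of
# SuperTransformers. All names here are assumptions for illustration.
class SimpleLoader:
    def __init__(self, model_id, system_prompt="", text="", max_new_tokens=256):
        # Store the constructor arguments the diff's example passes positionally.
        self.model_id = model_id
        self.system_prompt = system_prompt
        self.text = text
        self.max_new_tokens = max_new_tokens

    def load_8bit(self):
        # Heavy imports stay inside the method so the class itself is cheap to import.
        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
        tokenizer = AutoTokenizer.from_pretrained(self.model_id)
        model = AutoModelForCausalLM.from_pretrained(
            self.model_id,
            quantization_config=BitsAndBytesConfig(load_in_8bit=True),
            device_map="auto",  # let accelerate place layers across devices
        )
        return model, tokenizer

    def load_4bit(self):
        # Same as above but with 4-bit NF4 quantization for a smaller footprint.
        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
        tokenizer = AutoTokenizer.from_pretrained(self.model_id)
        model = AutoModelForCausalLM.from_pretrained(
            self.model_id,
            quantization_config=BitsAndBytesConfig(
                load_in_4bit=True, bnb_4bit_quant_type="nf4"
            ),
            device_map="auto",
        )
        return model, tokenizer
```

Keeping the transformers imports inside the load methods means constructing the wrapper costs nothing until a model is actually requested.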
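The pipeline helper at the end of the diff can likewise be sketched as a closure over transformers.pipeline. The pipeline_factory hook below is purely illustrative, added so the sketch can be exercised without downloading a model; none of these names come from SuperTransformer itself:

```python
def make_pipeline(model_id, max_new_tokens=256, pipeline_factory=None):
    """Hypothetical stand-in for SuperTransformers' pipeline helper.

    pipeline_factory exists only so this sketch can be tested without
    fetching weights; by default it builds a transformers text-generation
    pipeline for model_id.
    """
    if pipeline_factory is None:
        from transformers import pipeline  # heavy import kept local
        pipeline_factory = lambda: pipeline("text-generation", model=model_id)
    pipe = pipeline_factory()

    def generate(text):
        # Cap output length to save memory, as the snippet in the diff notes.
        return pipe(text, max_new_tokens=max_new_tokens)

    return generate
```

With the default factory this returns a callable equivalent to the diff's `pipe(text, max_new_tokens=...)` usage.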