Update README.md
README.md
CHANGED
@@ -47,6 +47,36 @@ The model is open source under the Apache 2.0 license.

## Usage
### NOTE:
If you want to try this model without using your own GPU, we have hosted it on our end. To run the library against the hosted playground model, initialize the generator as shown below:

```python
from pip_library_etl import PipEtl

# "cloud" routes requests to the hosted playground model instead of a local GPU
generator = PipEtl(device="cloud")
```

The model is hosted at http://52.165.83.89:3100/infer, so you can also make a POST request to that endpoint with the following payload:

```json
{
  "model_name": "PipableAI/pip-library-etl-1.3b",
  "prompt": "prompt",
  "max_new_tokens": "400"
}
```

```bash
curl -X 'POST' \
  'http://52.165.83.89:3100/infer' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'model_name=PipableAI%2Fpip-library-etl-1.3b&prompt=YOUR PROMPT&device=&max_new_tokens='
```
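
If you prefer Python over curl, the following is a minimal sketch of the same request using the `requests` library. The field names and values are taken from the curl command and JSON payload above; the response format is not documented here, so the sketch simply prints whatever JSON the service returns.

```python
# Minimal sketch of the curl request above, using the `requests` library.
# Field names and values come from this README; the response is assumed to be JSON.
import requests

payload = {
    "model_name": "PipableAI/pip-library-etl-1.3b",
    "prompt": "YOUR PROMPT",      # replace with your actual prompt
    "device": "",
    "max_new_tokens": "400",
}

# Passing a dict via `data=` sends it form-urlencoded, matching the curl command.
response = requests.post(
    "http://52.165.83.89:3100/infer",
    data=payload,
    headers={"accept": "application/json"},
    timeout=120,
)
response.raise_for_status()
print(response.json())
```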
Alternatively, you can try the endpoint interactively through the API docs at http://52.165.83.89:3100/docs#/default/infer_infer_post.

### Library use