from typing import List
import gradio as gr
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True, device='cpu')
def embed(document: str) -> List[float]:
    # Convert the numpy array to a plain list so gr.JSON can serialize it
    return model.encode(document).tolist()
with gr.Blocks(title="Nomic Text Embeddings") as app:
    gr.Markdown("# Nomic Text Embeddings v1.5")
    gr.Markdown("Generate embeddings for your text using the nomic-embed-text-v1.5 model.")

    # Input text box
    text_input = gr.Textbox(label="Enter text to embed", placeholder="Type or paste your text here...")

    # Output component to display the embedding
    output = gr.JSON(label="Text Embedding")

    # Submit button with a named API endpoint
    submit_btn = gr.Button("Generate Embedding", variant="primary")

    # Handle both button click and text submission
    submit_btn.click(embed, inputs=text_input, outputs=output, api_name="predict")
    text_input.submit(embed, inputs=text_input, outputs=output)

    # API usage guide
    gr.Markdown("## API Usage")
    gr.Markdown("""
You can call this API programmatically. Hugging Face Spaces queue incoming requests, and the official Gradio client libraries handle the queuing for you automatically.
### Quick Command-Line Usage
```bash
# Install gradio client
pip install gradio_client
# Generate embedding with one command
python -c "from gradio_client import Client; print(Client('ipepe/nomic-embeddings').predict('Your text here', api_name='/predict'))"
```
### Python Example (Recommended)
```python
from gradio_client import Client
client = Client("ipepe/nomic-embeddings")
result = client.predict(
"Your text to embed goes here",
api_name="/predict"
)
print(result) # Returns the embedding array
```
### JavaScript/Node.js Example
```javascript
import { client } from "@gradio/client";
const app = await client("ipepe/nomic-embeddings");
const result = await app.predict("/predict", ["Your text to embed goes here"]);
console.log(result.data);
```
### Direct HTTP (Advanced)
Direct HTTP requests require implementing the Gradio queue protocol:
1. POST to `/queue/join` to join the queue
2. Listen to `/queue/data` via SSE for results
3. Handle session management
Rather than implementing this yourself, we recommend the official Gradio clients above, which handle the protocol automatically.
The response will contain the embedding array as a list of floats.
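### Using the Returned Embedding
Because the result is a plain list of floats, standard vector math applies directly. Below is a minimal sketch of comparing two embeddings with cosine similarity; the vectors shown are placeholders, not real model output, so in practice substitute the arrays returned by `client.predict`:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for two API responses
emb_a = [0.1, 0.2, 0.3]
emb_b = [0.3, 0.2, 0.1]
print(cosine_similarity(emb_a, emb_b))
```

Values close to 1.0 mean the two texts are semantically similar; values near 0 mean they are unrelated.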
""")
if __name__ == '__main__':
    app.launch(server_name="0.0.0.0", show_error=True, server_port=7860)