{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Shasha Demo Notebook\n",
"This notebook shows how to programmatically invoke our AI inference pipeline, run sentiment analysis, and generate code examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 1. Setup inference client\n",
"from hf_client import get_inference_client\n",
"# Initialize client for Qwen3-32B (fallback on Groq if unavailable)\n",
"client = get_inference_client('Qwen/Qwen3-32B', provider='auto')\n",
"# Example chat completion request\n",
"resp = client.chat.completions.create(\n",
" model='Qwen/Qwen3-32B',\n",
" messages=[{'role':'user','content':'Write a Python function to reverse a string.'}]\n",
")\n",
"print(resp.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 2. Sentiment Analysis with Transformers.js (Python demo)\n",
"from transformers import pipeline\n",
"# Using OpenAI provider for sentiment\n",
"sentiment = pipeline('sentiment-analysis', model='openai/gpt-4', trust_remote_code=True)\n",
"print(sentiment('I love building AI-powered tools!'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Next steps:\n",
"- Try different models (Gemini Pro, Fireworks AI) by changing the model= parameter.\n",
"- Explore custom plugins via plugins.py to integrate with Slack or GitHub.\n",
"- Use auth.py to load private files from Google Drive."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.x"
}
},
"nbformat": 4,
"nbformat_minor": 5
}