How to run 🐳 DeepSite locally
Hi everyone 👋
Some of you have asked me how to use DeepSite locally. It's actually super easy!
Thanks to Inference Providers, you'll be able to switch between different providers just like in the online application. The cost should also be very low (a few cents at most).
Run DeepSite locally
- Clone the repo using git
git clone https://huggingface.co/spaces/enzostvs/deepsite
- Install the dependencies (make sure Node is installed on your machine)
npm install
- Create your .env file and add the HF_TOKEN variable
Make sure to create a token with inference permissions and, optionally, write permissions (if you want to deploy your results in Spaces).
- Build the project
npm run build
- Start it and enjoy with a coffee ☕
npm run start
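The .env step above can be sketched like this (a minimal sketch; the token value is a placeholder, paste the fine-grained token you generated instead):

```shell
# Inside the cloned deepsite folder: write the HF_TOKEN line to .env
# (the token below is a placeholder, not a real credential)
printf 'HF_TOKEN=%s\n' 'hf_your_token_here' > .env
grep '^HF_TOKEN=' .env
```

The file needs exactly that one `KEY=value` line; no quotes or spaces around the `=`.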
To make sure everything is correctly set up, you should see this banner in the top-right corner.
Feel free to ask questions or report issues related to local usage below 👇
Thank you all!
It would be cool to provide instructions for running this in docker. I tried it yesterday and got it running although it gave an error when trying to use it. I did not look into what was causing it yet though.
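For reference, a hedged sketch of what a Compose file for the cloned repo could look like, assuming the Space's Dockerfile serves the app on port 3000 (the service name and port mapping are assumptions, not taken from the repo):

```yaml
# docker-compose.yml (sketch; service name and port are assumptions)
services:
  deepsite:
    build: .
    ports:
      - "3000:3000"
    env_file:
      - .env   # must contain HF_TOKEN=<your token>
```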
Hi
this works great thank you!
WOW
I'm getting "Invalid credentials in Authorization header".
Not really that familiar with running stuff locally.
I'm getting the error "Invalid credentials in Authorization header".
I'm getting the error "Invalid credentials in Authorization header".
Are you sure you did those steps correctly?
- Create a token with inference permissions: https://huggingface.co/settings/tokens/new?ownUserPermissions=repo.content.read&ownUserPermissions=repo.write&ownUserPermissions=inference.serverless.write&tokenType=fineGrained then copy it to your clipboard
- Create a new file named .env in the deepsite folder you cloned and paste your token in it, so it looks like this:
HF_TOKEN=THE_TOKEN_YOU_JUST_CREATED
- Launch the app again
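One possible cause of the "Invalid credentials in Authorization header" error (an assumption on my part) is a malformed .env line: quotes or spaces around the value. A quick sketch of a format check, assuming a POSIX shell; the token values are placeholders:

```shell
# Write one malformed and one well-formed example line, then check the format
printf 'HF_TOKEN = "hf_abc"\n' > env.bad    # spaces/quotes around the value
printf 'HF_TOKEN=hf_abc\n' > env.good
for f in env.bad env.good; do
  if grep -Eq '^HF_TOKEN=[A-Za-z0-9_]+$' "$f"; then
    echo "$f: looks OK"
  else
    echo "$f: malformed"
  fi
done
```

If your real .env fails the same grep, rewrite it as a bare `HF_TOKEN=<token>` line.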
I verified the steps; it launches, but upon prompting I get the same "Invalid credentials in Authorization header" response.
Hi guys, I'm going to take a look at this; I'll keep you updated.
Using with locally running models would be cool too.
I did everything according to the steps above, it worked the first time. Thank you.
P.S. I updated Node and it has started working for me now. Many thanks!
Using with locally running models would be cool too.
I know right
I used the Dockerfile, set the HF_TOKEN, and on the first try got the error message: "We have not been able to find inference provider information for model deepseek-ai/DeepSeek-V3-0324". The error happens in the try/catch when calling client.chatCompletionStream.
Can I point this to my own DeepSeek API key and run it offline using my own key, with nothing to do with Hugging Face?
Can I point this to my own DeepSeek API key and run it offline using my own key, with nothing to do with Hugging Face?
I did, after the previous error with the inference provider, but I always run into the maximum output token limit and receive a website that suddenly stops. I wonder how the inference provider approach differs here; I can't explain it, since DeepSeek is limited to 8k output tokens.
Can I point this to my own DeepSeek API key and run it offline using my own key, with nothing to do with Hugging Face?
I did, after the previous error with the inference provider, but I always run into the maximum output token limit and receive a website that suddenly stops. I wonder how the inference provider approach differs here; I can't explain it, since DeepSeek is limited to 8k output tokens.
There are models now with well over 1M context, so you could easily swap. Did you run the Docker image and set the API key and .env file?
DeepSite uses DeepSeek (here, online), so that was the baseline for my test. Online I receive a full website, but locally with my direct DeepSeek API I don't. Yes, there are models with much larger output, including DeepSeek Coder V2 with 128k, but I'm still wondering about the differences between the DeepSeek platform API and the inference provider; it makes no sense to me.
Yes, you can use it without being PRO, but you always have to keep an eye on your limits (https://huggingface.co/settings/billing).
Thanks.
It can run locally, offline, with an Ollama server.
Hello, I subscribed to the pro option twice and they charged me $10 twice but I still haven't upgraded.
How to add google provider to this project? I want to use gemini 2.5 pro
Hello, I subscribed to the pro option twice and they charged me $10 twice but I still haven't upgraded.
Very weird; we are going to take a look at it (did you subscribe from hf.co/subscribe/pro?)
Hello, I subscribed to the pro option twice and they charged me $10 twice but I still haven't upgraded.
Very weird; we are going to take a look at it (did you subscribe from hf.co/subscribe/pro?)
Yes, of course, via this link: https://huggingface.co/pricing. I was charged $20 across the two attempts.
It can run locally, offline, with an Ollama server.
Which LLM model do you use?
I use distilled DeepSeek, Qwen 2.5, and Gemma 3.
However, I'm sure that to make this work I have to change something in the code, but I have no idea what.
Failed to fetch
I use distilled DeepSeek, Qwen 2.5, and Gemma 3.
However, I'm sure that to make this work I have to change something in the code, but I have no idea what.
So can we use it with this local setup or not?
Yeah, I was wondering the same thing: can we actually use a local LLM to run this or not?!
Hi, I want to run it locally. It's my first time trying to run an LLM locally. Can you provide step-by-step instructions for how to do it: what do I need to install first, and what tools or software do I need? Thank you.
Hell yeah, thanks brother!
How do I share it as a website? Like Google Slides.
I created a custom version of DeepSite to run locally!
Now you can run the powerful DeepSite platform directly on your own machine — fully customizable, and with no need for external services. 🌟
Using Ollama, you can seamlessly integrate any AI model (Llama 2, Mistral, DeepSeek, etc.) into your setup, giving you full control over your environment and workflow.
Check out the project on GitHub: https://github.com/MartinsMessias/deepsite-locally
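For anyone curious what talking to a local Ollama server looks like, here is a sketch of a request to its /api/chat endpoint (the model name is an example; actually sending it assumes Ollama is running on its default port 11434):

```shell
# Build a chat request for Ollama's /api/chat endpoint (model name is an example)
payload='{"model":"qwen2.5-coder","messages":[{"role":"user","content":"Build a landing page in a single HTML file"}],"stream":false}'
echo "$payload"
# To actually send it (requires a running Ollama server):
#   curl -s http://localhost:11434/api/chat -d "$payload"
```

A fork like the one above essentially swaps the Hugging Face inference call for a request of this shape.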
Does it only create front-end content, or will it actually function when you create something? I was trying to make a few things, but I'm new to HF, though I love making prompts with it. I just don't know how to turn it into something functional.
I have the same question as Spiketop: can we make it functional?
If anyone knows, let me know. DM or email me at [email protected], because this is cool but I don't know how to make it work.
It may not always add functionality beyond some hover and click effects, but I guess it depends on how you prompt it? I mean... It's a big model, you can go wild with your requests for it...
I also have the same question as Spiketop. Can a created website be made functional? Does anyone know how this can be done and is willing to assist? Thanks!
Guys, one thing I'm trying to do is take the HTML code from the AI assistant and drop it into VS Code to make it usable. I was able to recreate the exact same page there, and possibly make it functional.
Using with locally running models would be cool too.
That means: no money for them :/
I created a custom version of DeepSite to run locally!
Now you can run the powerful DeepSite platform directly on your own machine — fully customizable, and with no need for external services. 🌟
Using Ollama, you can seamlessly integrate any AI model (Llama 2, Mistral, DeepSeek, etc.) into your setup, giving you full control over your environment and workflow. Check out the project on GitHub: https://github.com/MartinsMessias/deepsite-locally
I was thinking about the same thing.
I can only say there is a huge difference between running locally and remotely. Obviously, running locally depends on your hardware; mine is:
- low GPU: 4 GB
- decent RAM: 32 GB
- low CPU: AMD Ryzen 7
The results below are from asking for a mobile UI for AI chatting (the stars are my own rating of the results):
DeepSeek Coder V2 16B (LOCAL) ⭐⭐⬛⬛⬛👇🏼
DeepSite (default REMOTE model) ⭐⭐⭐⭐⬛👇🏼
(I know the interface shown is the one that runs models locally, but I recovered the output HTML from the saved file; those results are from the original DeepSite repo.)
@Usern123454321 Running models locally can get heavy fast. Using OpenRouter is usually way more practical. Models like Claude are affordable and perform really well. And DeepSeek V3 has been surprisingly good, especially for front-end tasks, easily one of the best in that area lately.
@antocarloss It's pretty straightforward to remove, unless you vibe-coded it and don't know anything about programming. In that case, reach out to someone who does and pay them part of what you charge your client. It will literally take less than 5 minutes if you know what you are looking for.
How can I take the website out of DeepSite's hands?
What are the minimum hardware requirements to run a V2 16B model locally?
How do I run locally what has been created?
We have not been able to find inference provider information for model deepseek-ai/DeepSeek-V3-0324.
Does this come with a model, or do I have to add my own model?
I'm trying to create an application. How do I turn the DeepSite code into an app?
I have followed the steps and was able to install it as well. When I go to localhost:3000, it loads, but I don't see the Local Usage tag, plus I'm getting invalid headers even though I have the .env file with the token. How do I resolve this? Thanks for the help!
I have confirmed that I have the latest Git.
Is this available only on Linux?
I get this error: "The string did not match the expected pattern."
I created a site. How do I copy the link of the site I created?