RAGcoon🦝 - AI agent to help you build your startup
GitHub 👉 https://github.com/AstraBert/ragcoon
Are you building a startup and stuck in the process, trying to navigate hundreds of resources, suggestions and LinkedIn posts?
Well, fear no more, because RAGcoon🦝 is here to do some of the job for you:
• It's built on free resources written by successful founders
• It performs complex retrieval operations, combining "vanilla" hybrid search, query expansion with a hypothetical document approach, and multi-step query decomposition
• It evaluates the reliability of the retrieved context, and the relevancy and faithfulness of its own responses, in an auto-correction effort
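To give a feel for what that pipeline does, here is a minimal, self-contained Python sketch of the same ideas - hypothetical-document expansion, query decomposition, hybrid retrieval, and an auto-correcting retry loop. Every name in it (`expand_with_hypothetical_document`, `hybrid_search`, and so on) is an illustrative stand-in, not RAGcoon's actual API; a real agent would back these stubs with an LLM and a vector database.

```python
# Illustrative sketch of RAGcoon-style retrieval with auto-correction.
# All helper names are hypothetical stand-ins, not RAGcoon's real API.

def expand_with_hypothetical_document(query: str) -> str:
    """Query expansion: an LLM would draft a hypothetical answer and
    retrieval would run against that draft (HyDE-style). Stubbed here."""
    return f"A founder's guide answering: {query}"

def decompose(query: str) -> list[str]:
    """Multi-step query decomposition: split a broad question into
    focused sub-queries. A real agent would use an LLM for this."""
    return [part.strip() for part in query.split(" and ")]

def hybrid_search(corpus: list[str], query: str, k: int = 2) -> list[str]:
    """Hybrid-search stand-in: rank documents by keyword overlap (a sparse
    proxy) plus an overlap ratio (a crude stand-in for dense similarity)."""
    q_terms = set(query.lower().split())
    def score(doc: str) -> float:
        d_terms = set(doc.lower().split())
        overlap = len(q_terms & d_terms)
        return overlap + overlap / (len(d_terms) + 1)
    return sorted(corpus, key=score, reverse=True)[:k]

def is_reliable(context: list[str], query: str) -> bool:
    """Reliability check stand-in: does any retrieved document share at
    least one term with the query? A real agent would ask an LLM judge."""
    q_terms = set(query.lower().split())
    return any(q_terms & set(doc.lower().split()) for doc in context)

def retrieve(corpus: list[str], query: str, max_retries: int = 2) -> list[str]:
    """Auto-correcting loop: if the retrieved context looks unreliable,
    expand the query with a hypothetical document and try again."""
    for _ in range(max_retries + 1):
        context: list[str] = []
        for sub_query in decompose(query):
            context += hybrid_search(corpus, sub_query)
        if is_reliable(context, query):
            return context
        query = expand_with_hypothetical_document(query)
    return context

corpus = [
    "Talk to users before you write any code",
    "Pricing your product is a series of experiments",
    "Hiring too early is the most common startup mistake",
]
print(retrieve(corpus, "pricing experiments and hiring mistakes"))
```

The broad question is split into two sub-queries, each retrieves its own context, and the loop only stops once the combined context passes the (stubbed) reliability check.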
RAGcoon🦝 is open-source and relies on easy-to-use components:
🔹 LlamaIndex is at the core of the agent architecture: it provides the integrations with language models and vector database services, and performs evaluations
🔹 Qdrant is your go-to, versatile and scalable companion for vector database services
🔹 Groq provides lightning-fast LLM inference to support the agent, giving it the full power of QwQ-32B by Qwen
🔹 Hugging Face provides the embedding models used for dense and sparse retrieval
🔹 FastAPI wraps the whole backend into an API interface
🔹 Mesop by Google is used to serve the application frontend
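When dense and sparse retrievers each return their own ranking, the two lists have to be merged somehow; a common choice for this (and one that Qdrant supports for hybrid queries) is reciprocal rank fusion. Here is a generic, self-contained sketch - the document IDs are made up, and this is one possible fusion strategy, not necessarily the one RAGcoon uses:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists: each document scores sum(1 / (k + rank))
    across the lists, so items ranked highly anywhere rise to the top."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc_pricing", "doc_hiring", "doc_users"]   # dense-embedding ranking
sparse = ["doc_pricing", "doc_legal", "doc_hiring"]  # sparse/keyword ranking
print(reciprocal_rank_fusion([dense, sparse]))
# → ['doc_pricing', 'doc_hiring', 'doc_legal', 'doc_users']
```

The constant `k` damps the advantage of top ranks, so a document that appears in both lists usually beats one that tops only a single list.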
RAGcoon🦝 can be spun up locally - it's Docker-ready, and you can find the whole code to reproduce it on GitHub 👉 https://github.com/AstraBert/ragcoon
But there might be room for an online version of RAGcoon🦝: let me know if you would use it - we can connect and build it together!