Query Expansion for Better Query Embedding using LLMs
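The first entry above covers query expansion: before a search query is embedded, an LLM rewrites it with synonyms and related phrasing so the resulting vector retrieves more relevant documents. Below is a minimal sketch of that idea; the chat model name, prompt, and the "jina-embeddings-v3" model are illustrative assumptions, not details taken from the repository.

```python
# Minimal query-expansion sketch (assumed model names and endpoints; not the
# repository's actual implementation).
import os
import requests

def expand_query(query: str) -> str:
    """Use an LLM (any chat-completions API) to rewrite the query with synonyms
    and related terms, which tends to produce a more robust query embedding."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # assumed model name
            "messages": [
                {"role": "system",
                 "content": "Rewrite the search query, adding synonyms and related phrases. Reply with one line."},
                {"role": "user", "content": query},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def embed_query(text: str) -> list[float]:
    """Embed the (expanded) query with Jina's cloud embeddings API."""
    resp = requests.post(
        "https://api.jina.ai/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"},
        json={"model": "jina-embeddings-v3", "input": [text]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]

# Expand first, then embed the expanded form instead of the raw query.
vector = embed_query(expand_query("laptops with good battery life"))
```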
Self-hosted MCP server for hybrid semantic code search and repository intelligence.
A Node.js REST API that powers a RAG-based chatbot, handling data ingestion, vector search, and LLM-powered responses.
Docs.AI RAG Chatbot is an application that enables AI-driven interaction with documents, answering questions with retrieval-grounded responses.
A complete web data Retrieval-Augmented Generation (RAG) pipeline built with TypeScript and Bun that scrapes news articles using Selenium, embeds them with Jina's cloud embeddings API, and stores the semantic vectors in a Qdrant vector database for fast similarity search and AI-powered applications (a minimal sketch of this embed-and-index pattern follows the list below).
A full-stack chatbot that answers queries over recent news using Retrieval-Augmented Generation (RAG).
Backend for a RAG-powered news chatbot providing real-time AI responses, semantic search, and news retrieval using Node.js, Socket.IO, PostgreSQL, Redis, and Qdrant.
A decentralized protocol for authenticating AI-generated content — blending blockchain proofs, IPFS storage, and semantic AI embeddings to establish verifiable authorship trails.
A real-time news chatbot application built with modern web technologies. It delivers intelligent, AI-powered responses, supports multiple chat sessions with persistent history, and provides a responsive, user-friendly interface across devices.
Oh My Repos: a tool for semantic search and analysis of GitHub starred repositories.
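Several of the projects above share the same storage pattern: embed documents with the Jina embeddings API and keep the vectors in a Qdrant collection for similarity search. The sketch below illustrates that pattern; the local Qdrant instance, the "news_articles" collection name, and the 1024-dimensional jina-embeddings-v3 vectors are assumptions for illustration, not taken from any of the listed repositories.

```python
# Embed documents with Jina's embeddings API and index them in Qdrant.
# Assumptions: local Qdrant at :6333, a not-yet-existing collection
# "news_articles", and 1024-dimensional jina-embeddings-v3 vectors.
import os
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

JINA_URL = "https://api.jina.ai/v1/embeddings"
HEADERS = {"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"}

def embed(texts: list[str]) -> list[list[float]]:
    """Call Jina's cloud embeddings API and return one vector per input text."""
    resp = requests.post(
        JINA_URL,
        headers=HEADERS,
        json={"model": "jina-embeddings-v3", "input": texts},
        timeout=60,
    )
    resp.raise_for_status()
    return [row["embedding"] for row in resp.json()["data"]]

client = QdrantClient(url="http://localhost:6333")
client.create_collection(
    collection_name="news_articles",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

# Index a few documents (in the pipelines above these would come from the scraper).
articles = [
    "Markets rallied after the central bank's rate decision.",
    "A new open-source LLM was released this week.",
]
vectors = embed(articles)
client.upsert(
    collection_name="news_articles",
    points=[
        PointStruct(id=i, vector=vec, payload={"text": text})
        for i, (text, vec) in enumerate(zip(articles, vectors))
    ],
)

# Query side: embed the question and retrieve the closest articles.
hits = client.search(
    collection_name="news_articles",
    query_vector=embed(["What happened to interest rates?"])[0],
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload["text"])
```

In a RAG chatbot, the retrieved payload texts would then be passed to an LLM as context for the answer.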