Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Updated Nov 6, 2023 - Python
This repository is an experiment with an agent that repeatedly searches documents and asks follow-up questions in response to the main question. It automatically determines the best answer from the available documents, or recognizes when no answer exists.
🐋 DeepSeek-R1: Retrieval-Augmented Generation for Document Q&A 📄
An LLM-powered Slack bot built with Langchain.
RAG chatbot designed for domain-specific queries using Ollama, Langchain, phi-3 and Faiss
RAG (Retrieval-Augmented Generation) system for technical documentation assistance in Spanish, using LangChain, OpenAI, and Streamlit for the visual interface.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
A Document QA chatbot using LangChain, Pinecone for vector storage, and Amazon Bedrock (mistral.mixtral-8x7b-instruct for LLM and titan-embed-text for embeddings). Built with a Streamlit frontend for document uploads and contextual question answering.
AI agent API (Python/FastAPI) to upload documents (PDF/TXT) and answer questions using RAG with Azure OpenAI and LangChain.
AI-powered commission plan assistant featuring advanced RAG pipeline, Model Context Protocol (MCP) PostgreSQL server integration, multi-format document processing, and secure SELECT-only database operations. Guided 3-phase plan creation with conversational interface.
AI-powered document Q&A system with vector search
Sub-second RAG-based chatbot for medical Q&A over 10+ textbooks with source-cited responses.
⚡️ Local RAG API using FastAPI + LangChain + Ollama | Upload PDFs, DOCX, CSVs, XLSX and ask questions using your own documents — fully offline!
AI assistant backend for document-based question answering using RAG (LangChain, OpenAI, FastAPI, ChromaDB). Features modular architecture, multi-tool agents, conversational memory, semantic search, PDF/Docx/Markdown processing, and production-ready deployment with Docker.
🔍 Agentic AI system that lets users upload documents (PDFs, DOCX, etc.) and ask natural-language questions. It uses LLM-based RAG to extract relevant information. The architecture includes multi-agent components such as document retrievers, summarizers, web searchers, and tool routers — enabling dynamic reasoning and accurate responses.
AI-Powered PDF Context Retrieval Chatbot (RAG) is a smart chatbot that lets you upload PDFs and ask questions about their content. Using advanced AI and semantic search, it finds and summarizes answers directly from your documents—ideal for legal, academic, business, and support tasks.
🔒 Local AI Agent - Offline RAG system for secure document Q&A with no external APIs
Voice-enabled RAG assistant that processes PDFs with Ollama, stores embeddings in ChromaDB, and provides spoken answers via ElevenLabs TTS for hands-free document Q&A
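Nearly every project above follows the same retrieval-augmented generation pattern: split documents into chunks, embed them, retrieve the chunks most similar to the question, and hand those to an LLM as context. A minimal, dependency-free sketch of that loop — the toy bag-of-words `embed` here is a stand-in for a real embedding model (e.g. the Titan or Ollama embeddings mentioned above), and the final LLM call is omitted:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the question; keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Assemble the grounded prompt that would be sent to the LLM.
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the listed repos, `embed` and the chunk store are typically replaced by LangChain plus a vector database (FAISS, ChromaDB, Pinecone), but the retrieve-then-prompt flow is the same.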