GenAI RAG with LlamaIndex, Ollama and Elasticsearch

Released 10/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 21 Lessons (1h 49m) | Size: 707 MB
Build a Generative AI Platform with Retrieval Augmented Generation with Elasticsearch, LlamaIndex, Ollama, and Mistral


Description
Retrieval-Augmented Generation (RAG) is the next practical step after semantic indexing and search. In this course, you'll build a complete, local-first RAG pipeline that ingests PDFs, stores chunked vectors in Elasticsearch, retrieves the right context, and generates grounded answers with the Mistral LLM running locally via Ollama.
We'll work end-to-end on a concrete scenario: searching student CVs to answer questions like "Who worked in Ireland?" or "Who has Spark experience?". You'll set up a Dockerized stack (FastAPI, Elasticsearch, Kibana, Streamlit, Ollama) and wire it together with LlamaIndex so you can focus on the logic, not boilerplate. Along the way, you'll learn where RAG shines, where it struggles (precision/recall, hallucinations), and how to design for production.
By the end, you'll have a working app: upload PDFs → extract text → produce clean JSON → chunk & embed → index into Elasticsearch → query via Streamlit → generate answers with Mistral, fully reproducible on your machine.
What will I learn?
From Search to RAG
Revisit semantic search and extend it to RAG: retrieve relevant chunks first, then generate grounded answers. See how LlamaIndex connects your data to the LLM and why chunk size and overlap matter for recall and precision.
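The size/overlap trade-off can be seen in a minimal character-level chunker. This is an illustrative sketch, not the course's implementation; in practice a LlamaIndex node parser such as `SentenceSplitter(chunk_size=..., chunk_overlap=...)` does this token-aware and sentence-aware.

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character windows.

    Bigger chunks carry more context per hit (recall); smaller chunks
    with some overlap keep matches tight and avoid cutting a fact in
    half at a chunk boundary (precision).
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk repeats the last `overlap` characters of its predecessor, so a sentence that straddles a boundary is still fully contained in at least one chunk.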
Building the Pipeline
Use FastAPI to accept PDF uploads and trigger the ingestion flow: text extraction, JSON shaping, chunking, embeddings, and indexing into Elasticsearch, all orchestrated with LlamaIndex to minimize boilerplate.
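The ingestion flow can be sketched as one orchestration function with each stage injected, which is roughly what the FastAPI upload handler would call with `await file.read()`. The stage names and the JSON shape here are illustrative assumptions, not the course's exact code.

```python
from typing import Callable

def ingest_pdf(
    pdf_bytes: bytes,
    extract: Callable[[bytes], str],       # e.g. PyMuPDF text extraction
    to_json: Callable[[str], dict],        # e.g. Mistral via Ollama shaping JSON
    chunk: Callable[[str], list[str]],     # e.g. a LlamaIndex node parser
    embed: Callable[[str], list[float]],   # embedding model
    index: Callable[[dict], None],         # Elasticsearch writer
) -> int:
    """Run one PDF through the full ingestion flow; returns the chunk count."""
    text = extract(pdf_bytes)
    doc = to_json(text)
    chunks = chunk(doc["text"])
    for i, c in enumerate(chunks):
        index({"chunk_id": i, "text": c,
               "vector": embed(c), "meta": doc.get("meta", {})})
    return len(chunks)
```

Keeping each stage behind a plain callable is also what lets LlamaIndex absorb most of this boilerplate: the parser, embedder, and vector store all slot into the same pipeline shape.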
Working with Elasticsearch
Create an index for CV chunks with vectors and metadata. Understand similarity search vs. keyword queries, how vector fields are stored, and how to explore documents and scores with Kibana.
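An index like the one described might be mapped as follows; the field names and the 384-dimension vector size are assumptions to match a small embedding model, not the course's exact schema.

```python
# Illustrative Elasticsearch mapping for CV chunks: full-text search on
# the chunk text, kNN similarity on the vector, exact-match filters on
# metadata. Adjust "dims" to your embedding model's output size.
CV_CHUNKS_MAPPING = {
    "mappings": {
        "properties": {
            "text":   {"type": "text"},                         # BM25 / keyword queries
            "vector": {"type": "dense_vector", "dims": 384,
                       "index": True, "similarity": "cosine"},  # similarity search
            "person": {"type": "keyword"},                      # metadata filter
            "source": {"type": "keyword"},                      # originating PDF
        }
    }
}
```

With the Python client this would be created via something like `es.indices.create(index="cv_chunks", mappings=CV_CHUNKS_MAPPING["mappings"])`, after which Kibana's index management and Discover views let you inspect the stored documents and query scores.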
Streamlit Chat Interface
Build a simple Streamlit UI to ask questions in natural language. Toggle debug mode to inspect which chunks supported the answer, and use metadata filters (e.g., by person) to boost precision for targeted queries.
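The precision boost from metadata filters comes down to narrowing the candidate set before similarity ranking. A helper the UI might call could look like this; the field names mirror the assumed mapping above and are not the course's exact code.

```python
from typing import Optional

def build_cv_query(query_vector: list[float],
                   person: Optional[str] = None, k: int = 5) -> dict:
    """Build an Elasticsearch kNN search body, optionally filtered by person.

    The filter runs before similarity ranking, so a targeted question
    ("What did Ana do in Ireland?") only competes against Ana's chunks.
    """
    knn = {"field": "vector", "query_vector": query_vector,
           "k": k, "num_candidates": 10 * k}
    if person:
        knn["filter"] = {"term": {"person": person}}
    return {"knn": knn, "_source": ["text", "person"]}
```

A debug toggle in the Streamlit app can simply render the `_source` of the returned hits, showing exactly which chunks supported the generated answer.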
Ingestion & JSON Shaping
Extract raw text from PDFs using PyMuPDF, then have Ollama (Mistral) produce lossless JSON (escaped characters, preserved structure). Handle occasional JSON formatting issues with retries and strict prompting.
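The retry-with-stricter-prompting pattern can be sketched independently of Ollama itself: the actual model call is injected as `generate`, so the same wrapper works with any backend. The strict-suffix wording and fence-stripping are illustrative assumptions.

```python
import json
from typing import Callable

STRICT_SUFFIX = "\nReturn ONLY valid JSON. No prose, no code fences."

def generate_json(prompt: str, generate: Callable[[str], str], retries: int = 3) -> dict:
    """Ask an LLM for JSON, retrying with a stricter prompt on parse failure."""
    last_err = None
    for attempt in range(retries):
        raw = generate(prompt if attempt == 0 else prompt + STRICT_SUFFIX)
        # Models sometimes wrap JSON in markdown fences; strip them first.
        raw = (raw.strip().removeprefix("```json").removeprefix("```")
                  .removesuffix("```").strip())
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            last_err = e
    raise ValueError(f"no valid JSON after {retries} attempts: {last_err}")
```

In the real pipeline, `generate` would wrap the Ollama call to Mistral; because the wrapper only sees strings in and strings out, it needs no changes when the backend is swapped.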
Improving Answer Quality
Learn practical levers to improve results:
Tune chunk size/overlap and top-K retrieval
Add richer metadata (roles, skills, locations) for hybrid filtering
Experiment with embedding models and stricter prompts
Consider structured outputs (e.g., JSON lists) for list-style questions
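The top-K lever in particular is easy to see in isolation. This toy ranker uses plain cosine similarity (the same scoring Elasticsearch applies for a `cosine` dense-vector field); the two-dimensional vectors are of course illustrative.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float],
          chunks: list[tuple[str, list[float]]], k: int = 3) -> list[tuple[str, list[float]]]:
    """Rank chunks by similarity to the query and keep the best k.

    Raising k trades precision for recall: more supporting context
    reaches the LLM, but so does more noise.
    """
    return sorted(chunks, key=lambda c: cosine(query, c[1]), reverse=True)[:k]
```

For list-style questions ("Who has Spark experience?"), a larger k plus a structured-output prompt (asking for a JSON list of names) tends to beat a single free-text answer.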
Dockerized Setup
Use Docker Compose to spin up the full stack (FastAPI, Elasticsearch, Kibana, Streamlit, and Ollama with Mistral) so you can run the entire system locally with consistent configuration.
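A stack of this shape might be composed as follows; the image tags, ports, and build paths are illustrative assumptions, not the course's exact configuration.

```yaml
# Illustrative docker-compose.yml fragment for the local RAG stack.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # local dev only
    ports: ["9200:9200"]
  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    ports: ["5601:5601"]
    depends_on: [elasticsearch]
  ollama:
    image: ollama/ollama               # pull the mistral model inside this container
    ports: ["11434:11434"]
  api:
    build: ./api                       # FastAPI ingestion service
    ports: ["8000:8000"]
    depends_on: [elasticsearch, ollama]
  ui:
    build: ./ui                        # Streamlit chat front end
    ports: ["8501:8501"]
    depends_on: [api]
```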
Bonus: Production Patterns
Explore how this prototype maps to a scalable design:
Store uploads in a data lake (e.g., S3) and trigger processing with a message queue (Kafka/SQS)
Autoscale workers for chunking and embeddings
Swap LLM backends (e.g., Bedrock/OpenAI) behind clean APIs
Persist chat history in MongoDB/Postgres and replace Streamlit with a React/Next.js UI
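The "swap LLM backends behind clean APIs" pattern is worth a concrete sketch: if generation goes through a small interface, Ollama can be replaced by Bedrock or OpenAI without touching the retrieval code. The interface and the stub backend below are assumptions for illustration.

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Minimal interface behind which Ollama, Bedrock, or OpenAI can sit."""
    def complete(self, prompt: str) -> str: ...

def answer(question: str, context: list[str], llm: LLMBackend) -> str:
    """Grounded generation: the retrieved chunks travel with the question."""
    prompt = ("Answer using ONLY this context:\n"
              + "\n---\n".join(context)
              + f"\n\nQuestion: {question}")
    return llm.complete(prompt)

class EchoBackend:
    """Stand-in backend for tests; a real one would call the provider's SDK."""
    def complete(self, prompt: str) -> str:
        return f"stub answer to: {prompt.splitlines()[-1]}"
```

Because `answer` depends only on the `complete` signature, the same function serves the local Ollama prototype and a hosted production backend.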
Homepage
https://learndataengineering.com/p/genai-rag-with-llamaindex-ollama

