How does it work?
It is built from two components:
- ingest.py: uses LangChain to parse the document and create embeddings locally with HuggingFaceEmbeddings (SentenceTransformers). It then stores the result in a local vector database using the Chroma vector store.
- privateGPT.py: uses a local LLM based on GPT4All-J or LlamaCpp to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search, which locates the right piece of context from the docs.