Grounding LLMs with Embeddings and Vector Search
[Hands-on Workshop. Please bring your laptop.]
Grounding large language models (LLMs) is the process of integrating them with external knowledge sources. This can be done using a variety of techniques, including embeddings and retrieval-augmented generation (RAG).
Many organizations are now starting to bring Gen AI and LLMs into production services. In doing so, they face challenges such as the limits of an LLM's built-in knowledge and hallucinations. A practical solution is grounding with embeddings and vector search. In this workshop, we will learn these crucial concepts to build reliable Gen AI services for enterprise use.
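To preview the core idea of the workshop, here is a minimal sketch of grounding via embeddings and vector search. The embeddings below are toy hand-made vectors for illustration only; in a real system they would come from an embedding model, and the document store and query are likewise invented for this example.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A tiny "vector store": documents with precomputed (toy) embeddings.
# In practice these vectors would be produced by an embedding model.
docs = {
    "Our refund policy allows returns within 30 days.": [0.9, 0.1, 0.0],
    "The office is closed on public holidays.":         [0.1, 0.8, 0.2],
    "Support is available 24/7 via chat.":              [0.2, 0.1, 0.9],
}

def retrieve(query_embedding, k=1):
    # Vector search: rank documents by similarity to the query embedding.
    ranked = sorted(
        docs.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# Toy query embedding for "Can I return my purchase?"
query_emb = [0.85, 0.15, 0.05]
context = retrieve(query_emb)[0]

# Grounding: prepend the retrieved context to the LLM prompt so the
# model answers from the external knowledge rather than from memory.
prompt = (
    f"Answer using only this context:\n{context}\n\n"
    "Question: Can I return my purchase?"
)
print(context)
```

The same pattern scales up by swapping the dictionary for a vector database and the hand-made vectors for model-generated embeddings.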