Explore the core concepts of Agent Memory (Semantic, Episodic, Procedural) for LLMs. Learn practical implementation of long-term agent memory using LangGraph & LangMem, based on DeepLearning.AI course notes, with a detailed Email Agent example.
Understand how to train an o1-like model on a specific domain.
Dive into the HuggingGPT paper, a key work on LLM Agents. Discover how an LLM acts as a controller for task planning and tool usage to solve multi-modal and complex AI challenges.
An LLM agent for the multi-table QA task.
RAFT paper analysis: Learn how to train LLMs for Domain-Specific RAG, combining external documents with internal knowledge to boost specialized domain QA performance.
Explore how Retrieval-Augmented Generation (RAG) enhances black-box LLMs. This article details REPLUG, a NAACL 2024 paper, covering its methods for both the inference and training stages to improve LLM answer quality and effectively reduce hallucination.
Discover Python's Small Integer Cache! Learn how Python optimizes common integers (-5 to 256) for faster performance and efficient memory use. Simple examples included.
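The cache described above can be observed directly with the `is` identity check. A minimal sketch (a CPython implementation detail, not guaranteed by the language spec; `int(...)` is used to sidestep compile-time constant folding of literals):

```python
# CPython pre-allocates integer objects for -5 through 256,
# so any computation producing a value in that range reuses
# the same object. int("...") avoids literal caching effects.
a = int("256")
b = int("256")
print(a is b)   # True: both names refer to the one cached 256 object

c = int("257")
d = int("257")
print(c is d)   # False: 257 is outside the cache, so two objects exist
```

Note that `is` compares object identity, not value; for value comparison, `==` is always the correct operator regardless of caching.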