Explore the PLAN-AND-ACT paper: Learn how its Planner-Executor framework and data synthesis methods enhance Large Language Model (LLM) capabilities in long-horizon task planning and execution, and how they address the challenges these tasks pose.
Explore the core concepts of Agent Memory (Semantic, Episodic, Procedural) for LLMs. Learn how to implement long-term agent memory with LangGraph & LangMem, based on DeepLearning.AI course notes and illustrated with a detailed Email Agent example.
Understand how to train an o1-like model on a specific domain.
Dive into the HuggingGPT paper, a key work on LLM Agents. Discover how an LLM acts as a controller for task planning and tool usage to solve complex, multi-modal AI tasks.
LLM Agent for Multi-Table QA Tasks
RAFT paper analysis: Learn how to train LLMs for Domain-Specific RAG, combining retrieved external documents with the model's internal knowledge to boost QA performance in specialized domains.
Explore how Retrieval-Augmented Generation (RAG) enhances black-box LLMs. This article details REPLUG, a NAACL 2024 paper, discussing its methods at the inference and training stages that improve LLM answer quality and effectively reduce hallucination.