# The 5 memory problems for agents

Source: DEV Community
You ship an agent. It works well in the demo. Users start using it daily. After a week, someone asks: "Why did you suggest the same thing you suggested on Monday? I told you that didn't work." Your agent has no answer because it has no memory of Monday. Or worse, it has a memory of Monday but no idea that Monday's approach failed.

This is the problem that shows up in every long-running agent system, and it is not a retrieval problem. Your vector search works fine. Your RAG pipeline returns relevant context. The problem is upstream of retrieval: your agent stores facts but does not learn from outcomes. It records what happened without recording whether it worked.

The research has a name for this gap. Hu et al.'s survey on agent memory identifies three functional categories: factual memory (what the agent knows), experiential memory (how the agent improves from past actions), and working memory (what the agent is thinking about right now). Most production agent systems implement factual
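To make the facts-versus-outcomes distinction concrete, here is a minimal sketch of the difference between storing *what happened* and storing *whether it worked*. All names here (`MemoryRecord`, `AgentMemory`, `Outcome`) are hypothetical illustrations, not an API from the survey or any particular framework:

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    UNKNOWN = "unknown"
    SUCCESS = "success"
    FAILURE = "failure"

@dataclass
class MemoryRecord:
    # Factual memory: what the agent did or learned
    content: str
    # Experiential memory: whether that actually worked
    outcome: Outcome = Outcome.UNKNOWN

class AgentMemory:
    def __init__(self) -> None:
        self._records: list[MemoryRecord] = []

    def remember(self, content: str) -> MemoryRecord:
        record = MemoryRecord(content)
        self._records.append(record)
        return record

    def record_outcome(self, record: MemoryRecord, outcome: Outcome) -> None:
        # The step most systems skip: closing the loop after the fact
        record.outcome = outcome

    def failed_approaches(self) -> list[str]:
        # What the agent should check before repeating a suggestion
        return [r.content for r in self._records if r.outcome is Outcome.FAILURE]

# Monday: the agent suggests an approach and later learns it failed
memory = AgentMemory()
suggestion = memory.remember("restart the ingest worker")
memory.record_outcome(suggestion, Outcome.FAILURE)

# Thursday: consult experiential memory before suggesting the same thing
print(memory.failed_approaches())  # → ['restart the ingest worker']
```

A system that stores only `content` can answer "what did I say on Monday?" but not "did Monday's suggestion work?" — the `outcome` field, written *after* the action resolves, is what separates experiential memory from plain fact storage.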