ChatGPT Is Ignoring Your Memories and Instructions — And You’re Not Alone
TL;DR
A Reddit post scoring 190 upvotes and 43 comments reveals a growing user frustration: ChatGPT appears to have stopped actively using saved memories, “About Me” profiles, and custom instructions when generating responses. The community consensus is clear — something changed. Whether it’s a bug, a quiet rollout, or an intentional design shift is unclear from the available sources. Alternatives like Claude and Gemini handle persistent context differently, each with their own trade-offs.
What the Sources Say
A thread posted to r/ChatGPT asked a question that many users had apparently been wondering about in silence: “Anybody else noticed that ChatGPT never uses memories, about me, or instructions anymore?”
With 190 upvotes and 43 comments, the post resonated. That’s not viral by Reddit standards, but for a niche frustration about a specific product feature, it’s a meaningful signal. When dozens of people upvote a “is anyone else experiencing this?” post, the answer is almost always: yes, other people are experiencing this.
What the community flagged:
ChatGPT offers several mechanisms for personalization:
- Memories — facts ChatGPT saves from past conversations to reference later
- About Me — a dedicated section where users describe themselves, their preferences, and context
- Custom Instructions — explicit rules users set for how ChatGPT should behave and respond
The complaint isn’t that these features are broken in a technical sense. Users can still see their memories and instructions in the settings. The issue is that ChatGPT seems to be ignoring them during actual conversations — responding as if they don’t exist.
No contradictions in the source data — there’s only one source here, and its signal is directional: users are noticing this, and enough of them agree to push a thread to triple-digit upvotes.
What the source doesn’t tell us is why this is happening. There’s no official statement from OpenAI, no changelog entry, and no confirmed explanation from power users in the comments. It’s a community observation, not a diagnosed bug report.
The Bigger Picture: How AI Assistants Handle Context
This frustration points to a fundamental challenge in conversational AI: how do you make a stateless system feel like it knows you?
Different products take very different approaches.
ChatGPT (OpenAI) built an explicit memory layer — you can see what it remembers, edit those memories, and add custom instructions. It’s a transparent, user-controlled system. When it works, it’s excellent. When it silently stops using those memories, users notice immediately because the expectation was set so clearly.
Claude (Anthropic) takes a different architectural approach entirely. According to the source data, Claude re-reads the entire conversation history with every single response. There’s no separate “memory” database — context is built from the conversation itself. This means Claude can’t “forget” something you told it in the same session, but it also means it doesn’t carry knowledge across separate conversations unless you explicitly paste it in.
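The session-scoped design can be illustrated with a minimal sketch. The class and method names below are hypothetical, not Anthropic's actual API; the point is the architecture: every turn resends the full transcript, so nothing said earlier in the session can be silently dropped.

```python
# Minimal illustration of a "stateless" chat design: the model is shown
# the full transcript on every turn. Names are illustrative, not a real API.

class SessionChat:
    def __init__(self):
        self.transcript = []  # full history, rebuilt into every request

    def send(self, user_message: str) -> str:
        self.transcript.append({"role": "user", "content": user_message})
        # A real client would send the entire self.transcript to the model
        # here. We fake a reply that proves the whole history was available.
        reply = f"(model saw {len(self.transcript)} messages of history)"
        self.transcript.append({"role": "assistant", "content": reply})
        return reply

chat = SessionChat()
chat.send("My name is Sam.")
print(chat.send("What's my name?"))  # turn 1 is still in the transcript
```

The trade-off described above falls out of this structure directly: within-session consistency is guaranteed because the history is re-sent wholesale, but a fresh `SessionChat` starts from an empty transcript, which is why cross-session context must be pasted in by hand.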
Gemini (Google) sits at the other end of the spectrum. The source notes that users have complained Gemini uses memories too aggressively — surfacing past context in ways that feel intrusive or irrelevant. If ChatGPT’s problem is ignoring memories, Gemini’s reputation problem is the opposite: it can’t stop mentioning them.
These aren’t just technical differences — they reflect genuinely different product philosophies about what “knowing the user” should look like.
Pricing & Alternatives
| Tool | Developer | Memory / Persistence | Starting Price |
|---|---|---|---|
| ChatGPT | OpenAI | Explicit memories, About Me, Custom Instructions | Free; Plus from $20/month |
| Claude | Anthropic | Reads full conversation history per session; no cross-session memory by default | Not specified in sources |
| Gemini | Google | Memory features present; reportedly overused by the system | Not specified in sources |
Notes:
- Pricing for Claude and Gemini was not available in the source data — check their respective websites for current plans
- ChatGPT’s free tier includes basic memory features; full control may require Plus
Why This Matters More Than It Seems
At first glance, “ChatGPT isn’t reading my instructions” sounds like a minor inconvenience. But it’s actually a significant trust issue.
The entire value proposition of memory and custom instructions is that they allow the AI to work for you specifically — not for some generic user. Power users, professionals, and anyone who’s spent time carefully crafting their instructions have invested effort in shaping how the tool behaves.
When that investment gets silently ignored, it undermines confidence in the product at a fundamental level. If you can’t rely on the AI to follow the rules you set, you end up second-guessing every response: Is it ignoring my instructions right now? Should I remind it again? That’s cognitive overhead that defeats the purpose of having a personalized assistant in the first place.
It also creates an interesting opening for competitors. Claude’s approach — no explicit memory, but full conversation re-reading — sidesteps this failure mode entirely. There’s nothing to “forget” or “ignore” mid-session. The trade-off is that you lose cross-session continuity, but you gain consistency within a conversation.
The Bottom Line: Who Should Care?
Power users who’ve invested in ChatGPT customization should care most. If you’ve built out detailed custom instructions or accumulated meaningful memories, this regression directly degrades your daily experience. It’s worth testing explicitly: start a new conversation, ask ChatGPT something that should trigger your instructions or memories, and see if they’re referenced.
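One way to make that spot-check repeatable is to list markers your instructions should produce and scan each response for them. The sketch below is a hand-rolled heuristic, not any official OpenAI diagnostic, and the marker strings are made-up examples:

```python
# Spot-check sketch: list phrases or behaviors your custom instructions
# should produce, then scan a response for each of them. This is a
# hand-rolled heuristic, not an official diagnostic tool.

EXPECTED_MARKERS = {
    "uses my name": "Sam",
    "answers in bullet points": "- ",
    "mentions my field": "backend",
}

def audit_response(response: str) -> dict:
    """Report which expected instruction markers a response actually shows."""
    return {label: marker in response for label, marker in EXPECTED_MARKERS.items()}

sample = "Hi Sam:\n- parse with datetime.strptime\n- validate the format first"
print(audit_response(sample))
```

Run the same audit on a few fresh conversations; if the markers that used to appear consistently now fail, you have something more concrete than a hunch to report.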
Developers and professionals using ChatGPT for workflow automation should care too. Custom instructions are often used to set output format preferences, tone guidelines, or domain-specific context. If those aren’t being applied, outputs may be inconsistent in ways that create downstream problems.
Casual users probably won’t notice — and that’s part of what makes this problem hard for OpenAI to prioritize. Most people using ChatGPT for one-off questions don’t have memories set up and won’t experience any change.
Users evaluating alternatives should use this as a data point when comparing tools. Claude’s session-based approach and Gemini’s memory-heavy approach represent real trade-offs worth considering depending on your use case. There’s no universally right answer — it depends on whether cross-session continuity or within-session consistency matters more to you.
If you’re hitting this problem regularly, the community workaround until OpenAI addresses it is the oldest trick in the book: paste your relevant context directly into each conversation. It’s not elegant, but it works — and it’s essentially what Claude users do by default anyway.
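The workaround can even be scripted. The helper below is a sketch under stated assumptions: the function and the saved-context variables are hypothetical, not an official ChatGPT feature. It simply builds the text you would otherwise paste by hand at the top of each new conversation:

```python
# Hypothetical workaround helper: prepend saved instructions and memories
# to every new conversation so the model cannot silently skip them.
# Nothing here is an official ChatGPT feature; it builds paste-ready text.

SAVED_INSTRUCTIONS = "Answer concisely. I prefer Python examples."
SAVED_MEMORIES = [
    "User is a backend developer.",
    "User's projects target Python 3.11.",
]

def with_context(question: str) -> str:
    """Return the question with instructions and memories pasted on top."""
    memory_lines = "\n".join(f"- {m}" for m in SAVED_MEMORIES)
    return (
        f"Instructions: {SAVED_INSTRUCTIONS}\n"
        f"Things you should know about me:\n{memory_lines}\n\n"
        f"Question: {question}"
    )

print(with_context("How do I parse a date string?"))
```

Keeping the context in one place and regenerating the preamble per question is exactly the manual discipline Claude's session-based model imposes anyway, which is why the habit transfers cleanly between tools.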