Murph's Take

The Prompting Skill Ceiling: Why It's Not Enough and What Comes After

Getting better at prompting is a real skill with real leverage — until you hit the ceiling. Understanding what comes after prompting is what separates people building compounding systems from people building better one-offs.

Jason Murphy · March 29, 2026 · 7 min read

The prompt engineering discourse peaked sometime in 2023 and is now in a long tail of diminishing returns. Not because prompting isn't important — it is — but because everyone who's been building seriously has hit the same ceiling and started asking the same question: what comes after prompting?

This isn't an abstract question. It has a specific, structural answer.

What the Ceiling Actually Is

Getting better at prompting makes a real difference. Specifying output format, adding constraint language, structuring multi-step instructions clearly, giving Claude sufficient context in the prompt itself — these improve outputs materially. The skill compounds for a while.

Then it doesn't. The diminishing returns become visible around the point where you realize that the best possible prompt can only extract what the model has to work with. If the model doesn't have the right context, the most precise prompt in the world produces a constrained output. You've maximized the prompt. You haven't maximized the system.

The ceiling is a context problem, not a prompt problem. Once you see it that way, the path forward becomes clear.

What Comes After: Context Architecture

The next level of leverage is designing the information environment Claude operates in.

This isn't about writing longer prompts. It's about making decisions — structural, architectural decisions — about what information the model has access to, in what form, retrieved when, stored how.

Some of the decisions in this layer:

What goes in standing instructions vs. task-specific context. Standing instructions (system prompts, CLAUDE.md files) are always in context. They compete with task-specific information for the reasoning bandwidth you have. High-quality standing instructions are densely informative and precisely scoped. They encode patterns, not examples. Constraints, not explanations of why constraints exist.

What gets retrieved vs. what gets included. Dumping all available information into a prompt is the naive approach. A designed context architecture retrieves relevant information when it's needed rather than including it always. MCP servers, RAG systems, and structured project files are the retrieval mechanisms. The design question is: what are the conditions under which each piece of information is relevant, and how do you surface it at those moments?

What gets persisted between sessions. Most Claude interactions are stateless. The conversation ends and the context disappears. For systems that need continuity — ongoing projects, evolving business context, accumulated decisions — you need persistence mechanisms. How you design the persistence layer determines how much context accumulates over time and how accessible it is.
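The three decisions above can be sketched in code. This is a minimal, hypothetical illustration — every name here (`KNOWLEDGE`, `build_context`, the state file) is invented for the example, not a real API — showing standing instructions that are always present, task-specific context retrieved only when a relevance condition fires, and persisted state that carries over between sessions.

```python
# Hypothetical sketch of a context-assembly layer. All names are
# illustrative; the point is the shape of the decisions, not the API.
import json
from pathlib import Path

# Standing instructions: always in context, so kept dense and scoped.
STANDING_INSTRUCTIONS = "You are the ops assistant. Prefer concise, decision-ready output."

# Each retrievable document declares the condition under which it is relevant.
KNOWLEDGE = [
    {"when": lambda task: "invoice" in task.lower(),
     "text": "Billing runs on the 1st; disputes go to finance within 14 days."},
    {"when": lambda task: "deploy" in task.lower(),
     "text": "Deploys require a green CI run and a second approver."},
]

STATE_FILE = Path("session_state.json")

def load_state() -> dict:
    """Persistence layer: whatever accumulated last session is loaded here."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"decisions": []}

def build_context(task: str) -> str:
    """Assemble context: standing + retrieved-if-relevant + persisted."""
    state = load_state()
    retrieved = [doc["text"] for doc in KNOWLEDGE if doc["when"](task)]
    parts = [STANDING_INSTRUCTIONS]
    if retrieved:
        parts.append("Relevant context:\n" + "\n".join(retrieved))
    if state["decisions"]:
        parts.append("Prior decisions:\n" + "\n".join(state["decisions"]))
    parts.append("Task: " + task)
    return "\n\n".join(parts)

print(build_context("Draft the deploy checklist for Friday"))
```

Note what the structure buys you: the billing context never enters the deploy prompt, and the prompt string itself stays small while the system around it grows.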

What Comes After: System Design

The other post-prompting skill set is system design — the ability to architect how information flows through a process involving Claude.

Prompting is a point-to-point skill: you craft an input, you get an output. System design is a network skill: you design how multiple inputs and outputs connect, how information flows from sources to reasoning to action, how feedback loops improve the system over time.

The difference shows up clearly in automation work. A one-off prompt produces a one-off output. A well-designed automation produces outputs continuously, adapts to input variation, and gets better as it runs because the system is observing outcomes and feeding them back into future iterations.

System design for AI includes: defining the scope and authority of each component (what decisions can it make autonomously, what requires escalation), designing handoff protocols between components (what information passes between them and in what format), building observability (what logs, what metrics, what makes failure visible), and creating feedback mechanisms (how do outcomes improve future performance).
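Those four concerns can be made concrete in a few lines. The sketch below is illustrative — the `Component` class and its field names are invented for this example, not a real framework — but each concern maps to a specific line: `autonomous_actions` is the scope definition, the returned dict is the handoff envelope, the log calls are the observability, and `outcomes` is the raw material for the feedback loop.

```python
# Illustrative sketch (invented names, not a real framework) of scope,
# handoffs, observability, and feedback in one small component.
import json
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

@dataclass
class Component:
    name: str
    autonomous_actions: set                        # scope: allowed without escalation
    outcomes: list = field(default_factory=list)   # feedback: observed results

    def handle(self, action: str, payload: dict) -> dict:
        """Handoff protocol: every component consumes and emits the same
        JSON-serializable envelope, so handoffs stay explicit and loggable."""
        if action not in self.autonomous_actions:
            log.info("%s escalating %r to a human", self.name, action)
            return {"from": self.name, "action": action,
                    "status": "escalated", "payload": payload}
        log.info("%s handling %r", self.name, action)   # observability
        result = {"from": self.name, "action": action,
                  "status": "done", "payload": payload}
        self.outcomes.append(result)  # fed back when tuning instructions later
        return result

triage = Component("triage", autonomous_actions={"label", "summarize"})
print(json.dumps(triage.handle("label", {"ticket": 42})))
print(json.dumps(triage.handle("refund", {"ticket": 42})))
```

The design choice worth noticing: escalation is a property of the component's declared scope, not a judgment call buried in a prompt, which makes the authority boundary auditable.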

These skills don't appear in "prompt engineering" courses. They appear in distributed systems engineering, in organizational design, in process architecture. The vocabulary differs, but the underlying principles carry over.

The Compounding Dynamic

Here's what the post-prompting work enables that prompting alone doesn't: genuine compounding.

A prompt gets better in one direction: you refine it. A system gets better in multiple directions simultaneously: better context architecture, better scope definitions, better retrieval, better feedback loops, better prompt instructions informed by observed performance. Each improvement multiplies with the others rather than adding to them.

After six months of building CEO OS infrastructure, the morning brief I get is qualitatively different from the one I got at week one. Not because I've gotten better at prompting — the prompt has changed, but that's a small part of the delta. Because the context architecture has improved, the persistence layer captures richer state, the instructions have been calibrated against hundreds of real mornings, and the feedback loop has refined what the system surfaces.

That's what compounding looks like. It doesn't come from prompts. It comes from systems.

The question for anyone currently investing heavily in prompting skill: what would it look like to invest the same energy in the architectural layer — and what would that return look like in six months?

Want this for your business?

Tell us what you're building. We'll map out exactly what to build and what it costs.

Start Your Project →

Frequently Asked

What is the prompting skill ceiling?

Prompting has a ceiling because the value of a prompt is bounded by the context it operates in. You can write the most precisely calibrated prompt in the world, but if the model doesn't have the right context, the output is limited by the context, not the prompt. Beyond a certain level of prompting skill, the bottleneck shifts from 'how you ask' to 'what the model has access to' — and improving the latter requires a different skill set.

What comes after prompting?

Context architecture. System design. The ability to structure information so a model can reason about it well, build retrieval systems that surface the right context at the right time, define agent scopes that match the shape of the work, and design feedback loops that compound over time. These are infrastructure skills, not prompt skills — and they have different and substantially higher ceilings.

Is prompting still worth learning?

Yes, absolutely. Prompting skill is the foundation. Good prompts — precise, well-structured, with clear constraints and output specifications — matter at every level of the stack. The point isn't that prompting isn't valuable. It's that treating prompting as the primary or terminal skill is like treating handwriting as the terminal skill in writing. It matters; it's not the whole story.

What does 'compounding' look like in AI systems vs. one-off prompts?

A one-off prompt produces a good output once. A well-designed system produces better outputs over time as context accumulates, instructions are refined based on observed performance, and the architecture is adjusted for the patterns that emerge from real use. The compounding comes from iteration on the system, not repetition of the prompt. That's a different kind of work — closer to engineering than to writing.


Written by

Murph

Jason Matthew Murphy. Twenty years building digital systems for businesses. Former CardinalCommerce (acquired by Visa). Now running VibeTokens — AI-built websites and content for small businesses.

The window is open.

It won't be forever.

Start Your Project →