System Prompts

Every time you interact with an AI system, there's a layer of instructions you never see. The system prompt sits between you and the model, shaping behaviour, enforcing constraints, and defining what the AI is — before a single user message arrives.

What Is a System Prompt?

A system prompt is a block of text that gets prepended to every conversation. The user does not see it, but the model does — and it follows those instructions as its primary directive.

The hidden layer

When a user sends a message, it does not go straight to the model. The system prompt wraps around it first — adding identity, constraints, formatting rules, and domain knowledge. The model processes both together, but the system prompt takes precedence. This is why the same model can behave completely differently across two products.

Not just a greeting

A system prompt is not "You are a helpful assistant." That is a placeholder, not engineering. A well-written system prompt defines persona, tone, domain boundaries, output format, error handling behaviour, and what the model should refuse to do. It is the single most underinvested piece of most AI deployments.

Why System Prompts Matter

System prompts are the cheapest, fastest way to change how an AI system behaves. No fine-tuning, no retraining, no code changes. Just text.

Behavioural control

System prompts define what the model should and should not do. They set guardrails on tone, restrict the model from discussing certain topics, enforce output formats, and establish the persona the model adopts. Without them, you are relying on the model's default training — which may not match your use case at all.

Consistency across interactions

Every conversation starts from the same foundation. Whether a user asks their first question or their fiftieth, the system prompt ensures the model maintains the same identity, constraints, and behaviour. This is what makes an AI product feel like a product rather than a raw model.

Domain grounding

System prompts can inject critical domain knowledge directly into the model's context. Key definitions, business rules, product details — anything the model needs to know to give accurate, relevant answers. This is a lightweight alternative to RAG for small, stable knowledge sets.

Security boundary

A well-crafted system prompt helps defend against prompt injection and jailbreaking. It establishes what the model treats as authoritative (the developer's instructions) versus what it treats as input (the user's messages). This separation is foundational to building safe AI systems.

Developer-Defined vs Vendor-Defined

There are two layers of system prompts in most AI products, and understanding the distinction matters for control and debugging.

Vendor-defined prompts

Model providers like OpenAI, Anthropic, and Google inject their own system-level instructions before yours. These enforce safety policies, content restrictions, and baseline behaviours. You do not control this layer — it is baked into the API. When a model refuses a request you think it should handle, the vendor's system prompt is often the reason.

Developer-defined prompts

This is the layer you control. When you send a system message via the API, you are adding your instructions on top of the vendor's. Your system prompt defines your product's personality, domain, and constraints. The model treats both layers as authoritative, but conflicts between them can cause unpredictable behaviour — which is why understanding the vendor layer matters.

In practice, I help clients write system prompts that work with the vendor layer rather than fighting it. Trying to override built-in safety restrictions is a losing game. The better approach is to understand what the vendor layer does and design your developer prompt to complement it — adding specificity, domain knowledge, and product-level constraints that the vendor layer does not address.

What Goes Into a Good System Prompt

System prompt engineering is a discipline. Here are the components I work through with clients when designing prompts for production systems.

Identity and persona

Who is the AI? What role does it play? A support agent, a research assistant, a code reviewer? The persona shapes the tone, vocabulary, and approach the model takes. It also determines what the model should and should not claim to be able to do.

Constraints and boundaries

What topics are off-limits? What actions require human approval? What should the model do when it does not know the answer? Constraints prevent the model from drifting outside its intended use case. Without them, users will inevitably push the model into territory you did not design for.

Output format

Should the model respond in JSON? Markdown? Bullet points? A specific schema? Output format instructions in the system prompt are more reliable than asking for them in user messages, because the model treats system instructions with higher priority.

Domain knowledge

Key facts, definitions, product details, and business rules that the model needs for every interaction. For small, stable knowledge sets, embedding them directly in the system prompt is simpler and faster than building a RAG pipeline. The tradeoff is context window space.

Where It Fits

System prompts are one part of the context layer. They work alongside — and sometimes compete with — other context sources for the model's attention.

Prompt schemas

A system prompt is static — the same instructions for every interaction. A prompt schema makes it dynamic, inserting variables, user-specific data, and conditional instructions at runtime. System prompts set the foundation. Schemas make it adaptive.

Models

Different models respond differently to the same system prompt. A prompt optimised for one model may need adjustment for another. System prompt design is model-aware — the best prompts are written for a specific model's strengths and weaknesses.

RAG and memory

System prompts define how the model should use retrieved context and memory. Should it cite sources? Prioritise recent information? Flag when it is unsure? The system prompt is the instruction layer that governs how all other context gets processed.

Safety

System prompts are the first line of defence. They define what the model will and will not do, how it handles adversarial inputs, and when to escalate to a human. A well-written system prompt is a guardrail in itself.

Need help with prompt architecture?

I design system prompts and prompt architectures for production AI systems — the kind that actually shape model behaviour rather than just saying 'be helpful.' If your AI is not behaving the way you want, the system prompt is usually where I start.