Inputs
Every AI interaction starts with an input. But the user's message is just the visible tip. Underneath it sits a stack of instructions, constraints, and context injections that together determine what the model actually sees.
The Prompt Stack
When a user sends a message to a chatbot or an agent issues an API call, the model doesn't just see that one string. It sees a layered stack of prompts, each doing a different job. Understanding these layers is the first step to building reliable AI systems.
User Prompt
The visible tip of the stack. This is what the human types into a chatbot, or what an application constructs on behalf of the user. It carries the intent — the question, the task, the request. In consumer products, this is the only layer the end user sees.
System Prompt
Developer-authored instructions that shape how the model behaves. These define persona, tone, output format, safety boundaries, and task-specific rules. The user typically never sees the system prompt, but it governs every response. This is where most prompt engineering lives.
Programmatic Prompt
In agentic workflows, prompts are often assembled by code — not written by humans in real time. An orchestration layer might inject retrieved documents, chain-of-thought scaffolding, tool-use instructions, or structured output schemas before the model ever sees the request.
Vendor Prompt
The hidden base layer. Model providers prepend their own instructions — safety guardrails, behavioral defaults, content policies — before any developer or user prompt. You rarely see these, but they set the floor for what the model will and won't do.
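To make the layering concrete, here is a minimal sketch of how the developer-controlled layers might be assembled into a single chat request. The variable names and `build_messages` helper are illustrative, not a real API; the vendor layer is shown inline for clarity, though in practice the provider prepends it server-side where you can't see it.

```python
# Illustrative prompt-stack assembly. In a real system the vendor prompt is
# added by the provider, not by your code.
VENDOR_PROMPT = "Follow the provider's content and safety policies."   # hidden base layer
SYSTEM_PROMPT = "You are a concise support assistant. Answer in bullet points."
PROGRAMMATIC_CONTEXT = "Retrieved doc: 'Refunds are processed within 5 days.'"

def build_messages(user_input: str) -> list[dict]:
    """Stack the layers in the order the model ultimately sees them."""
    return [
        {"role": "system", "content": f"{VENDOR_PROMPT}\n{SYSTEM_PROMPT}"},
        {"role": "system", "content": PROGRAMMATIC_CONTEXT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("How long do refunds take?")
```

The user only ever types the last line; everything above it is engineered.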
Types of Input
Not all inputs look the same. The format, source, and structure of an input determine how it should be engineered and what kind of results you can expect.
User Chat Input
Free-form natural language from a human user. This is the most familiar input type — someone typing a question or instruction into a chat interface. The challenge is that it's unstructured, ambiguous, and varies wildly in quality. Good system prompts compensate for this.
Programmatic Prompts
Prompts generated by agents, pipelines, or workflow orchestrators. These are constructed in code, often using templates that slot in retrieved context, tool results, or previous agent outputs. They tend to be more consistent than human input but need careful version control and testing.
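A common pattern is a versioned template with named slots for whatever the pipeline injects. This sketch uses Python's standard `string.Template`; the slot names (`context`, `prior`, `task`) are hypothetical.

```python
from string import Template

# Keeping the template as a single named constant makes it easy to
# version-control and test independently of the code that fills it.
PROMPT_TEMPLATE = Template(
    "Context:\n$context\n\n"
    "Previous step output:\n$prior\n\n"
    "Task: $task"
)

def render_prompt(context: str, prior: str, task: str) -> str:
    """Slot retrieved context, prior agent output, and the task into the template."""
    return PROMPT_TEMPLATE.substitute(context=context, prior=prior, task=task)
```

Because the template is data rather than scattered string concatenation, a change to prompt wording becomes a reviewable diff.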
Structured Prompt Schemas
Formal definitions of what goes where in a prompt — which sections hold context, which hold instructions, where examples go, and how output format is specified. Schemas make prompts reproducible and testable, especially at scale across multiple agents or use cases.
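One way to encode such a schema is a small dataclass that fixes which sections exist and in what order they render. The section names here are an assumption for illustration, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSchema:
    """Fixed sections, fixed order: instructions, then context,
    then examples, then output format."""
    instructions: str
    context: str = ""
    examples: list[tuple[str, str]] = field(default_factory=list)
    output_format: str = ""

    def render(self) -> str:
        parts = [f"## Instructions\n{self.instructions}"]
        if self.context:
            parts.append(f"## Context\n{self.context}")
        for inp, out in self.examples:
            parts.append(f"## Example\nInput: {inp}\nOutput: {out}")
        if self.output_format:
            parts.append(f"## Output format\n{self.output_format}")
        return "\n\n".join(parts)
```

Every agent that renders through the schema produces the same shape of prompt, which is what makes prompts testable at scale.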
Few-Shot Examples
Input-output pairs included in the prompt to steer model behavior by demonstration. Instead of explaining what you want in abstract terms, you show the model concrete examples. This is one of the most effective prompt engineering techniques, especially for formatting and edge cases.
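In a chat API, few-shot examples are often expressed as alternating user/assistant turns before the real query. A minimal sketch, using a made-up date-reformatting task:

```python
# Demonstration pairs: each shows an input and the exact output we want.
FEW_SHOT = [
    ("2024-01-05", "Jan 5, 2024"),
    ("2023-12-31", "Dec 31, 2023"),
]

def few_shot_messages(query: str) -> list[dict]:
    """Prepend demonstration turns so the model imitates them for the real query."""
    msgs = [{"role": "system", "content": "Reformat dates as 'Mon D, YYYY'."}]
    for inp, out in FEW_SHOT:
        msgs.append({"role": "user", "content": inp})
        msgs.append({"role": "assistant", "content": out})
    msgs.append({"role": "user", "content": query})
    return msgs
```

The model sees two solved instances before the unsolved one, which tends to pin down formatting far more reliably than a prose description would.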
Pairs With
Inputs don't exist in isolation. Their effectiveness depends on what surrounds them.
Grounding
A raw prompt becomes powerful when enriched with retrieved documents, memory, and external data. RAG pipelines, conversation history, and knowledge bases all inject context into the prompt stack before it reaches the model. The input sets the intent; context provides the substance.
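A sketch of that enrichment step, assuming a retriever and conversation history already exist upstream; the function name and prompt layout are illustrative.

```python
def ground_prompt(question: str, retrieved: list[str], history: list[str]) -> str:
    """Wrap the user's question with retrieved documents and recent
    conversation turns before it reaches the model."""
    docs = "\n".join(f"- {d}" for d in retrieved)
    convo = "\n".join(history[-4:])  # keep only the most recent turns
    return (
        f"Relevant documents:\n{docs}\n\n"
        f"Recent conversation:\n{convo}\n\n"
        f"Question: {question}\n"
        "Answer using only the documents above."
    )
```

The final instruction line is what ties the model's answer back to the injected substance rather than its parametric memory.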
Model-Specific Engineering
The same prompt can produce radically different results across different models. Prompt engineering is never model-agnostic — what works on GPT-4o may fail on Claude, and vice versa. Understanding model-specific behaviors, context window limits, and instruction-following tendencies is essential to designing inputs that work reliably.
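One lightweight way to handle this is a per-model profile table consulted at prompt-build time. The entries below are illustrative assumptions about two model families, not authoritative specs:

```python
# Hypothetical per-model settings: same task, different packaging per target model.
MODEL_PROFILES = {
    "gpt-4o": {"max_context_tokens": 128_000,
               "style": "terse system prompt; structured output via JSON mode"},
    "claude": {"max_context_tokens": 200_000,
               "style": "XML-tagged sections; explicit step-by-step instructions"},
}

def profile_for(model: str) -> dict:
    """Look up a model's prompt profile, falling back to a conservative default."""
    return MODEL_PROFILES.get(
        model, {"max_context_tokens": 8_000, "style": "conservative default"}
    )
```

Routing prompt construction through a profile keeps model-specific quirks in one place instead of scattered across every template.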
Need help designing your prompt architecture?
I help structure the full input stack — from user-facing prompts to system-level instructions.