Prompt Schemas

Ad-hoc prompting is typing instructions and hoping for the best. Prompt schemas are structured templates with variables, examples, and constraints — turning prompt writing from an art into engineering. This is how you make AI outputs repeatable.

What Is a Prompt Schema?

A prompt schema is a template that defines the structure of a prompt — what goes where, what varies per request, and what stays constant. Think of it as a function signature for natural language.

Beyond copy-paste prompts

Most teams start with prompts stored in code as raw strings. Someone writes a good prompt, it works, and it gets hardcoded. The problem comes when you need to adapt it — for different users, different inputs, or different output formats. Prompt schemas solve this by separating the structure (what is always the same) from the variables (what changes per request).

Templates with slots

At its simplest, a prompt schema is a template with named variables. Instead of writing "Summarise this article about climate change in 3 bullet points," you define a schema with slots for the topic, the content, the format, and the length. At runtime, those slots get filled with actual values. The structure stays the same. The content varies.
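A minimal sketch of this idea in Python, using built-in string formatting. The slot names (topic, length, fmt, content) are illustrative, not a standard:

```python
# A minimal prompt schema: fixed structure, named slots filled at runtime.
SUMMARY_SCHEMA = (
    "Summarise the following {topic} content in {length} {fmt}.\n"
    "---\n"
    "{content}"
)

def render(schema: str, **slots) -> str:
    """Fill the schema's named slots with runtime values."""
    return schema.format(**slots)

prompt = render(
    SUMMARY_SCHEMA,
    topic="climate change",
    length="3",
    fmt="bullet points",
    content="Global temperatures have risen about 1.1 C above pre-industrial levels.",
)
```

The structure (the instruction and layout) never changes; only the slot values do, so the same schema serves any topic, length, or format.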

Ad-Hoc Prompting vs Prompt Engineering

The difference between ad-hoc prompting and prompt engineering is the difference between writing a script and writing a function. One is a one-off. The other is reusable, testable, and maintainable.

Ad-hoc prompting

You type a prompt. You get a result. If it is not great, you tweak the wording and try again. The prompt lives in a chat window or buried in application code. There is no version history, no testing, and no way to know whether a change improved things or made them worse. This works for exploration. It does not work for production.

Prompt engineering with schemas

You define a schema with a clear structure: system context, user input variables, few-shot examples, output constraints. Each component is versioned independently. You can test the schema against a suite of inputs and measure output quality. When you change one part, you can see exactly what effect it has. This is what production AI requires.

Components of a Prompt Schema

A well-structured prompt schema has distinct parts, each serving a specific purpose in guiding the model's output.

Variables and placeholders

Named slots that get filled at runtime. These could be user inputs, data retrieved from a database, configuration values, or the output of a previous step in a pipeline. Variables turn a static prompt into a dynamic one without rewriting the surrounding structure.

Few-shot examples

Concrete input-output pairs that show the model what you expect. Instead of describing the desired format in abstract terms, you show it. Few-shot examples are one of the most reliable ways to control output quality — they ground the model in specific patterns rather than relying on its interpretation of your instructions.
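As a sketch, few-shot examples can live in the schema as data and be rendered into the prompt at build time. The sentiment-classification task and example pairs here are hypothetical:

```python
# Few-shot examples embedded in a schema: each pair shows the model the
# exact input-output pattern expected, rather than describing it abstractly.
FEW_SHOT = [
    {"input": "The delivery was late and the box was damaged.",
     "output": '{"sentiment": "negative"}'},
    {"input": "Arrived early, works perfectly.",
     "output": '{"sentiment": "positive"}'},
]

def build_prompt(examples, user_input: str) -> str:
    parts = ["Classify the sentiment of each review. Answer as JSON."]
    for ex in examples:
        parts.append(f"Review: {ex['input']}\nAnswer: {ex['output']}")
    # The final entry has no answer — that is what the model completes.
    parts.append(f"Review: {user_input}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt(FEW_SHOT, "Mediocre product, great support team.")
```

Because the examples are data rather than hardcoded text, they can be versioned, swapped, and tested independently of the surrounding structure.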

Output constraints

Explicit rules about what the output should look like. JSON with a specific schema. A numbered list with no more than five items. A response that must include a confidence score. Output constraints reduce ambiguity and make downstream parsing reliable — which matters when the output feeds into another system.
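Constraints are only useful if they are enforced. A minimal sketch of validating a model response against an output contract — the field names (label, confidence) are illustrative:

```python
import json

# Validate a model response against an explicit output contract:
# must be valid JSON, must include a label, and must carry a
# confidence score in [0, 1].
def validate_output(raw: str) -> dict:
    data = json.loads(raw)  # raises ValueError if not valid JSON
    if "label" not in data:
        raise ValueError("missing required field: label")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        raise ValueError("confidence must be a number in [0, 1]")
    return data

result = validate_output('{"label": "billing", "confidence": 0.92}')
```

Validating at the boundary means a malformed response fails loudly at the schema layer instead of corrupting the downstream system that consumes it.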

Conditional sections

Parts of the prompt that only appear under certain conditions. If the user is a free-tier customer, include one set of instructions. If they are enterprise, include another. Conditional sections let a single schema handle multiple scenarios without maintaining separate prompts for each one.
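A sketch of the tier example above — the tier names and instruction wording are hypothetical:

```python
# A schema with a conditional section: extra instructions appear
# only for certain user tiers, so one schema covers both scenarios.
def build_prompt(question: str, tier: str) -> str:
    sections = ["You are a support assistant. Answer concisely."]
    if tier == "enterprise":
        sections.append("Offer to open a priority ticket if the issue is unresolved.")
    else:
        sections.append("Suggest relevant help-centre articles.")
    sections.append(f"Question: {question}")
    return "\n\n".join(sections)

free_prompt = build_prompt("How do I export my data?", tier="free")
ent_prompt = build_prompt("How do I export my data?", tier="enterprise")
```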

Versioning, Testing, and Iteration

Prompts are code. They should be versioned, tested, and iterated on like any other part of your system.

Version control

Every change to a prompt schema should be tracked. When output quality degrades, you need to know what changed, when, and why. I help clients store prompt schemas in version-controlled repositories with clear changelogs — not in application config files where changes happen silently.

Evaluation suites

A set of test inputs with expected outputs. Run the schema against the suite after every change. Measure output quality using automated metrics (format compliance, factual accuracy against known answers) and human review. Without this, prompt changes are guesswork.
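A toy harness for the format-compliance part of this, with a stubbed model call so the sketch is runnable — the schema, cases, and scoring rule are all illustrative:

```python
# A tiny evaluation harness: render the schema for each test case,
# call the model, and score format compliance.
SCHEMA = "List exactly {n} benefits of {topic}, one per line, prefixed with '- '."

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call, so the harness runs offline.
    return "- cheaper\n- faster\n- safer"

CASES = [
    {"slots": {"n": 3, "topic": "caching"}, "expected_lines": 3},
]

def evaluate(cases) -> float:
    passed = 0
    for case in cases:
        output = fake_model(SCHEMA.format(**case["slots"]))
        lines = [l for l in output.splitlines() if l.startswith("- ")]
        if len(lines) == case["expected_lines"]:
            passed += 1
    return passed / len(cases)

score = evaluate(CASES)
```

Run after every schema change; a drop in the score pinpoints which change caused the regression.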

A/B testing

Run two versions of a prompt schema side by side with real traffic. Measure which produces better outcomes — not just better-looking text, but better downstream results. Did the new schema lead to more accurate classifications? Fewer support escalations? That is what matters.
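The traffic split itself is straightforward. One common approach, sketched here, hashes a stable user key so each user consistently sees the same variant; the experiment name and 50/50 split are illustrative:

```python
import hashlib

# Deterministic A/B assignment: the same user always lands in the
# same bucket for a given experiment, so results are comparable.
def assign_variant(user_id: str, experiment: str = "schema-v2-test") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 100 < 50 else "A"

variant = assign_variant("user-1234")
```

Log the assigned variant alongside the downstream outcome (classification accuracy, escalation rate) and compare buckets once enough traffic has accumulated.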

Prompt libraries

As schemas accumulate, they become a library — a reusable collection of tested, versioned prompt templates that teams can draw from. A prompt library is an organisational asset. It captures what you have learned about how to communicate with models and makes that knowledge available across projects.
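At its simplest, a library is a versioned lookup. A minimal in-memory sketch — a real library would back this with a database, a Git repository, or a prompt management platform:

```python
# A minimal prompt library: versioned templates addressed by name,
# so callers pin an exact version rather than a moving target.
class PromptLibrary:
    def __init__(self):
        self._store = {}  # (name, version) -> template

    def register(self, name: str, version: str, template: str) -> None:
        self._store[(name, version)] = template

    def get(self, name: str, version: str) -> str:
        return self._store[(name, version)]

lib = PromptLibrary()
lib.register("summarise", "1.0.0", "Summarise {content} in {n} bullet points.")
template = lib.get("summarise", "1.0.0")
```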

Where It Fits

Prompt schemas sit at the intersection of context and engineering. They turn static system prompts into dynamic, reusable assets.

System prompts

A system prompt defines the baseline behaviour. A prompt schema makes it dynamic — injecting user-specific data, retrieved context, and conditional instructions at runtime. The system prompt is the foundation. The schema is the architecture built on top.

Storage (prompt library)

Prompt schemas need to live somewhere accessible, versioned, and searchable. A prompt library — stored in a database, a Git repository, or a dedicated prompt management platform — is where schemas become organisational knowledge rather than individual expertise.

Agents and pipelines

Agents and pipelines consume prompt schemas at runtime. An agent might select different schemas based on the task at hand. A pipeline might use a schema for each processing step. Schemas are the reusable building blocks that make these systems maintainable.
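A sketch of task-based schema selection — the task names and templates are hypothetical:

```python
# A pipeline step that selects a schema by task, then fills its slots.
SCHEMAS = {
    "summarise": "Summarise the text in {n} bullet points:\n{text}",
    "classify": "Assign one label from {labels} to:\n{text}",
}

def run_step(task: str, **slots) -> str:
    template = SCHEMAS[task]         # select schema for this task
    return template.format(**slots)  # fill slots at runtime

prompt = run_step("classify", labels="[billing, technical]", text="Card declined.")
```

Because each step only references a schema by name, swapping in an improved version touches the library, not the pipeline code.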

Observability

When you version your schemas and log which version was used for each request, debugging becomes straightforward. You can trace any output back to the exact prompt that produced it — which is essential for understanding quality issues in production.
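A sketch of what that logging might capture per request — the record's field names are illustrative:

```python
import json
import time
import uuid

# Record which schema version produced each request, so any output
# can be traced back to the exact prompt that generated it.
def log_request(schema_name: str, schema_version: str, prompt: str) -> str:
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "schema": schema_name,
        "schema_version": schema_version,
        "prompt_chars": len(prompt),
    }
    return json.dumps(record)

entry = json.loads(log_request("summarise", "1.2.0", "Summarise this article..."))
```

With the schema name and version on every log line, a quality regression can be filtered down to the schema version that introduced it.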

Ready to structure your prompts?

I help teams move from ad-hoc prompting to structured prompt engineering — designing schemas, building prompt libraries, and setting up the versioning and testing workflows that make prompt quality measurable.