Why structure matters in educator prompts
A well-structured prompt is less about “clever wording” and more about making your expectations unambiguous. When you ask an AI tool to create a lesson, quiz, rubric, or feedback, the model must infer missing details. Structure reduces guessing by telling the model who it is acting as (role), what situation it is working in (context), what boundaries it must respect (constraints), and what “good” looks like (success criteria). This chapter focuses on a practical prompt structure you can reuse across tasks without coding.
The four-part structure: Role, Context, Constraints, Success Criteria
Think of your prompt as a brief you would give a teaching assistant. The assistant performs best when you specify: Role (the hat it wears), Context (the classroom reality), Constraints (the non-negotiables), and Success Criteria (how you will judge the output). You can write these as labeled sections or as a compact paragraph, but labeled sections are easier to audit and revise.
1) Role: define the professional stance and capabilities
The role tells the model what expertise to simulate and what responsibilities to prioritize. For educators, roles often include “instructional designer,” “assessment writer,” “writing coach,” “special education co-teacher,” or “lab safety supervisor.” A strong role statement includes (a) the job title, (b) the audience served, and (c) the priority lens (clarity, inclusivity, rigor, engagement, accessibility).
Role examples (strong vs. weak)

- Weak: “You are a teacher.”
- Stronger: “You are an instructional designer supporting a Grade 7 science teacher; prioritize clarity, misconceptions, and formative checks for understanding.”
- Stronger: “You are an assessment specialist; write items aligned to the stated learning targets and avoid trick questions.”
- Stronger: “You are a writing coach for multilingual learners; give feedback that is kind, specific, and actionable, with sentence-level examples.”
Role matters most when you are generating rubrics, feedback, and assessments. These tasks require consistent tone and professional norms (e.g., avoiding bias, focusing on evidence, matching standards). If you omit the role, the model may default to generic advice or an inconsistent voice.
2) Context: provide the classroom reality and inputs
Context is everything the model needs to know about the learning situation and the materials it should use. This includes grade level, subject, unit topic, time available, student profile, prior knowledge, and any text or data the output must be based on. Context also includes the “source of truth”: the learning objectives, standards, or success criteria you are already using.
Context checklist (choose what matters)
- Grade/age range and subject
- Unit topic and where this lesson sits (intro, practice, review, assessment)
- Time constraints (e.g., 45-minute period, two 20-minute stations)
- Student needs (ELL supports, IEP accommodations, advanced learners)
- Materials available (lab equipment, devices, readings)
- Learning targets (student-friendly “I can…” statements)
- Any required content to incorporate (vocabulary list, excerpt, dataset)
Context example
“Grade 10 English, 55 minutes. Students are reading a short story and practicing citing textual evidence. Many students struggle to move from summary to analysis. Provide activities that require quoting and explaining.”
Notice that context is not a long narrative; it is a set of decision-making inputs. If you give too little context, the model fills gaps with assumptions (wrong grade level, wrong complexity, wrong pacing). If you give too much irrelevant context, the model may lose focus. Aim for the minimum set of details that actually change instructional decisions.
3) Constraints: set boundaries, rules, and non-negotiables
Constraints are the guardrails. They prevent outputs that are unusable (too long, wrong format, inappropriate reading level, misaligned question types). Constraints can be about content (what to include or avoid), pedagogy (e.g., inquiry-based, explicit instruction), format (tables, bullet lists, number of items), language (plain language, bilingual), and policy (no personal data, avoid sensitive content, cite sources if used).
Common educator constraints

- Length: “One page,” “10 questions,” “under 400 words,” “fits on a slide.”
- Format: “Use a table with columns: Question, Answer Key, Rationale.”
- Reading level: “Aim for Grade 5 readability; short sentences; define key terms.”
- Accessibility: “Provide alt-text suggestions; avoid color-only instructions.”
- Assessment rules: “No trick questions; one correct answer; plausible distractors.”
- Academic integrity: “Do not generate answers for students; generate hints only.”
- Content exclusions: “Avoid graphic scenarios; avoid stereotypes; no copyrighted passages.”
Constraint phrasing tips
- Make constraints measurable: “exactly 8 items,” “3 levels in rubric,” “each question has 4 options.”
- Put “must” and “must not” in separate bullets to reduce ambiguity.
- When you need a specific structure, show it: provide a mini-template the model must follow, as in the example below.
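For instance, a mini-template for quiz items embedded in the CONSTRAINTS section might look like this (the bracketed fields are placeholders the model fills in):

Format every item exactly as follows:
Q[#]: [question stem]
A) [option] B) [option] C) [option] D) [option]
Answer: [letter]. Rationale: [1–2 sentences].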
4) Success criteria: define what “good” looks like and how to self-check
Success criteria are the quality targets. They tell the model how you will evaluate the output and can prompt the model to self-audit. In teaching, success criteria often include alignment (to learning targets), cognitive demand (appropriate rigor), clarity (student-friendly language), and usefulness (ready to copy into your materials). Without success criteria, you may get something that is “fine” but not optimized for your purpose.
Examples of success criteria
- Alignment: “Every activity explicitly supports one of the listed learning targets.”
- Rigor: “Include at least two questions at application/analysis level.”
- Clarity: “Directions are step-by-step; no unexplained jargon.”
- Feedback quality: “Each comment includes (1) what works, (2) what to improve, (3) a concrete next step.”
- Differentiation: “Provide supports and extensions for each task.”
Self-check instruction
You can ask the model to verify its work against the success criteria. For example: “After drafting, run a checklist and revise until all criteria are met.” This often improves coherence and reduces omissions, especially in multi-part outputs like lesson plans with assessments and accommodations.
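Everything in this chapter works in an ordinary chat window, but if you keep prompts in a script or file, the self-check can be a standard suffix you append every time. A minimal Python sketch; the names are illustrative, not part of any particular tool:

# Minimal sketch: append a standard self-check instruction to any prompt.
# SELF_CHECK and with_self_check are illustrative names, not a real API.
SELF_CHECK = (
    "After drafting, audit the output against every success criterion "
    "listed above and revise until all criteria are met."
)

def with_self_check(prompt: str) -> str:
    """Return the prompt with the self-check instruction appended."""
    return prompt.rstrip() + "\n\n" + SELF_CHECK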
A reusable prompt template (copy/paste)
Use the following template as a starting point. Replace bracketed text with your details. Keep it short enough to scan, but specific enough to guide decisions.
ROLE: You are [role], supporting [grade/subject]. Prioritize [lens: clarity, inclusivity, rigor, engagement].
CONTEXT: Topic/unit: [topic]. Lesson purpose: [introduce/practice/review/assess]. Time: [minutes]. Students: [key needs, prior knowledge]. Materials: [what is available]. Learning targets: [list 2–4].
CONSTRAINTS: Must include: [required elements]. Must not include: [exclusions]. Format: [table/bullets/sections]. Length: [limits]. Reading level/tone: [requirements].
SUCCESS CRITERIA: The output is successful if: [3–6 measurable criteria]. After drafting, self-check against these criteria and revise.
TASK: [exact deliverable request].
Notice that the task comes last. This is intentional: the model reads the “rules of the game” before it starts generating.
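If you reuse the template often, you can also keep it in a small script instead of retyping it. This is purely optional; a sketch in Python, with illustrative field values:

# Illustrative sketch: assemble the four-part prompt from labeled fields.
# The structure mirrors the template above; the task deliberately comes last.
def build_prompt(role, context, constraints, success_criteria, task):
    return "\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"CONSTRAINTS: {constraints}",
        f"SUCCESS CRITERIA: {success_criteria} "
        f"After drafting, self-check against these criteria and revise.",
        f"TASK: {task}",
    ])

prompt = build_prompt(
    role="You are an assessment specialist for Grade 7 life science.",
    context="Topic: photosynthesis. Time: 15 minutes.",
    constraints="Exactly 12 questions; include an answer key with rationales.",
    success_criteria="Each question aligns to exactly one learning target.",
    task="Produce the quiz, then a mapping table (Q#, Target, Difficulty).",
)
print(prompt)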
Step-by-step: building a structured prompt from a messy request
Educator prompts often start as a quick idea: “Make a quiz on photosynthesis.” The structure helps you turn that into a reliable request. Use this step-by-step process.
Step 1: Write the task in one sentence
Example: “Create a 12-question quiz on photosynthesis.” This is your starting point, not your final prompt.
Step 2: Add role to control voice and assessment norms
Add: “You are an assessment specialist for middle school science.” This reduces the chance of overly advanced items or vague questions.
Step 3: Add context that affects difficulty and content
Add grade, time, and learning targets. Example targets: “Explain the purpose of photosynthesis,” “Identify inputs/outputs,” “Interpret a simple diagram.” If you have a diagram, paste it or describe it; otherwise the model will invent one.
Step 4: Add constraints to make it usable immediately
Decide question types, number of items, and format. Example: “8 multiple-choice (4 options), 2 short answer, 2 diagram-labeling prompts.” Add “include answer key and brief rationale.” Add reading level constraints and “no trick questions.”
Step 5: Add success criteria to enforce alignment and quality
Example: “At least 4 items target common misconceptions (plants get food from soil; oxygen is an input). Vocabulary is defined in-item if used. Each item maps to a learning target.”
Step 6: Ask for a self-check table
Request a final table that maps each question to learning target and difficulty. This makes it easier to review quickly and revise.
Full example prompt (quiz)
ROLE: You are an assessment specialist for Grade 7 life science. Prioritize clarity, fairness, and misconception-checking.
CONTEXT: Topic: photosynthesis. Time for quiz: 15 minutes. Students: mixed reading levels; several multilingual learners. Learning targets: (1) Describe the purpose of photosynthesis, (2) Identify inputs and outputs, (3) Explain where plant mass comes from in simple terms, (4) Interpret a basic photosynthesis diagram.
CONSTRAINTS: Create exactly 12 questions: 8 multiple-choice with 4 options each, 2 short-answer (1–2 sentences), 2 diagram-based prompts described in words (no images). Provide an answer key and 1–2 sentence rationale per item. Use plain language; define any necessary vocabulary in parentheses. Must not include trick questions or ambiguous wording.
SUCCESS CRITERIA: Each question clearly aligns to one learning target; at least 4 questions address common misconceptions; distractors are plausible but clearly incorrect; the quiz fits a 15-minute window. After drafting, self-check and revise.
TASK: Produce the quiz, then add a mapping table with columns: Q#, Type, Learning Target, Misconception Targeted (if any), Difficulty (easy/medium/hard).
Practical patterns educators can reuse
Below are prompt “patterns” built from the same four parts. Each pattern is a repeatable structure for a common educator task.
Pattern A: Lesson plan with differentiated supports
Use role to set instructional approach, context to anchor to your unit, constraints to control time and materials, and success criteria to ensure differentiation is not an afterthought.
ROLE: You are an instructional designer and co-teacher. Prioritize explicit instruction, checks for understanding, and accessibility.
CONTEXT: Grade 5 math. Topic: adding fractions with unlike denominators. Time: 45 minutes. Students: 4 students need visual supports; 3 advanced students need extension. Materials: fraction strips, whiteboard. Learning targets: (1) Find common denominators, (2) Add and simplify, (3) Explain reasoning with visuals.
CONSTRAINTS: Include: warm-up (5 min), mini-lesson (10), guided practice (15), independent practice (10), exit ticket (5). Provide teacher script for key moments. No homework.
SUCCESS CRITERIA: Each segment includes a quick check for understanding; supports and extensions are embedded in each practice task; directions are step-by-step and classroom-ready.
TASK: Write the full lesson plan with materials list, teacher moves, student actions, and differentiation notes.
Pattern B: Feedback generator that stays within your rubric
Feedback prompts fail when they are too general or when the model invents criteria you do not use. Put the rubric (or a simplified version) into context, then constrain the feedback format, and define success criteria for tone and actionability.
ROLE: You are a writing coach for Grade 9 students. Prioritize kindness, specificity, and growth mindset language.
CONTEXT: Assignment: argumentative paragraph. Rubric criteria: Claim (clear, arguable), Evidence (relevant quote/data), Reasoning (explains how evidence supports claim), Organization (logical flow), Conventions (sentence clarity). Student draft: [paste draft].
CONSTRAINTS: Provide feedback in 5 bullets, one per rubric criterion. Each bullet must include: (a) one strength, (b) one improvement, (c) one concrete next step sentence starter. Do not rewrite the whole paragraph. Avoid grading language like “bad” or “wrong.”
SUCCESS CRITERIA: Feedback references exact words/phrases from the draft; next steps are doable in 10 minutes; tone is supportive and direct. After drafting, check that all 5 criteria are addressed.
TASK: Generate the feedback.
Pattern C: Quiz item bank with controlled cognitive demand
If you need a mix of recall and higher-order thinking, success criteria can require a distribution. Constraints can enforce item types and prevent the model from drifting into essay prompts when you need quick checks.
ROLE: You are an assessment writer for high school history skills (not content). Prioritize source analysis and clear stems.
CONTEXT: Skill focus: analyzing primary sources. Students have practiced sourcing, contextualization, and corroboration.
CONSTRAINTS: Create 15 items: 6 multiple-choice, 6 short-answer (2–3 sentences), 3 “choose-two” items. Provide answer key and brief scoring notes. Include 3 short source excerpts you write yourself (50–80 words each) so the questions are self-contained.
SUCCESS CRITERIA: Items target sourcing/contextualization/corroboration; stems are unambiguous; scoring notes specify what earns full credit.
TASK: Produce the item bank and label each item with the targeted skill.
Pattern D: Classroom discussion prompts with safety and inclusion constraints
Discussion prompts can accidentally become too personal or sensitive. Constraints can keep prompts academic, optional, and respectful, while success criteria ensure they still generate rich talk.
ROLE: You are a facilitator for inclusive classroom discussions. Prioritize psychological safety and academic focus.
CONTEXT: Grade 8 health. Topic: media literacy and body image. Students vary in comfort discussing personal experiences.
CONSTRAINTS: Create 10 discussion questions that are text/media-focused (not personal disclosure). Include 3 “opt-in” reflection prompts that can be answered hypothetically. Provide 6 sentence stems for respectful disagreement.
SUCCESS CRITERIA: Questions invite multiple perspectives, avoid judgmental framing, and can be answered without sharing personal details.
TASK: Generate the questions and stems.
How to debug a prompt using the four parts
When the AI output misses the mark, you can diagnose the problem by asking which part of the structure was unclear or missing.
Problem: Output is too generic
Likely missing: context and success criteria. Fix by adding learning targets, student needs, and a quality bar like “include teacher script,” “include misconceptions,” or “include examples and non-examples.”
Problem: Output is the wrong level (too hard or too easy)
Likely missing: context (grade, prior knowledge) and constraints (reading level, vocabulary). Fix by specifying grade band, limiting jargon, and requiring scaffolds like sentence frames or worked examples.
Problem: Output ignores your format needs
Likely missing: constraints. Fix by providing an explicit structure and counts: “exactly 6 bullets,” “table with these columns,” “each activity includes time estimate.”
Problem: Output includes things you cannot use (materials you don’t have, policies you must follow)
Likely missing: context (materials available) and constraints (must not include). Fix by listing available materials and adding “must not require devices,” “must not use external links,” or similar.
Problem: Output is inconsistent or contradicts itself
Likely missing: success criteria and self-check. Fix by asking the model to verify alignment and to produce a mapping table (activity → target) or a checklist audit.
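If you draft prompts in text files, a few lines of code can even flag a missing part before you send the prompt. A rough sketch that assumes the labeled-section style used in this chapter’s templates:

# Rough sketch: report which labeled parts a four-part prompt is missing.
# Assumes the ROLE/CONTEXT/CONSTRAINTS/SUCCESS CRITERIA/TASK label style.
REQUIRED_LABELS = ["ROLE:", "CONTEXT:", "CONSTRAINTS:", "SUCCESS CRITERIA:", "TASK:"]

def missing_parts(prompt: str) -> list[str]:
    """Return the labels that do not appear anywhere in the prompt."""
    return [label for label in REQUIRED_LABELS if label not in prompt.upper()]

draft = "ROLE: You are a writing coach. TASK: Give feedback on this draft."
print(missing_parts(draft))  # ['CONTEXT:', 'CONSTRAINTS:', 'SUCCESS CRITERIA:']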
Micro-techniques that strengthen each part
Role micro-techniques
- Add “prioritize” statements: “Prioritize formative assessment and student talk moves.”
- Specify tone: “Warm, professional, concise.”
- Define what the role avoids: “Avoid lecturing; use guided discovery.”
Context micro-techniques
- Provide “knowns” and “unknowns”: “Students know X; they struggle with Y.”
- Paste the exact text students will read, or ask the model to generate a short original text if you need self-contained materials.
- Include time and constraints of the environment: “No internet; one projector; desks in groups of four.”
Constraints micro-techniques
- Use “exactly,” “at most,” “at least” to control counts.
- Require labels: “Label sections as Warm-up, Mini-lesson…”
- Specify output order: “First provide the student handout, then the teacher notes, then the answer key.”
Success criteria micro-techniques
- Ask for alignment mapping: “Include a table mapping each item to a target.”
- Ask for a quick quality audit: “List any assumptions you made and how they affect the output.”
- Define what would make you reject the output: “If any question has ambiguous wording, rewrite it.”
Putting it together: a compact “one-screen” prompt
Sometimes you need a prompt that fits on one screen for quick iteration. You can compress the four parts while keeping them explicit.
ROLE: Co-teacher and formative assessment designer (clear, inclusive).
CONTEXT: Grade 6, 40 min, topic: ratios; students confuse part-to-part vs part-to-whole; targets: represent ratios, write equivalent ratios, explain meaning.
CONSTRAINTS: No devices; use mini-whiteboards; produce: (1) 5-min warm-up, (2) 10-min mini-lesson with 2 worked examples, (3) 15-min practice with 8 problems + answer key, (4) 5-min exit ticket with 3 questions.
SUCCESS: Tasks explicitly address the confusion, include at least 2 visual representations, and each problem states what the ratio compares.
TASK: Write the plan and materials.
This compact version still includes all four parts. If the output is close but not perfect, you can adjust one part at a time (e.g., tighten constraints, add a success criterion about language, or clarify context about prior lessons).