1) Workspace conventions: structure that scales with the team
A team-ready Postman workspace is predictable: anyone can find requests, understand intent, and safely extend tests without breaking others. Conventions reduce “tribal knowledge” and make reviews faster because the structure itself communicates meaning.
Folder structure: organize by domain and purpose
Pick one primary organizing axis and stick to it. For most teams, organizing by API domain (resource or bounded context) works best, with a consistent sub-structure for common operations and tests.
- Top-level folders by domain: Users, Orders, Inventory, Billing, Admin
- Inside each domain: CRUD requests, workflows, negative tests, and utilities
- Separate “Utilities”: health checks, auth helpers (if any requests exist), data seeding endpoints, cleanup endpoints
Example structure (single collection):
```
API Test Suite (Collection)
├── 00 - Docs & Conventions
│   ├── README (request)
│   └── Changelog (request)
├── 10 - Users
│   ├── 11 - CRUD
│   ├── 12 - Negative
│   └── 19 - Workflows
├── 20 - Orders
│   ├── 21 - CRUD
│   ├── 22 - Negative
│   └── 29 - Workflows
├── 90 - Utilities
│   ├── Health
│   ├── Seed Data
│   └── Cleanup
└── 99 - Sandbox / WIP (excluded from runs)
```
Numbered prefixes keep order stable across exports and make it easy to reference folders in discussions and tickets. Reserve a WIP area to prevent half-finished requests from polluting routine runs.
Naming standards: make intent obvious
Names should answer: what endpoint, what scenario, what expectation. Avoid clever abbreviations; optimize for scanning.
- Requests: `[METHOD] /path — scenario — expected`
- Examples (saved responses): `200 OK — minimal`, `400 — missing required field`
- Folders: `CRUD`, `Negative`, `Workflows`, `Cleanup` (consistent across domains)
Request name examples:
```
GET /users/{id} — existing user — 200
POST /orders — invalid currency — 400
PATCH /inventory/{sku} — optimistic lock conflict — 409
```
Documentation expectations: every request is self-explaining
Team maintainability depends on lightweight, consistent documentation embedded where people work.
- Collection description: purpose, scope, how to run, required environments, data assumptions
- Folder description: what scenarios belong here, any shared setup/cleanup assumptions
- Request description: endpoint intent, required variables, preconditions, what the tests validate, and what data it creates/changes
Suggested request description template:
```
Purpose: What this request verifies
Preconditions: Required existing data or prior steps
Variables: {{baseUrl}}, {{userId}}, {{token}}
Side effects: Creates/updates/deletes what?
Cleanup: How to revert (if needed)
Notes: Known constraints, rate limits, eventual consistency
```
2) Reusable components: reduce duplication and enforce consistency
Reusable components keep a suite coherent. The goal is to define shared behavior once (at collection or folder level) and let individual requests focus on scenario-specific checks.
Collection-level scripts: shared guardrails
Use collection-level scripts to enforce baseline expectations and to centralize common helpers. Keep them small and deterministic.
Examples of what belongs at collection level:
- Standard response-time threshold checks (with per-request overrides)
- Common helper functions (e.g., safe JSON parsing)
- Standard headers validation (e.g., content-type for JSON endpoints)
- Unified logging format for debugging in CI
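For the logging point, a minimal sketch that could live in the collection-level test script (the field choices here are illustrative, not a Postman standard):

```javascript
// One parseable log line per request; easy to grep in CI output
console.log(JSON.stringify({
    request: pm.info.requestName,
    status: pm.response.code,
    timeMs: pm.response.responseTime
}));
```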
Example: collection-level test snippet (baseline checks):
```javascript
// Collection-level Tests (applies to all requests unless overridden)
pm.test('Status code is present', function () {
    pm.expect(pm.response.code).to.be.a('number');
});

pm.test('Response time under 1500ms (default)', function () {
    const limit = Number(pm.variables.get('rt_limit_ms') || 1500);
    pm.expect(pm.response.responseTime).to.be.below(limit);
});
```
Example: collection-level helper (safe JSON):
```javascript
// Collection-level Pre-request or Tests (helpers)
pm.globals.set('helpers_loaded', 'true');

function asJson(response) {
    try {
        return response.json();
    } catch (e) {
        return null;
    }
}
```
If you add helpers, document them in the collection description and keep naming stable to avoid breaking older requests.
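A headers-validation baseline could look like the following sketch; the `skip_json_check` override variable is an assumption of this example, not a built-in:

```javascript
// Collection-level check: JSON endpoints should declare a JSON content type.
// Requests that legitimately return something else set 'skip_json_check'.
pm.test('Content-Type is application/json', function () {
    if (pm.variables.get('skip_json_check')) {
        return; // documented per-request exception
    }
    pm.expect(pm.response.headers.get('Content-Type')).to.include('application/json');
});
```

The same override mechanism works for the response-time threshold above: a request's pre-request script can set `rt_limit_ms` locally to relax or tighten the default.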
Shared variables: define ownership and scope
Teams struggle when variables are scattered and inconsistently scoped. Define a simple rule set:
- Environment variables: deployment-specific values (base URL, tenant, credentials references)
- Collection variables: suite-specific defaults (timeouts, feature flags for tests, common IDs used across requests)
- Local/request variables: temporary values created during a run (IDs created on the fly)
Adopt naming conventions that communicate scope and intent:
- `baseUrl`, `authToken` (environment)
- `rt_limit_ms`, `feature_newCheckout_enabled` (collection)
- `createdOrderId`, `tempUserEmail` (runtime)
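To make the scopes concrete, a minimal pre-request sketch using these conventions (values are illustrative):

```javascript
// Environment scope: deployment-specific
const baseUrl = pm.environment.get('baseUrl');

// Collection scope: suite-wide default
const rtLimit = Number(pm.collectionVariables.get('rt_limit_ms') || 1500);
console.log(`Targeting ${baseUrl}, response-time limit ${rtLimit}ms`);

// Runtime scope: unique but traceable value for this run
pm.variables.set('tempUserEmail', `qa+${Date.now()}@example.test`);
```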
Practical step-by-step: create a “Variables Map” request in 00 - Docs & Conventions:
- Add a request named `README — Variables Map` (it does not need to be runnable).
- In its description, list variables, scope, example values, and who owns them.
- During reviews, require updates to this map when new shared variables are introduced.
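One way to lay out the map inside the request description (all entries illustrative):

```
Variable        | Scope       | Example                      | Owner
baseUrl         | environment | https://staging.example.test | Platform team
rt_limit_ms     | collection  | 1500                         | QA
createdOrderId  | runtime     | set during run, then unset   | n/a
```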
Consistent auth configuration: one approach per suite
Even if the team already knows authentication patterns, maintainability requires consistency in how auth is applied across requests.
- Prefer configuring auth at the collection level when most requests share it.
- Override at folder/request level only for explicit exceptions (public endpoints, different roles).
- Document role assumptions: e.g., “Default token is a read-write user; admin endpoints override with admin token.”
Practical step-by-step: enforce auth consistency:
- Set collection auth to the default scheme used by the suite.
- Create folders for role-based overrides (e.g., the `Admin` folder uses the admin token).
- Add a folder description stating which role/token is expected.
3) Versioning mindset: treat collections like code
Collections are test assets that evolve with the API. A versioning mindset means changes are intentional, traceable, and reversible.
Semantic change thinking: what changed and why
When you modify a request or test, record the type of change:
- Patch: fix a flaky assertion, clarify naming, update docs
- Minor: add new endpoints/tests without breaking existing runs
- Major: restructure folders, change shared variables, update assumptions that can break existing pipelines
Even if you do not publish formal versions, thinking this way improves communication and reduces surprise breakages.
Change tracking practices inside Postman
Use lightweight, in-workspace tracking so changes are visible without hunting through messages.
- Changelog request in `00 - Docs & Conventions` with dated entries
- Request-level notes: “Updated assertion to accept 202 for async processing”
- Deprecation markers: prefix deprecated requests with `DEPRECATED` and link to the replacement
Example changelog entry format:
```
2026-01-16 - Minor
- Added Orders/Negative: POST /orders — invalid currency — 400
- Updated Users/CRUD: GET /users/{id} now accepts 200 with optional 'middleName'
- Deprecated: GET /users/search?q (use GET /users?query=)
```
Export discipline and “source of truth”
Agree on where the authoritative suite lives (workspace vs. exported JSON in a repository). Whatever you choose, enforce one source of truth to avoid divergent copies.
- If you export collections, standardize export naming: `api-tests.collection.json`, `env.staging.json`.
- Never store real secrets in exported environments; use placeholders and document how to inject secrets in CI.
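A trimmed `env.staging.json` sketch with a placeholder instead of a real token (real exports carry additional metadata fields):

```json
{
  "name": "staging",
  "values": [
    { "key": "baseUrl",   "value": "https://staging.example.test", "enabled": true },
    { "key": "authToken", "value": "<injected-by-ci>",             "enabled": true }
  ]
}
```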
4) Review checklist for requests and tests: clarity, determinism, cleanup
A review checklist turns “best practices” into repeatable quality gates. Apply it to every new or modified request, especially before sharing with the team or wiring into automated runs.
Clarity checklist
- Request name follows the standard and includes scenario + expected outcome.
- Description includes purpose, preconditions, variables, side effects, cleanup.
- Folder placement matches the domain and purpose (CRUD/Negative/Workflows/Utilities).
- Examples (saved responses) are attached for key scenarios when helpful for onboarding.
Determinism checklist (tests should be stable)
- Assertions do not depend on current time unless explicitly controlled (fixed timestamps or tolerance windows).
- Assertions avoid fragile ordering assumptions (e.g., arrays sorted) unless the API guarantees order.
- Tests validate what matters (contract/behavior), not incidental fields that change frequently.
- Random data is generated in a controlled way (unique but traceable), and stored in variables for later steps.
- Retries are not used to hide flakiness; if eventual consistency exists, tests explicitly wait/poll with limits and document it.
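For the last point, a bounded polling sketch; the `poll_max` / `poll_attempt` variables and the 404-while-propagating behavior are assumptions about your API, and `postman.setNextRequest` only takes effect in the Collection Runner:

```javascript
// Poll a read endpoint until the resource appears, with a hard attempt limit
const maxAttempts = Number(pm.collectionVariables.get('poll_max') || 5);
const attempt = Number(pm.collectionVariables.get('poll_attempt') || 0);

if (pm.response.code === 404 && attempt < maxAttempts) {
    pm.collectionVariables.set('poll_attempt', attempt + 1);
    postman.setNextRequest(pm.info.requestName); // re-run this request
} else {
    pm.collectionVariables.unset('poll_attempt');
    pm.test('Resource visible within polling limit', () => {
        pm.expect(pm.response.code).to.eql(200);
    });
}
```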
Example: avoid brittle equality on dynamic fields:
```javascript
// Prefer checking presence/type over exact value for server-generated fields
const body = pm.response.json();

pm.test('id is a non-empty string', () => {
    pm.expect(body.id).to.be.a('string').and.not.empty;
});

pm.test('createdAt is ISO string', () => {
    pm.expect(body.createdAt).to.match(/^\d{4}-\d{2}-\d{2}T/);
});
```
Cleanup checklist (leave the environment tidy)
- If the request creates data, there is a documented cleanup strategy (delete endpoint, cleanup folder, or dedicated teardown run).
- Temporary variables are unset when no longer needed (avoid leaking IDs into later runs).
- Requests that mutate shared resources are isolated (e.g., run only in dedicated environments).
Example: variable cleanup pattern:
// After using a temporary idpm.variables.unset('createdOrderId');Consistency checklist (team-wide standards)
- Auth is inherited from collection/folder unless there is a documented exception.
- Headers are consistent (content-type, accept) unless endpoint requires otherwise.
- Error tests validate stable error shapes (code/message fields) rather than full text blobs.
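For the last point, a stable error-shape check might look like this sketch; the `code` and `message` field names are assumptions about your error envelope:

```javascript
// Assert the envelope shape, not the human-readable text
const body = pm.response.json();

pm.test('error body has a stable shape', () => {
    pm.expect(body).to.have.property('code').that.is.a('string');
    pm.expect(body).to.have.property('message').that.is.a('string');
});
```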
5) Keeping tests stable as APIs evolve
APIs change: fields are added, renamed, deprecated, or behavior shifts (sync to async). Stable tests are designed to detect meaningful breaking changes while tolerating safe evolution.
Contract changes: test the contract, not the implementation
Focus assertions on:
- Required fields and their types
- Allowed status codes for a scenario
- Key invariants (e.g., total equals sum of line items if guaranteed)
- Backward-compatible additions (new optional fields) should not break tests
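For the invariants point above, a sketch assuming the payload exposes `total` and `items[].subtotal`:

```javascript
// Only encode this when the contract guarantees the invariant
const body = pm.response.json();

pm.test('total equals sum of line items', () => {
    const sum = body.items.reduce((acc, item) => acc + item.subtotal, 0);
    pm.expect(body.total).to.eql(sum);
});
```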
Example: allow additive fields without failing:
```javascript
const body = pm.response.json();

pm.test('User has required fields', () => {
    pm.expect(body).to.have.property('id');
    pm.expect(body).to.have.property('email');
});
```
Deprecations: make them visible and time-bound
When an endpoint or field is deprecated:
- Mark requests as `DEPRECATED` and add a link to the replacement request.
- Add a deprecation deadline in the description (date or version).
- Move deprecated items into a `98 - Deprecated` folder to keep routine runs clean.
Backward compatibility assumptions: document what you rely on
Many test failures come from undocumented assumptions, such as:
- Sorting order of list endpoints
- Default pagination size
- Precision/rounding rules
- Whether unknown fields are ignored or rejected
Write these assumptions in folder/request descriptions and encode them as explicit tests only when the API guarantees them.
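For example, if and only if the API guarantees descending creation order, that assumption can be encoded directly (the `items` and `createdAt` names are illustrative):

```javascript
// Encode the documented ordering guarantee; remove if the guarantee is dropped
const items = pm.response.json().items;

pm.test('items sorted by createdAt descending', () => {
    const times = items.map((i) => i.createdAt);
    const sorted = [...times].sort().reverse(); // ISO timestamps sort lexically
    pm.expect(times).to.eql(sorted);
});
```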
Handling behavior shifts (sync vs async, eventual consistency)
If an API transitions to asynchronous processing (e.g., returns 202 with a status endpoint), update tests to accept both behaviors during migration windows.
Example: accept 200 or 202 with branching logic:
```javascript
pm.test('Order creation returns 200 or 202', () => {
    pm.expect([200, 202]).to.include(pm.response.code);
});

if (pm.response.code === 200) {
    const body = pm.response.json();
    pm.collectionVariables.set('createdOrderId', body.id);
}

if (pm.response.code === 202) {
    const body = pm.response.json();
    pm.collectionVariables.set('orderJobId', body.jobId);
}
```
Document the migration state in the request description so reviewers know why multiple outcomes are allowed.
Capstone exercise: standardize an existing collection into a clean, shareable suite
Goal: take an existing, messy collection and refactor it into a team-ready workspace asset using a checklist. Output: a clean collection structure, consistent naming/docs, reusable shared components, and a review-ready test suite.
Provided checklist (use as your refactoring guide)
- Structure: domains at top level; consistent subfolders (CRUD/Negative/Workflows/Utilities); WIP isolated
- Naming: request names include method, path, scenario, expected status
- Docs: collection README + Variables Map + Changelog; each request has purpose/preconditions/side effects/cleanup
- Reuse: baseline checks at collection level; shared variables documented; auth inherited consistently
- Stability: deterministic assertions; avoid brittle ordering/time dependencies; tolerate additive fields
- Cleanup: teardown strategy exists; temporary variables unset; mutating tests isolated
- Evolution: deprecated items moved and labeled; migration windows documented; assumptions explicit
Step-by-step tasks
- Step 1: Audit — Scan the collection and list problems under: structure, naming, docs, reuse, stability, cleanup.
- Step 2: Restructure — Create the standard folder layout (including `00 - Docs & Conventions`, `90 - Utilities`, `99 - Sandbox / WIP`). Move requests accordingly.
- Step 3: Rename — Apply the naming standard to every request. Ensure negative tests clearly state the invalid condition and expected status.
- Step 4: Document — Fill in collection README, Variables Map, and Changelog. Add request descriptions using the template.
- Step 5: Centralize reuse — Move baseline checks and helper utilities to collection-level scripts. Remove duplicated snippets from individual requests when possible.
- Step 6: Normalize auth — Ensure most requests inherit auth from the collection. Create explicit overrides only where required and document them.
- Step 7: Stabilize tests — Replace brittle assertions with contract-focused checks. Add tolerances or branching only when the API behavior requires it and document why.
- Step 8: Add cleanup — Create a `Cleanup` folder or teardown requests. Ensure created resources can be removed, and unset temporary variables.
- Step 9: Mark deprecations — Move deprecated requests to a dedicated folder, label them, and link to replacements with a deadline.
- Step 10: Produce the shareable suite — Ensure the collection runs without WIP items, docs are complete, and the checklist items are all satisfied.
Deliverable format
- A standardized collection with the agreed folder structure and naming.
- Completed `README`, `Variables Map`, and `Changelog` items inside `00 - Docs & Conventions`.
- A short reviewer note (in the Changelog entry) summarizing what you changed and any remaining known limitations.