Postman for API Testing: Collections, Environments, and Automated Checks

Postman Collections: Organizing API Tests for Maintainable Suites

Chapter 2

Estimated reading time: 10 minutes

1) Collection design patterns

A Postman collection is the unit you version, share, run in the Collection Runner, and use as the backbone of a maintainable API test suite. The main design decision is how you group requests so that people can find, run, and extend tests without breaking others.

Pattern A: Organize by resource (REST-centric)

Use this when your API is primarily CRUD over stable resources (e.g., /users, /orders). It scales well as endpoints grow.

  • Pros: predictable location for endpoints; easy to ensure full CRUD coverage per resource.
  • Cons: end-to-end workflows may span multiple folders and feel fragmented.

Example structure:

Collection: Store API Tests
  Collection Variables: baseUrl, authToken
  Folder: Users
    - Users - List (GET /users)
    - Users - Create (POST /users)
    - Users - Get by Id (GET /users/:id)
    - Users - Update (PUT /users/:id)
    - Users - Delete (DELETE /users/:id)
  Folder: Orders
    - Orders - List ...

Pattern B: Organize by feature (product-centric)

Use this when your API supports features that cut across resources (e.g., “Checkout”, “Reporting”, “Notifications”).

  • Pros: mirrors how product teams think; aligns with acceptance criteria.
  • Cons: the same endpoint might appear in multiple features, increasing duplication risk.

Example structure:

Collection: Store API Tests
  Folder: Checkout
    - Checkout - Create Cart
    - Checkout - Add Item
    - Checkout - Apply Coupon
    - Checkout - Place Order
  Folder: Account Management
    - Account - Register
    - Account - Login
    - Account - Update Profile

Pattern C: Organize by workflow (journey-centric)

Use this when you frequently run multi-step flows and want a runnable “script” of requests (e.g., onboarding, provisioning, CRUD lifecycle). This is common for smoke tests and regression flows.

  • Pros: easy to run end-to-end; dependencies are explicit.
  • Cons: less discoverable for single endpoints; can drift from the API surface as it expands.

Example structure:

Collection: Store API Tests
  Folder: User Lifecycle (CRUD)
    01 - Create User
    02 - Get User
    03 - Update User
    04 - Delete User

In practice, many teams combine patterns: top-level folders by domain/resource, with a separate “Workflows” folder for end-to-end runs.

2) Naming conventions and metadata

Consistent naming and rich metadata reduce cognitive load and make failures easier to interpret in Runner results and CI logs.

Naming conventions that scale

  • Start with the domain or resource: Users - Create, Orders - Get by Id.
  • Include the HTTP method only if helpful: some teams add [GET] or GET prefix; others rely on Postman’s method badge. Choose one style and apply it everywhere.
  • Use action verbs: Create, List, Get, Update, Delete, Search, Validate, Export.
  • For workflows, prefix with step numbers: 01 -, 02 - to preserve run order.
  • For negative tests, label intent: Users - Create - Missing email (400), Auth - Login - Invalid password (401).

Descriptions: make the request self-explanatory

Use the request (or folder) description to document what the request is for, what it expects, and what variables it sets for downstream steps. Keep it short but operational.

Example request description template:

Purpose: Creates a user for test runs.
Preconditions: authToken is set.
Expected: 201; response has id, email; id stored as userId.
Side effects: creates persistent record; cleanup via Users - Delete.

Examples as living documentation

Save one or more examples for key requests (especially those with complex responses). Examples help reviewers understand response shape and allow quick manual inspection without re-running.

  • Save a “happy path” example for each critical endpoint.
  • Optionally save an error example (e.g., 400 validation error) to document error schema.

Tags (if your team uses them)

If you use tags in your workflow (naming-based or documented in descriptions), standardize them. Common tag categories:

  • Scope: @smoke, @regression, @contract
  • Stability: @flaky, @quarantined
  • Dependency: @requires-auth, @creates-data

If Postman UI tagging is not consistently available in your setup, place tags at the start of the request name or in the first line of the description.

3) Foldering strategies to mirror API domains and test scope

Folders should help you answer two questions quickly: “Where is the endpoint?” and “What should I run for this purpose?” A practical approach is to use a two-axis structure: domain (what) and scope (why).

Option 1: Domain folders with scope subfolders

Collection: Store API Tests
  Folder: Users
    Folder: Smoke
      - Users - List
      - Users - Get by Id
    Folder: Regression
      - Users - Create
      - Users - Update
      - Users - Delete
    Folder: Negative
      - Users - Create - Missing email (400)

This keeps all user-related requests together while still allowing targeted runs.

Option 2: Scope folders with domain subfolders

Collection: Store API Tests
  Folder: Smoke
    Folder: Users
      - Users - List
      - Users - Get by Id
    Folder: Orders
      - Orders - List
  Folder: Regression
    Folder: Users
      - Users - Create ...

This is useful when CI pipelines run by scope (smoke vs regression) and you want a single click to run the suite.

Option 3: Separate “Workflows” folder

Keep resource/feature folders for discoverability, and add a dedicated folder for multi-step flows that are run in order.

Collection: Store API Tests
  Folder: Users
  Folder: Orders
  Folder: Workflows
    Folder: User Lifecycle (CRUD)
      01 - Users - Create
      02 - Users - Get by Id
      03 - Users - Update
      04 - Users - Delete

When you do this, avoid duplicating logic: the workflow requests can reference the same underlying requests (via duplication with discipline) or share the same scripts and variables (see reuse strategies below).

4) Request reuse: duplication vs templating

Maintainability depends on how you reuse request definitions (URL, headers, body, scripts). In Postman, reuse is typically achieved through variables, shared scripts at folder/collection level, and careful duplication when necessary.

When duplication is acceptable

Duplicate a request when the intent is different and you want the test to be readable as a standalone artifact.

  • Different assertions: same endpoint, but one request validates minimal smoke checks and another validates full schema/edge cases.
  • Different data setup: same endpoint, but one uses “valid minimal payload” and another uses “maximal payload”.
  • Different auth context: admin vs user token.

Rule of thumb: duplicate for clarity, but keep shared parts (base URL, common headers, auth) parameterized so changes don’t require editing many requests.

Templating with variables (preferred for shared parts)

Use variables to avoid hardcoding values that change across environments or runs.

  • Collection variables for defaults used by the suite (e.g., baseUrl, apiVersion).
  • Environment variables for environment-specific values (e.g., hostnames, client IDs).
  • Dynamic variables for generated data (e.g., random email) when needed.

Example URL and headers using variables:

GET {{baseUrl}}/v1/users/{{userId}}

Headers:
  Authorization: Bearer {{authToken}}
  Content-Type: application/json
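
For generated data, a short pre-request script can resolve one of Postman's built-in dynamic variables and store the result for later steps. A minimal sketch, assuming a collection variable named testEmail (a hypothetical name) is referenced by the request body as {{testEmail}}:

// Pre-request Script (sketch): generate a unique email for this run.
// {{$randomEmail}} is a built-in Postman dynamic variable; resolving it here
// lets the same value be reused by later requests via {{testEmail}}.
const email = pm.variables.replaceIn('{{$randomEmail}}');
pm.collectionVariables.set('testEmail', email);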

Shared scripts at folder/collection level

Put common checks and helpers in folder or collection-level scripts so individual requests stay focused.

  • Collection-level: baseline checks (e.g., response time threshold, JSON parsing safety).
  • Folder-level: domain-specific checks (e.g., all /users responses include id).

Example of a lightweight baseline check in a folder-level test script:

pm.test('Status code is not 500', () => {
  pm.expect(pm.response.code).to.not.equal(500);
});
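
The collection-level baseline mentioned above might look like the following. This is a sketch only; the 1500 ms threshold is an illustrative value, not a recommendation:

// Collection-level Tests script (sketch): baseline checks run after every request.
pm.test('Response time is under 1500 ms', () => {
  pm.expect(pm.response.responseTime).to.be.below(1500);
});

// JSON parsing safety: only attempt to parse bodies that claim to be JSON.
const contentType = pm.response.headers.get('Content-Type') || '';
if (contentType.includes('application/json')) {
  pm.test('Body is valid JSON', () => {
    pm.response.json(); // throws, and fails the test, if the body is not valid JSON
  });
}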

Use request-level scripts for assertions that are unique to that endpoint or scenario.

5) Ordering and dependencies for multi-step flows (create → read → update → delete)

Multi-step flows are where collections shine: you can create data, capture identifiers, and reuse them in subsequent requests. The key is to make dependencies explicit and safe.

Design principles for dependent flows

  • Make order obvious with numeric prefixes in request names.
  • Store outputs as variables immediately after the request that produces them.
  • Fail fast: if a create step fails, downstream steps should not run with empty IDs.
  • Clean up when possible (delete created data) to keep environments stable.

Step-by-step: implement a CRUD workflow folder

Step 1: Create (01 - Users - Create)

  • Send POST {{baseUrl}}/v1/users with a valid body.
  • In Tests, assert 201 and store the created ID.
pm.test('Created (201)', () => {
  pm.response.to.have.status(201);
});

const json = pm.response.json();
pm.expect(json).to.have.property('id');

// Store the new ID for downstream steps (get, update, delete).
pm.collectionVariables.set('userId', json.id);

Step 2: Read (02 - Users - Get by Id)

  • Use {{userId}} in the path.
  • Assert 200 and validate key fields.
pm.test('Fetched (200)', () => {
  pm.response.to.have.status(200);
});

const user = pm.response.json();
// The fetched user must match the ID captured in the create step.
pm.expect(user.id).to.eql(pm.collectionVariables.get('userId'));

Step 3: Update (03 - Users - Update)

  • Send PUT/PATCH to /users/{{userId}} with updated fields.
  • Assert 200 (or 204) and verify the change.
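
A minimal Tests sketch for this step, assuming the PUT body sets a name field to "Updated Name" and the API echoes the updated resource (both the field and the value are illustrative; match them to your request body):

pm.test('Updated (200)', () => {
  pm.response.to.have.status(200);
});

const updated = pm.response.json();
pm.test('Change is reflected in the response', () => {
  // 'name' and 'Updated Name' are illustrative assumptions.
  pm.expect(updated.name).to.eql('Updated Name');
});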

Step 4: Delete (04 - Users - Delete)

  • Send DELETE /users/{{userId}}.
  • Assert 204 (or 200), then optionally verify a subsequent GET returns 404.
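
A minimal Tests sketch for the delete step, accepting either 204 or 200 and clearing the stored ID so later runs cannot reuse a stale value:

pm.test('Deleted (204 or 200)', () => {
  pm.expect(pm.response.code).to.be.oneOf([204, 200]);
});

// Clear the stored ID so a later run cannot accidentally reuse it.
pm.collectionVariables.unset('userId');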

Handling optional dependencies safely

If a request requires a variable (like userId), add a guard test so failures are clear.

pm.test('userId is set for this request', () => {
  pm.expect(pm.collectionVariables.get('userId'), 'userId').to.be.ok;
});

This produces a readable failure instead of a confusing 404 caused by an empty path parameter.
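
To also stop the run when a required variable is missing (the "fail fast" principle above), the producing request's Tests script can end the collection run early. A sketch using postman.setNextRequest(null), which only takes effect in the Collection Runner or Newman:

// Stop the collection run if the create step did not produce an ID (sketch).
if (!pm.collectionVariables.get('userId')) {
  postman.setNextRequest(null);
}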

6) Documenting expected behaviors directly in collection elements

A maintainable suite documents expectations where they are executed: in folder descriptions, request descriptions, and tests. This reduces reliance on external documents that drift over time.

Folder descriptions as contracts

Use folder descriptions to define shared expectations and constraints.

  • Authentication requirements (e.g., “All requests require Bearer token”).
  • Common response headers (e.g., correlation IDs).
  • Data rules (e.g., “Emails must be unique; tests generate random emails”).
  • Cleanup policy (e.g., “Requests that create data must have a paired delete request”).

Request descriptions as executable specs

For each request, document:

  • Purpose (what behavior is being tested).
  • Inputs (required variables, headers, body fields).
  • Expected status and key response fields.
  • Side effects (creates data, triggers emails, etc.).
  • Variables set for downstream steps.

Tests that read like requirements

Write assertions with messages that explain intent. Prefer multiple small tests over one large block so failures are pinpointed.

pm.test('Returns validation error schema', () => {
  const json = pm.response.json();
  pm.expect(json).to.have.property('error');
  pm.expect(json.error).to.have.property('code');
  pm.expect(json.error).to.have.property('message');
});

Activity: Refactor an unorganized set of requests into a maintainable collection

You are given a messy set of requests (mixed naming, no folders, inconsistent variables). Refactor it into a collection that is easy to run and understand.

Starting point (unorganized)

login        POST /auth/login
getUsers     GET /users
CreateUser   POST /users
getUser      GET /users/123
update-user  PUT /users/123
deleteUser   DELETE /users/123
addItem      POST /carts/55/items
placeOrder   POST /orders
getOrder     GET /orders/999

Goal structure

Collection: Store API Tests
  Folder: Auth
    - Auth - Login
  Folder: Users
    Folder: CRUD Workflow
      01 - Users - Create
      02 - Users - Get by Id
      03 - Users - Update
      04 - Users - Delete
    Folder: Queries
      - Users - List
  Folder: Checkout (Workflow)
    01 - Cart - Add Item
    02 - Orders - Place Order
    03 - Orders - Get by Id

Step-by-step refactor checklist

  • Step 1: Create the collection named Store API Tests. Add collection variables: baseUrl, authToken, userId, orderId, cartId.
  • Step 2: Move requests into folders according to the goal structure. Create folders first, then drag requests into place.
  • Step 3: Rename requests consistently using the pattern Domain - Action - Scenario (ExpectedStatus) where applicable. Add numeric prefixes for workflow steps.
  • Step 4: Parameterize URLs by replacing hardcoded hosts and IDs: /users/123 → /users/{{userId}}, /orders/999 → /orders/{{orderId}}, /carts/55 → /carts/{{cartId}}, and prefix all paths with {{baseUrl}}.
  • Step 5: Add descriptions to each folder and request using the template: Purpose, Preconditions, Expected, Variables set, Cleanup.
  • Step 6: Implement dependency handling: in Auth - Login, store authToken; in create requests, store IDs like userId and orderId; add guard tests to ensure required variables exist before dependent requests run (see the login sketch after this checklist).
  • Step 7: Add minimal automated checks per request: status code, content type (if applicable), and 2–5 key fields. Ensure workflow steps assert what they produce (e.g., Create sets userId).
  • Step 8: Save examples for at least Auth - Login, Users - Create, and Orders - Place Order (happy path). Optionally add one error example (e.g., invalid login).
  • Step 9: Validate runnability: run Users > CRUD Workflow in order and confirm it creates, reads, updates, and deletes without manual edits.
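
For Step 6, a minimal Tests sketch for Auth - Login, assuming the response body returns the token in a field named token (adjust to your API's actual shape):

pm.test('Login succeeded (200)', () => {
  pm.response.to.have.status(200);
});

const body = pm.response.json();
pm.test('Response contains a token', () => {
  pm.expect(body).to.have.property('token');
});

// Store the token so {{authToken}} resolves in downstream requests.
pm.collectionVariables.set('authToken', body.token);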

Deliverable

Export the refactored collection and verify that a teammate can understand how to run: (1) a smoke subset (list + get), and (2) the full CRUD workflow, using only folder names, request names, and descriptions.

Now answer the exercise about the content:

When building a multi-step CRUD workflow in Postman (create → read → update → delete), which approach best makes dependencies explicit and prevents confusing failures in downstream requests?

Numbered steps clarify execution order. Capturing IDs into variables enables later requests to reuse them safely. Guard tests fail fast when required variables are missing, and cleanup helps keep environments stable.

Next chapter

Variables and Data Reuse in Postman: Dynamic Values Without Rework
