What migrations solve (and why Alembic)
As your API evolves, your database schema must evolve with it: new columns, renamed fields, new indexes, tighter constraints, and occasional rollbacks. Manually editing tables is risky because it is hard to reproduce across environments (local, staging, production) and hard to review. Alembic is the migration tool used with SQLAlchemy/SQLModel projects to apply schema changes in a controlled, versioned, reversible way.
Alembic works by generating and running migration scripts. Each script contains two functions: upgrade() (apply change) and downgrade() (revert change). Your team can review these scripts in code review, and your deployment pipeline can apply them deterministically.
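For orientation, here is a minimal sketch of what such a script looks like (the revision identifiers and the table are placeholders, not output from a real project):

from alembic import op
import sqlalchemy as sa

# revision identifiers, generated by Alembic (placeholder values here)
revision = "3f2a1b4c5d6e"
down_revision = None


def upgrade():
    # apply the change
    op.create_table(
        "users",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("email", sa.String(length=255), nullable=False),
    )


def downgrade():
    # revert the change
    op.drop_table("users")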
Initialize Alembic in a FastAPI project
1) Install Alembic
pip install alembic

If you already use SQLAlchemy/SQLModel, Alembic integrates directly with your metadata.
2) Create the Alembic folder
alembic init alembic

This creates:
- alembic.ini (configuration)
- alembic/ (migration environment)
- alembic/versions/ (migration scripts)
- alembic/env.py (wires your app metadata to Alembic)
Configure the database URL (keep environments consistent)
Hardcoding the database URL in alembic.ini is a common source of drift between environments. Prefer reading from environment variables and using the same settings mechanism your app uses.
Option A: Set URL in alembic.ini (simple, less flexible)
Edit alembic.ini:
[alembic]
sqlalchemy.url = postgresql+psycopg://user:pass@localhost:5432/app_db

Option B (recommended): Inject URL in alembic/env.py
In alembic/env.py, override the URL at runtime:
import os

from alembic import context
from sqlalchemy import engine_from_config, pool

config = context.config

db_url = os.environ.get("DATABASE_URL")
if not db_url:
    raise RuntimeError("DATABASE_URL is not set")
config.set_main_option("sqlalchemy.url", db_url)

# ... inside run_migrations_online()
connectable = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix="sqlalchemy.",
    poolclass=pool.NullPool,
)

Practical guidance:
- Use one variable name (for example DATABASE_URL) across local/staging/prod.
- Pin the same Alembic/SQLAlchemy versions across environments to avoid autogenerate diffs.
- Ensure your CI uses the same migration commands you use locally.
Connect Alembic to your models (target_metadata)
Alembic can autogenerate migrations by comparing the current database schema to your model metadata. For that, env.py must import your model metadata and set target_metadata.
SQLAlchemy example
If you have a declarative base like Base:
# alembic/env.py
from myapp.db.base import Base  # where Base = declarative_base()

target_metadata = Base.metadata

SQLModel example
If you use SQLModel:
# alembic/env.py
from sqlmodel import SQLModel

target_metadata = SQLModel.metadata

Important: ensure all model modules are imported somewhere so their tables are registered in metadata. A common pattern is a myapp/db/models/__init__.py that imports all models, and then env.py imports that package.
# alembic/env.py
import myapp.db.models  # ensures models are imported

target_metadata = SQLModel.metadata

Generate the initial migration from models
With target_metadata configured and your database URL set, create the first migration:
alembic revision --autogenerate -m "initial schema"This creates a new file in alembic/versions/. Review it carefully before applying (autogenerate is helpful, not infallible).
Apply the migration
alembic upgrade head

To see the current revision in the database:
alembic current

To list migration history:
alembic history --verbose

Practice: upgrading and downgrading safely
Upgrade to a specific revision
alembic upgrade <revision_id>

Downgrade one step
alembic downgrade -1

Downgrade to a specific revision
alembic downgrade <revision_id>

Operational tip: downgrades are easiest when migrations are additive and reversible. Destructive changes (dropping columns/tables) require extra care and sometimes cannot be fully reversed without data loss.
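One common convention (illustrative, not required by Alembic) is to make irreversibility explicit instead of writing a downgrade that silently loses data:

def downgrade():
    # the upgrade dropped data, so an automatic reverse is not safe
    raise NotImplementedError("Irreversible migration: restore from a backup instead")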
Common schema changes and how to migrate them
Add a new nullable column (safe, common)
Suppose you add a new field to a table (for example, users.display_name) and you want it nullable initially.
Model change (illustrative):
display_name: str | None = Field(default=None, max_length=80)

Create migration:
alembic revision --autogenerate -m "add display_name to users"Review generated operations (typical output):
def upgrade():
    op.add_column('users', sa.Column('display_name', sa.String(length=80), nullable=True))


def downgrade():
    op.drop_column('users', 'display_name')

Add a non-null column to an existing table (requires a plan)
Adding a NOT NULL column to a table with existing rows will fail unless you provide a default or backfill existing data. A controlled approach is a staged migration:
- Step 1: add the column as nullable
- Step 2: backfill values
- Step 3: alter column to non-null
Example migration snippet:
def upgrade():
    op.add_column('users', sa.Column('timezone', sa.String(length=64), nullable=True))
    op.execute("UPDATE users SET timezone = 'UTC' WHERE timezone IS NULL")
    op.alter_column('users', 'timezone', existing_type=sa.String(length=64), nullable=False)


def downgrade():
    op.drop_column('users', 'timezone')

Tip: prefer deterministic backfills. If you need complex logic, write a data migration with explicit SQL and test it on a copy of production data.
Create an index (performance and uniqueness)
Indexes speed up lookups and enforce uniqueness when defined as unique indexes/constraints.
Example: add an index on users.email:
def upgrade():
    op.create_index('ix_users_email', 'users', ['email'], unique=True)


def downgrade():
    op.drop_index('ix_users_email', table_name='users')

Notes:
- On PostgreSQL, you may need concurrent index creation for large tables; that requires special handling (outside a transaction) and is not the default Alembic behavior (see the sketch after these notes).
- Use consistent naming for indexes and constraints to avoid diffs across environments.
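As a rough sketch of the PostgreSQL case, assuming Alembic 1.2+ (which provides autocommit_block); verify the details against your versions before relying on it:

def upgrade():
    # CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
    # so temporarily step outside Alembic's migration transaction.
    with op.get_context().autocommit_block():
        op.create_index(
            'ix_users_email',
            'users',
            ['email'],
            unique=True,
            postgresql_concurrently=True,
        )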
Add a foreign key constraint
Foreign keys enforce referential integrity. Example: items.user_id references users.id.
def upgrade():
    op.create_foreign_key(
        'fk_items_user_id_users',
        source_table='items',
        referent_table='users',
        local_cols=['user_id'],
        remote_cols=['id'],
        ondelete='CASCADE',
    )


def downgrade():
    op.drop_constraint('fk_items_user_id_users', 'items', type_='foreignkey')

Tip: the referenced column must be unique (primary keys already are), and the referencing column (items.user_id here) usually deserves its own index, since most databases do not create one automatically (see the sketch below). Also consider how deletes should behave (RESTRICT, SET NULL, CASCADE).
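A sketch of adding that index in the same migration (the index name is illustrative):

def upgrade():
    # index the referencing column; PostgreSQL does not create one automatically
    op.create_index('ix_items_user_id', 'items', ['user_id'])


def downgrade():
    op.drop_index('ix_items_user_id', table_name='items')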
Rename a column (autogenerate may not detect intent)
Alembic autogenerate often interprets renames as “drop old column + add new column”, which can lose data. For renames, write an explicit operation.
Example (PostgreSQL):
def upgrade():
    op.alter_column('users', 'fullname', new_column_name='full_name')


def downgrade():
    op.alter_column('users', 'full_name', new_column_name='fullname')

If your database requires a different approach, use op.execute with raw SQL.
Reviewing migration scripts (what to look for)
Before running alembic upgrade, open the generated script and verify:
- Correct operations: no accidental table drops, no unintended type changes.
- Reversibility: downgrade() should be meaningful and safe where possible.
- Data safety: destructive changes are deliberate; data migrations are deterministic.
- Constraint/index names: stable naming avoids repeated diffs.
- Server defaults: if you need a database-level default, use server_default (not just Python-side defaults); see the sketch below.
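For example, a minimal sketch of adding a NOT NULL column with a database-level default (the column name is illustrative; op and sa come from the standard migration template imports):

def upgrade():
    op.add_column(
        'users',
        sa.Column(
            'is_active',
            sa.Boolean(),
            nullable=False,
            server_default=sa.text('true'),  # database-level default so existing rows satisfy NOT NULL
        ),
    )


def downgrade():
    op.drop_column('users', 'is_active')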
Helpful commands:
- alembic upgrade head --sql to preview SQL without applying
- alembic downgrade head:-1 --sql to preview rollback SQL (offline downgrades require an explicit <from>:<to> range)
Keeping local, staging, and production consistent
Recommended workflow rules
- One migration per logical change: keep scripts small and reviewable.
- Never edit applied migrations in shared environments: create a new migration to fix issues.
- Run migrations automatically in CI: apply alembic upgrade head against a test database.
- Check for drift: if a database was modified manually, Alembic history and the actual schema can diverge.
- Pin dependencies: Alembic autogenerate output can vary with SQLAlchemy versions.
Handling multiple developers (merge migrations)
If two branches create migrations from the same base revision, you can end up with multiple “heads”. Detect it with:
alembic heads

Resolve by creating a merge revision:
alembic merge -m "merge heads" <head1> <head2>This creates a migration that joins the branches so future migrations have a single linear head.
Controlled schema change + corresponding API update (end-to-end example)
Scenario: you want to add a display_name field to users, expose it in responses, and allow clients to update it. You will do this as a controlled, backward-compatible change.
Step 1: Add the column via migration
Update your model to include display_name as nullable. Generate and review:
alembic revision --autogenerate -m "add display_name to users"Confirm the migration adds a nullable column, then apply:
alembic upgrade head

Step 2: Update your Pydantic schemas (response + update input)
Add display_name to the user response schema so clients can see it, and to the update schema so clients can set it. Keep it optional to avoid breaking existing clients.
from pydantic import BaseModel, Field


class UserRead(BaseModel):
    id: int
    email: str
    display_name: str | None = None


class UserUpdate(BaseModel):
    display_name: str | None = Field(default=None, max_length=80)

Step 3: Update the persistence logic to write the new field
When handling an update, only change display_name if it is provided (partial update semantics). Example pattern:
payload = user_update.model_dump(exclude_unset=True)
if "display_name" in payload:
    db_user.display_name = payload["display_name"]

session.add(db_user)
session.commit()
session.refresh(db_user)
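Putting it together, a rough sketch of a PATCH endpoint that applies this logic (User, get_session, and the route path are assumptions about your project layout; UserRead and UserUpdate are the schemas from Step 2):

from fastapi import APIRouter, Depends, HTTPException
from sqlmodel import Session

router = APIRouter(prefix="/users", tags=["users"])


@router.patch("/{user_id}", response_model=UserRead)
def update_user(user_id: int, user_update: UserUpdate, session: Session = Depends(get_session)):
    db_user = session.get(User, user_id)  # User: the SQLModel table model (assumed)
    if db_user is None:
        raise HTTPException(status_code=404, detail="User not found")
    payload = user_update.model_dump(exclude_unset=True)
    if "display_name" in payload:
        db_user.display_name = payload["display_name"]
    session.add(db_user)
    session.commit()
    session.refresh(db_user)
    return db_user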
Step 4: Verify behavior across environments

- Local: run alembic upgrade head and run your tests.
- Staging: deploy code, run migrations, then smoke-test endpoints that read/write display_name.
- Production: run migrations as a controlled step (often before switching traffic), ensuring the code is compatible with both pre- and post-migration states if you do zero-downtime deployments.