Transactions are the “trust layer” of databases: they turn a sequence of reads and writes into a single, reliable unit of work. If you’re building applications that handle money, inventory, bookings, user state, or analytics pipelines that can’t afford duplicated or missing rows, understanding transactions is one of the most valuable database skills you can learn.
This guide focuses on how transactions actually behave—ACID properties, isolation levels, locks, MVCC, deadlocks, and practical patterns you can apply in MySQL, SQL Server, Oracle, PostgreSQL, and MongoDB. For a structured learning path, browse the courses at https://cursa.app/free-courses-information-technology-online, part of the broader catalog at https://cursa.app/free-online-information-technology-courses.
What a transaction is (and what it is not)
A transaction groups multiple operations so the database can guarantee a consistent outcome. In SQL databases, it typically begins with BEGIN (or implicit start), ends with COMMIT to make changes durable, or ROLLBACK to undo them. It is not just “multiple queries”—it’s the database promising certain correctness rules regardless of crashes or concurrent users.
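The begin/commit/rollback lifecycle can be sketched with Python's stdlib sqlite3 module standing in for any SQL engine (the accounts table and transfer function are illustrative, not from the original):

```python
import sqlite3

# isolation_level=None => autocommit mode; we manage BEGIN/COMMIT/ROLLBACK by hand.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""CREATE TABLE accounts (
    id INTEGER PRIMARY KEY,
    balance INTEGER NOT NULL CHECK (balance >= 0))""")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

def transfer(conn, src, dst, amount):
    """Debit src and credit dst as one all-or-nothing unit of work."""
    try:
        conn.execute("BEGIN")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))              # credit first...
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))              # ...then debit
        conn.execute("COMMIT")       # both changes become durable together
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK")     # neither change is applied
        return False

ok = transfer(conn, 1, 2, 30)    # succeeds: balances become 70 and 80
bad = transfer(conn, 1, 2, 500)  # debit violates the CHECK; the credit is undone too
```

The second call shows atomicity in action: the credit had already been written inside the transaction, but the failed debit rolls both statements back.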
ACID in plain language
Atomicity
All-or-nothing. If any part fails, none of the changes are applied. Think: “debit and credit must both happen, or neither happens.”
Consistency
Constraints and rules remain true after commit. Primary keys stay unique, foreign keys still point to valid rows, check constraints are satisfied, triggers enforce business rules, and so on.
Isolation
Concurrent transactions don’t corrupt each other’s work. Isolation is configurable; different levels trade correctness guarantees for throughput.
Durability
After a commit, changes survive crashes (via write-ahead logging, redo logs, and storage guarantees).
Isolation levels: what anomalies they prevent
Isolation levels define which “weird” effects are allowed when transactions overlap. The most discussed anomalies are:
- Dirty read: reading uncommitted changes from another transaction.
- Non-repeatable read: rereading the same row returns different data because another transaction committed an update.
- Phantom read: rerunning a range query returns additional (or missing) rows because another transaction inserted/deleted rows that match the predicate.

Common SQL isolation levels (names vary slightly by engine):
- READ UNCOMMITTED: allows dirty reads (rarely desirable).
- READ COMMITTED: prevents dirty reads; non-repeatable reads and phantoms may occur.
- REPEATABLE READ: prevents dirty and non-repeatable reads; phantom behavior depends on the engine.
- SERIALIZABLE: strongest; aims to make concurrency behave as if transactions ran one-by-one (highest contention).
Choosing the right level is a practical skill: strong isolation improves correctness but can reduce concurrency and increase deadlocks/timeouts.
How databases enforce isolation: locks vs MVCC
Most systems rely on a combination of:
- Locking: shared/exclusive locks (plus intent locks, key-range locks, etc.) prevent conflicting operations.
- MVCC (Multi-Version Concurrency Control): readers see a consistent snapshot while writers create new versions; reduces blocking between reads and writes.
PostgreSQL is MVCC-first, favoring non-blocking reads. MySQL (InnoDB) also uses MVCC, with important nuances around gap locks and next-key locks depending on isolation and query patterns. SQL Server supports both lock-based behavior and snapshot-based isolation modes (e.g., Read Committed Snapshot), while Oracle is known for strong read consistency via multi-versioning. MongoDB supports multi-document transactions (on replica sets and sharded clusters) but with different operational tradeoffs than classic relational systems.
To go deeper in specific engines, explore dedicated courses: https://cursa.app/free-online-courses/postgre-sql, https://cursa.app/free-online-courses/mysql, https://cursa.app/free-online-courses/sql-server, and https://cursa.app/free-online-courses/mongo-db.
Deadlocks: why they happen and how to reduce them
A deadlock occurs when two transactions each hold locks the other needs, creating a cycle. Databases detect this and abort one transaction (the “victim”), returning an error so your application can retry.
Practical ways to reduce deadlocks:
- Access tables/rows in a consistent order (e.g., always update parent then child, or always lock by ascending key).
- Keep transactions short (do validation and network calls outside the transaction when possible).
- Use appropriate indexes so updates and range scans lock fewer rows.
- Retry logic for deadlock/serialization failures with backoff.
- Avoid SELECT … FOR UPDATE on large ranges unless necessary.
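The retry-with-backoff idea can be sketched driver-agnostically. The exception class here is a stand-in for whatever your driver raises on a deadlock or serialization failure (e.g. MySQL error 1213, PostgreSQL SQLSTATE 40001 or 40P01); the flaky function simulates a transaction picked as the victim twice:

```python
import random
import time

class TransientDBError(Exception):
    """Stand-in for a driver's deadlock / serialization-failure error."""

def run_with_retry(txn_fn, attempts=5, base_delay=0.01):
    """Run a transactional function, retrying transient conflicts with
    exponential backoff plus jitter to spread the retries apart."""
    for attempt in range(attempts):
        try:
            return txn_fn()
        except TransientDBError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# A fake transaction that is chosen as the deadlock victim twice, then succeeds.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientDBError("deadlock victim, try again")
    return "committed"

result = run_with_retry(flaky_txn)
```

The jitter matters: if every victim retries after the same fixed delay, the same transactions tend to collide again.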
Deadlocks aren’t a sign your database is “bad”—they’re often a sign your concurrency is real, and your locking patterns need tightening.
Common transaction patterns you should master
1) Optimistic concurrency (version columns / compare-and-swap)
Instead of locking early, you detect conflicts at commit time by checking a version number or timestamp. If no row matches (someone else updated first), you retry.
Why it’s useful: high throughput when conflicts are rare.
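A minimal compare-and-swap sketch with a version column, again using sqlite3 as a stand-in (the docs table and save function are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, "
             "version INTEGER NOT NULL)")
conn.execute("INSERT INTO docs VALUES (1, 'draft', 1)")

def save(conn, doc_id, new_body, expected_version):
    """Compare-and-swap: apply the write only if nobody got there first."""
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version))
    return cur.rowcount == 1  # 0 rows matched => conflict; re-read and retry

first = save(conn, 1, "edit A", 1)   # wins; version is now 2
second = save(conn, 1, "edit B", 1)  # stale version 1: rejected, no lost update
```

No lock is held between read and write; the WHERE clause on the version does the conflict detection.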
2) Pessimistic concurrency (explicit locks)
You lock the row so others must wait (or fail fast). This can be safer for high-conflict hotspots, but it can reduce concurrency.
Typical tools: SELECT … FOR UPDATE (MySQL/PostgreSQL/Oracle), locking hints/transactions in SQL Server.
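SQLite has no SELECT … FOR UPDATE, but its BEGIN IMMEDIATE plays a similar role: it takes the write lock up front, so a second writer fails fast instead of deadlocking later. A sketch of that fail-fast pessimistic pattern (schema invented; timeout=0 means "don't wait for the lock"):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock.db")
a = sqlite3.connect(path, isolation_level=None, timeout=0)
b = sqlite3.connect(path, isolation_level=None, timeout=0)

a.execute("CREATE TABLE seats (id INTEGER PRIMARY KEY, taken INTEGER)")
a.execute("INSERT INTO seats VALUES (1, 0)")

a.execute("BEGIN IMMEDIATE")        # take the write lock up front
a.execute("UPDATE seats SET taken = 1 WHERE id = 1")

try:
    b.execute("BEGIN IMMEDIATE")    # second writer is told to back off
    blocked = False
except sqlite3.OperationalError:    # "database is locked"
    blocked = True

a.execute("COMMIT")                 # lock released; b may proceed now
```

In server databases the same shape appears as SELECT … FOR UPDATE NOWAIT (PostgreSQL/Oracle/MySQL 8+) when you prefer failing fast over queueing.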
3) Idempotent writes and de-duplication
For payment callbacks, background jobs, and “at least once” message delivery, you often need operations that can safely run multiple times without double effects. Patterns include unique constraints on idempotency keys and “insert-if-not-exists” approaches.
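The unique-constraint version of an idempotency key can be sketched like this (table and key names invented; sqlite3's INSERT OR IGNORE stands in for PostgreSQL's INSERT … ON CONFLICT DO NOTHING or MySQL's INSERT IGNORE):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE payments ("
             "idempotency_key TEXT PRIMARY KEY,"  # uniqueness is the dedup guard
             "amount INTEGER NOT NULL)")

def record_payment(conn, key, amount):
    """Safe to call repeatedly for one delivery: only the first insert wins."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO payments (idempotency_key, amount) VALUES (?, ?)",
        (key, amount))
    return cur.rowcount == 1  # False => duplicate delivery, nothing applied

first = record_payment(conn, "evt-123", 500)  # applied
dup = record_payment(conn, "evt-123", 500)    # redelivery ignored, no double charge
```

The database, not application memory, remembers which keys were seen, so the guard survives restarts and works across multiple workers.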
External resource: this aligns with the distributed systems guidance in the patterns catalog at https://martinfowler.com/articles/patterns-of-distributed-systems/.
4) Read-modify-write safely
The classic bug: read a value, compute a new value, write it back—while another transaction does the same, causing lost updates. Solve it with atomic updates (e.g., UPDATE ... SET count = count + 1), proper isolation, or explicit locking.
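Both shapes side by side, as a sketch (counter table invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, count INTEGER)")
conn.execute("INSERT INTO counters VALUES (1, 0)")

# Lost-update prone: two concurrent callers could both read 0 and both
# write back 1, silently losing one increment.
value = conn.execute("SELECT count FROM counters WHERE id = 1").fetchall()[0][0]
conn.execute("UPDATE counters SET count = ? WHERE id = 1", (value + 1,))

# Atomic: the database computes the new value under its own lock, so
# concurrent increments cannot overwrite each other.
conn.execute("UPDATE counters SET count = count + 1 WHERE id = 1")
```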
Transactions in MongoDB: what to know
MongoDB supports multi-document transactions, which is especially useful when you need atomic changes across multiple documents/collections. However, modeling still matters: embedding related data can reduce the need for cross-document transactions, improving performance and simplicity. In practice, MongoDB transactions can be the right tool, but they’re not a free replacement for designing good document boundaries.

How to choose an isolation level (a practical checklist)
- Start with the engine default and measure. Defaults are chosen to balance safety and speed.
- Identify correctness requirements: are phantoms acceptable in reports? Are lost updates acceptable anywhere?
- Watch contention hotspots: a few rows/tables often cause most blocking.
- Prefer targeted fixes (better queries/indexes, atomic updates, idempotency keys) over globally raising isolation.
- Use retries for expected conflicts (deadlocks, serialization failures).
Skill-building next steps (with free certificates)
If you want to turn transaction theory into practical, job-ready ability, focus on exercises that force concurrency: two sessions updating the same rows, retry loops, and measuring blocking. Then deepen your expertise per database engine:
- https://cursa.app/free-online-courses/mysql for InnoDB transactions, MVCC behavior, and locking nuances.
- https://cursa.app/free-online-courses/sql-server for locking, snapshot isolation options, and concurrency troubleshooting.
- https://cursa.app/free-online-courses/postgre-sql for MVCC, vacuum implications, and isolation semantics.
- https://cursa.app/free-online-courses/mongo-db for document modeling and transaction tradeoffs.
Once transactions feel intuitive, you’ll notice your application logic gets simpler: fewer “mystery” bugs, cleaner retry strategies, and data that stays correct even under pressure.