What seismic maps and catalogs are (and why you should use both)
Seismic maps and earthquake catalogs are complementary tools for tracking nearby earthquake activity. A catalog is a structured list of events (time, location, depth, magnitude, and often quality metrics). A map is a visualization of those events in space (and sometimes time), often layered with faults, administrative boundaries, and station locations. For hazard awareness, the goal is not to “predict” a specific earthquake, but to recognize patterns that matter for preparedness: where activity clusters, whether it is migrating, how deep it is, whether it is concentrated near critical infrastructure, and whether the catalog is complete enough to trust the pattern you think you see.
Using only a map can mislead you because symbol sizes and colors can hide uncertainty, and because the map may show only a subset of events. Using only a catalog can mislead you because it is hard to perceive spatial patterns from rows of numbers. A practical workflow is: (1) define an area and time window, (2) pull a catalog with quality fields, (3) visualize it on a map, (4) check completeness and uncertainty, (5) test alternative views (depth slices, time animation), and (6) document what you found in a way that can be repeated next month.
Key fields in an earthquake catalog you should understand
Most public catalogs include many columns. You do not need all of them, but you should know which ones control whether an event is reliable for your analysis.
Core event descriptors
- Origin time: the best estimate of when the rupture started, usually in UTC. For local tracking, convert to local time for communication, but keep UTC for analysis to avoid daylight-saving confusion.
- Latitude/longitude: the epicenter estimate. Remember this is a point estimate with uncertainty; small differences between agencies are normal.
- Depth: the depth estimate, often the least stable parameter for small events. Some catalogs fix depth when data are insufficient; treat fixed depths cautiously.
- Magnitude: the reported magnitude type and value. Catalogs may include multiple magnitude types; do not mix them without noting the type.
Quality and uncertainty fields (the “trust” indicators)
- Horizontal and vertical uncertainty (or error ellipse): indicates how well the location is constrained. Large uncertainty can smear a cluster across a wide area on a map.
- Number of phases / picks: how many P and S arrivals were used. More picks generally improve location, but station geometry matters too.
- Azimuthal gap: the largest angle around the epicenter without a station. Smaller gaps (better station coverage) usually mean better locations.
- RMS residual: how well the travel-time model fits the picks. High RMS can indicate poor picks, a mismatched velocity model, or complex paths.
- Review status: automatic vs reviewed solutions. Automatic solutions are useful for rapid awareness but may be revised.
Event identifiers and provenance
- Event ID: a unique identifier you can use to revisit the event later. Keep it in your notes.
- Agency/source: which network produced the solution. Different agencies may have different station coverage and processing methods.
Practical rule: when you see an apparent “line” of earthquakes on a map, check whether it persists after filtering to better-quality locations (e.g., smaller uncertainty, smaller azimuthal gap). Many false patterns disappear when you remove poorly constrained events.
Common seismic map types and what each is good for
Epicenter maps (plan view)
These show event locations on a basemap. They are best for identifying clusters, gaps, and proximity to communities or assets. They are weak for understanding depth structure unless depth is encoded by color and you trust the depth estimates.
Depth-coded maps
Depth is shown by color (e.g., shallow to deep). Use these to see whether activity is mostly shallow or distributed. Be cautious: if many depths are fixed or have large uncertainty, the color pattern may be an artifact.
Time-coded maps and animations
Time is encoded by color gradient or by animation. These are useful for tracking migration (e.g., a swarm moving along a corridor). They can also reveal catalog changes (e.g., a sudden increase in detections after a station upgrade).
Cross-sections
A cross-section plots depth versus distance along a chosen line. This is one of the most powerful ways to see whether a cluster is a shallow patch, a dipping plane, or a vertical column. Cross-sections require careful choice of line orientation and width.
Heatmaps and density maps
These show where earthquakes are most frequent over a period. They are good for long-term patterns but can hide recent changes. Always pair a density map with a recent-time-window epicenter map.
Step-by-step workflow: tracking nearby activity with maps and catalogs
This workflow is designed for a community safety officer, educator, or informed resident who wants a repeatable monthly or weekly check. It assumes you can access a public catalog through a web interface or download it as CSV/GeoJSON.
Step 1: Define your monitoring area and time window
Start with a clear question. Examples: “What earthquakes occurred within 50 km of town in the last 30 days?” or “Has the cluster near the reservoir changed in the last week?” Define:
- Center point: your community or asset (hospital, dam, industrial site).
- Radius: choose based on what you can act on. For local awareness, 25–100 km is common.
- Time window: 7 days for rapid changes, 30–90 days for trends, 1–5 years for baseline context.
- Magnitude threshold: set a minimum magnitude to reduce noise, but remember small events can show patterns. A practical approach is to start low, then raise the threshold to see if the pattern persists.
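The parameters above map directly onto the query fields of public catalog services. As one concrete sketch, FDSN-style event services such as the USGS one accept latitude, longitude, search radius, time window, and minimum magnitude as URL parameters; the coordinates and dates below are placeholder examples.

```python
from urllib.parse import urlencode

# Hypothetical monitoring parameters for "within 50 km of town, last 30 days".
params = {
    "format": "csv",
    "latitude": 41.0,           # town center (example coordinates)
    "longitude": 28.9,
    "maxradiuskm": 50,
    "starttime": "2026-01-01",  # ISO dates, interpreted as UTC
    "endtime": "2026-01-31",
    "minmagnitude": 1.0,        # start low, raise later to test pattern stability
}

# URL pattern used by FDSN-style event services (here, the USGS endpoint).
base = "https://earthquake.usgs.gov/fdsnws/event/1/query"
query_url = base + "?" + urlencode(params)
```

Saving this parameter dictionary alongside the downloaded file is the simplest way to make next month's query identical to this month's.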
Step 2: Download the catalog with quality fields
When exporting, include not only time, lat, lon, depth, magnitude, but also uncertainty, azimuthal gap, number of phases, RMS, and review status if available. If the interface offers multiple output formats, CSV is easiest for spreadsheets; GeoJSON is convenient for GIS mapping.
Practical tip: save the raw file with a date in the filename (e.g., catalog_50km_2026-01-11.csv). Catalogs can be revised; keeping snapshots helps you understand changes.
Step 3: Clean and filter for a “reliable view”
Before interpreting patterns, create at least two versions of the dataset:
- All events: to see what the network is detecting.
- Quality-filtered events: to see what patterns remain when you keep only better-constrained locations.
Example filtering logic (adjust to your region and catalog):
- Remove events with missing depth or magnitude if your analysis needs them.
- Keep events with azimuthal gap below a chosen threshold (e.g., < 180°) to reduce poorly constrained locations.
- Keep events with horizontal uncertainty below a chosen threshold (e.g., < 5 km) for local clustering analysis.
- Optionally separate reviewed solutions from automatic solutions for critical decisions.
If you are using a spreadsheet, create filterable columns and a “keep/remove” flag. If you are using a script, store the filter parameters in the script so you can reproduce the same view later.
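The filtering logic above can be sketched as a small script. The field names (`mag`, `depth`, `gap`, `herr`) are assumptions; match them to your catalog's actual column headers. The thresholds are the example values from the text.

```python
# Minimal "reliable view" quality filter over a list of catalog rows.
MAX_GAP_DEG = 180.0   # azimuthal gap threshold (example from the text)
MAX_HERR_KM = 5.0     # horizontal uncertainty threshold (example from the text)

# Toy events standing in for parsed CSV rows.
events = [
    {"id": "ev1", "mag": 2.1, "depth": 8.0,  "gap": 95.0,  "herr": 1.2},
    {"id": "ev2", "mag": 1.4, "depth": None, "gap": 240.0, "herr": 7.5},
    {"id": "ev3", "mag": 3.0, "depth": 12.0, "gap": 150.0, "herr": 3.8},
]

def keep(ev):
    """Apply the rules above: complete fields, small gap, small location error."""
    if ev["depth"] is None or ev["mag"] is None:
        return False
    return ev["gap"] < MAX_GAP_DEG and ev["herr"] < MAX_HERR_KM

filtered = [ev for ev in events if keep(ev)]  # ev2 fails the depth and gap checks
```

Keeping both `events` and `filtered` around mirrors the two dataset versions recommended above: what the network detects versus what you trust for pattern reading.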
Step 4: Map the events and choose symbology that does not mislead
On a map, symbol choices can create false impressions. Use these practices:
- Magnitude scaling: scale symbol size by magnitude, but cap the maximum size so one event does not hide others.
- Depth color: use a perceptually uniform color ramp and include a legend. Avoid red-green ramps if your audience may include color-vision deficiencies.
- Transparency: set some transparency so dense clusters remain readable.
- Basemap restraint: choose a simple basemap; overly detailed satellite imagery can distract from patterns.
Overlay relevant reference layers if available: major roads, towns, critical facilities, and (if provided by a reputable source) mapped faults. Use these layers for context, not for definitive statements about which fault moved.
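The size-capping rule can be sketched as a small helper. The scaling constants are arbitrary illustrative choices; the computed areas would feed a plotting library's symbol-size argument.

```python
# Map magnitude to symbol area with a hard cap, so a single large event
# cannot dominate the map and hide its neighbors.
BASE_AREA = 6.0    # area units for a magnitude-1 event (illustrative)
MAX_AREA = 300.0   # cap on symbol area (illustrative)

def symbol_area(mag):
    # Quadratic growth keeps small events visible while still
    # distinguishing moderate from large ones; the cap limits the largest.
    return min(BASE_AREA * mag ** 2, MAX_AREA)

sizes = [symbol_area(m) for m in (1.0, 2.5, 4.0, 8.0)]
```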
Step 5: Add time as a dimension (simple and advanced options)
Time is often the key to recognizing whether you are seeing a stable background pattern or a changing situation.
- Simple approach: make separate maps for different windows (last 7 days, last 30 days, last 365 days) using the same symbology.
- Intermediate approach: color events by time (older to newer) to see migration.
- Advanced approach: animate events through time. Even a basic frame-by-frame animation can reveal whether activity is spreading, jumping, or staying fixed.
Practical example: If a cluster appears “new,” check whether it is truly new by mapping the last 5 years at a higher magnitude threshold. If the area has long had occasional events, the “new” cluster may simply reflect improved detection of smaller earthquakes.
Step 6: Build cross-sections to understand depth structure
Cross-sections help answer questions like: “Is this activity shallow and localized?” or “Does it form a dipping plane?” To make a cross-section:
- Choose a line that cuts across the cluster’s long axis.
- Define a corridor width (e.g., 5–20 km) to include events near the line.
- Plot depth versus distance along the line, with magnitude as symbol size.
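The projection behind a cross-section can be sketched as follows. This uses a flat-earth (equirectangular) approximation, which is adequate for corridors a few tens of kilometers long; the line endpoints and corridor width are hypothetical inputs you choose.

```python
import math

KM_PER_DEG = 111.32  # km per degree of latitude (approximate)

def to_xy(lat, lon, ref_lat, ref_lon):
    """Local km coordinates relative to a reference point (flat-earth approx.)."""
    x = (lon - ref_lon) * KM_PER_DEG * math.cos(math.radians(ref_lat))
    y = (lat - ref_lat) * KM_PER_DEG
    return x, y

def project(lat, lon, a, b, half_width_km):
    """Along-line distance (km) if the event lies inside the corridor, else None.

    a and b are (lat, lon) endpoints of the cross-section line.
    """
    bx, by = to_xy(b[0], b[1], a[0], a[1])
    ex, ey = to_xy(lat, lon, a[0], a[1])
    length = math.hypot(bx, by)
    ux, uy = bx / length, by / length   # unit vector along the section line
    along = ex * ux + ey * uy           # distance along the line from endpoint a
    offset = abs(-ex * uy + ey * ux)    # perpendicular distance from the line
    return along if offset <= half_width_km else None
```

Plotting each event's `along` value against its depth, with magnitude as symbol size, gives the depth-versus-distance section described above.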
Interpretation tips:
- A “vertical column” can indicate location uncertainty or a near-vertical structure; check uncertainties.
- A tight band at a single depth can indicate fixed depths in the catalog; verify whether depths are constrained.
- A dipping alignment is more credible if it persists after quality filtering and if station coverage is good.
Step 7: Check catalog completeness before comparing rates
Apparent changes in earthquake rate can be caused by changes in detection capability, not changes in the Earth. Catalog completeness refers to the smallest magnitude above which you can assume most events are being detected in your area and time period.
Practical checks you can do without advanced statistics:
- Magnitude-frequency plot: make a histogram of magnitudes for your area and window. If counts rise toward smaller magnitudes and then suddenly drop off, the drop-off often marks the detection limit. Compare this plot across time windows.
- Station changes: if the catalog notes network changes, be cautious comparing small-event rates before and after upgrades.
- Day/night bias: cultural noise can reduce detection during daytime in some areas; if your catalog includes very small events, check whether detections cluster at night.
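The magnitude-frequency check can be sketched in a few lines: bin the magnitudes and look for the bin where counts peak before dropping off toward smaller events. Taking the peak bin as a rough detection limit is a common quick heuristic (sometimes called maximum curvature); treat the result as indicative only, and compare it across time windows.

```python
from collections import Counter

# Toy magnitude list standing in for one area/time window of the catalog.
mags = [0.8, 0.9, 1.0, 1.0, 1.1, 1.1, 1.1, 1.2, 1.2, 1.2, 1.2,
        1.3, 1.3, 1.4, 1.5, 1.7, 1.9, 2.2, 2.6, 3.1]

# Bin to 0.1 magnitude units and find the bin with the most events.
bins = Counter(round(m, 1) for m in mags)
mc_estimate = max(bins, key=bins.get)  # rough completeness magnitude
```

If `mc_estimate` differs between two periods, restrict rate comparisons to magnitudes above the larger of the two values.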
When communicating to others, avoid statements like “earthquakes doubled” unless you have checked that the completeness threshold is stable across the periods you are comparing.
Step 8: Identify clusters, swarms, and aftershock-like sequences carefully
Catalogs often show bursts of activity. To describe them responsibly, focus on observable features rather than labels.
- Cluster: events concentrated in space over a time window.
- Burst: a short-term increase in event rate.
- Migration: the centroid of activity shifts over time.
Practical approach: compute simple summaries for each day or week: number of events, maximum magnitude, median depth, and the centroid location. Plot these through time. A swarm-like pattern often shows many similar-sized events without a single dominant event; an aftershock-like pattern often shows a dominant event followed by many smaller ones. However, do not over-interpret without additional analysis; your goal is awareness and documentation.
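The daily summaries described above can be sketched with the standard library alone; the field names are assumptions, and a plain mean of latitude/longitude is an acceptable centroid over a small area.

```python
from statistics import median

# Toy events standing in for filtered catalog rows.
events = [
    {"day": "2026-01-10", "mag": 1.2, "depth": 7.0, "lat": 40.01, "lon": 30.02},
    {"day": "2026-01-10", "mag": 2.0, "depth": 9.0, "lat": 40.03, "lon": 30.00},
    {"day": "2026-01-11", "mag": 1.5, "depth": 6.0, "lat": 40.02, "lon": 30.05},
]

def daily_summary(events):
    """Per-day count, maximum magnitude, median depth, and centroid."""
    out = {}
    for day in sorted({ev["day"] for ev in events}):
        rows = [ev for ev in events if ev["day"] == day]
        out[day] = {
            "count": len(rows),
            "max_mag": max(ev["mag"] for ev in rows),
            "median_depth": median(ev["depth"] for ev in rows),
            "centroid": (sum(ev["lat"] for ev in rows) / len(rows),
                         sum(ev["lon"] for ev in rows) / len(rows)),
        }
    return out

summaries = daily_summary(events)
```

Plotting these summaries through time is what lets you describe a burst or migration in observable terms rather than labels.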
Hands-on examples you can replicate
Example 1: Weekly “nearby activity” bulletin for a community
Objective: produce a one-page internal update for local emergency management.
- Query: within 50 km of the town center, last 7 days, all magnitudes.
- Filter: remove events with missing magnitude; create a second map with horizontal uncertainty < 5 km.
- Outputs: (1) map with magnitude-scaled symbols and depth colors, (2) table of the 5 largest events with time (local), magnitude, depth, distance and direction from town, (3) a small plot of daily counts.
Interpretation checklist:
- Are events concentrated in a known cluster area or newly distributed?
- Do the largest events align with the cluster or occur elsewhere?
- Do quality-filtered events show the same pattern?
Example 2: Tracking a suspected migrating cluster along a corridor
Objective: determine whether activity is moving toward a populated area.
- Query: within a rectangular box that covers the corridor, last 90 days, magnitude above a low threshold that the catalog reliably detects.
- Visualization: time-colored map plus a time series of centroid location projected along the corridor axis.
- Cross-section: one cross-section along the corridor and one perpendicular to it.
Practical interpretation: If the “front” of activity appears to move, verify it is not caused by location uncertainty by repeating the analysis with stricter quality filters. If the migration persists, document the rate of movement (e.g., km per week) as an observational metric, not as a forecast.
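Documenting the rate of movement can be as simple as a least-squares slope through weekly centroid positions already projected onto the corridor axis. The numbers below are invented for illustration; the slope is a descriptive metric, not a forecast.

```python
# Weekly centroid positions along the corridor (km from one end).
weeks = [0, 1, 2, 3]             # week index
along_km = [2.0, 3.1, 4.0, 5.2]  # projected centroid position per week

# Ordinary least-squares slope: km of apparent migration per week.
n = len(weeks)
mean_w = sum(weeks) / n
mean_d = sum(along_km) / n
slope = (sum((w - mean_w) * (d - mean_d) for w, d in zip(weeks, along_km))
         / sum((w - mean_w) ** 2 for w in weeks))
```

Recompute the slope after applying stricter quality filters; if it changes substantially, the apparent migration may be a location-uncertainty artifact.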
Example 3: Comparing activity near two critical facilities
Objective: prioritize outreach or inspection planning by understanding where activity is concentrated.
- Define two circles (e.g., 20 km radius) around Facility A and Facility B.
- Pull catalogs for the same time window (e.g., last 365 days) and apply the same magnitude threshold and quality filters.
- Compare: number of events above threshold, maximum magnitude, median depth, and distance of the closest event to each facility.
Important: this comparison is about observed activity, not structural vulnerability. Keep the scope narrow: “Facility A had more nearby recorded events above Mx in the last year than Facility B,” with notes about completeness and uncertainty.
Common pitfalls and how to avoid them
Pitfall: treating the epicenter as exact
Even good catalogs have location uncertainty. When assessing proximity to a town or facility, compute distance with uncertainty in mind. A practical method is to classify proximity in bands (e.g., 0–10 km, 10–25 km, 25–50 km) and avoid over-precise statements like “3.2 km away” unless uncertainty is small.
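The banding approach can be sketched with a great-circle distance and a lookup, using the example bands from the text:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula, mean Earth radius)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def proximity_band(dist_km):
    """Classify distance into the bands suggested above."""
    for upper, label in ((10, "0-10 km"), (25, "10-25 km"), (50, "25-50 km")):
        if dist_km < upper:
            return label
    return "beyond 50 km"

# Example: an event ~11 km due north of a facility lands in the second band.
d = haversine_km(40.00, 30.00, 40.10, 30.00)
band = proximity_band(d)
```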
Pitfall: mixing catalogs or magnitude types without noting it
If you switch sources mid-analysis, you may introduce artificial changes. If you must combine sources, keep a “source” column and compare overlapping periods to see systematic differences. If multiple magnitude types exist, choose one consistently for rate comparisons.
Pitfall: interpreting a detection change as a seismic change
A sudden increase in small events can occur when new stations are added or processing improves. Check for metadata notes, and compare only magnitudes above a stable completeness threshold when discussing rate changes.
Pitfall: ignoring depth uncertainty
Depth patterns are tempting to interpret, but depth can be poorly constrained. If the catalog provides depth uncertainty, use it. If many events share identical depths (e.g., exactly 10 km), suspect fixed-depth solutions and treat depth-coded maps cautiously.
Simple tools and reproducible documentation
Whether you use a web map, a spreadsheet, GIS software, or a script, aim for reproducibility. Keep a small “methods log” each time you check activity:
- Query parameters: area definition, time window, magnitude threshold.
- Catalog source and download time.
- Filters applied (with numeric thresholds).
- Map symbology choices (size scaling, depth colors).
- Notes on completeness and any network changes you suspect.
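A methods log can be as lightweight as a small JSON record written once per check. The field names below simply mirror the checklist above and are otherwise a free choice.

```python
import json
from datetime import datetime, timezone

# One log entry per monitoring check; field names are illustrative.
log = {
    "checked_at_utc": datetime.now(timezone.utc).isoformat(),
    "query": {"center": [40.0, 30.0], "radius_km": 50, "window_days": 30,
              "min_mag": 1.0},
    "source": "example catalog service",
    "filters": {"max_gap_deg": 180, "max_herr_km": 5},
    "symbology": {"size_by": "magnitude (capped)", "color_by": "depth"},
    "notes": "No known network changes this period.",
}

log_json = json.dumps(log, indent=2)  # append to a dated file alongside the CSV
```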
This log turns casual checking into a consistent monitoring practice. It also helps you explain to others why your interpretation is cautious and what would change your assessment (e.g., “If the quality-filtered cluster expands toward town over multiple weeks, we will increase communication and preparedness reminders”).
Optional: a lightweight scripting pattern for repeatable analysis
If you are comfortable with basic scripting, you can automate the workflow: download catalog, filter, compute summaries, and generate plots. The key is not the programming language but the structure: parameters at the top, functions for filtering and plotting, and saved outputs with timestamps.
# Pseudocode outline for a repeatable weekly check
parameters:
    center_lat, center_lon, radius_km, start_time, end_time,
    min_mag, max_gap, max_herr
steps:
    1) download_catalog(params)
    2) save_raw_snapshot()
    3) filtered = filter_quality(catalog, max_gap, max_herr, min_mag)
    4) summaries = compute_daily_counts(filtered)
    5) make_map(filtered, size_by='mag', color_by='depth')
    6) export_table_top_events(filtered, n=10)
    7) write_methods_log(params, filters, source)

Even if you never run code, thinking in this structured way improves your manual process: you will know which choices you made, and you can repeat them consistently.