Why “turning light into numbers” matters
Astronomy is an observational science: you cannot touch most objects you study, but you can measure the light they send you. “Turning light into numbers” means converting what a detector records—photons arriving over time and across wavelengths—into calibrated quantities you can compare, model, and combine with other data. The goal is to move from a picture or a squiggly spectrum to measurements with units, uncertainties, and a clear chain of assumptions.
In practice, this conversion has three recurring parts: (1) a detector produces raw digital values (counts), (2) you correct those values for instrumental and environmental effects, and (3) you map corrected counts into physical or standardized units (flux, magnitude, wavelength, radial velocity, etc.). This chapter focuses on the measurement pipeline itself: what the numbers mean, how to calibrate them, and how to keep track of uncertainty so your results remain interpretable.
What a detector actually measures
From photons to electrons to counts
Modern astronomical detectors (CCD/CMOS for optical, HgCdTe arrays for near-IR, bolometers for far-IR, etc.) convert incoming photons into an electrical signal. For many imaging detectors, each pixel accumulates photoelectrons during an exposure. The camera electronics then convert those electrons into a digital number (DN), often called “counts” or “ADU” (analog-to-digital units).
A simplified relationship is:
counts = (electrons / gain) + bias_offset
where gain is in electrons per count (e−/ADU). The bias offset is an electronic baseline added so values don’t go negative. Even if no light hits the detector, you still measure nonzero counts because of bias and other noise sources.
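A minimal sketch of this relation in code; the gain and bias values here are invented for illustration, not real instrument specifications:

```python
# Illustrative counts <-> electrons conversion. gain and bias_offset
# are hypothetical values, not real camera specs.
gain = 2.0          # electrons per count (e-/ADU)
bias_offset = 500   # electronic baseline, in counts

def electrons_to_counts(electrons):
    """Digital number the camera reports for a given number of photoelectrons."""
    return electrons / gain + bias_offset

def counts_to_electrons(counts):
    """Invert the relation: remove the bias, then scale by the gain."""
    return (counts - bias_offset) * gain

# 2000 photoelectrons -> 1500 counts, and back again.
counts = electrons_to_counts(2000)    # 1500.0
electrons = counts_to_electrons(counts)  # 2000.0
```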
Signal, background, and noise
In a real observation, the counts in a pixel (or in an aperture around a star) include multiple contributions:
- Source signal: photons from the object of interest.
- Sky/background: airglow, scattered moonlight, zodiacal light, thermal background (especially in IR), and diffuse astrophysical emission.
- Instrumental offsets: bias, dark current, amplifier glow, pattern noise.
- Noise: photon (Poisson) noise from source and background, read noise from electronics, and additional systematics (flat-field errors, nonlinearity, cosmic rays).
A key point: the uncertainty is not an afterthought. It is part of the measurement. If you can’t estimate uncertainty, you can’t reliably compare to models or other datasets.
Core calibration frames and what they correct
Calibration is the process of removing detector- and optics-related signatures so that remaining variations correspond to real differences in incoming light. The specific details vary by instrument, but the conceptual roles are consistent.
Bias (or zero) correction
A bias frame measures the electronic offset added during readout. It is typically a zero-second exposure (or the shortest possible) with the shutter closed. The bias can have a 2D structure across the detector and can vary by amplifier.
Bias correction is usually:
image_bias_corrected = raw_image - master_bias
where master_bias is a combined (median/mean) bias frame made from many bias exposures to reduce noise.
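The combination and subtraction can be sketched as follows; the tiny synthetic frames and noise levels are invented stand-ins for real detector images:

```python
import numpy as np

# Build a master bias by median-combining many bias frames (robust to outliers).
# The frames here are small synthetic arrays, not real data.
rng = np.random.default_rng(0)
bias_frames = [500.0 + rng.normal(0.0, 5.0, size=(4, 4)) for _ in range(11)]

master_bias = np.median(bias_frames, axis=0)   # per-pixel median across the stack

# Pretend science frame: bias level plus a uniform 1200-count signal.
raw_image = 500.0 + np.full((4, 4), 1200.0)
image_bias_corrected = raw_image - master_bias  # close to 1200 everywhere
```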
Dark correction
Dark current is thermally generated electrons accumulating during exposure. A dark frame is taken with the shutter closed for the same exposure time (and ideally same temperature) as the science image.
If the dark includes bias, you either subtract bias first or build a master dark that is already bias-corrected. A common approach:
master_dark = combine(dark_frames - master_bias)
image_dark_corrected = image_bias_corrected - master_dark
In many cooled optical CCD systems, dark current is small; in other regimes (warm detectors, long exposures, IR), it can be significant.
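A sketch of dark correction, including the common case where the dark exposure time differs from the science exposure and the dark must be scaled; all frame values and exposure times are hypothetical:

```python
import numpy as np

# Master dark from bias-subtracted dark frames; scale by exposure time
# when the dark and science exposures differ. Numbers are illustrative.
master_bias = np.full((4, 4), 500.0)
dark_frames = [master_bias + 6.0 for _ in range(5)]   # 6 counts of dark in 60 s

master_dark = np.median([d - master_bias for d in dark_frames], axis=0)  # 60 s dark

# Science frame taken with a 120 s exposure: scale the dark-current signal.
dark_scaled = master_dark * (120.0 / 60.0)
image_bias_corrected = np.full((4, 4), 1000.0)
image_dark_corrected = image_bias_corrected - dark_scaled   # 1000 - 12 = 988
```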
Flat-field correction
Even if uniform light hits the detector, pixels respond differently (pixel-to-pixel sensitivity) and the optics introduce vignetting and dust shadows. A flat field measures the relative response across the detector for a given filter/bandpass.
After bias/dark correction, you normalize the flat so its average is 1, then divide:
master_flat = normalize(combine(flat_frames - master_bias - master_dark_scaled))
image_flat_corrected = image_dark_corrected / master_flat
Flat fields are filter-dependent. Using the wrong flat can imprint artificial gradients or alter photometry.
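The normalize-then-divide step can be sketched like this; the 30% sensitivity dip in one pixel is a synthetic example of pixel-to-pixel response variation:

```python
import numpy as np

# Flat-field correction: normalize the combined flat so the typical
# response is 1, then divide. The sensitivity dip is synthetic.
flat = np.ones((4, 4))
flat[1, 2] = 0.7                      # one less-sensitive pixel
master_flat = flat / np.median(flat)  # normalize to a median of 1.0

image = np.full((4, 4), 700.0)        # uniform illumination on the sky
image[1, 2] *= 0.7                    # same dip imprinted on the science frame
image_flat_corrected = image / master_flat   # dip removed: ~700 everywhere
```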
Cosmic ray and artifact handling
High-energy particles can create bright pixels or streaks. Common strategies include: (1) taking multiple exposures and combining with a median, (2) applying a cosmic-ray rejection algorithm, or (3) masking affected pixels. The important measurement principle is to avoid letting rare artifacts bias the flux you extract.
A practical step-by-step: from raw image to calibrated image
This workflow is a general template for imaging data. The exact file formats and tools vary, but the logic is universal.
Step 1: Organize and inspect
- Group files by type: science, bias, dark, flat (per filter), and any calibration lamp frames if applicable.
- Check metadata: exposure time, filter, detector temperature, gain/readout mode, date/time.
- Visually inspect a few frames to catch saturation, tracking issues, clouds, or severe gradients.
Step 2: Build master calibration frames
- Master bias: combine many bias frames using a median (robust to outliers).
- Master dark: subtract master bias from each dark, then combine; if you have multiple exposure times, build a dark-current rate (e−/s) model.
- Master flat: subtract bias (and dark if needed), combine, then normalize to mean or median of 1.
Step 3: Calibrate each science frame
- Subtract master bias.
- Subtract (or scale and subtract) master dark.
- Divide by master flat.
- Optionally correct known detector effects: nonlinearity, bad pixels, fringing (common in red optical), persistence (some IR detectors).
Step 4: Track uncertainty (recommended)
Create or propagate an uncertainty map. A simplified per-pixel variance model after calibration might include:
variance ≈ (signal_electrons + background_electrons + dark_electrons) + read_noise^2
Then convert back to the units of your calibrated image if needed. Even if you don’t carry a full variance image, you should estimate uncertainties for extracted measurements (photometry, centroid positions, etc.).
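A per-pixel variance map following this model might look like the sketch below; all input levels are synthetic, where a real pipeline would derive them from the calibrated data and detector specifications:

```python
import numpy as np

# Simplified per-pixel variance model, in electrons:
# variance = (signal + background + dark) electrons + read_noise^2.
# All inputs are synthetic, for illustration only.
signal_e = np.full((3, 3), 900.0)      # source electrons per pixel
background_e = np.full((3, 3), 100.0)  # sky electrons per pixel
dark_e = np.full((3, 3), 4.0)          # dark electrons per pixel
read_noise = 6.0                       # read noise, electrons RMS

variance = signal_e + background_e + dark_e + read_noise**2
sigma = np.sqrt(variance)              # 1-sigma uncertainty map, in electrons
```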
Photometry: measuring brightness from images
Photometry turns a calibrated image into a brightness measurement for an object. The central idea is to sum the object’s light while subtracting the local background, then convert the result into a standardized scale.
Aperture photometry: the basic method
In aperture photometry, you choose a circular aperture around a star (or other compact source) and sum the pixel values inside it. You estimate background from an annulus around the aperture and subtract it.
Let:
- S = sum of pixel values in the aperture (in counts)
- B = average background per pixel (in counts/pixel), estimated from the annulus
- N = number of pixels in the aperture
Then the net source counts are:
F = S - N * B
This F is your measured flux in instrumental units (counts). If you know the exposure time t, you often work with the count rate F/t.
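A bare-bones implementation of this idea on a synthetic image; the geometry is simplified (the "star" is a single bright pixel) and all values are invented:

```python
import numpy as np

# Aperture photometry sketch: sum counts inside a circular aperture,
# subtract N times the median background from an annulus.
def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)
    S = image[in_aperture].sum()        # total counts in the aperture
    B = np.median(image[in_annulus])    # robust background per pixel
    N = in_aperture.sum()               # number of aperture pixels
    return S - N * B                    # net source counts F

img = np.full((41, 41), 900.0)          # flat sky of 900 counts/pixel
img[20, 20] += 48000.0                  # a "star" in one pixel, for simplicity
F = aperture_photometry(img, 20, 20, r_ap=5, r_in=10, r_out=15)   # ~48000
```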
Step-by-step aperture photometry workflow
Step 1: Choose an aperture size
- Measure or estimate the point spread function (PSF) width (often described by FWHM).
- Pick an aperture radius that captures most of the light (commonly 1–2× FWHM for good signal-to-noise, larger for total flux if background is low).
Step 2: Choose a background annulus
- Set an inner radius beyond the star’s wings (e.g., 3× FWHM).
- Set an outer radius wide enough to sample background but not so wide that it includes gradients or other sources.
- Use a robust statistic (median) to reduce contamination from faint stars.
Step 3: Compute net flux and uncertainty
A common approximate uncertainty for aperture photometry (in electrons) is:
sigma_F ≈ sqrt( F_e + N * (B_e + D_e) + N * RN^2 )
where F_e is source electrons, B_e is background electrons per pixel, D_e is dark electrons per pixel, and RN is read noise in electrons. If you work in counts, convert using the gain.
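Plugging illustrative numbers into this formula (all values below are hypothetical) gives both the uncertainty and the signal-to-noise ratio:

```python
import math

# Aperture-photometry uncertainty, in electrons, following the formula
# in the text. All numbers below are hypothetical.
F_e = 96000.0    # net source electrons in the aperture
B_e = 1800.0     # background electrons per pixel
D_e = 10.0       # dark electrons per pixel
N = 80           # pixels in the aperture
RN = 5.0         # read noise, electrons

sigma_F = math.sqrt(F_e + N * (B_e + D_e) + N * RN**2)  # ~493 electrons
snr = F_e / sigma_F                                     # ~195
```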
Step 4: Repeat consistently across frames
For time-series photometry (e.g., variable stars, transits), use the same aperture and annulus definitions across frames, or use an adaptive aperture tied to FWHM if seeing varies. Consistency reduces systematic errors.
Instrumental magnitudes and zero points
Astronomers often express brightness in magnitudes. An instrumental magnitude is:
m_inst = -2.5 * log10(F/t)
To compare across nights/instruments, you need a calibration to a standard system. A basic photometric calibration uses a zero point ZP:
m_cal = m_inst + ZP
The zero point is determined by observing stars with known magnitudes in the same filter and fitting the offset between known and measured instrumental magnitudes. In more careful work, you also include atmospheric extinction and color terms, but the essential measurement idea is: you need reference objects to map your counts to a standardized scale.
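A minimal zero-point fit can be done as a robust offset between catalog and instrumental magnitudes; the reference magnitudes and count rates below are invented, chosen to be mutually consistent:

```python
import math
import statistics

# Zero point as the median offset between catalog magnitudes and
# instrumental magnitudes of reference stars in the same frame.
# All magnitudes and count rates are hypothetical.
ref_catalog_mags = [12.30, 13.10, 14.05]      # known magnitudes
ref_count_rates = [7413.0, 3548.0, 1479.0]    # measured counts/s for those stars

m_inst = [-2.5 * math.log10(r) for r in ref_count_rates]
ZP = statistics.median(m - mi for m, mi in zip(ref_catalog_mags, m_inst))

# Calibrate a target star measured at 800 counts/s:
m_cal = -2.5 * math.log10(800.0) + ZP
```

A median is preferred over a mean here because a single mismatched or variable reference star then has little effect on the fitted offset.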
Spectroscopy: turning a spectrum image into wavelength and flux
Spectroscopy starts with a 2D detector image where one axis corresponds roughly to wavelength (dispersion direction) and the other corresponds to spatial position along the slit (or fiber profile). Turning that into numbers requires extracting a 1D spectrum and assigning a wavelength to each pixel.
Key steps in a spectroscopy measurement pipeline
Step 1: Calibrate the 2D frame (bias/dark/flat)
The same calibration logic applies, but flats may be taken with internal lamps and can also correct pixel response along the dispersion direction.
Step 2: Trace the spectrum
You identify where the target’s light falls on the detector as a function of wavelength. Because of optical distortions, the spectrum is often slightly curved. Tracing means determining the centerline of the spectrum across columns (or rows).
Step 3: Extract the 1D spectrum
Two common extraction approaches:
- Box extraction: sum a fixed number of pixels around the trace.
- Optimal extraction: weight pixels by the expected spatial profile to maximize signal-to-noise and reduce the impact of cosmic rays.
The output is a 1D array of flux-like values (still in counts) versus pixel index.
Step 4: Wavelength calibration
You use a calibration lamp (arc) spectrum with known emission lines. The task is to map pixel position x to wavelength λ by fitting a function:
λ(x) = a0 + a1*x + a2*x^2 + ...
Step-by-step:
- Identify several arc lines in the extracted arc spectrum.
- Match them to known wavelengths from a line list.
- Fit a polynomial (or other model) that minimizes residuals.
- Apply the mapping to the science spectrum to label each pixel with a wavelength.
A good calibration reports residuals (e.g., RMS in Å or nm). Those residuals are part of your wavelength uncertainty budget.
Step 5: Sky subtraction
For slit spectra, you estimate sky emission from regions along the slit away from the target and subtract it. For fiber spectra, sky fibers or nodding strategies are used. Poor sky subtraction can dominate errors, especially in the red/near-IR where sky lines are strong.
Step 6: Flux calibration (optional but common)
To convert counts into physical flux units (e.g., erg/s/cm²/Å), you observe a spectrophotometric standard star with known spectral energy distribution. You derive the instrument response function (sensitivity vs wavelength) and apply it to the science spectrum. Even if you do not need absolute flux, relative flux calibration helps correct the instrument’s wavelength-dependent throughput.
Astrometry: measuring positions from images
Astrometry turns pixel coordinates into sky coordinates. The measurement problem is: given a star at pixel position (x, y), what are its right ascension and declination?
Centroiding: getting precise pixel positions
Before mapping to the sky, you measure the object’s centroid in pixel space. A simple centroid uses intensity-weighted averages, but more precise methods fit a 2D Gaussian or PSF model. The achievable precision can be much smaller than a pixel if the signal-to-noise is high and the PSF is well-sampled.
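An intensity-weighted centroid can be implemented in a few lines; the synthetic star below is a Gaussian with a known off-grid center, so the recovered position can be checked:

```python
import numpy as np

# Intensity-weighted centroid: subtract the background, then average the
# pixel coordinates weighted by the remaining flux.
def centroid(image, background=0.0):
    data = np.clip(image - background, 0.0, None)   # avoid negative weights
    yy, xx = np.indices(image.shape)
    total = data.sum()
    return (xx * data).sum() / total, (yy * data).sum() / total

# Synthetic star: Gaussian (sigma = 2 px) centered at (10.3, 9.7) on sky of 50.
yy, xx = np.indices((21, 21))
star = 50.0 + 1000.0 * np.exp(-((xx - 10.3)**2 + (yy - 9.7)**2) / (2 * 2.0**2))
x_c, y_c = centroid(star, background=50.0)   # recovers ~ (10.3, 9.7)
```

Note the sub-pixel result: with good signal-to-noise and a well-sampled PSF, the centroid is far more precise than one pixel, as stated above.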
World Coordinate System (WCS) solution
A WCS solution is a model that maps (x, y) to (RA, Dec), accounting for scale, rotation, and optical distortion. In practice, you match stars in your image to a reference catalog and fit transformation parameters. Once WCS is established, you can measure separations, proper motions (with multiple epochs), and align images for stacking.
Time as a measurement axis: timestamps you can trust
Many astronomical measurements depend on accurate timing: variability studies, occultations, pulsations, and transit light curves. Turning observations into numbers includes turning “when” into a standardized timestamp.
What to record and standardize
- Exposure start and end: mid-exposure time is often the relevant timestamp for photometry.
- Time standard: UTC is common in headers, but some analyses require conversion to other standards; the key is consistency and documentation.
- Clock accuracy: computer clocks can drift; disciplined time sources reduce systematic timing errors.
Even if you don’t perform advanced time corrections, the measurement principle is to treat time like any other axis: define it, calibrate it, and quantify its uncertainty.
Data quality checks that prevent bad measurements
Saturation and nonlinearity
If pixels saturate, counts no longer scale with incoming light. Nonlinearity can occur before saturation. Practical checks:
- Inspect maximum pixel values and compare to detector full well / saturation level.
- Prefer exposure settings that keep bright sources in the linear regime.
- If nonlinearity correction is available, apply it before photometry.
Seeing, focus, and tracking
Changes in PSF shape affect aperture losses and centroid precision. Practical checks:
- Measure FWHM across frames; flag outliers.
- Check ellipticity to detect tracking drift.
- Use consistent photometric apertures or PSF photometry when crowded or variable seeing.
Background gradients and clouds
Gradients can bias background estimates; thin clouds can change throughput. Practical checks:
- Examine background maps or sample multiple sky regions.
- Use differential photometry (target relative to comparison stars) when absolute calibration is unstable.
Worked example: measuring a star’s brightness in a single calibrated image
Suppose you have a calibrated image (bias/dark/flat corrected) in a given filter with exposure time t = 60 s. You measure a star with an aperture containing N = 80 pixels. The sum inside the aperture is S = 120000 counts. The background annulus median is B = 900 counts/pixel.
Compute net counts:
F = S - N*B = 120000 - 80*900 = 120000 - 72000 = 48000 counts
Count rate:
F/t = 48000 / 60 = 800 counts/s
Instrumental magnitude:
m_inst = -2.5*log10(800)
Numerically, log10(800) ≈ 2.9031, so:
m_inst ≈ -2.5 * 2.9031 ≈ -7.26
This value is not yet a standard magnitude; it is an internal brightness scale for your setup. If you observe a reference star in the same image with known magnitude m_ref and measure its m_inst,ref, then:
ZP = m_ref - m_inst,ref
and your calibrated magnitude is m_cal = m_inst + ZP. The measurement becomes meaningful because it is tied to a reference.
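The same worked example, step by step, in code; the reference-star magnitudes used for the zero point are hypothetical:

```python
import math

# The worked photometry example, using the same numbers as the text.
t = 60.0        # exposure time (s)
N = 80          # pixels in the aperture
S = 120000.0    # summed counts in the aperture
B = 900.0       # background counts per pixel (annulus median)

F = S - N * B                      # 48000 counts
rate = F / t                       # 800 counts/s
m_inst = -2.5 * math.log10(rate)   # about -7.26

# Tie to a reference star with known magnitude (hypothetical values):
m_ref, m_inst_ref = 11.5, -9.8
ZP = m_ref - m_inst_ref            # 21.3
m_cal = m_inst + ZP                # about 14.04
```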
Worked example: measuring a spectral line wavelength
You extract a 1D spectrum and perform wavelength calibration with an arc lamp. After fitting, you obtain:
λ(x) = 500.0 + 0.10*x (nm)
If an emission line peak is at pixel x = 2300, then:
λ = 500.0 + 0.10*2300 = 730.0 nm
If your wavelength solution RMS is 0.05 nm, then your line wavelength measurement should be reported as approximately 730.0 ± 0.05 nm (plus any additional uncertainty from centroiding the line peak). This illustrates the general rule: every derived number should carry uncertainty from both the calibration and the measurement step.
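In code, applying the fitted solution and carrying the calibration RMS looks like this:

```python
# The wavelength example: apply the fitted solution and attach the
# calibration RMS as one term of the uncertainty budget.
a0, a1 = 500.0, 0.10      # fitted coefficients: lambda(x) = a0 + a1*x (nm)
rms_nm = 0.05             # RMS of the wavelength-solution residuals (nm)

x_line = 2300             # pixel position of the line peak
lam = a0 + a1 * x_line    # 730.0 nm

# Report: 730.0 +/- 0.05 nm (calibration term only; add the line-centroiding
# uncertainty in quadrature if it is known).
```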
Keeping a measurement log: reproducibility as part of the data
Turning light into numbers is not only about computation; it is about traceability. A minimal measurement log for an observing run or dataset should include:
- Instrument configuration: detector mode, gain, read noise, binning, filters/grating.
- Calibration frames used: counts, dates, combination method.
- Processing steps: bias/dark/flat order, cosmic ray handling, bad pixel masks.
- Photometry settings: aperture radius, annulus radii, background statistic.
- Wavelength calibration details (for spectra): line list, fit order, RMS residuals.
- Quality flags: saturation, clouds, tracking issues, frames rejected.
This information turns a set of numbers into a defensible measurement that others (or you in the future) can reproduce and interpret.