1) Writing Tests with _test.go, testing.T, Table-Driven Tests, and Subtests
In Go, tests live next to the code they verify. A test file ends with _test.go and belongs to the same package. The Go tool discovers and runs tests automatically.
Minimal test structure
Create a file like mathutil_test.go and write functions named TestXxx that accept *testing.T. Use t.Fatalf to stop immediately on a failure, and t.Errorf to report a failure but continue the test.
package mathutil
import "testing"
func TestAdd(t *testing.T) {
got := Add(2, 3)
want := 5
if got != want {
t.Fatalf("Add(2,3)=%d; want %d", got, want)
}
}
Table-driven tests (the default Go style)
Table-driven tests scale well: you list inputs/expected outputs once, then loop. This makes it easy to add cases and reduces duplicated code.
func TestClamp(t *testing.T) {
tests := []struct {
name string
x, lo, hi int
want int
}{
{name: "below", x: -1, lo: 0, hi: 10, want: 0},
{name: "inside", x: 7, lo: 0, hi: 10, want: 7},
{name: "above", x: 99, lo: 0, hi: 10, want: 10},
}
for _, tc := range tests {
got := Clamp(tc.x, tc.lo, tc.hi)
if got != tc.want {
t.Errorf("%s: Clamp(%d,%d,%d)=%d; want %d", tc.name, tc.x, tc.lo, tc.hi, got, tc.want)
}
}
}
Subtests with t.Run
Subtests give each case its own name in output, allow selective running, and integrate nicely with table-driven tests.
func TestClamp_Subtests(t *testing.T) {
tests := []struct {
name string
x, lo, hi int
want int
}{
{name: "below", x: -1, lo: 0, hi: 10, want: 0},
{name: "inside", x: 7, lo: 0, hi: 10, want: 7},
{name: "above", x: 99, lo: 0, hi: 10, want: 10},
}
for _, tc := range tests {
tc := tc // capture range variable
t.Run(tc.name, func(t *testing.T) {
got := Clamp(tc.x, tc.lo, tc.hi)
if got != tc.want {
t.Fatalf("Clamp(%d,%d,%d)=%d; want %d", tc.x, tc.lo, tc.hi, got, tc.want)
}
})
}
}
The tc := tc line prevents a common bug when closures capture the loop variable, especially when subtests run in parallel. (Since Go 1.22 the loop variable is scoped per iteration, so the copy is no longer strictly required, but it is harmless and keeps older toolchains safe.) Even if you are not using t.Parallel() today, this pattern keeps the test safe if you add parallelism later.
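To illustrate, here is a minimal sketch of the same table run in parallel; each subtest calls t.Parallel() and runs concurrently with its siblings:
func TestClamp_Parallel(t *testing.T) {
    tests := []struct {
        name      string
        x, lo, hi int
        want      int
    }{
        {name: "below", x: -1, lo: 0, hi: 10, want: 0},
        {name: "inside", x: 7, lo: 0, hi: 10, want: 7},
    }
    for _, tc := range tests {
        tc := tc // each parallel subtest needs its own copy (pre-Go 1.22)
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel() // this subtest now runs concurrently with its siblings
            if got := Clamp(tc.x, tc.lo, tc.hi); got != tc.want {
                t.Fatalf("Clamp(%d,%d,%d)=%d; want %d", tc.x, tc.lo, tc.hi, got, tc.want)
            }
        })
    }
}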
Testing error cases
When a function returns an error, test both the error and the result. Prefer checking that an error is present or absent, and then validate the output.
func TestParsePort(t *testing.T) {
tests := []struct {
name string
in string
want int
wantErr bool
}{
{name: "ok", in: "8080", want: 8080, wantErr: false},
{name: "bad", in: "abc", want: 0, wantErr: true},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
got, err := ParsePort(tc.in)
if (err != nil) != tc.wantErr {
t.Fatalf("ParsePort(%q) err=%v; wantErr=%v", tc.in, err, tc.wantErr)
}
if err == nil && got != tc.want {
t.Fatalf("ParsePort(%q)=%d; want %d", tc.in, got, tc.want)
}
})
}
}
2) Running Tests with go test, Filtering, and Interpreting Failures
go test compiles your package and runs all tests in it. You can run tests for one package, all packages, or a subset of tests.
Core commands
- Run tests in the current package: go test
- Run tests in a specific package: go test ./path/to/pkg
- Run tests in all packages in the module: go test ./...
- Show individual test names and logs: go test -v
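With -v, a passing run lists each test as it executes. The output looks roughly like this (timings will vary):
=== RUN   TestAdd
--- PASS: TestAdd (0.00s)
=== RUN   TestClamp
--- PASS: TestClamp (0.00s)
PASS
ok      example.com/project/mathutil    0.004s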
Filtering which tests run
Use -run with a regular expression to run only matching tests and subtests. This is especially useful with table-driven subtests.
- Run one test: go test -run '^TestClamp$'
- Run subtests by name: go test -run 'TestClamp_Subtests/below'
- Run all tests that contain a word: go test -run 'Clamp'
Re-running and debugging failures
When a test fails, the output includes the failing test name, file, and line number, plus your failure message. Make your failure messages include inputs and outputs so you can reproduce quickly.
--- FAIL: TestClamp_Subtests (0.00s)
--- FAIL: TestClamp_Subtests/above (0.00s)
clamp_test.go:23: Clamp(99,0,10)=99; want 10
FAIL
exit status 1
FAIL example.com/project/mathutil 0.003s
Useful flags while iterating:
- go test -count=1 disables test result caching so you always re-run.
- go test -failfast stops after the first failure (good for quick feedback).
- t.Log and t.Logf print only with -v or on failure, helping you inspect intermediate values without cluttering normal runs.
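As a quick sketch using the ParsePort function from earlier, t.Logf lets you record intermediate values that stay hidden on a quiet passing run:
func TestParsePort_Logging(t *testing.T) {
    got, err := ParsePort("8080")
    t.Logf("ParsePort(%q) -> got=%d err=%v", "8080", got, err) // shown only with -v or on failure
    if err != nil || got != 8080 {
        t.Fatalf("ParsePort(%q)=%d, err=%v; want 8080, <nil>", "8080", got, err)
    }
}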
3) Benchmarks with testing.B: Structure, Pitfalls, and Reading Output
Benchmarks measure performance of small units of code. They are not a replacement for profiling, but they are excellent for comparing two approaches and preventing regressions during refactoring.
Basic benchmark structure
Benchmarks live in _test.go files and are named BenchmarkXxx. The benchmark function receives *testing.B and runs the code under test inside a loop from 0 to b.N.
func BenchmarkJoinPlus(b *testing.B) {
parts := []string{"a", "b", "c", "d", "e"}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = JoinPlus(parts)
}
}
Run benchmarks with:
- go test -bench . (run all benchmarks in the package)
- go test -bench '^BenchmarkJoin' (filter by name)
- go test -bench . -benchmem (include allocation stats)
Avoiding common benchmark mistakes
- Including setup work in the timed loop: Build large inputs outside the loop. If you must do per-iteration setup, consider b.StopTimer()/b.StartTimer() around it (see the sketch after this list).
- Letting the compiler optimize away the work: If the result is unused, the compiler may remove the computation. Assign results to a package-level variable (a “sink”).
var sink string
func BenchmarkJoinBuilder(b *testing.B) {
parts := []string{"a", "b", "c", "d", "e"}
b.ResetTimer()
for i := 0; i < b.N; i++ {
sink = JoinBuilder(parts)
}
}
- Comparing benchmarks across noisy environments: Run on the same machine, close heavy background tasks, and prefer multiple runs. For quick stability, you can use -count to repeat: go test -bench . -count=5.
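Here is a minimal sketch of per-iteration setup kept out of the measurement, assuming a hypothetical MutateInPlace function that modifies its input (so each iteration needs a fresh copy). Toggling the timer every iteration adds overhead of its own, so hoist setup out of the loop entirely whenever you can:
func BenchmarkMutateInPlace(b *testing.B) {
    base := []string{"e", "d", "c", "b", "a"}
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        in := append([]string(nil), base...) // fresh copy, excluded from timing
        b.StartTimer()
        MutateInPlace(in) // only this call is measured
    }
}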
Reading benchmark output
A typical line looks like this:
BenchmarkJoinBuilder-8   2000000   620 ns/op   112 B/op   2 allocs/op
- -8 is the GOMAXPROCS value used during the run.
- 620 ns/op is the average time per operation.
- B/op and allocs/op show memory cost per operation (requires -benchmem).
When comparing two approaches, look for meaningful differences (often >5–10% depending on noise) and consider both time and allocations. A faster approach that allocates much more may hurt overall performance under load.
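For a more rigorous comparison, the benchstat tool (from golang.org/x/perf) summarizes repeated runs and reports whether a difference is statistically significant:
go install golang.org/x/perf/cmd/benchstat@latest
go test -bench . -count=10 > old.txt
# make your change, then:
go test -bench . -count=10 > new.txt
benchstat old.txt new.txt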
4) Coverage with go test -cover: Using It as a Guide
Coverage tells you which statements were executed by tests. It helps you find untested paths, but it does not prove correctness. High coverage with weak assertions can still miss bugs.
Quick coverage check
- Package coverage summary: go test -cover
- All packages: go test ./... -cover
Generating and viewing a coverage report
To see exactly which lines were covered, generate a profile and open it as HTML.
go test -coverprofile=cover.out ./...
go tool cover -html=cover.out
Use the report to spot:
- Branches that never run (error paths, boundary conditions).
- Code that is hard to test (a sign you may want to refactor for better separation).
- Missing cases in table-driven tests.
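For a quick per-function summary in the terminal, the same profile also works with -func, which prints each function's statement coverage plus a total:
go tool cover -func=cover.out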
Use coverage as a map: it points to areas worth testing. The real goal is confidence: tests that fail when behavior breaks, and pass when refactoring preserves behavior.
5) The Tooling Loop: gofmt, go vet, and Keeping Packages Tidy
Go’s tooling is designed to be part of your everyday loop: format, check, test. This keeps code consistent and catches common mistakes early.
Format with gofmt
gofmt is the standard formatter. Run it on save in your editor, or run it across a package/module.
- Format a file: gofmt -w file.go
- Format everything under the module: gofmt -w .
Consistent formatting reduces diff noise and makes code review easier.
Static checks with go vet
go vet finds suspicious constructs that compile but are likely wrong (printf-style formatting issues, unreachable code patterns, incorrect struct tags, and more).
- Vet current package: go vet
- Vet all packages: go vet ./...
Run go vet before committing or as part of CI. Treat new warnings as problems to fix, not as background noise.
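As a concrete illustration, the following compiles cleanly but is caught by vet's printf check (a deliberately broken sketch):
package demo

import "fmt"

// PrintCount compiles, but the %s verb does not match the int argument;
// go vet's printf analyzer reports the mismatch.
func PrintCount(n int) {
    fmt.Printf("count: %s\n", n)
}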
Keep packages tidy
- Run tests frequently: go test ./... should be a habit before refactors and before pushing changes.
- Remove dead code and unused exports: smaller packages are easier to test and reason about.
- Keep test helpers local: helper functions in tests should call t.Helper() so failures point to the caller.
func mustParsePort(t *testing.T, s string) int {
t.Helper()
p, err := ParsePort(s)
if err != nil {
t.Fatalf("ParsePort(%q) unexpected err: %v", s, err)
}
return p
}
Lab: Test and Benchmark a Slice/Map Utility
In this lab you will write table-driven tests and a small benchmark comparing two implementations. The goal is to practice the full loop: write tests, run them selectively, check coverage, and benchmark two approaches.
Step 1: Create a small utility package
Create a folder collectionutil with a file dedupe.go:
package collectionutil

import "sort"
// DedupeStable returns a new slice with duplicates removed, preserving
// the first occurrence order.
func DedupeStable(in []string) []string {
seen := make(map[string]struct{}, len(in))
out := make([]string, 0, len(in))
for _, s := range in {
if _, ok := seen[s]; ok {
continue
}
seen[s] = struct{}{}
out = append(out, s)
}
return out
}
// DedupeSort returns a new slice with duplicates removed by sorting a copy.
// Note: output order is sorted, not original order.
func DedupeSort(in []string) []string {
if len(in) == 0 {
return nil
}
cp := append([]string(nil), in...)
sort.Strings(cp)
out := cp[:0]
var prev string
for i, s := range cp {
if i == 0 || s != prev {
out = append(out, s)
prev = s
}
}
return out
}
This file intentionally includes two different approaches: a map-based stable dedupe and a sort-based dedupe. The second requires importing sort, hence the import at the top of the file.
Step 2: Write table-driven tests with subtests
Create dedupe_test.go in the same folder. Focus on correctness and edge cases: empty input, already unique, all duplicates, mixed duplicates, and inputs with repeated patterns.
package collectionutil
import (
"reflect"
"testing"
)
func TestDedupeStable(t *testing.T) {
tests := []struct {
name string
in []string
want []string
}{
{name: "nil", in: nil, want: nil},
{name: "empty", in: []string{}, want: []string{}},
{name: "unique", in: []string{"a", "b"}, want: []string{"a", "b"}},
{name: "dupes", in: []string{"a", "b", "a", "c", "b"}, want: []string{"a", "b", "c"}},
{name: "all same", in: []string{"x", "x", "x"}, want: []string{"x"}},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
got := DedupeStable(tc.in)
if !reflect.DeepEqual(got, tc.want) {
t.Fatalf("DedupeStable(%v)=%v; want %v", tc.in, got, tc.want)
}
})
}
}
For DedupeSort, the expected output is sorted unique values. Write separate tests so you don’t accidentally assume stable order.
func TestDedupeSort(t *testing.T) {
tests := []struct {
name string
in []string
want []string
}{
{name: "nil", in: nil, want: nil},
{name: "empty", in: []string{}, want: nil},
{name: "mixed", in: []string{"b", "a", "b", "c"}, want: []string{"a", "b", "c"}},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
got := DedupeSort(tc.in)
if !reflect.DeepEqual(got, tc.want) {
t.Fatalf("DedupeSort(%v)=%v; want %v", tc.in, got, tc.want)
}
})
}
}
Step 3: Run tests and filter a single case
- Run package tests: go test ./collectionutil
- Verbose output: go test -v ./collectionutil
- Run one subtest: go test -run 'TestDedupeStable/dupes' ./collectionutil
- Disable caching while iterating: go test -count=1 ./collectionutil
Step 4: Check coverage and inspect gaps
- Coverage summary: go test -cover ./collectionutil
- HTML report: go test -coverprofile=cover.out ./collectionutil then go tool cover -html=cover.out
If you see uncovered lines, add a test that triggers them (for example, confirm behavior for nil vs empty input, or add a case that exercises the first-element branch in DedupeSort).
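For example, a one-line addition to the TestDedupeSort table covers the single-element edge case:
{name: "single", in: []string{"x"}, want: []string{"x"}},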
Step 5: Add benchmarks comparing the two approaches
Add benchmarks to dedupe_test.go. Use realistic input sizes and ensure the result is used (sink) to prevent compiler elimination.
var sinkSlice []string
// makeInput cycles through `unique` distinct one-rune strings. For
// unique > 26 the runes run past 'z', but they remain distinct values,
// which is all the benchmark needs.
func makeInput(n int, unique int) []string {
out := make([]string, n)
for i := 0; i < n; i++ {
out[i] = string(rune('a' + (i % unique)))
}
return out
}
func BenchmarkDedupeStable_1e4_100(b *testing.B) {
in := makeInput(10_000, 100)
b.ResetTimer()
for i := 0; i < b.N; i++ {
sinkSlice = DedupeStable(in)
}
}
func BenchmarkDedupeSort_1e4_100(b *testing.B) {
in := makeInput(10_000, 100)
b.ResetTimer()
for i := 0; i < b.N; i++ {
sinkSlice = DedupeSort(in)
}
}
Remember to import any packages your benchmark helpers use, and keep extra setup (randomized input generation, for example) outside the timed loop. Run:
- go test -bench 'BenchmarkDedupe' -benchmem ./collectionutil
- go test -bench . -benchmem -count=5 ./collectionutil
Compare ns/op and allocations. The map-based approach preserves input order by construction and has predictable allocations, while the sort-based approach may trade CPU (the O(n log n) sort) for fewer allocations, depending on input shape. Use the numbers to guide decisions, and keep the benchmark as a safety net for future refactors.