1) Package layout for small projects
A production-style Go program is easiest to maintain when the main package is a thin entrypoint and most logic lives in reusable packages. This keeps responsibilities clear: main wires dependencies (config, logger, I/O) and calls into your application code.
Keep main thin
In practice, main should do only a few things: parse configuration, set up logging, open input/output resources, call a function like app.Run(...), and handle the exit code.
A simple, scalable layout
For a small CLI, a common layout is:
- cmd/linecount/: the entrypoint for the CLI binary
- internal/app/: orchestration (high-level workflow)
- internal/linecount/: domain logic (counting lines, worker pool)
- internal/config/: configuration parsing/validation
The internal/ directory prevents other modules from importing your packages accidentally, which is helpful for “application code” that is not meant to be a public library.
Avoid cyclic imports
Cyclic imports happen when package A imports B and B imports A (directly or indirectly). To avoid them, keep dependencies flowing one way: cmd depends on internal/app, which depends on lower-level packages like internal/linecount. If two packages need to share types, consider moving shared types into a third package (or redesign so one package owns the types and the other depends on it).
2) Configuration: flags, environment variables, defaults, validation
Configuration should be explicit, validated early, and easy to override. For CLIs, a practical approach is: defaults in code, override via flags, optionally override via environment variables, then validate once and fail fast.
Define a config struct
Keep configuration in a single struct so it can be passed around cleanly.
package config
type Config struct {
Workers int
OutPath string
Verbose bool
}
Parse flags with flag
Use the standard library flag package for predictable CLI behavior. Parse once in main or a dedicated config package.
package config
import (
"flag"
"fmt"
"os"
"strconv"
)
func Parse() (Config, error) {
cfg := Config{
Workers: 4,
OutPath: "",
Verbose: false,
}
flag.IntVar(&cfg.Workers, "workers", cfg.Workers, "number of concurrent workers")
flag.StringVar(&cfg.OutPath, "out", cfg.OutPath, "output file path (default: stdout)")
flag.BoolVar(&cfg.Verbose, "v", cfg.Verbose, "verbose logging")
flag.Parse()
// Optional environment overrides
if v := os.Getenv("LINECOUNT_WORKERS"); v != "" {
n, err := strconv.Atoi(v)
if err != nil {
return Config{}, fmt.Errorf("invalid LINECOUNT_WORKERS: %w", err)
}
cfg.Workers = n
}
if v := os.Getenv("LINECOUNT_OUT"); v != "" {
cfg.OutPath = v
}
if err := Validate(cfg); err != nil {
return Config{}, err
}
return cfg, nil
}
func Validate(cfg Config) error {
if cfg.Workers <= 0 {
return fmt.Errorf("workers must be > 0")
}
return nil
}
Validate early and centrally
Validation belongs near parsing so the rest of the program can assume config is correct. This reduces defensive checks scattered throughout the code.
3) Logging: consistent messages and separating output from logs
For many production CLIs, the standard library log package is enough. The key is consistency: include context, keep messages structured, and avoid mixing logs with the program’s “real output.”
Use a dedicated logger
Direct logs to stderr so that stdout can be reserved for machine-readable output (or the main results). This makes piping and scripting reliable.
package app
import (
"io"
"log"
"os"
)
type Logger interface {
Printf(format string, v ...any)
}
func NewLogger(verbose bool) *log.Logger {
// Always log to stderr; adjust flags based on verbosity.
flags := 0
if verbose {
flags = log.Ldate | log.Ltime | log.Lmicroseconds
}
return log.New(os.Stderr, "linecount ", flags)
}
func NewResultWriter(out io.Writer) io.Writer {
return out
}
Log with context
Prefer messages that include what happened and which input caused it. For example: "open file", path, err. Even with Printf, you can keep a consistent pattern.
logger.Printf("open file path=%q err=%v", path, err)
Don't log normal results
Results should go to stdout (or an output file). Logs should go to stderr. This separation is one of the simplest "production" improvements you can make to a CLI: for example, linecount *.go > counts.tsv 2> errors.log keeps clean results and diagnostics in separate files.
4) Input/output patterns with io abstractions
To keep code testable and reusable, avoid hard-coding file paths and global I/O inside core logic. Instead, accept io.Reader and io.Writer in functions that process data. Then main (or orchestration code) can decide whether the reader comes from a file, stdin, network, or a test buffer.
Reading files safely
Open files in orchestration code and pass the file handle (which implements io.Reader) into your logic. Always close files you open.
f, err := os.Open(path)
if err != nil {
return 0, fmt.Errorf("open %s: %w", path, err)
}
defer f.Close()
Write results via io.Writer
Instead of printing directly, write to an injected writer. This makes it easy to test output and to support -out files.
func writeResult(w io.Writer, path string, lines int) error {
_, err := fmt.Fprintf(w, "%s\t%d\n", path, lines)
return err
}
Keep core logic independent of the filesystem
Core logic should operate on streams. The filesystem is an implementation detail handled at the edges.
5) Capstone: a concurrent file line counter CLI
You will implement a small CLI tool named linecount that counts lines in one or more files concurrently. It demonstrates: a clear entrypoint, reusable packages, configuration parsing and validation, consistent logging, I/O abstractions, error wrapping, and tests.
Project structure
linecount/
go.mod
cmd/
linecount/
main.go
internal/
app/
run.go
config/
config.go
linecount/
counter.go
workerpool.go
counter_test.go
Step 1: core line counting logic (testable)
Implement a function that counts lines from an io.Reader. This is the heart of the program and is easy to unit test.
package linecount
import (
"bufio"
"io"
)
func CountLines(r io.Reader) (int, error) {
// Note: bufio.Scanner limits line length (64 KiB by default); call
// s.Buffer(...) to raise the limit if inputs may contain very long lines.
s := bufio.NewScanner(r)
lines := 0
for s.Scan() {
lines++
}
if err := s.Err(); err != nil {
return 0, err
}
return lines, nil
}
Step 2: worker pool to process file paths concurrently
Create a small worker pool that takes file paths, opens each file, counts lines, and emits results. Keep it focused: orchestration and concurrency here, counting in CountLines.
package linecount
import (
"fmt"
"os"
)
type Result struct {
Path string
Lines int
Err error
}
func ProcessFiles(paths []string, workers int) []Result {
jobs := make(chan string)
results := make(chan Result)
for i := 0; i < workers; i++ {
go func() {
for path := range jobs {
res := Result{Path: path}
f, err := os.Open(path)
if err != nil {
res.Err = fmt.Errorf("open %s: %w", path, err)
results <- res
continue
}
lines, err := CountLines(f)
_ = f.Close()
if err != nil {
res.Err = fmt.Errorf("count lines %s: %w", path, err)
results <- res
continue
}
res.Lines = lines
results <- res
}
}()
}
go func() {
defer close(jobs)
for _, p := range paths {
jobs <- p
}
}()
out := make([]Result, 0, len(paths))
for i := 0; i < len(paths); i++ {
out = append(out, <-results)
}
return out
}
This design returns results in completion order. If you need stable ordering, you can sort later or include an index in the job.
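The first option, sorting afterwards, can be sketched with sort.Slice (Result is repeated here so the example runs on its own):

```go
package main

import (
	"fmt"
	"sort"
)

// Result mirrors the type from internal/linecount.
type Result struct {
	Path  string
	Lines int
	Err   error
}

// sortByPath restores deterministic output after concurrent processing,
// which otherwise yields results in completion order.
func sortByPath(rs []Result) {
	sort.Slice(rs, func(i, j int) bool { return rs[i].Path < rs[j].Path })
}

func main() {
	// Completion order happened to be b.txt first; sort fixes that.
	rs := []Result{
		{Path: "b.txt", Lines: 2},
		{Path: "a.txt", Lines: 5},
	}
	sortByPath(rs)
	for _, r := range rs {
		fmt.Printf("%s\t%d\n", r.Path, r.Lines)
	}
	// prints:
	// a.txt	5
	// b.txt	2
}
```

The alternative, carrying an index through the job and writing into a preallocated slice, avoids the final sort at the cost of slightly more plumbing.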
Step 3: app orchestration (wiring config, logging, output)
The app layer coordinates everything: reads arguments, runs the worker pool, writes results, and decides the exit code. It depends on lower-level packages, not the other way around.
package app
import (
"fmt"
"io"
"os"
"example.com/linecount/internal/config"
"example.com/linecount/internal/linecount"
)
type Logger interface {
Printf(format string, v ...any)
}
func Run(cfg config.Config, args []string, out io.Writer, logger Logger) int {
paths := args
if len(paths) == 0 {
logger.Printf("no input files")
fmt.Fprintln(os.Stderr, "usage: linecount [--workers N] [--out FILE] [-v] file1 file2 ...")
return 2
}
logger.Printf("start files=%d workers=%d", len(paths), cfg.Workers)
results := linecount.ProcessFiles(paths, cfg.Workers)
exit := 0
for _, r := range results {
if r.Err != nil {
exit = 1
logger.Printf("error path=%q err=%v", r.Path, r.Err)
continue
}
if _, err := fmt.Fprintf(out, "%s\t%d\n", r.Path, r.Lines); err != nil {
logger.Printf("write output err=%v", err)
return 1
}
}
logger.Printf("done")
return exit
}
Step 4: entrypoint in cmd/linecount/main.go
The entrypoint parses config, sets up output destination, constructs the logger, and calls app.Run. This keeps main small and predictable.
package main
import (
"flag"
"io"
"log"
"os"
"example.com/linecount/internal/app"
"example.com/linecount/internal/config"
)
func main() {
os.Exit(run())
}
// run exists so that deferred cleanup (like closing the output file)
// executes before the process exits; deferred calls do not run on os.Exit.
func run() int {
cfg, err := config.Parse()
if err != nil {
log.New(os.Stderr, "linecount ", 0).Printf("config error: %v", err)
return 2
}
logger := app.NewLogger(cfg.Verbose)
var out io.Writer = os.Stdout
if cfg.OutPath != "" {
f, err := os.Create(cfg.OutPath)
if err != nil {
logger.Printf("open out path=%q err=%v", cfg.OutPath, err)
return 2
}
defer f.Close()
out = f
}
// flag.Args() holds the positional arguments left after flag parsing;
// os.Args[1:] would still include the flags themselves.
return app.Run(cfg, flag.Args(), out, logger)
}
Step 5: tests for the core logic
Because CountLines accepts an io.Reader, you can test it without touching the filesystem.
package linecount
import (
"strings"
"testing"
)
func TestCountLines(t *testing.T) {
in := "a\n\nxyz\n"
n, err := CountLines(strings.NewReader(in))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if n != 3 {
t.Fatalf("got %d, want %d", n, 3)
}
}
Operational checklist (what makes it “production-style”)
- Clear boundaries: main wires dependencies; packages do the work.
- Config is validated early: invalid values fail fast with a clear error.
- Logs go to stderr: results remain clean on stdout or in the -out file.
- I/O is abstracted: core logic uses io.Reader/io.Writer for testability.
- Errors are wrapped: failures include context like file path and operation.
- Concurrency is controlled: worker count is configurable and bounded.