stdin, stdout, and stderr: how commands move data
Most command-line tools follow a simple model: they read input, produce output, and report problems. Understanding these three streams lets you connect tools into workflows.
- stdin (standard input): where a command reads data from (often your keyboard, or data coming from another command).
- stdout (standard output): normal results a command prints (usually to your terminal).
- stderr (standard error): error messages and diagnostics (also usually to your terminal, but separate from stdout).
This separation matters because you can redirect stdout to a file while still seeing errors on the screen, or capture errors into a log without mixing them with normal output.
Quick demonstration
ls /etc /does-not-exist
You will typically see a directory listing (stdout) and an error message for the missing path (stderr). They appear together on screen, but they are different streams.
Redirection operators: send input/output where you want
Redirection changes where stdin/stdout/stderr come from or go to. The shell performs redirection before running the command.
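One consequence of this ordering (using the > operator described just below): the shell opens, and with > truncates, the target file before the command ever runs, so the file exists even if the command fails. A quick way to see this (the file name is just a placeholder):
false > created-anyway.txt
ls -l created-anyway.txt
false does nothing and exits with an error, yet created-anyway.txt has already been created (empty) by the shell.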
Redirect stdout to a file: >
> writes stdout to a file, replacing the file if it exists.
ls /etc > etc-list.txt
After this, etc-list.txt contains the output. Nothing is printed to the terminal (unless there are errors).
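Because > replaces the file, running two redirections to the same file keeps only the last one. A quick sketch (the file name is a placeholder; cat prints a file's contents):
echo "first" > notes.txt
echo "second" > notes.txt
cat notes.txt
Only "second" remains, because the second > truncated the file before writing.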
Append stdout to a file: >>
>> appends stdout to the end of a file.
date >> run-report.txt
This is useful for building a running report over time.
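For example, running the command twice leaves two timestamped lines in the file rather than one:
date >> run-report.txt
date >> run-report.txt
cat run-report.txt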
Redirect stdin from a file: <
< makes a command read from a file instead of the keyboard.
sort < etc-list.txt
This prints the sorted contents of etc-list.txt to stdout.
Redirect stderr: 2>
File descriptor 2 refers to stderr. Use 2> to capture errors.
ls /etc /does-not-exist 2> errors.log
Now the error message goes into errors.log, while the normal listing still prints to the terminal.
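By analogy with >>, using 2>> appends stderr instead of overwriting, which suits a running error log:
ls /etc /does-not-exist 2>> errors.log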
Redirect both stdout and stderr: &>
&> sends both stdout and stderr to the same file.
ls /etc /does-not-exist &> all-output.log
Use this when you want a complete record of what happened, including errors.
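Note that &> is a bash convenience. In POSIX sh (and in scripts that must run under /bin/sh), the portable equivalent redirects stdout to the file and then duplicates stderr onto stdout:
ls /etc /does-not-exist > all-output.log 2>&1
The order matters here: 2>&1 must come after the > redirection, or stderr will still go to the terminal.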
Combine streams carefully
When you mix stdout and stderr into one file, it can be harder to process the results later (because errors are mixed with data). A common pattern is to keep them separate: data to one file, errors to another.
somecommand > results.txt 2> errors.txt
Pipelines with |: connect commands into workflows
A pipeline uses | to send stdout of the left command into stdin of the right command. This lets you build multi-step transformations without creating temporary files.
command1 | command2 | command3
Think of each command as a small “data transformer.” The output of one becomes the input of the next.
Pipeline example
ls /etc | sort
ls produces names, sort orders them.
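Pipelines chain as many stages as you need. For example, adding head (a standard tool that prints the first lines of its input; -n 5 keeps five) shows only the first few sorted names:
ls /etc | sort | head -n 5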
Practical text tools for pipelines
The following tools are common building blocks for turning raw text into structured results.
sort: order lines
- Sort alphabetically: sort
- Sort numerically: sort -n
- Reverse order: sort -r
printf "3\n10\n2\n" | sort -n
uniq: collapse or count repeated lines
uniq only detects adjacent duplicates, so it is usually paired with sort.
- Remove adjacent duplicates: uniq
- Count occurrences: uniq -c
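To see the adjacency rule in action, run uniq on unsorted input; the two apple lines are separated by banana, so both survive:
printf "apple\nbanana\napple\n" | uniq
Sorting first makes the duplicates adjacent, which is why the counting example below pipes through sort: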
printf "apple\nbanana\napple\n" | sort | uniq -ccut: extract columns/fields
cut selects parts of each line.
- By delimiter and field: cut -d',' -f2
- By character positions: cut -c1-8
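The character-position form suits fixed-width text. A small sketch (the date here occupies the first ten characters, so cut -c1-10 extracts just 2024-06-01):
printf "2024-06-01 backup completed\n" | cut -c1-10
The delimiter form is shown next: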
printf "name,score\nAva,10\nNoah,7\n" | cut -d',' -f1tr: translate or delete characters
tr is great for simple character-level cleanup.
- Lowercase to uppercase: tr 'a-z' 'A-Z'
- Delete digits: tr -d '0-9'
- Replace spaces with newlines (tokenize): tr ' ' '\n'
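A quick tokenizing sketch, turning one line into one word per line:
printf "one two three" | tr ' ' '\n'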
printf "Hello World" | tr 'a-z' 'A-Z'tee: split output to screen and file
tee reads stdin and writes it to stdout and to a file at the same time. This is useful when you want to keep a record but still see output.
- Overwrite file: tee output.txt
- Append to file: tee -a output.txt
ls /etc | tee etc-list.txt | sort
Here, the unsorted list is saved to etc-list.txt while the sorted list continues down the pipeline.
Debugging pipelines step-by-step
When a pipeline produces unexpected results, debug it by validating each stage. The goal is to find the first command whose output is not what you expect.
Method 1: run each stage separately
Start with the first command and inspect its output, then add the next command, and so on.
# Stage 1: does this output look right?
cat data.txt
# Stage 2: add the next transformation
cat data.txt | tr 'A-Z' 'a-z'
# Stage 3: add another step
cat data.txt | tr 'A-Z' 'a-z' | sort
Method 2: use tee as a “checkpoint”
Insert tee to capture intermediate output without stopping the pipeline.
cat data.txt | tr 'A-Z' 'a-z' | tee step1.txt | sort | tee step2.txt | uniq -c
If the final counts look wrong, inspect step1.txt and step2.txt to see where the data changed in an unexpected way.
Method 3: separate errors from data
If a command might produce errors, redirect stderr to a log so it does not contaminate your data stream.
somecommand 2> pipeline-errors.log | sort | uniq -c
If the pipeline output is empty or strange, check pipeline-errors.log for clues.
Mini-lab: build a frequency list and keep an audit report
Goal: from a text file, produce a frequency list of words, save outputs to files, and append audit results to a running report. This lab uses only pipelines and redirection.
Setup
Assume you have a text file named sample.txt in your current directory. The steps below treat “words” as runs of letters, digits, and apostrophes; everything else is a separator. We will normalize case and strip punctuation.
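If you do not have a sample file handy, you can create a small one to follow along (the exact contents are arbitrary):
printf "The quick brown fox jumps over the lazy dog.\nThe dog barks; the fox doesn't care.\n" > sample.txt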
Step 1: Normalize text (lowercase, one word per line)
We will convert the text to lowercase and collapse every run of non-word characters into a single newline, producing one word per line.
cat sample.txt | tr 'A-Z' 'a-z' | tr -cs "a-z0-9'" '\n' > words.txt
- tr 'A-Z' 'a-z' normalizes case.
- tr -cs "a-z0-9'" '\n' complements the set (everything except the allowed characters) and squeezes runs into a single newline, effectively tokenizing the text into one word per line.
- > words.txt saves the word list.
Step 2: Create a frequency list (count each word)
Now sort the words and count them.
sort words.txt | uniq -c | sort -nr > frequency.txt
- uniq -c prefixes each unique word with its count.
- sort -nr sorts numerically, highest count first.
Step 3: Keep a copy while viewing results with tee
If you want to see the top results and also save them, use tee. This example saves the full frequency list and shows it on screen.
sort words.txt | uniq -c | sort -nr | tee frequency.txt
If you only want to view a subset while still saving the full file, place tee before the “viewing” step (for example, before another filter you might add later).
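For example, to save the full list while displaying only the five most frequent words (head prints the first lines of its input):
sort words.txt | uniq -c | sort -nr | tee frequency.txt | head -n 5
One caveat: on very large inputs, tee can be terminated by SIGPIPE when head exits early, truncating the saved file; for a small word list like this one, that is not a practical concern.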
Step 4: Save errors separately during processing
If your input file might be missing or unreadable, capture errors into an audit log while keeping normal output clean.
cat sample.txt 2> audit-errors.log | tr 'A-Z' 'a-z' | tr -cs "a-z0-9'" '\n' > words.txt
If sample.txt cannot be read, the error goes to audit-errors.log.
Step 5: Append audit results to a running report
Create (or append to) a report file that records what you generated and basic counts. Use >> to append.
date >> audit-report.txt
printf "Input file: sample.txt\n" >> audit-report.txt
printf "Total words: " >> audit-report.txt; wc -l < words.txt >> audit-report.txt
To count unique words, count the lines in the frequency list; this keeps the report reproducible from generated files.
printf "Unique words: " >> audit-report.txt; wc -l < frequency.txt >> audit-report.txt
Add a blank line between runs to keep the report readable.
printf "\n" >> audit-report.txt
Step 6: Debug the lab pipeline if counts look wrong
If frequency.txt looks suspicious (for example, many empty lines or strange tokens), checkpoint each stage.
cat sample.txt | tr 'A-Z' 'a-z' | tee stage1-lower.txt | tr -cs "a-z0-9'" '\n' | tee stage2-words.txt | sort | uniq -c | sort -nr > frequency.txt
- Inspect stage1-lower.txt to confirm case normalization.
- Inspect stage2-words.txt to confirm tokenization (one word per line, no punctuation noise).
- If tokenization is wrong, adjust the allowed character set in tr -cs (for example, remove the apostrophe if you do not want contractions).