Linux Command Line for Beginners: Navigate, Search, and Automate Simple Tasks

Pipes and Redirection: Building Command Line Workflows

Chapter 7

Estimated reading time: 8 minutes

stdin, stdout, and stderr: how commands move data

Most command-line tools follow a simple model: they read input, produce output, and report problems. Understanding these three streams lets you connect tools into workflows.

  • stdin (standard input): where a command reads data from (often your keyboard, or data coming from another command).
  • stdout (standard output): normal results a command prints (usually to your terminal).
  • stderr (standard error): error messages and diagnostics (also usually to your terminal, but separate from stdout).

This separation matters because you can redirect stdout to a file while still seeing errors on the screen, or capture errors into a log without mixing them with normal output.

Quick demonstration

ls /etc /does-not-exist

You will typically see a directory listing (stdout) and an error message for the missing path (stderr). They appear together on screen, but they are different streams.

Redirection operators: send input/output where you want

Redirection changes where stdin/stdout/stderr come from or go to. The shell performs redirection before running the command.

Redirect stdout to a file: >

> writes stdout to a file, replacing the file if it exists.

ls /etc > etc-list.txt

After this, etc-list.txt contains the output. Nothing is printed to the terminal (unless there are errors).
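Because > silently replaces an existing file, you may want a safety net. A minimal sketch, assuming a POSIX-style shell such as bash:

set -o noclobber        # make > refuse to overwrite an existing file
ls /etc > etc-list.txt  # now fails with an error if etc-list.txt already exists
ls /etc >| etc-list.txt # >| explicitly overrides noclobber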

Append stdout to a file: >>

>> appends stdout to the end of a file.

date >> run-report.txt

This is useful for building a running report over time.
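Running the same command twice makes the difference between > and >> visible:

date > run-report.txt    # first run: the file holds one timestamp
date >> run-report.txt   # second run: the file now holds two timestamps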

Redirect stdin from a file: <

< makes a command read from a file instead of the keyboard.

sort < etc-list.txt

This prints the sorted contents of etc-list.txt to stdout.
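Many commands, sort included, also accept a filename argument (sort etc-list.txt works the same way), so < matters most for tools that read only from stdin, such as tr (covered later in this chapter):

tr 'a-z' 'A-Z' < etc-list.txt   # tr takes no filename arguments, so < feeds it the file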

Redirect stderr: 2>

File descriptor 2 refers to stderr. Use 2> to capture errors.

ls /etc /does-not-exist 2> errors.log

Now the error message goes into errors.log, while the normal listing still prints to the terminal.

Redirect both stdout and stderr: &>

&> sends both stdout and stderr to the same file.

ls /etc /does-not-exist &> all-output.log

Use this when you want a complete record of what happened, including errors.
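Note that &> is a bash convenience rather than a POSIX feature. The portable spelling redirects stdout to the file and then duplicates stderr onto stdout; the order of the two redirections matters:

ls /etc /does-not-exist > all-output.log 2>&1   # portable equivalent of &>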

Combine streams carefully

When you mix stdout and stderr into one file, it can be harder to process the results later (because errors are mixed with data). A common pattern is to keep them separate: data to one file, errors to another.

somecommand > results.txt 2> errors.txt

Pipelines with |: connect commands into workflows

A pipeline uses | to send stdout of the left command into stdin of the right command. This lets you build multi-step transformations without creating temporary files.

command1 | command2 | command3

Think of each command as a small “data transformer.” The output of one becomes the input of the next.

Pipeline example

ls /etc | sort

ls produces names, sort orders them.
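Any tool that reads stdin can sit at the end of a pipeline. For example, wc -l (a standard line counter, also used later in this chapter) counts how many names ls printed:

ls /etc | wc -l   # number of entries in /etc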

Practical text tools for pipelines

The following tools are common building blocks for turning raw text into structured results.

sort: order lines

  • Sort alphabetically: sort
  • Sort numerically: sort -n
  • Reverse order: sort -r
printf "3\n10\n2\n" | sort -n

uniq: collapse or count repeated lines

uniq only detects adjacent duplicates, so it is usually paired with sort.

  • Remove adjacent duplicates: uniq
  • Count occurrences: uniq -c
printf "apple\nbanana\napple\n" | sort | uniq -c

cut: extract columns/fields

cut selects parts of each line.

  • By delimiter and field: cut -d',' -f2
  • By character positions: cut -c1-8
printf "name,score\nAva,10\nNoah,7\n" | cut -d',' -f1

tr: translate or delete characters

tr is great for simple character-level cleanup.

  • Lowercase to uppercase: tr 'a-z' 'A-Z'
  • Delete digits: tr -d '0-9'
  • Replace spaces with newlines (tokenize): tr ' ' '\n'
printf "Hello World" | tr 'a-z' 'A-Z'

tee: split output to screen and file

tee reads stdin and writes it to stdout and to a file at the same time. This is useful when you want to keep a record but still see output.

  • Overwrite file: tee output.txt
  • Append to file: tee -a output.txt
ls /etc | tee etc-list.txt | sort

Here, the unsorted list is saved to etc-list.txt while the sorted list continues down the pipeline.

Debugging pipelines step-by-step

When a pipeline produces unexpected results, debug it by validating each stage. The goal is to find the first command whose output is not what you expect.

Method 1: run each stage separately

Start with the first command and inspect its output, then add the next command, and so on.

# Stage 1: does this output look right?
cat data.txt
# Stage 2: add the next transformation
cat data.txt | tr 'A-Z' 'a-z'
# Stage 3: add another step
cat data.txt | tr 'A-Z' 'a-z' | sort

Method 2: use tee as a “checkpoint”

Insert tee to capture intermediate output without stopping the pipeline.

cat data.txt | tr 'A-Z' 'a-z' | tee step1.txt | sort | tee step2.txt | uniq -c

If the final counts look wrong, inspect step1.txt and step2.txt to see where the data changed in an unexpected way.

Method 3: separate errors from data

If a command might produce errors, redirect stderr to a log so it does not contaminate your data stream.

somecommand 2> pipeline-errors.log | sort | uniq -c

If the pipeline output is empty or strange, check pipeline-errors.log for clues.

Mini-lab: build a frequency list and keep an audit report

Goal: from a text file, produce a frequency list of words, save outputs to files, and append audit results to a running report. This lab uses only pipelines and redirection.

Setup

Assume you have a text file named sample.txt in your current directory. The steps below treat a “word” as any run of letters, digits, or apostrophes; we will normalize case and strip the surrounding punctuation.

Step 1: Normalize text (lowercase, one word per line)

We will convert the text to lowercase and replace every run of non-word characters with a single newline, producing one word per line with no empty lines in between.

cat sample.txt | tr 'A-Z' 'a-z' | tr -cs "a-z0-9'" '\n' > words.txt
  • tr 'A-Z' 'a-z' normalizes case.
  • tr -cs "a-z0-9'" '\n' complements the set (everything except allowed characters) and squeezes runs into a single newline, effectively tokenizing into one word per line.
  • > words.txt saves the word list.
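To see the tokenizer in isolation, feed it a fixed string; this output is deterministic:

printf "Don't panic! Don't.\n" | tr 'A-Z' 'a-z' | tr -cs "a-z0-9'" '\n'
# don't
# panic
# don't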

Step 2: Create a frequency list (count each word)

Now sort the words and count them.

sort words.txt | uniq -c | sort -nr > frequency.txt
  • uniq -c prefixes each unique word with its count.
  • sort -nr sorts numerically, highest count first.

Step 3: Keep a copy while viewing results with tee

If you want to see the top results and also save them, use tee. This example saves the full frequency list and shows it on screen.

sort words.txt | uniq -c | sort -nr | tee frequency.txt

If you only want to view a subset while still saving the full file, place tee before the “viewing” step (for example, before another filter you might add later).
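For example, assuming the standard head filter for limiting the display:

sort words.txt | uniq -c | sort -nr | tee frequency.txt | head -n 10   # save the full list, show only the top 10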

Step 4: Save errors separately during processing

If your input file might be missing or unreadable, capture errors into an audit log while keeping normal output clean.

cat sample.txt 2> audit-errors.log | tr 'A-Z' 'a-z' | tr -cs "a-z0-9'" '\n' > words.txt

If sample.txt cannot be read, the error goes to audit-errors.log.

Step 5: Append audit results to a running report

Create (or append to) a report file that records what you generated and basic counts. Use >> to append.

date >> audit-report.txt
printf "Input file: sample.txt\n" >> audit-report.txt
printf "Total words: " >> audit-report.txt; wc -l < words.txt >> audit-report.txt
printf "Unique words: " >> audit-report.txt; cut -c1-8 < /dev/null 2> /dev/null

Instead of the last line above (which is intentionally not useful), use a correct pipeline to count unique words by counting lines in the frequency list. This keeps the report reproducible from generated files.

printf "Unique words: " >> audit-report.txt; wc -l < frequency.txt >> audit-report.txt

Add a blank line between runs to keep the report readable.

printf "\n" >> audit-report.txt

Step 6: Debug the lab pipeline if counts look wrong

If frequency.txt looks suspicious (for example, many empty lines or strange tokens), checkpoint each stage.

cat sample.txt | tr 'A-Z' 'a-z' | tee stage1-lower.txt | tr -cs "a-z0-9'" '\n' | tee stage2-words.txt | sort | uniq -c | sort -nr > frequency.txt
  • Inspect stage1-lower.txt to confirm case normalization.
  • Inspect stage2-words.txt to confirm tokenization (one word per line, no punctuation noise).
  • If tokenization is wrong, adjust the allowed character set in tr -cs (for example, remove apostrophes if you do not want contractions).
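For instance, dropping the apostrophe from the allowed set splits contractions like don't into don and t:

cat sample.txt | tr 'A-Z' 'a-z' | tr -cs 'a-z0-9' '\n' | sort | uniq -c | sort -nr > frequency.txt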

Now answer the exercise about the content:

You want to save a command’s normal output to results.txt while keeping error messages in errors.txt so they don’t mix with the data. Which command pattern does this?

Answer: command > results.txt 2> errors.txt. The > operator redirects stdout (normal output) to results.txt, while 2> redirects stderr (errors) to errors.txt. This keeps data and errors separate.

Next chapter

Editing Basics in the Terminal: Nano Essentials and Quick Fixes
