Bash essentials#
Using CLI effectively#
First things first: the terminal can feel awkward to use. What can we do about this?
Each section below collects a set of tips for using the interactive bash CLI effectively.
Keyboard shortcuts#
Keyboard shortcuts: details & examples
This section was adapted from [@BashKeyboardShortcuts].
Completions
Use TAB completion for file/directory names. Type just enough characters to uniquely
identify the item.
For example, to move to a directory named sample1, type cd sam, then press TAB and
ENTER.
Moving the cursor
- Ctrl+a: Go to the beginning of the line (Home).
- Ctrl+e: Go to the end of the line (End).
- Ctrl+p: Previous command (Up arrow).
- Ctrl+n: Next command (Down arrow).
- Alt+b: Back (left) one word.
- Alt+f: Forward (right) one word.
- Ctrl+f: Forward one character.
- Ctrl+b: Backward one character.
While using man or command --help | less
- k: Scroll up one line
- j: Scroll down one line
- Ctrl+u: Page up
- Ctrl+d: Page down
- /: Begin forward search
- ?: Begin reverse search
- n/N: Find next/previous match
- q: Close the less pager
Editing
- Ctrl+l: Clear the screen, similar to the clear command.
- Alt+Del: Delete the word before the cursor.
- Alt+d: Delete the word after the cursor.
- Ctrl+d: Delete the character under the cursor.
- Ctrl+h: Delete the character before the cursor (Backspace).
- Ctrl+w: Cut the word before the cursor to the clipboard.
- Ctrl+k: Cut the line after the cursor to the clipboard.
- Ctrl+u: Cut the line before the cursor to the clipboard.
- Alt+t: Swap the current word with the previous one.
- Ctrl+t: Swap the last two characters before the cursor (fixes a typo).
- Ctrl+y: Paste the last thing to be cut (yank).
- Alt+u: Uppercase every character from the cursor to the end of the current word.
- Alt+l: Lowercase every character from the cursor to the end of the current word.
- Alt+c: Capitalize the character under the cursor and move to the end of the word.
- Alt+r: Cancel the changes and put back the line as it was in the history (revert).
- Ctrl+_: Undo.
Special keys
Ctrl+v tells the terminal not to interpret the following character, so:
- Ctrl+v TAB will display a tab character rather than attempting completion.
- Similarly, Ctrl+v ENTER will display the escape sequence for the Enter key: ^M
History
- Ctrl+r: Recall the last command including the specified character(s).
- Ctrl+p: Previous command in history (walk back).
- Ctrl+n: Next command in history (walk forward).
- Ctrl+o: Execute the command found via Ctrl+r or Ctrl+s.
- Ctrl+g: Escape from history searching mode.
Process Control
- Ctrl+c: Interrupt/kill whatever you are running (SIGINT).
- Ctrl+l: Clear the screen.
- Ctrl+s: Stop output to the screen (for long-running verbose commands). Then use PgUp/PgDn for navigation.
- Ctrl+q: Allow output to the screen (if previously stopped using the command above).
- Ctrl+d: Send an EOF marker; unless disabled by an option, this will close the current shell (EXIT).
- Ctrl+z: Send the signal SIGTSTP to the current task, which suspends it. To return to it later, enter fg 'process name'.
Configuration#
.bashrc details & examples
Every time you open a new terminal window/tab in the bash shell, the ~/.bashrc file is
read and executed.
The typical use cases for customising ~/.bashrc are:
- setting a custom command prompt
- setting various useful shopts (shell options)
- setting environment variables/aliases
- sourcing other bash files
Examples of each of these are shown below.
# Set a prompt like: [username@hostname:~/CurrentWorkingDirectory]$
export PS1='[\u@\h:\w]\$ '
# Explanation:
# \u: username
# \h: hostname
# \w: the current working directory
# \$: the character $
# all other characters are interpreted literally
# See https://ss64.com/bash/syntax-prompt.html for more examples
# Set useful shell options
shopt -s autocd # a directory name typed by itself is treated as an argument to `cd`, so you can change directory without typing `cd`
shopt -s globstar # enables the pattern '**' for recursive file/directory wildcard matching
shopt -s extglob # fancier pattern matching
# See https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html for more options
# Set environment variables/aliases
export EDITOR="nvim" # neovim
export EDITOR="code" # vscode (overwrites previous line)
# See https://ss64.com/bash/export.html for more information
alias ll="ls -l" # create new alias ll for a long list
alias cp="cp -iv" # replace default cp command with interactive/verbose cp
# See https://ss64.com/bash/alias.html for more alias info and examples
# Note that aliases cannot handle complex logic or accept positional parameters
# For that, we would need functions.
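A hypothetical function, mkcd, illustrates what aliases cannot do: it uses its positional parameter "$1" to create a directory and change into it. The name and behavior are my own example, not part of any standard setup.

```shell
# Define in ~/.bashrc (or in a file under ~/.bashrc.d/):
# create a directory (and any missing parents) and change into it
mkcd() {
    mkdir -p "$1" && cd "$1"
}

# usage: mkcd projects/new-idea
```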
# Source all bash files in ~/.bashrc.d/
# This lets you define functions in various shell files in this folder and source them at startup.
if [ -d ~/.bashrc.d ]; then
for rc in ~/.bashrc.d/*; do
if [ -f "$rc" ]; then
source "$rc"
fi
done
fi
unset rc
# See https://ss64.com/bash/source.html for more info on sourcing
Ever Wonder Why it’s Called .bashrc?
There are many files that end with the mysterious suffix rc like .bashrc, .vimrc, etc.
Why is that? It’s a holdover from ancient Unix. Its original meaning was “run commands,” but it later became
“run-control.” A run-control file is generally some kind of script or configuration file
that prepares an environment for a program to use. In the case of .bashrc for example,
it’s a script that prepares a user’s bash shell environment.
.profile details & examples
Every time you log in as a Linux user, the ~/.profile file is read and executed.
The typical use cases for customizing ~/.profile are:
- setting environment variables INDEPENDENT of bash instances
  - i.e., these variables will work in sh, zsh, and other shells
- setting environment variables once per session
  - particularly useful for PATH, since setting it in ~/.bashrc would cause it to be updated more frequently than useful
Examples of these are shown below:
# Add a directory to PATH, checking if that directory is not already in PATH first
if [[ ":$PATH:" != *":$HOME/bin:"* ]]; then
export PATH="$PATH:$HOME/bin" # Adds ~/bin to your PATH
fi
# Source all profile files in ~/.profile.d/
# This is useful for programs like npm, you can put its bashrc/path stuff in here instead.
for script in $HOME/.profile.d/*.sh ; do
if [ -r "$script" ] ; then
. "$script"
fi
done
unset script
# See https://ss64.com/bash/source.html for more info on sourcing
.inputrc details & examples
This section was adapted from [@HowToBashStartup].
The library that is used to implement a command line interface for bash is called the Readline library.
While it comes with a set of default keybindings (see the #keyboard-shortcuts section), it
is possible to modify these and other behaviors of the CLI interface by putting commands
into a .inputrc file, typically in the home directory.
The configuration options in .inputrc are particularly useful for customising the way
Tab-completion works, e.g. with the ls command.
The inputrc variable syntax is simple:
set variable value
Below is a list of variables I find particularly useful, as well as a sample .inputrc
file showing how each of them is set.
- bell-style: Controls what happens when Readline wants to ring the terminal bell. If set to 'none', Readline never rings the bell. If set to 'visible', Readline uses a visible bell if one is available. If set to 'audible' (the default), Readline attempts to ring the terminal's bell.
- completion-ignore-case: If set to 'on', Readline performs filename matching and completion in a case-insensitive fashion. The default value is 'off'.
- editing-mode: Controls which default set of key bindings is used. By default, Readline starts up in emacs editing mode, where the keystrokes are most similar to Emacs. This variable can be set to either 'emacs' or 'vi'.
- mark-symlinked-directories: If set to 'on', completed names which are symbolic links to directories have a slash appended (subject to the value of mark-directories). The default is 'off'.
- show-all-if-ambiguous: Alters the default behavior of the completion functions. If set to 'on', words which have more than one possible completion cause the matches to be listed immediately instead of ringing the bell. The default value is 'off'.
A sample ~/.inputrc file with these variables in use:
set bell-style none
set completion-ignore-case On
set editing-mode vi
set mark-symlinked-directories On
set show-all-if-ambiguous On
You can find many more configuration options in [@HowToBashStartup].
Bare necessities#
The following sections explain the purpose of each command and show a few use cases and useful options.
These are commands you probably already know – if you don’t, you’ll know by the end of
lab-0, as you’ll need them all!
Getting around: cd and ls#
NAME
cd - change the current directory
ls - list directory contents
SYNOPSIS
cd [DIR]
ls [OPTION]... [FILE]...
cd & ls details & examples
cd
Useful shorthands for cd to know:
# Change to user home directory
# (usually: /home/username)
$ cd ~
# WSL: Change to Windows mounted directory
$ cd /mnt/c/
# Return to previous directory
$ cd - # in this case, /home/username
ls
Useful ls options:
-l use a long listing format
-a, --all do not ignore entries starting with .
-d, --directory list directories themselves, not their contents
-s, --size print the allocated size of each file, in blocks
-t sort by time, newest first; see --time
-h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc.
--si likewise, but use powers of 1000 not 1024
-R, --recursive list subdirectories recursively
Viewing files: cat and tac#
NAME
cat - concatenate files and print on the standard output
tac - concatenate and print files in reverse
SYNOPSIS
cat [OPTION]... [FILE]...
tac [OPTION]... [FILE]...
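A minimal sketch of cat and tac in use; the filename lines.txt is made up for this example.

```shell
# a small sample file
printf 'first\nsecond\nthird\n' > lines.txt

cat lines.txt       # prints the file as-is
tac lines.txt       # prints the lines in reverse order
cat -n lines.txt    # -n numbers each output line

# cat truly concatenates: print two files (here, the same one twice) back to back
cat lines.txt lines.txt
```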
Creating files: touch and mkdir#
NAME
touch - Update the modification times of each `FILE` to the current time.
Creates the files if they do not exist.
mkdir - Create the given DIRECTORY(ies) if they do not exist
SYNOPSIS
touch [FILE]...
mkdir [-p/--parents] [DIRECTORY]...
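A minimal sketch of both commands; the file and directory names are made up.

```shell
# create an empty file, or just update its timestamp if it already exists
touch notes.txt

# create nested directories in one go; without -p, mkdir fails
# when the parent directories don't exist yet
mkdir -p project/src/utils
```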
Moving files: mv and cp#
NAME
mv - Move `SOURCE` to `DEST`, or multiple `SOURCE`(s) to `DIRECTORY`.
cp - Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.
SYNOPSIS
mv [-f/--force] [-i/--interactive] [-v/--verbose] [SOURCE]... [DEST]
cp [-f/--force] [-i/--interactive] [-v/--verbose] [-R/--recursive] [SOURCE]... [DEST]
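A minimal sketch of typical mv/cp usage; all file and directory names are made up.

```shell
# mv renames a file (it also moves files between directories)
touch draft.txt
mv draft.txt final.txt

# cp copies; -v reports each copy, -i would prompt before overwriting
cp -v final.txt backup.txt

# copying a directory requires -R (recursive)
mkdir -p docs
cp -R docs docs-copy
```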
Managing permissions: chmod and chown#
NAME
chmod - Change the permissions mode of each FILE to MODE.
chown - Change file owner and group of each FILE to USER:GROUP
SYNOPSIS
chmod [-R/--recursive] [MODE] [FILE]
chown [-R/--recursive] [USER:GROUP] [FILE]
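A minimal sketch of the two MODE styles chmod accepts; the filename and the user/group in the chown comment are hypothetical.

```shell
touch script.sh

# symbolic mode: add (+) execute permission (x) for the owner (u)
chmod u+x script.sh

# numeric mode: rw- (6) for the owner, r-- (4) for group and others
chmod 644 script.sh

# changing ownership usually requires root privileges, e.g.:
# sudo chown alice:developers script.sh
```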
Deleting files: rm#
NAME
rm - Remove the FILE(s)
SYNOPSIS
rm [-f/--force] [-i/--interactive] [-r/--recursive] [FILE]...
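A minimal sketch of rm with its common options; the file and directory names are made up.

```shell
touch old.txt
rm old.txt                  # remove a single file

mkdir -p build/cache
rm -r build                 # -r is required to remove a directory and its contents

# -f suppresses prompts and ignores missing files
rm -f does-not-exist.txt    # exits successfully even though the file is absent
```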
The five fingers of death#
Five Fingers of Death, or King Boxer as it is known on Wikipedia, is a martial-arts movie I have not seen, but I have heard referenced in many songs. It speaks to me that the mastery of a seemingly small set of tools (five fingers) can lead to drastic increases in capability (the ability to inflict death), and I believe this spirit applies directly to working with unix tools.
The following 5 sets of commands are indispensable GNU Coreutils that are included on all linux systems.
There are many more coreutils that I have not included – I have chosen these 5 sets as I believe that mastering them, above all, will bring you in harmony with your linux system, and therefore closer to truth, happiness, and the meaning of life – or, if not, at least they will help you solve the labs that I give you in this course.
Almost all of these notes are adapted from a resource I found that’s pretty much exactly what I wanted to write myself: [@CLITextProcessing]. It comes with great explanations and exercises and solutions. I may base some quizzes and tests on it!
find files and grep content#
NAME
find - search for files that match a given expression
grep - print lines in file(s) that match a given pattern
SYNOPSIS
find [STARTING-POINT...] [OPTION...] [EXPRESSION]
grep [OPTION...] PATTERNS [FILE...]
find details & examples
find
This section was adapted from [@PracticalGuideGNU2023]
Let’s begin by looking first at find’s general syntax:
find [STARTING-POINT...] [OPTION...] [EXPRESSION]
What are these different elements?
| Element | Description | Default |
|---|---|---|
| [OPTION...] | Options are arguments about symlinks and search optimization. | None |
| [STARTING-POINT...] | List of directories to search through. The subdirectories are recursively included. | Current directory |
| [EXPRESSION] | List of expressions with their (often required) values. | None |
Nothing is mandatory here: running find alone will give you some output.
Here are the different categories of [EXPRESSION]. Each of these are queries describing
how to match files, or what action to perform on these files. They’re always prefixed with
a single dash - (like -name for example).
| Category | Description |
|---|---|
| Test expressions | The most common expressions. They're used to filter your files. |
| Action expressions | Expressions used to perform an action on each file found. |
| Operators | Boolean operators to manage the relationships between the different expressions. |
Let’s see an example that demonstrates these categories:
find . \( -name '*.png' -or -perm 664 \) -delete
This will recursively search the current directory for all files that EITHER have a filename ending with
.png OR have the permissions 664, then delete those files. The escaped parentheses are needed
because -or binds more loosely than the implicit -and: without them, -delete would apply only to
the -perm test. (See the course notes on permissions for more details on the meaning of 664 here.)
Let’s see what category each of these expressions is:
| Expression | Category |
|---|---|
| -name '*.png', -perm 664 | Test expressions |
| -delete | Action expression |
| -or | Operator expression |
There are, of course, many different Test/Action/Operator expressions, and the beauty of
the find command is combining each of these types of expressions to create stunningly
efficient file search commands.
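As a sketch of how these categories combine in practice; all the paths and patterns below are made up, and the commands are run over a small sandbox directory so they have predictable results.

```shell
# build a small sandbox to search over
mkdir -p sandbox/logs
touch sandbox/logs/app.log sandbox/notes.txt

# Test expression: files whose name ends in .log
find sandbox -name '*.log'

# Action expression: run wc -l on every matching file
find sandbox -name '*.txt' -exec wc -l {} +

# Operator: match directories OR empty files
find sandbox -type d -or -empty
```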
I recommend reading/bookmarking the following resources for great explanations and examples of the various uses for the find command:
grep details & examples
grep
This section was adapted from [@MasteringLinuxGrep2024]
The Linux grep command is one of the most powerful and frequently used tools for text
search and data filtering. Whether you’re managing system logs, searching through files,
or debugging code, grep helps you find specific patterns within large sets of data quickly
and efficiently.
The basic syntax of grep is as follows:
grep [OPTION...] PATTERNS [FILE...]
- [OPTION...]: various options you can provide grep to modify the default behavior.
- PATTERNS: the string or regular expression you want to search for.
- [FILE...]: the file(s) where you want to search.
Practical examples
Essentially, grep searches for a pattern and displays the matching lines. Here are a few common use-cases:
Example 1: Searching for a Word in a File
If you want to search for a specific word in a file, the most basic command would be:
grep "word" filename.txt
This will return all lines in filename.txt that contain the word “word.”
Example 2: Case-Insensitive Search
By default, grep is case-sensitive. If you want to ignore case distinctions, you can use the -i option:
grep -i "word" filename.txt
This command will return matches for both “Word” and “word” in filename.txt.
Example 3: Searching Across Multiple Files
To search for a pattern in multiple files at once, you can use wildcards *:
grep "word" *.txt
This will search for “word” in all .txt files in the current directory.
Example 4: Displaying Line Numbers
To see the line numbers where the matches occur, use the -n option:
grep -n "word" filename.txt
This command will display the line numbers along with the matching lines.
Example 5: Recursive Search in Directories
If you want to search for a pattern across all files in a directory and its subdirectories, use the -r (recursive) option:
grep -r "word" /path/to/directory/
This will search for “word” in all files within /path/to/directory/, including subdirectories.
Example 6: Inverting Search (Exclude a Pattern)
If you want to exclude lines that contain a specific pattern, you can use the -v option:
grep -v "word" filename.txt
This command will return all lines that do not contain “word”.
Example 7: Counting Matches
To count how many times a pattern appears in a file, use the -c option:
grep -c "word" filename.txt
This will output the number of lines that contain “word” in filename.txt.
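The options above compose freely. A small sketch combining -i and -n; the filename words.txt is made up for this example.

```shell
printf 'Alpha\nbeta\nALPHA beta\n' > words.txt

# case-insensitive (-i) with line numbers (-n)
grep -in 'alpha' words.txt
# 1:Alpha
# 3:ALPHA beta
```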
tr characters and cut fields#
NAME
tr - Translate characters matching STRING1 in stdin/FILE to STRING2,
writing to stdout
cut - Prints specified columns from each line of stdin, writes to stdout
SYNOPSIS
tr [OPTION]... STRING1 STRING2
cut [-d/--delimiter] [-f/--fields] [FILE]
tr details & examples
tr
The following section was adapted from [@CLITextProcessing]
tr helps you to map one set of characters to another set of characters. Features like
range, repeats, character sets, squeeze, complement, etc makes it a must know text
processing tool.
Here are some examples that map one set of characters to another. As a good practice, always enclose the sets in single quotes to avoid issues due to shell metacharacters.
# 'l' maps to '1', 'e' to '3', 't' to '7' and 's' to '5'
$ echo 'leet speak' | tr 'lets' '1375'
1337 5p3ak
# example with shell metacharacters
$ echo 'apple;banana;cherry' | tr ; :
tr: missing operand
Try 'tr --help' for more information.
$ echo 'apple;banana;cherry' | tr ';' ':'
apple:banana:cherry
Character ranges
You can use - between two characters to construct a range (ascending order only).
# uppercase to lowercase
$ echo 'HELLO WORLD' | tr 'A-Z' 'a-z'
hello world
# swap case
$ echo 'Hello World' | tr 'a-zA-Z' 'A-Za-z'
hELLO wORLD
# rot13
$ echo 'Hello World' | tr 'a-zA-Z' 'n-za-mN-ZA-M'
Uryyb Jbeyq
$ echo 'Uryyb Jbeyq' | tr 'a-zA-Z' 'n-za-mN-ZA-M'
Hello World
Deleting characters
Use the -d option to specify a set of characters to be deleted.
$ echo '2024-08-12' | tr -d '-'
20240812
# delete all punctuation characters
$ s='"Hi", there! How *are* you? All fine here.'
$ echo "$s" | tr -d '[:punct:]'
Hi there How are you All fine here
Squeezing characters
The -s option changes consecutive repeated characters to a single copy of that character.
$ echo 'HELLO... hhoowwww aaaaaareeeeee yyouuuu!!' | tr -s 'a-z'
HELLO... how are you!!
# translate and squeeze
$ echo 'hhoowwww aaaaaareeeeee yyouuuu!!' | tr -s 'a-z' 'A-Z'
HOW ARE YOU!!
# delete and squeeze
$ echo 'hhoowwww aaaaaareeeeee yyouuuu!!' | tr -sd '!' 'a-z'
how are you
# squeeze other than lowercase alphabets
$ echo 'apple noon banana!!!!!' | tr -cs 'a-z'
apple noon banana!
You can see more examples and explanations here: [@CLITextProcessing].
cut details & examples
cut
The following section was adapted from [@CLITextProcessing]
By default, cut splits the input content into fields based on the tab (\t) character. You can
use the -f option to select a desired field from each input line. To extract multiple
fields, specify the selections separated by the comma character.
# only the second field
$ printf 'apple\tbanana\tcherry\n' | cut -f2
banana
# first and third fields
$ printf 'apple\tbanana\tcherry\n' | cut -f1,3
apple cherry
Field ranges
You can use the - character to specify field ranges. You can skip the starting or ending
range, but not both.
# 2nd, 3rd and 4th fields
$ printf 'apple\tbanana\tcherry\tfig\tmango\n' | cut -f2-4
banana cherry fig
# all fields from the start till the 3rd field
$ printf 'apple\tbanana\tcherry\tfig\tmango\n' | cut -f-3
apple banana cherry
# all fields from the 3rd one till the end
$ printf 'apple\tbanana\tcherry\tfig\tmango\n' | cut -f3-
cherry fig mango
Input Delimiter
Use the -d option to change the input delimiter. Only a single byte character is
allowed. By default, the output delimiter will be same as the input delimiter.
$ cat scores.csv
Name,Maths,Physics,Chemistry
Ith,100,100,100
Cy,97,98,95
Lin,78,83,80
$ cut -d, -f2,4 scores.csv
Maths,Chemistry
100,100
97,95
78,80
# use quotes if the delimiter is a shell metacharacter
$ echo 'one;two;three;four' | cut -d; -f3
cut: option requires an argument -- 'd'
Try 'cut --help' for more information.
-f3: command not found
$ echo 'one;two;three;four' | cut -d';' -f3
three
Output Delimiter
Use the --output-delimiter option to customize the output separator to any string of
your choice. The string is treated literally. Depending on your shell you can use ANSI-C
quoting to allow escape sequences.
$ printf 'apple\tbanana\tcherry\n' | cut --output-delimiter=, -f1-
apple,banana,cherry
# example for multicharacter output separator
$ echo 'one;two;three;four' | cut -d';' --output-delimiter=' : ' -f1,3-
one : three : four
# ANSI-C quoting example
# depending on your environment, you can also press Ctrl+v and then the Tab key
$ echo 'one;two;three;four' | cut -d';' --output-delimiter=$'\t' -f1,3-
one three four
# newline as the output field separator
$ echo 'one;two;three;four' | cut -d';' --output-delimiter=$'\n' -f2,4
two
four
You can see more examples and explanations here: [@CLITextProcessing].
sort data and uniq duplicates#
NAME
sort - Display sorted concatenation of all FILE(s).
With no FILE, or when FILE is -, read stdin
uniq - Report or omit repeated lines.
SYNOPSIS
sort [FILE]...
uniq [-d/--repeated] [FILE]...
sort details & examples
sort
The following section was adapted from [@CLITextProcessing]
The sort command provides a wide variety of features. In addition to
lexicographic ordering, it supports
various numerical formats. You can also sort based on particular columns. And there are
nifty features like merging already sorted input, debugging, determining whether the input
is already sorted and so on.
By default, sort orders the input in ascending order:
$ cat greeting.txt
Hi there
Have a nice day
# extract and sort space separated words
$ <greeting.txt tr ' ' '\n' | sort
a
day
Have
Hi
nice
there
Dictionary sort
The -d option will consider only alphabets, numbers and blanks for sorting. Space and tab characters are considered as blanks, but this would also depend on the locale.
$ printf '(banana)\n{cherry}\n[apple]' | LC_ALL=C sort -d
[apple]
(banana)
{cherry}
Reversed order
The -r option will reverse the output order. Note that this doesn’t change how sort performs comparisons, only the output is reversed. You’ll see an example later where this distinction becomes clearer.
$ printf 'peace\nrest\nquiet' | sort -r
rest
quiet
peace
Numeric sort
The sort command provides various options to work with numeric formats. For most cases, the -n option is enough. Here’s an example:
# lexicographic ordering isn't suited for numbers
$ printf '20\n2\n3\n111\n314' | sort
111
2
20
3
314
# -n helps in this case
$ printf '20\n2\n3\n111\n314' | sort -n
2
3
20
111
314
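The column-based sorting mentioned at the start of this section uses the -k option. A small sketch: -k2,2 restricts the sort key to field 2 only, and the trailing n makes that key numeric.

```shell
# sort by the 2nd whitespace-separated field, numerically
printf 'pens 13\nballs 2\npins 5\n' | sort -k2,2n
# balls 2
# pins 5
# pens 13
```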
uniq details & examples
uniq
The following section was adapted from [@CLITextProcessing]
The uniq command identifies similar lines that are adjacent to each other. There are various options to help you filter unique or duplicate lines, count them, group them, etc.
Retain single copy of duplicates
This is the default behavior of the uniq command. If adjacent lines are the same, only the first copy will be displayed in the output.
# only the adjacent lines are compared to determine duplicates
# which is why you get 'red' twice in the output for this input
$ printf 'red\nred\nred\ngreen\nred\nblue\nblue' | uniq
red
green
red
blue
You’ll need sorted input to make sure all the input lines are considered to determine duplicates. For some cases, sort -u is enough, like the example shown below:
# same as sort -u for this case
$ printf 'red\nred\nred\ngreen\nred\nblue\nblue' | sort | uniq
blue
green
red
Sometimes though, you may need to sort based on some specific criteria and then identify duplicates based on the entire line contents. Here’s an example:
# can't use sort -n -u here
$ printf '2 balls\n13 pens\n2 pins\n13 pens\n' | sort -n | uniq
2 balls
2 pins
13 pens
Duplicates only
The -d option will display only the duplicate entries. That is, only if a line is seen more than once.
$ cat purchases.txt
coffee
tea
washing powder
coffee
toothpaste
tea
soap
tea
$ sort purchases.txt | uniq -d
coffee
tea
To display all the copies of duplicates, use the -D option.
$ sort purchases.txt | uniq -D
coffee
coffee
tea
tea
tea
Unique only
The -u option will display only the unique entries. That is, only if a line doesn’t occur more than once.
$ sort purchases.txt | uniq -u
soap
toothpaste
washing powder
# reminder that uniq works based on adjacent lines only
$ printf 'red\nred\nred\ngreen\nred\nblue\nblue' | uniq -u
green
red
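uniq can also count occurrences, as mentioned above, via the -c option. A common sketch: sort first so duplicates are adjacent, count with uniq -c, then sort -nr to list the most frequent lines first.

```shell
printf 'red\ngreen\nred\nblue\nred\n' | sort | uniq -c | sort -nr
```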
You can see more examples and explanations here: [@CLITextProcessing].
know head from tail#
NAME
head - Print the first 10 lines of each `FILE` to standard output.
With no `FILE`, or when `FILE` is `-`, read stdin
tail - Print the last 10 lines of each `FILE` to standard output.
With no `FILE`, or when `FILE` is `-`, read stdin
SYNOPSIS
head [-n/--lines] [FILE]...
tail [-n/--lines] [-f/--follow] [FILE]...
head & tail details & examples
head and tail
The following section was adapted from [@CLITextProcessing]
head and tail, or a combination of both, are used to extract text content that you
know is at the beginning, end, or specific line number of a file.
Leading and trailing lines
Consider this sample file, with line numbers prefixed for convenience.
$ cat sample.txt
1) Hello World
2)
3) Hi there
4) How are you
5)
6) Just do-it
7) Believe it
8)
9) banana
10) papaya
11) mango
12)
13) Much ado about nothing
14) He he he
15) Adios amigo
By default, head and tail will display the first and last 10 lines respectively.
$ head sample.txt
1) Hello World
2)
3) Hi there
4) How are you
5)
6) Just do-it
7) Believe it
8)
9) banana
10) papaya
$ tail sample.txt
6) Just do-it
7) Believe it
8)
9) banana
10) papaya
11) mango
12)
13) Much ado about nothing
14) He he he
15) Adios amigo
Note: If there are fewer than 10 lines in the input, only those lines will be displayed.
You can use the -nN option to customize the number of lines:
# first three lines
# space between -n and N is optional
$ head -n3 sample.txt
1) Hello World
2)
3) Hi there
# last two lines
$ tail -n2 sample.txt
14) He he he
15) Adios amigo
Excluding N lines
head supports a "subtraction" style syntax: head -n -N prints all the input lines EXCEPT
the last N lines. tail uses a complementary "+" syntax: tail -n +N prints everything
starting from the Nth line, i.e., all lines except the first N-1.
# except the last 11 lines
$ head -n -11 sample.txt
1) Hello World
2)
3) Hi there
4) How are you
# all lines starting from the 12th (i.e., except the first 11 lines)
$ tail -n +12 sample.txt
12)
13) Much ado about nothing
14) He he he
15) Adios amigo
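As mentioned at the start of this section, combining head and tail extracts a specific line number. A minimal sketch, extracting line 3:

```shell
# take the first 3 lines, then the last 1 of those
printf 'one\ntwo\nthree\nfour\n' | head -n3 | tail -n1
# three
```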
You can see more examples and explanation at [@CLITextProcessing].
tree and tee#
NAME
tree - list contents of DIRECTORIES in a tree-like format.
tee - Copy standard input to each FILE, and also to standard output.
SYNOPSIS
tree [-L level] [DIRECTORY]...
tee [FILE]...
tree & tee details & examples
There’s nothing here yet… stay tuned!
Redirection and Pipes#
This section was adapted from [@HowToRedirectionProcess]
When Bash starts, three file descriptors are normally opened: 0, 1, and 2, also known as standard input (stdin), standard output (stdout), and standard error (stderr).
You can use the > operator to “redirect” the output of commands (which normally goes to
stdout) to different files or other file descriptors. Some common examples are shown
below:
- command > filename: Redirect command output (stdout) into a file
- command > /dev/null: Discard stdout of command
- command 2> /dev/null: Discard stderr of command
- command >&2: Redirect command output (stdout) to stderr
- command >> filename: Redirect command output and APPEND into a file
- command < filename: Redirect a file into a command
- commandA | commandB: Pipe stdout of commandA to commandB
- commandA | tee filename: Pipe stdout of commandA into filename AND stdout
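A minimal sketch of these redirections in action; the filenames are made up.

```shell
# send stdout to a file, then append a second line
echo 'first run' > log.txt
echo 'second run' >> log.txt

# discard error messages; the || true just keeps the script going
ls missing-file 2> /dev/null || true

# tee writes its input to the file AND passes it along to stdout
echo 'saved and shown' | tee copy.txt
```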
Redirection explained further
This section was adapted from [@IllustratedRedirectionTutorial2023]
Output Redirection n> file
> is probably the simplest redirection.
echo foo > file
The > file after the command alters the file descriptors belonging to the command echo:
it changes file descriptor 1 (> file is the same as 1>file) so that it points to the
file named file. The descriptors now look like this:
--- +-----------------------+
standard input ( 0 ) ---->| /dev/pts/5 |
--- +-----------------------+
--- +-----------------------+
standard output ( 1 ) ---->| file |
--- +-----------------------+
--- +-----------------------+
standard error ( 2 ) ---->| /dev/pts/5 |
--- +-----------------------+
Now characters written by our command, echo, that are sent to the standard output, i.e., the file descriptor 1, end up in the file named file.
In the same way, command 2> file will change the standard error and make it point
to file. For example, command 2> /dev/null will discard all errors output by
command:
--- +-----------------------+
standard input ( 0 ) ---->| /dev/pts/5 |
--- +-----------------------+
--- +-----------------------+
standard output ( 1 ) ---->| /dev/pts/5 |
--- +-----------------------+
--- +-----------------------+
standard error ( 2 ) ---->| /dev/null |
--- +-----------------------+
Input Redirection n< file
When you run a command using command < file, it changes the file descriptor 0 so that it
looks like:
--- +-----------------------+
standard input ( 0 ) <----| file |
--- +-----------------------+
--- +-----------------------+
standard output ( 1 ) ---->| /dev/pts/5 |
--- +-----------------------+
--- +-----------------------+
standard error ( 2 ) ---->| /dev/pts/5 |
--- +-----------------------+
If the command reads from stdin, it now will read from file and not from the console.
Pipes |
What does this | do? Among other things, it connects the standard output of the command on
the left to the standard input of the command on the right. That is, it creates a special
file, a pipe, which is opened as a write destination for the left command, and as a read
source for the right command.
command: echo foo | cat
--- +--------------+ --- +--------------+
( 0 ) ---->| /dev/pts/5 | ------> ( 0 ) ---->|pipe (read) |
--- +--------------+ / --- +--------------+
/
--- +--------------+ / --- +--------------+
( 1 ) ---->| pipe (write) | / ( 1 ) ---->| /dev/pts/5 |
--- +--------------+ --- +--------------+
--- +--------------+ --- +--------------+
( 2 ) ---->| /dev/pts/5 | ( 2 ) ---->| /dev/pts/5 |
--- +--------------+ --- +--------------+
This is possible because the redirections are set up by the shell before the commands are executed, and the commands inherit the file descriptors.
Core utilities#
ssh#
NAME
ssh - OpenSSH remote login client
SYNOPSIS
ssh [-l login_name] [-p port] DESTINATION [command [argument...]]
ssh is a program for logging into a remote machine and for executing commands on a
remote machine. It is intended to provide secure encrypted communications between two
untrusted hosts over an insecure network.
ssh connects and logs into the specified destination, which may be specified as either
[user@]hostname or a URI of the form ssh://[user@]hostname[:port].
If a command is specified, it will be executed on the remote host instead of a login
shell.
ssh details & examples
There’s nothing here yet… stay tuned!
rsync#
NAME
rsync - a fast, versatile, remote (and local) file-copying tool
SYNOPSIS
Local:
rsync [OPTION...] SRC... [DEST]
Access via remote shell:
Pull:
rsync [OPTION...] [USER@]HOST:SRC... [DEST]
Push:
rsync [OPTION...] SRC... [USER@]HOST:DEST
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon.
It is famous for sending only the differences between the source files and the existing files in the destination, increasing efficiency for repetitive synchronization between source and destination.
Rsync is widely used for backups and mirroring, and as an improved cp command for
everyday use.
rsync details & examples
There’s nothing here yet… stay tuned!
tar, zip, and unzip#
NAME
tar - a general archiving utility for creation/extraction/compression and more
zip - package and compress files into a ZIP archive
unzip - list, test and extract compressed files from a ZIP archive
SYNOPSIS
tar --create/--extract [--file ARCHIVE] [OPTIONS] [FILE...]
zip [OPTIONS] [ARCHIVE] [FILE...]
unzip [ARCHIVE] [-d OUTPUTDIR]
The tar, zip, and unzip programs provide the ability to create, extract, and
otherwise manipulate archives of files, where an archive of files is simply a file
that stores a collection of other files.
tar, zip, and unzip details & examples
This section was adapted from [@GNUTar135].
The specific use cases for tar/zip/unzip are similar but vary slightly.
All three tools are used for efficient storage, transfer, and backup of collections of files, particularly large files via compression.
- tar:
  - default: create/extract an uncompressed archive (.tar) of a collection of files
  - with --gzip/-z: create/extract a compressed archive (.tar.gz) of a collection of files
  - with --bzip2/-j: create/extract a compressed archive (.tar.bz2) of a collection of files
- zip: create a compressed collection of files (.zip)
- unzip: extract a compressed collection of files (.zip)
Operations
There are two main operations of interest for archiving programs:
- create: create a new archive (.zip, .tar, .tar.gz, .tar.bz2)
- extract: extract the files of an archive to a directory
Examples of each follow below:
# Assume you have a directory called music/ and three folders inside it:
$ tree music
music/
├── blues
│ └── nina-simone
├── folk
│ └── phil-ochs
└── jazz
└── charles-mingus
# Create an uncompressed archive (.tar) of the whole music directory
$ tar --create --file=collection.tar music
# Creates a compressed archive (.zip, .tar.gz, .tar.bz2)
$ zip -r collection.zip music
$ tar --create --gzip --file=collection.tar.gz music
$ tar --create --bzip2 --file=collection.tar.bz2 music
# tar has shorthand versions of the above parameters
$ tar -c -f collection.tar music
$ tar -c -z -f collection.tar.gz music
$ tar -cjf collection.tar.bz2 music
# Assume you have the archives from the Create example:
$ tar --list --file=collection.tar
music/
music/blues/
music/blues/nina-simone
music/folk/
music/folk/phil-ochs
music/jazz/
music/jazz/charles-mingus
# Extract all files from an uncompressed archive (.tar) to the current directory
$ tar --extract --file=collection.tar
# Extract all files from a compressed archive (.zip, .tar.gz, .tar.bz2) to the current directory
$ unzip collection.zip
$ tar --extract --gzip --file=collection.tar.gz
$ tar --extract --bzip2 --file=collection.tar.bz2
# Extract all files from a compressed archive, specifying a different output directory
$ unzip collection.zip -d ~/some-folder
$ tar --extract --gzip --file=collection.tar.gz --directory ~/music
$ tar --extract --bzip2 --file=collection.tar.bz2 --directory /tmp/music
# tar has shorthand versions of the above parameters
$ tar -x -f collection.tar
$ tar -x -z -f collection.tar.gz -C ~/music
$ tar -xjf collection.tar.bz2 -C /tmp/music
The create and extract operations are mutually exclusive, which makes sense: you cannot create and extract an archive at the same time.
You can read more:
git#
NAME
git - the stupid content tracker
SYNOPSIS
git <command> [<args>]
Git is a fast, scalable, distributed revision control system with an unusually rich command set that provides both high-level operations and full access to internals.
See man 7 gittutorial to get started, then see man 7 giteveryday for a useful minimum
set of commands.
git details & examples
There’s nothing here yet… stay tuned!
More resources#
- Short guides on learning the bash shell and bash scripting.
- Links to interactive learning games under "Adventures".
- Basic Shell Features: a complete reference with examples.