
Unit 2.2 - Basic Linux Concepts and Commands.

File System (File and Directory Commands).


Package Manager
date and cal : The date, time and calendar
date
# Shows the date and time

# Shows date and time in CET format


date

# Shows the date and time in UTC format


date -u

# Show the year


date +"%Y"

# Shows the month


date +"%m"

# Displays the day, month and year in the indicated format


# The %Y character will be replaced with the year, %m with the month, and %d with
the day of the month.
date +"%d-%m-%Y"
date +"%d-%m-%Y %H:%M:%S"
date +"Year: %Y, Month: %m, Day: %d"

# Set the date and time (you need root permissions and not have NTP
synchronization enabled)
sudo date --set 2020-10-02
sudo date --set 19:00:07
sudo date --set="20190701 14:30"

cal
# Show calendar
cal (current month)
cal -3 (shows the previous, current and following month)
cal -m 12 (calendar for December, the 12th month of the current year)
cal -m 3 2019 (calendar of month 3 of the year 2019)
cal -y (annual calendar)
cal -y 2021 (2021 calendar)
ncal (vertical format)
ncal -e (shows the date of Easter; ncal -o shows Orthodox Easter)
ncal -w -3 (shows the week numbers of the previous, current and next month)

timedatectl
# Management of the date and time

# The timedatectl command displays the current hardware time (rtc) and system
clock time
timedatectl

# timedatectl: set the date and time (can only be done if automatic
synchronization is not established)
timedatectl set-time 22:53:48
timedatectl set-time "2019-10-02 19:00:07"
timedatectl set-time "2019-10-02"

# timedatectl: list time zones


timedatectl list-timezones
timedatectl list-timezones | grep -i madrid
timedatectl list-timezones | grep -i europe

# timedatectl: set the time zone


timedatectl set-timezone Europe/Madrid

# Disable/enable date and time synchronization via NTP


timedatectl set-ntp no
timedatectl set-ntp yes
NTP (Network Time Protocol) is a protocol that is based on sending time synchronization signals through
the IP network. It is the most common method of synchronizing the software clock of a GNU/Linux
system with Internet time servers.

Other time-related commands


# Start, status, files and PID of the NTP service
systemctl status systemd-timesyncd.service

#Information about the NTP server


timedatectl timesync-status

man date
man cal
man timedatectl
We should now review some commands for files that we can
also use on the output of a command
tail : Display the last 10 lines of a file or the output of a command

• -n displays the last n lines of the file


• -f displays the last lines of the file in real time (for viewing changes to the file)

tail /etc/rsyslog.conf
tail -n 5 /etc/rsyslog.conf
tail -f /var/log/messages

# Last 2 lines of the ls -l command


ls -l | tail -n 2

man tail

head : Display the first 10 lines of a file (header) or command

head /etc/rsyslog.conf
head -n 5 /etc/rsyslog.conf

# First 2 lines of the ls -l command


ls -l | head -n 2

man head

truncate : How to empty a file


# With the -s option we set the size of the file
history > mi_fichero.txt
ls -l
truncate -s 0 mi_fichero.txt
ls -l

# And there are more ways...

# Redirect "nothing" to a file


> mi_fichero.txt

# echo without any parameter and without new line at the end (-n)
echo -n > mi_fichero.txt

#true does nothing except return an output value of 0, meaning "success".


true > mi_fichero.txt
# /dev/null, or "null device", is a special file that discards all information that
# is written or redirected to it. It provides no data to any process that tries to
# read from it, simply returning EOF (end of file).
cp /dev/null mi_fichero.txt

man truncate

grep : Search for text in files and in the output of commands


# Search for an expression in a file
grep "miguel" /etc/group
grep "miguel" /etc/passwd
grep $(whoami) /etc/passwd

Grep syntax
grep -r recursive search
grep -n displays line numbers
grep -i case-insensitive search (uppercase or lowercase)
grep -c counts the number of matching lines in a file (see the short example below)
grep -v excludes lines that match an expression
grep -e expr1 -e expr2 searches for several expressions
grep "^string" matches lines that begin with string
grep "string$" matches lines that end with string

# Search for an expression in multiple files, recursively (-r)


cd /etc
grep -r "miguel" *

# Search for an expression in a file showing line numbers (-n)


grep -rn "miguel" *
cat /etc/group | grep -n "miguel"

# Find an expression at the output of another command


date --help
date --help | less

date --help | grep "%" (shows the format modifiers of date: %xxx)

# Previous note:
# ps command: shows running processes. Options:
# a: displays the processes of all users (that have a terminal)
# u: displays additional information
# x: displays processes without a terminal (daemons)

ps (displays the user's processes)


ps u (extended user information)
ps aux (a: all users, x: processes without terminal -daemons-, u: extended)

# Search without considering UPPERCASE/lowercase (-i)


ps a
ps a | grep -i "Cron"
cat /etc/group | grep -ni "miguel" | less

# Exclude an expression in a search (-v)


# We will execute first
ps a | grep -i "cron"

# And now:
ps aux | grep -in "cron" | grep -v "grep"

# With the -vi options it ignores UPPERCASE/lowercase, that is, the line is excluded even
# though we typed "Grep"
ps aux | grep -in "cron" | grep -vi "Grep"

# Exclude multiple words from a search (-v: exclude, -e: expression)


cat /etc/passwd

grep -v -e "miguel" -e "nologin" /etc/passwd


grep -vE "miguel|nologin" /etc/passwd

history | grep "ls" | grep -v -e "-l" -e "-a" -e "~"

# See what happens if we use this format:


history | grep "ls" | grep -vE "-l|-a|~"

# You have to 'escape' the first '-' sign


history | grep "ls" | grep -vE "\-l|-a|~"

#---

cat /etc/passwd | grep "miguel"

# Find lines that begin with a pattern (^)


cat /etc/passwd | grep "^miguel"

# Find lines ending with a pattern ($)


cat /etc/passwd | grep "miguel$"

man grep

EXERCISE:

ip neigh shows us the ARP table of the Ubuntu machine. Select the entries that are in the STALE state
and display only the last 2 lines of the result.

find : Searching for files within a directory (and subdirectories)


Basic syntax

find search_directory options search_term

• search_directory: This is the starting point from which you want to begin the search.
It can be the root directory "/", the current directory ".", the home directory "~" or
any other path.
• Options: This is the filter to use to search for the file. This could be the name, type, date
the file was created or modified, etc.
• search_term: specify the relevant search term

Search for a file by name, option: -name

cd

# Search for a file by name (-name)


find . -name "doc1.txt"
find . -name "doc*.txt"
find . -name "doc*. txt"

# Search for a file name without differentiating between UPPER/lowercase (-iname)


find . -iname "DOC*.txt"

# Search for all file names except those indicated (-not)


find . -not -name "doc*.txt"

# Search and delete (BEWARE!)


find . -name "mi_fichero.txt" -delete

Search for a file by type, option: -type

# Search by file type (-type [f (normal file) d(directory) l(symbolic link)])


# Displays all system directories
find / -type d

# Displays the user's home directories


find ~ -type d

# Options can be combined


# Searches for files that meet a pattern and excludes directories and links
find ~ -type f -name "doc*"

# Search for directories that meet a pattern by excluding files and links
find ~ -type d -name "doc*"

Search by size (-size)

# The size must be accompanied by a "magnitude" -> (c: bytes,
# k: kilobytes, M: megabytes, G: gigabytes, b: 512-byte blocks)

- Search by size (exact) (-size)


# Using 512 blocks and rounding
find / -size 10M

## Other searches by size


find ~ -size +5G
find ~ -size -1k
Search by owner or group (-user and -group)

ls -l

# Search by owner or group (-user and -group)


find /tmp -user miguel
find /tmp -group miguel

Search by permissions

# Search by permissions (exact).


find ~ -perm 644

# Search by permissions (minimum 644).


find ~ -perm -644

Other search options

# Some other useful options


## Search for empty files and directories
find ~ -empty

## Search for executables


find / -executable

## Search readable
find / -readable

man find

awk : An advanced tool for text processing


awk is actually an interpreted programming language. awk reads the input and processes it line by
line. All the instructions we give awk are applied sequentially to each line read.

It is a very powerful tool, but with some complexity. Only a few common uses are shown here.

Its typical use is to filter files or the output of Linux commands, processing the lines to, for example,
show certain information from them.

An awk program can be considered to have 2 main parts: a pattern and an action. To see how it works,
let's look at an example.
Basic syntax

awk [option] [pattern] {action} file_name

# Display the contents of a file


cd
echo "1) John Maths 6.54" > notes.txt
echo "2) Paul Physics 8.23" >> notes.txt
echo "3) Michael Biology 7.98" >> notes.txt
echo "4) Steve Physics 5.10" >> notes.txt
echo "5) Jack History 4.68" >> notes.txt
echo "6) Richard Programming 9.05" >> notes.txt
echo "7) John Programming 8.42 - this note should be reviewed" >>
notes.txt

ls -l
cat notes.txt

# In this case there is only one "print" action of all the lines of a file
awk '{print}' notes.txt

# In this case, the lines of the file that meet the pattern are printed:
awk '/Phy/ {print}' notes.txt

- Print columns (the fields)


# \t represents a tab character, and is used to line up the columns of the output.
# The numbers after the "$" character are similar to shell script arguments, but in this
# case they represent the positions of the columns (fields) of the input line.
# $1 is the first field, $2 is the second field, etc.
# $0 is the entire line

awk '{print $1}' notes.txt


awk '{print $2"\t"$3}' notes.txt
awk '{print $2"\t\t"$3}' notes.txt

# Print the rows that meet a pattern


awk '/Phy/ {print}' notes.txt

# Print columns that meet a pattern


awk '/Phy/ {print $1}' notes.txt
awk '/Phy/ {print $2"\t\t"$3}' notes.txt

# Count and print the number of rows that meet a pattern


awk '/Phy/{++counter} END {print "Count = ", counter}' notes.txt

# Print rows that exceed 35 characters


awk 'length($0) > 35' notes.txt

# Another character can be specified as a field separator (-F option)


# The default separator is the space or the TAB
echo "10;20;30;40;50;60;70" > scale.csv
cat scale.csv | awk -F ";" '{print $1}'
cat scale.csv | awk -F ";" '{print $1"\t"$2"\t"$7}'

# It is very useful to deal with the output of other commands


# To print file sizes:
ls -la | awk '{print $5 "\t" $9}'

man awk
info awk

sed : Editing texts from the terminal


sed is a stream editor. Just like awk, it processes each line of its input. That is why it is usually
used to edit files or the output of a command, replacing one text (or pattern) with another.

Basic syntax

• sed 's/original_string/new_string/' input_file


• sed 's/original_string/new_string/' < input_file
• sed 's/original_string/new_string/' < input_file > output_file
• command_result | sed 's/original_string/new_string/'

# Substitution of one pattern for another pattern


echo "It's a poem by Antonio" | be 's/Antonio/Federico/'
echo "It's a poem by Antonio" | Thirst 's/nton/urel/'

# Attention! By default, sed only modifies the first occurrence of the pattern on each line:
echo "one two three, one two three" > example.txt
echo "four three two one" >> example.txt
echo "one hundred" >> example.txt
cat example.txt

cat example.txt | sed 's/one/ONE/'

# To replace ALL occurrences, the global replacement option is used:


cat example.txt | sed 's/one/ONE/g'

# You can specify a delimiter other than '/'


echo 'PATH=$PATH:/usr/local/bin' > wrong_path.txt
cat wrong_path.txt

# If the string to be replaced contains '/', we have to escape each slash '/' with a
# backslash '\' so that sed does not confuse them with the separators it needs
# We are going to change /usr/local/bin to /opt/bin in wrong_path.txt:
cat wrong_path.txt | sed 's/\/usr\/local\/bin/\/opt\/bin/'

# This can be avoided by changing the bounding symbol:


cat wrong_path.txt | sed 's_/usr/local/bin_/opt/bin_'
cat wrong_path.txt | sed 's:/usr/local/bin:/opt/bin:'
cat wrong_path.txt | sed 's|/usr/local/bin|/opt/bin|'
cat wrong_path.txt | sed 's+/usr/local/bin+/opt/bin+'

# sed is widely used to modify existing files


## Generating a new file with the changes:
sed 's/\/usr\/local\/bin/\/opt\/bin/' < wrong_path.txt > right_path.txt
cat wrong_path.txt
cat right_path.txt
## Overwriting the original file, but with the new changes: (-i)
cat wrong_path.txt
sed -i 's/\/usr\/local\/bin/\/opt\/bin/' wrong_path.txt
cat wrong_path.txt

# You can use sed to modify multiple expressions simultaneously (-e option):
cat example.txt | sed -e 's/one/ONE/' -e 's/two/TwO/'
cat example.txt | sed -e 's/one/ONE/g' -e 's/two/TwO/'
cat example.txt | sed -e 's/one/ONE/' -e 's/two/TwO/g'
cat example.txt | sed -e 's/one/ONE/g' -e 's/two/TwO/g'

# You can also modify only the lines that also contain another
# pattern:
cat example.txt | sed '/hundred/s/one/ONE/'

# You can search for patterns to replace regardless of UPPERCASE/lowercase (I flag):
cat example.txt | sed 's/oNe/ONE/I'

# Finally, several options can be grouped at once:


cat example.txt | sed -e 's/oNe/ONE/Ig'

# How to replace all white space with a ','


I echo "this is a test text" > file.txt
sed 's/\s/,/g' file.txt

# And if there are several blanks...


# \s: "white space"
# \+: more than one space (it is necessary to "escape" the + character)
echo "This  is   a    test  text" > file2.txt
sed 's/\s\+/,/g' file2.txt

man sed
info sed

tr : Translator (replacement or removal of characters).


The tr command is a "translator": it is used to replace one set of characters with another set of
characters, or to delete them.

Basic syntax

tr [parameters]... set1 [set2]

Where SET1 and SET2 are either explicitly defined character sequences or sets predefined by this
command.

Parameters:

• "-c": Uses the SET1 plug-in. This means that it defines SET1 as all characters that are not
in the definition given by the User. This parameter is useful to indicate characters that we
do not want to be affected.
• "-d": Deletes the characters defined in SET1.
• "-s": (squeeze-repeats): Eliminates the continuous sequence of repeated characters,
defined in SET1.

POSIX character sets (some of the most common):

• [:alnum:] : Letters and digits.


• [:alpha:] : Letters.
• [:digit:] : Digits.
• [:graph:] : Printable characters, excluding whitespace.
• [:print:] : Printable characters, including whitespace.
• [:lower:] : Lowercase letters.
• [:upper:] : Uppercase letters.
• [:punct:] : Punctuation marks.
• [:space:] : Whitespace (horizontal and vertical).

I echo "Let's change spaces for tabs" | tr [:space:] '\t'

# As you can see, POSIX character classes can be used


I say "my name is Elena" | tr [:lower:] [:upper:]

# And it is useful, because although you can specify a complete set of characters,
# it is not always elegant
echo "my name is Elena" | tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ

# Even to encode secret messages


I echo "My name is Elena" | tr abcdefghijklmnopqrstuvwxyz
ZYWVUTSRQPONMLKJIHGFEDCBA > secret1.txt
cat secreto1.txt
cat secreto1.txt | tr ZYWVUTSRQPONMLKJIHGFEDCBA abcdefghijklmnopqrstuvwxyz

# Intervals can be used. This example also runs tr in interactive mode. Use ^C to
exit.
tr a-z A-Z
my name is Elena

# tr can also be used in a chained fashion


echo "my name is Elena, teacher of the subject" | tr ' ' z | tr a-z b-z | tr b-z
a-z | TR and ' '

# What if there are many spaces between words?


echo "Let's change spaces for tabs" | tr [:space:] '\t'

# We can squeeze characters that are equal and consecutive, leaving only one (-s)
echo 'This is a simple example!!! .... Something practical perhaps??? ' | tr -s '!.? ¿'

# We can also replace runs of equal consecutive characters with a single character (-s)
# The -s flag is useful for "compressing" repeated characters in a row
echo "Let's change    spaces    for tabs" | tr -s [:space:] '\t'
echo "ssss BBB ss BBBBB sssss" | tr -s 's' 'a'

# More examples
echo "We compress the spaces :)" | tr -s [:space:]
echo "We compress the spaces :)" | tr -s [:space:] ' '

# Instead of the wildcard [:space:] I can also use the space character directly
echo "We compress the spaces :)" | tr -s ' '

# It also allows you to delete certain characters (-d)


echo "We will erase all the errrres" | tr -d 'r'

# Let's delete the digits


echo "My DNI is 555424242A" | tr -d [:digit:]

# Again, intervals can be used


echo "Mi DNI es 555424242A" | TR -D 0-9

# You can use the complement of a set (-c), for example to delete
# Print only the digits (delete everything that is not a digit)
echo "My DNI is 555424242A" | tr -cd [:digit:]
# Delete everything that is not a digit or a space
echo "My DNI is 555424242A" | tr -cd [:digit:][:space:]

# To remove all repeated characters use -s with the complement (-c) of Set "":
echo 'Thiiiis is anoooother simple example!!! .... ' | tr -c -s ""

# The complement (-c) can be used to remove all non-printable characters from a file
# (e.g. line breaks \n, tabs \t, etc.)
# We first use the 'echo' command with the -e option to interpret line breaks and
tabs:
echo -e "Hola,\n\n\t\t mundo"
echo -e "Hola,\n\n\t\t mundo" | tr -cd [:print:]

# Another way: first send the information to a file


echo -e "Hola,\n\n\t\t mundo" > non_printable.txt
cat non_printable.txt

# Pass the file as the input of the tr command


tr -cd [:print:] < non_printable.txt

# The modified file could be saved in a new file


tr -cd [:print:] < non_printable.txt > printable.txt

man tr

cut : Cut texts


The cut command allows you to select and cut fields from each line of a file, or from its standard input,
to extract the selected parts. To differentiate one field from another it uses delimiters. The default
delimiter is the tab.

Basic syntax

cut OPTIONS... [FILE]...


• "-c": This option selects characters from each line by their position
• "-d": Indicates the delimiter between fields
• "-f": This option allows you to select the fields separated by a particular delimiter

cd
"Read and learn, learn and do, do and evolve." > meme.txt
echo "Learn and code, code and share, share and evolve." >> meme.txt

# Select characters from their position on the line (-c)


cut -c 1-3 meme.txt
cut -c 7-14 meme.txt

# -f: Select fields, -d: separated by a delimiter

# Select a list of fields (separated by ','), or a range of fields (separated by '-'),
# using a particular delimiter (-d) (-f)
cut -d "," -f 1,2 meme.txt
cut -d "," -f 1,3 meme.txt
cut -d "," -f 1-3 meme.txt

cut -d " " -f 1,2 meme.txt
cut -d " " -f 1,3 meme.txt
cut -d " " -f 1-3 meme.txt

# Beware of the field boundaries! (our file only has 3 fields separated by ',')
cut -d "," -f 1,4 meme.txt
cut -d "," -f 1,5 meme.txt

# Example: How to view RAM (total)


free
free | grep Mem
free | grep Mem | tr -s ' ' ','
free | grep Mem | tr -s ' ' ',' | cut -d "," -f 2

# Another way
free | grep Mem | tr -s [:space:] | sed 's/ /,/g' | cut -d , -f 2

# Another way: using sed with a regular expression (between two slashes)
# \s --> space
# \+ --> more than one space. It is necessary to "escape" (with backslash) the
character "+"
free | grep Mem | sed 's/\s\+/,/g'
free | grep Mem | sed 's/\s\+/,/g' | cut -d , -f2

# Using tr
free | grep Mem | tr -s ' ' ',' | cut -d , -f2

# To select bytes instead of characters (useful with binary files),
# you can use the -b option to specify the byte positions.
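
For example, a minimal sketch of -b on the meme.txt file created above (with plain ASCII text it behaves like -c):

# Select the first 4 bytes of each line
cut -b 1-4 meme.txt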

man cut
info cut
wc : Word Count - counting lines, words and bytes in the terminal
wc is a utility that allows you to count the number of lines, the number of words and the number of bytes
of a file, or of the input it receives.

# Number of lines, number of words, and number of bytes


wc /etc/bash.bashrc

# To count the number of lines


wc -l /etc/bash.bashrc

# To count the number of bytes


wc -c /etc/bash.bashrc

# To count the number of characters


wc -m /etc/bash.bashrc

# To count the number of words


wc -w /etc/bash.bashrc

# Longest line size


wc -L /etc/bash.bashrc

# Can be used in combination with other commands


ls -l /etc | wc -l

man wc

Redirects: Standard Input, Standard Output, Error Output


A shell like Bash executes commands and programs that take input data and produce output data. Typically
this I/O data is either bytes or characters.

The input data of a command can be obtained, for example, from a file, from the terminal (the keyboard),
or from the output of another command.

The output of a command can be directed to a file, to the terminal (screen), or to the input of another
command. That is, you can redirect the output of a command to another destination or file descriptor.

Linux shells handle three I/O streams:

• 0 - stdin - Standard input: Provides input to commands. It has file descriptor 0.


• 1 - stdout - Standard output: Displays the output of the commands. It has file
descriptor 1.
• 2 - stderr - Standard Error Output: Displays the output of command errors. It has
file descriptor 2.
Let's prepare some files to practice redirects:

cd
mkdir redir
cd redir
echo -e "1 Programming\n2 Operating Systems\n3 Statistics" > x_text1
echo -e "9\tredes\n3\testadística\n10\tprogramación" > y_text2
echo -e "4\tPatatas\n5\tHuevos\n10\tCebolla" > text3

Redirect output
As seen above, there are two operators to redirect an output to a file:

• > redirects the output of one file descriptor to another output file. Creates the output file if
it does not exist. If it already exists, the contents of the output file are overwritten, usually
without warning.

• >> redirects the output of the file descriptor to another output file. Creates the output file
if it does not exist. If it already exists, the contents are appended to the output file.

With these modes, you can separate the standard output from the error output of a command. Let's look at
some basic examples:

# We send the output data to one file and the errors to another file:
ls -laR / 1>fichero_datos_salida 2>fichero_errores

# The errors can be ignored: redirect the error output (stderr) to /dev/null
ls -laR / 2>/dev/null

# If we want to ignore the command output: redirect the standard output (stdout)
to /dev/null
ls -laR / 1>/dev/null

# If we want to ignore the output and the errors: first redirect the standard output (stdout)
# to /dev/null, and then redirect the error output (stderr) to where the standard output
# points (which now points to /dev/null)
ls -laR / 1>/dev/null 2>&1

# The most common and useful way to save the output of a command (or program) and
errors is to use:
ls -laR / 1>salida_y_errores.txt 2>&1

What is /dev/null? It is a special file in Linux. It is where the information is sent to be discarded. It
represents "nothingness."

Let's continue with more examples using the files we have prepared before:

#mkdir redir
cd ~/redir
echo "Hola" > xprueba.txt
ls x* z*
ls x* z* 1>stdout.txt 2>stderr.txt
cat stdout.txt
cat stderr.txt

If the file descriptor is omitted, the default standard output stdout is taken. The default standard output
"stdout" has as file descriptor: 1.

ls x* z* >stdout.txt 2>stderr.txt

ls w* y*
ls w* y* >>stdout.txt 2>>stderr.txt

cat stdout.txt
cat stderr.txt

You can redirect the standard output and the standard error output to the same destination, using the
&> and &>> operators.

ls x* z* &>stdout_and_stderr.txt
ls w* y* &>>stdout_and_stderr.txt
The order in which outputs are redirected is important. For example:

[ls 2>&1 >salida.txt]


ls x* z* 2>&1 >salida.txt
cat salida.txt
# the error output is not included in the file

It is not the same as:

[ls >salida.txt 2>&1]


ls x* z* >salida.txt 2>&1
cat salida.txt
# in this case salida.txt IS affected by the redirect (the errors end up in the file)

• In the first case, stderr is redirected to the current destination of stdout (the terminal) and
then stdout is redirected to salida.txt. This second redirect affects only stdout, not stderr.

• In the second case, stdout is first redirected to salida.txt, and then stderr is redirected to the
current destination of stdout (which is now salida.txt), so the error appears in the file.

Note that in the first command the standard output was redirected after the standard error, therefore
the standard error output still goes to the terminal window.

# By default, the stdout and stderr outputs of a command are pointing by default
to the terminal/screen
#
# +----------+ (stdout)
# | |----> 1 (terminal, display)
# 0 >----| ls |
# (stdin) | |----> 2 (terminal, screen)
# +----------+ (stderr)
#

# Redirects stdout and stderr to the file salida.txt


ls x* z* &>salida.txt
cat salida.txt
# the stderr output is NOT shown on the screen

# Redirects stdout to the file salida.txt, and then redirects stderr to where
# stdout points (which is salida.txt)
ls x* z* >salida.txt 2>&1
cat salida.txt
# the stderr output is NOT shown on the screen

# Redirects stderr to where stdout points (terminal, screen). Then redirects stdout
# to the file salida.txt. But stderr was already pointing at the terminal (screen)!
ls x* z* 2>&1 >salida.txt # stderr does not go to salida.txt
cat salida.txt
# the stderr output IS shown on the screen

# Now we can run the following commands to see the difference


find / -user miguel | grep -vi denegado
find / -user miguel 2>&1 | grep -vi denegado

Redirect the input


You can redirect the standard input (stdin) as input to a command using the < operator.

Several examples have been seen before, similar to the following:

echo "This is a test" > text1


tr ' ' '\t' <text1
cat text1
cat < text1
Some shells, such as Bash, include a form of input redirection called here-document, which is
commonly used in scripts.

Here-documents are also used with the << operator.

Here are some examples of here-documents:

# We use a here-document to simulate a file as input for the 'sort' command. The
here-document is delimited by the END identifier.

# 'sort' is a utility that takes the file(s) that appear in its list of arguments
and sorts its lines according to parameters.

sort -k2 <<END


1 Physics
2 Programming
3 Statistics
END
# the document ends when you enter END. sort -k2 sorts alphabetically by the
second column
# It is common to create the contents of a file like this:
cat <<TORT >tortilla.txt
Potatoes
Eggs
Onion
Salt
Oil

and love :)
TORT

cat tortilla.txt

Pipes
Pipes have been used before. Basically, they serve to redirect the output of one command to the input of
another command. Commands can thus be chained together using pipes. The pipeline operator is | (AltGr
+ 1).

ls y* x* z* u* q*

ls y* x* z* u* q* 2>&1 | sort -r
# sort command: sorts. With the -r option it reverses the order

To expand...
Operator "-"

Each command in the chain can have its options or arguments. Some commands use the - (hyphen)
operator instead of a file name, when the command input must come from stdin (rather than a file).

# We prepare the file for the example


echo "Text 1" > text1
echo "Text 2" > text2
echo "Text 3" > text3
cat text1 text2 text3 > pipes.txt
cat pipes.txt

tar cvf pipes.tar pipes.txt


ls -l
bzip2 pipes.tar
ls -l

# Now we can see the operator - in action:


bunzip2 -c pipes.tar.bz2 | tar -xvf -

When pipes are used to chain several commands and one of them needs to redirect its input
(with the < operator), that input redirection is normally placed in the first command of the chain.
That is, the first command is the one that takes a file as input, and the rest of the chained
commands operate on the data read by that first command, as in the sketch below.
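
For example, a minimal sketch using the pipes.txt file created above:

# The first command reads pipes.txt through '<'; the chained commands then work on that data
tr 'a-z' 'A-Z' < pipes.txt | sort -r | head -n 2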

To expand further...
Shells in general, and Bash in particular, are a very complete set of tools, but they also have a certain
complexity. Mastering these tools is a matter of practice, study and use. If you want to expand on what is
covered in this topic, you can look up information about the following topics (a brief sketch of each follows the list):

• Using the -exec option for the find command.


• The xargs tool.
• The tee tool.
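
A brief illustrative sketch of each of these (the file name listado.txt is just an example name):

# find -exec: runs a command on each file found ({} is replaced by each path found)
find ~ -type f -name "doc*" -exec ls -l {} \;

# xargs: builds and runs a command line from its standard input
find ~ -type f -name "doc*" | xargs wc -l

# tee: writes its input to a file and also to its standard output (the screen here)
ls -l /etc | tee listado.txt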

Some helpful resources

• Learn Linux 101


• Super basic OS commands
• Linux Tutorial
• Linux Filesystem
• awk tutorial
• sed tutorial
• Developer Technologies - IBM
• TeamSpeak3 Server
