Questions PDF

The document provides instructions on how to extend the root partition and filesystem in Linux using different methods. It discusses extending the root partition size by deleting and recreating the partition with fdisk, touching a forcefsck file to trigger filesystem check on reboot, and using df to check the new size. It also covers extending the filesystem with LVM by adding a physical volume, extending the volume group and logical volume, and resizing the filesystem. Instructions are given for extending the filesystem without LVM by increasing the disk and partition size with fdisk and parted, then resizing the filesystem with resize2fs.


 How to extend the root partition in Linux when it is full

First check the current size of the root filesystem:
# df -h .
However, fdisk reports that the underlying disk is 20GB:
# fdisk -l
Start fdisk in interactive mode on the disk and print the current partition table:
# fdisk /dev/xvda
Command (m for help): p
Take note of the starting sector, 4096. Still in fdisk's interactive mode, delete the existing partition:
Command (m for help): d
Selected partition 1
Partition 1 has been deleted.
Next, create a new partition on top of the previous one and make sure it uses the same starting sector:
Command (m for help): n
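A rough sketch of the dialog that follows (prompts differ slightly between fdisk versions; the essential part is to type the starting sector noted earlier, 4096, and to accept the default last sector so the partition grows to the end of the disk):
Partition type: p (primary)
Partition number (1-4, default 1): 1
First sector (..., default ...): 4096
Last sector, +sectors or +size{K,M,G} (..., default ...): <press Enter to accept the default>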
Make partition 1 bootable and print the new partition table:
Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.

Command (m for help): p


Confirm all new details and write new partition table:
Command (m for help): w
At this point the system needs to be rebooted so that the root partition is remounted with its new size. Force an fsck on the next reboot to ensure the partition is checked before it is mounted. To do so, just create an empty file called forcefsck in the root of the / partition:
# touch /forcefsck
Reboot your system. Once the system is up again check the partition size:
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 644M 19G 4% /
You can also confirm the time of the last fsck check:
# tune2fs -l /dev/xvda1

 How to extend the filesystem in Linux with LVM


Identify the new disk/partition and create a physical volume on it:
# fdisk -l /dev/sda
# pvcreate /dev/sda1
# pvs
Extending the Volume Group
# vgextend vg_tecmint /dev/sda1
# vgs
# pvscan
# vgdisplay
Extending the Logical Volume and resizing the filesystem
# lvextend -l +4607 /dev/vg_tecmint/LogVol01
# resize2fs /dev/vg_tecmint/LogVol01
# lvdisplay
# vgdisplay
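A possible shortcut, not part of the original steps: recent LVM versions can grow the filesystem in the same operation via the -r (--resizefs) flag, which calls the appropriate filesystem resize tool for you:
# lvextend -r -l +4607 /dev/vg_tecmint/LogVol01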
Reducing Logical Volume (LVM)
Example: reducing a logical volume from 18GB to 10GB
# lvs
# df -h
# umount -v /mnt/tecmint_reduce_test/
# e2fsck -ff /dev/vg_tecmint_extra/tecmint_reduce_test
# resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test 10G
# lvreduce -L -8G /dev/vg_tecmint_extra/tecmint_reduce_test
# lvdisplay vg_tecmint_extra
# mount /dev/vg_tecmint_extra/tecmint_reduce_test /mnt/tecmint_reduce_test/
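The same shortcut exists for shrinking: on recent LVM versions, lvreduce -r runs the filesystem check and resize before reducing the logical volume. Shrinking can destroy data if anything goes wrong, so take a backup first. A sketch using the names and sizes from the example above:
# umount -v /mnt/tecmint_reduce_test/
# lvreduce -r -L 10G /dev/vg_tecmint_extra/tecmint_reduce_test
# mount /dev/vg_tecmint_extra/tecmint_reduce_test /mnt/tecmint_reduce_test/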
 How to extend the filesystem in Linux without LVM
If your VM is not using LVM, the steps you’ll perform are:
1. Extend the size of the physical disk
2. Extend the size of the partition on the physical disk
3. Do an online resize of the filesystem to use the new space
Resizing a root filesystem without LVM
[root@temeria ~]# parted
If you find that you have a swap partition (#3) after the root filesystem, there is a workaround. Exit parted and run "swapoff -a" to turn off all swap. Then go back into parted and delete the swap partition; if it was number 3, you'd run parted and type "rm 3". After this, edit /etc/fstab and remove the line pointing to the swap partition so your system doesn't try to activate it at the next boot. Once you've done this, you should be able to continue with the rest of this process.
[root@temeria ~]# df -k
Increase disk allocation in the hypervisor
The first thing you need to do is increase the size of your existing disk.

[root@temeria ~]# parted


[root@temeria ~]# fdisk /dev/sda
Command (m for help): p
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (39-3133, default 39):
Using default value 39
Last cylinder, +cylinders or +size{K,M,G} (39-3133, default 3133):
Using default value 3133
Command (m for help): p
Command (m for help): w
[root@temeria ~]# resize2fs /dev/sda2
[root@temeria ~]# df -h
# fsck.ext3 -fy /dev/sdb1
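An alternative worth knowing, assuming your distribution ships the cloud-utils-growpart package (this is not part of the original write-up): growpart can extend the partition in place instead of deleting and recreating it with fdisk, after which the online filesystem resize is the same:
# growpart /dev/sda 2
# resize2fs /dev/sda2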

 SU & Sudo
su and sudo differ in a key way: su switches you to the root user account and requires the root account's password, while sudo runs a single command with root privileges without switching to the root user and without a separate root password (it authenticates you with your own password, as permitted by /etc/sudoers).
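A quick illustration (assuming your account is permitted in /etc/sudoers):
$ su -                           # asks for the root password, opens a full root login shell
$ sudo systemctl restart sshd    # asks for your own password, runs just this one command as root
$ sudo -i                        # sudo's way of opening a root shell, still using your password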
 What is the background process of SSH
Normal Process
Normal processes are those whose life span is tied to a session. They are started during the session as foreground processes and end after some time or when the session is logged out. Their owner can be any valid user of the system, including root.
Orphan Process
Orphan processes are those that initially had a parent which created them, but the parent process later died or crashed unintentionally, so init becomes the parent of the process. Such processes have init as their immediate parent, and init waits on them until they exit.
Daemon Process
Daemons are intentionally orphaned processes: processes that are deliberately left running on the system. They are usually long-running processes which, once started, detach from any controlling terminal so they can run in the background until they complete or exit with an error. The parent of such a process exits on purpose, leaving the child to execute in the background; sshd, the SSH server, is a typical example.
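A minimal sketch of intentionally orphaning a process from the shell (my-long-task.sh is a made-up script name; real daemons such as sshd do the equivalent internally with fork() and setsid()):
$ setsid my-long-task.sh </dev/null >/tmp/my-long-task.log 2>&1 &
$ ps -ef | grep "[m]y-long-task"    # the PPID column shows 1 (init/systemd) and the TTY column shows "?"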

 NFS Configuration
NFS server port: 2049 (TCP/UDP). The RPC portmapper (portmap/rpcbind), which NFS relies on, listens on port 111.

[root@nfsserver ~]# yum install nfs-utils nfs-utils-lib


[root@nfsserver ~]# yum install portmap (not required with NFSv4)
[root@nfsserver ~]# apt-get install nfs-kernel-server nfs-common      (Debian/Ubuntu equivalent)
[root@nfsserver ~]# /etc/init.d/portmap start
[root@nfsserver ~]# /etc/init.d/nfs start
[root@nfsserver ~]# chkconfig --level 35 portmap on
[root@nfsserver ~]# chkconfig --level 35 nfs on

Setting Up the NFS Server


[root@nfsserver ~]# mkdir /nfsshare
[root@nfsserver ~]# vi /etc/exports
/nfsshare 192.168.0.101(rw,sync,no_root_squash)
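After saving /etc/exports, re-export the share so the change takes effect without restarting the service (standard exportfs usage):
[root@nfsserver ~]# exportfs -ra
[root@nfsserver ~]# exportfs -v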

Setting Up the NFS Client


[root@nfsclient ~]# showmount -e 192.168.0.100
Export list for 192.168.0.100:
/nfsshare 192.168.0.101

Mount Shared NFS Directory


[root@nfsclient ~]# mount -t nfs 192.168.0.100:/nfsshare /mnt/nfsshare
[root@nfsclient ~]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.0.100:/nfsshare on /mnt type nfs (rw,addr=192.168.0.100)
[root@nfsclient ~]# vi /etc/fstab
192.168.0.100:/nfsshare /mnt nfs defaults 0 0
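To confirm the fstab entry without a reboot (a quick sanity check, not part of the original steps):
[root@nfsclient ~]# mount -a                  # mounts anything in /etc/fstab that is not already mounted
[root@nfsclient ~]# mount | grep nfsshare     # the NFS share should be listed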

Test the Working of NFS Setup


[root@nfsserver ~]# cat > /nfsshare/nfstest.txt
This is a test file to test the working of NFS server setup.
[root@nfsclient]# ll /mnt/nfsshare
total 4
-rw-r--r-- 1 root root 61 Sep 21 21:44 nfstest.txt
[root@nfsclient ~]# cat /mnt/nfsshare/nfstest.txt
This is a test file to test the working of NFS server setup.
Removing the NFS Mount
[root@nfsclient ~]# umount /mnt/nfsshare
[root@nfsclient ~]# df -h -t nfs      (on Linux, df filters by filesystem type with -t; use nfs4 for NFSv4 mounts)
 NIC Bonding
Bonding (or channel bonding) is a technology, enabled by the Linux kernel and Red Hat Enterprise Linux, that allows administrators to combine two or more network interfaces to form a single, logical "bonded" interface for redundancy or increased throughput. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, they may provide link-integrity monitoring.

The two important reasons to create an interface bond are:


1. To provide increased bandwidth
2. To provide redundancy in the face of hardware failure
Bonding modes (mode number, policy, how it works, fault tolerance, load balancing):

Mode 0 - Round Robin: packets are sequentially transmitted/received through each interface, one by one. Fault tolerance: No. Load balancing: Yes.
Mode 1 - Active Backup: one NIC is active while the other NIC is asleep; if the active NIC goes down, another NIC becomes active. Only supported in x86 environments. Fault tolerance: Yes. Load balancing: No.
Mode 2 - XOR [exclusive OR]: the MAC address of the slave NIC is matched against the incoming request's MAC, and once this connection is established the same NIC is used to transmit/receive for that destination MAC. Fault tolerance: Yes. Load balancing: Yes.
Mode 3 - Broadcast: all transmissions are sent on all slaves. Fault tolerance: Yes. Load balancing: No.
Mode 4 - Dynamic Link Aggregation: aggregated NICs act as one NIC, which results in higher throughput, and also provides failover in case a NIC fails. Dynamic Link Aggregation requires a switch that supports IEEE 802.3ad. Fault tolerance: Yes. Load balancing: Yes.
Mode 5 - Transmit Load Balancing (TLB): outgoing traffic is distributed depending on the current load on each slave interface; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. Fault tolerance: Yes. Load balancing: Yes.
Mode 6 - Adaptive Load Balancing (ALB): unlike Dynamic Link Aggregation, Adaptive Load Balancing does not require any particular switch configuration. It is only supported in x86 environments. Receive load balancing is achieved through ARP negotiation. Fault tolerance: Yes. Load balancing: Yes.
Creating the Network Bonding using nmcli
1. Use the nmcli connection command without any arguments to view the existing network connections. You can
shorten the “connection” argument to “con“. Example:
# nmcli connection
2. Include the “add type bond” arguments, and any additional information, to create a network bond connection. The following example creates a bond connection named bond0 on interface bond0, sets the mode to “active-backup“, and assigns an IP address to the bonded interface.
# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup ip4 192.168.219.150/24
Connection 'bond0' (1a75eef0-f2c9-417d-81a0-fabab4a1531c) successfully added.
# nmcli connection
3. The ‘nmcli con add type bond’ command creates an interface configuration file in the
/etc/sysconfig/network-scripts directory. For example:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS=mode=active-backup
BONDING_MASTER=yes
BOOTPROTO=none
IPADDR=192.168.219.150
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=bond0
UUID=1a75eef0-f2c9-417d-81a0-fabab4a1531c
ONBOOT=yes
Creating the Slave Interfaces
# nmcli con add type bond-slave ifname ens33 master bond0
Connection 'bond-slave-ens33' (79c40960-6b2c-47ba-a417-988332affed1) successfully added.
# nmcli con add type bond-slave ifname ens37 master bond0
Connection 'bond-slave-ens37' (46222a52-f2ae-4732-bf06-ef760aea0d7b) successfully added.
# nmcli connection

# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-ens33
TYPE=Ethernet
NAME=bond-slave-ens33
UUID=79c40960-6b2c-47ba-a417-988332affed1
DEVICE=ens33
ONBOOT=yes
MASTER=bond0
SLAVE=yes

# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-ens37
TYPE=Ethernet
NAME=bond-slave-ens37
UUID=46222a52-f2ae-4732-bf06-ef760aea0d7b
DEVICE=ens37
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Activating the Bond

1. Bring up the slave connections:
# nmcli connection up bond-slave-ens33
# nmcli connection up bond-slave-ens37
2. The following command brings up the bond0 interface:
# nmcli con up bond0
# ip link
# cat /proc/net/bonding/bond?
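A quick way to exercise failover in active-backup mode, reusing the connection names created above (the "Currently Active Slave" field appears in /proc/net/bonding/bond0 for this mode):
# grep "Currently Active Slave" /proc/net/bonding/bond0
# nmcli connection down bond-slave-ens33        # simulate losing the active slave
# grep "Currently Active Slave" /proc/net/bonding/bond0    # should now show the other interface
# nmcli connection up bond-slave-ens33          # restore it afterwards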

 NIC Teaming
# rpm -qa | grep teamd
teamd-1.27-4.el7.x86_64
# nmcli con show
Create the teaming interface:
# nmcli con add type team con-name myteam0 ifname team0 config '{ "runner": {"name": "loadbalance"}}'
[10655.288431] IPv6: ADDRCONF(NETDEV_UP): team0: link is not ready
[10655.306955] team0: Mode changed to "loadbalance"
Connection 'myteam0' (ab0a5f7b-2547-4d4f-8fc8-834030839fc1) successfully added.
# cat /etc/sysconfig/network-scripts/ifcfg-myteam0
DEVICE=team0
TEAM_CONFIG="{ \"runner\": {\"name\": \"loadbalance\"}}"
DEVICETYPE=Team
NAME=myteam0
ONBOOT=yes

Add an IPv4 configuration:


In RHEL 7.0:
# nmcli con mod myteam0 ipv4.addresses "192.168.1.10/24 192.168.1.1"
# nmcli con mod myteam0 ipv4.method manual
From RHEL 7.1 on:
# nmcli con mod myteam0 ipv4.addresses 192.168.1.10/24
# nmcli con mod myteam0 ipv4.gateway 192.168.1.1
# nmcli con mod myteam0 ipv4.method manual
# nmcli con add type team-slave con-name team0-slave0 ifname eth0 master team0
[10707.777803] team0: Port device eth0 added
# cat /etc/sysconfig/network-scripts/ifcfg-team0-slave0
NAME=team0-slave0
DEVICE=eth0
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort

# nmcli con add type team-slave con-name team0-slave1 ifname eth1 master team0
[10750.419419] team0: Port device eth1 added
# cat /etc/sysconfig/network-scripts/ifcfg-team0-slave1
NAME=team0-slave1
DEVICE=eth1
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort

# nmcli con up myteam0


# nmcli con show
# teamdctl team0 state
# teamdctl team0 config dump
# teamnl team0 ports
# nmcli con reload

 UMASK
UMASK (user file-creation mask) determines the default permissions a new file or directory receives when it is created on a Linux machine: it is a mask of permission bits that are removed from the base permissions at creation time. Most Linux distros use 022 (0022) as the default UMASK. In other words, it sets the system default permissions for newly created files and directories.
When a user creates a file or directory under Linux or UNIX, it is created with a default set of permissions. In most cases the system defaults are open or relaxed for file-sharing purposes. For example, if a text file has 666 permissions, it grants read and write permission to everyone; similarly, a directory with 777 permissions grants read, write, and execute permission to everyone.
Calculating The Final Permission For FILES
You can simply subtract the umask from the base permissions to determine the final permission for a file, as follows:
666 - 022 = 644
 File base permissions : 666
 umask value : 022
 Subtract to get permissions of the new file (666 - 022) : 644 (rw-r--r--)
Calculating The Final Permission For DIRECTORIES
You can simply subtract the umask from the base permissions to determine the final permission for a directory, as follows:
777 - 022 = 755
 Directory base permissions : 777
 umask value : 022
 Subtract to get permissions of the new directory (777 - 022) : 755 (rwxr-xr-x)
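A quick check on a live system (demo-file and demo-dir are throwaway names; the results shown assume the common 0022 umask):
$ umask                                  # prints the current mask, e.g. 0022
$ touch demo-file && ls -l demo-file     # -rw-r--r-- (666 - 022 = 644)
$ mkdir demo-dir && ls -ld demo-dir      # drwxr-xr-x (777 - 022 = 755)
$ umask 027                              # set a stricter mask for the current shell only
Strictly speaking, the kernel computes mode & ~umask (a bitwise operation), so the subtraction shortcut above only works for simple masks such as 022 or 002.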

 Cron Jobs
Cron is one of Linux’s most useful tools and a developer favorite because it allows you to run automated commands
at specific periods, dates, and intervals using both general-purpose and task-specific scripts. Given that description,
you can imagine how system admins use it to automate backup tasks, directory cleaning, notifications, etc.
Cron jobs run in the background and constantly check the /etc/crontab file, and the /etc/cron.*/ and /var/spool/cron/
directories. The cron files are not supposed to be edited directly and each user has a unique crontab.
Types of cron configuration files
There are different types of configuration files:
1. The UNIX / Linux system crontab: usually used by system services and critical jobs that require root-like privileges. The sixth field (see below for the field description) is the name of the user the command runs as. This gives the system crontab the ability to run commands as any user.
2. The user crontabs: users can install their own cron jobs using the crontab command. Here the sixth field is the command to run, and all commands run as the user who created the crontab:
$ crontab -e
The syntax is:
1 2 3 4 5 /path/to/command arg1 arg2
or, for example:
1 2 3 4 5 /root/backup.sh
Where,
 1: Minute (0-59)
 2: Hours (0-23)
 3: Day of the month (1-31)
 4: Month (1-12 [12 == December])
 5: Day of the week (0-7 [7 or 0 == Sunday])
 /path/to/command - Script or command name to schedule
Run backup cron job script
# crontab -e
Append the following entry:
0 3 * * * /root/backup.sh
Save and close the file.
Run /path/to/unixcommand at 4:05 (five minutes past four) every Sunday:
5 4 * * sun /path/to/unixcommand
List all your cron jobs
# crontab -l
# crontab -u username -l
To remove or erase all crontab jobs, use the following commands:
## Delete the current user's cron jobs ##
crontab -r
## Delete jobs for a specific user (must be run as root) ##
crontab -r -u username
Special strings to save time
Instead of the first five fields, you can use one of eight special strings. This not only saves typing time but also improves readability.
Special string Meaning
@reboot Run once, at startup.
@yearly Run once a year, “0 0 1 1 *”.
@annually (same as @yearly)
@monthly Run once a month, “0 0 1 * *”.
@weekly Run once a week, “0 0 * * 0”.
@daily Run once a day, “0 0 * * *”.
@midnight (same as @daily)
@hourly Run once an hour, “0 * * * *”.
Examples
Run the ntpdate command every hour:
@hourly /path/to/ntpdate
Make a backup every day:
@daily /path/to/backup/script.sh
How do I back up installed cron job entries?
Simply type the following commands to back up your cron jobs to files on a NAS server mounted at /nas01/backup/cron/:
# crontab -l > /nas01/backup/cron/users.root.bakup
# crontab -u userName -l > /nas01/backup/cron/users.userName.bakup
crond and cron job log files
You can use the cat, grep, or tail commands to view the crond log file. For example:
cat /var/log/cron
tail -f /var/log/cron
grep "my-script.sh" /var/log/cron
Find out if daily backups jobs running or not on FreeBSD Unix server:
$ sudo grep '/usr/local/bin/rsnapshot daily' /var/log/cron
On a modern Linux distro you can use the systemctl or journalctl commands (the unit is named crond on RHEL/CentOS/Fedora and cron on Debian/Ubuntu):
sudo systemctl status cron
sudo journalctl -u cron
sudo journalctl -u cron | grep backup-script.sh
 How to generate a core dump in Linux
1) cd /etc
2) vi profile
3) Find the line "#ulimit -S -c 0 > /dev/null 2>&1" and remove the "#"
4) Change the core size limit from 0 to unlimited, so the line reads "ulimit -S -c unlimited > /dev/null 2>&1" (leaving it at 0 would keep core dumps disabled)
5) Save and exit (:wq)
6) Log in with a user ID
7) Execute "cat"
8) Press "Ctrl + \" (hit Ctrl and \ together) to send SIGQUIT
9) A core dump appears in your current directory
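A quick way to verify the setup (a sketch; where the core file lands and what it is called depend on /proc/sys/kernel/core_pattern and on whether a handler such as systemd-coredump or abrt intercepts dumps):
$ ulimit -c                              # should report "unlimited" in the new login shell
$ cat /proc/sys/kernel/core_pattern      # shows how the kernel names/routes core files
$ sleep 1000 &
$ kill -QUIT %1                          # SIGQUIT, the same signal as Ctrl + \
$ ls -l core*                            # a core file should have appeared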

 Single User/Emergency Mode and Rescue Mode

Rescue mode typically runs off a ramdisk with fewer commands available. It's not a full copy of your installed OS; it boots cleanly so that you can trust the utilities. That could be from a DVD, a USB device, or from a separate GRUB menu entry with a known good kernel + initrd image; most likely it's from a boot disk. You use this when something is so messed up that you can't get the OS to boot at all.

Single-user mode boots from your normal installation but skips all of the things that make the OS boot into multiuser. So, no X, no network services to speak of (like nfsd or samba), and potentially no network connectivity at all. You'll have access to the full suite of utilities, all filesystems, and drivers, and can manually start any applications. You'd typically use this for filesystem corruption on critical mount points, forgotten passwords you need to reset, or problems with graphical drivers that cause X not to start correctly.

Rescue mode = known good kernel, reduced functionality.


Single-user mode = default system kernel, full functionality, but most things don't start automatically. Roughly analogous to Windows' "Safe Mode", except that most/all of the drivers still load while system services largely don't.
