Monday 1 February 2016

Linux Basic Commands (Part-2)

Process Management
Due to the multitasking nature of the Linux operating system, many processes run at the same time, and every process has a unique ID. Alongside the OS's own processes, users' processes also run on the machine. An administrator who wants a snapshot of what is currently happening on the machine can use the top command, but top shows only a real-time view that fits on the screen. Another command, ps (short for "processes"), shows the processes running on the current terminal; with the aux arguments it shows a complete system view, which is often more helpful. System administrators usually pipe the output of ps through grep to filter it. Any process can be killed with the kill command. Examples are given below for more detail;
1.  top (shows what is currently happening with the system)
2.  ps aux | less (shows all running processes with their process IDs, including those of other users)
3.  ps aux | grep 'firefox' (the output of ps is piped through grep to search for a specific process, i.e. firefox)
4.  pstree (displays processes in tree format)
5.  kill 1234 (kills the process with ID 1234)
6.  kill -9 1234 (forcefully kills the process with ID 1234)
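The ps/grep/kill workflow above can be sketched safely against the current shell itself (nothing is actually killed here; the final kill line is shown only as a comment):

```shell
#!/bin/sh
# Find the PID of the current shell and inspect it, mirroring the
# ps | grep | kill workflow described above.
pid=$$
# -p limits ps to one PID; -o comm= prints only the command name
proc_name=$(ps -p "$pid" -o comm=)
echo "shell process $pid is running as: $proc_name"
# A real cleanup would then be:  kill "$pid"   (or kill -9 as a last resort)
```

In practice you would replace `$$` with a PID found via `ps aux | grep`, and always try a plain kill (SIGTERM) before resorting to kill -9 (SIGKILL), which gives the process no chance to clean up.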
Package Management
A package management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing OS programs in a consistent manner. The following commands handle packages on RPM-based Linux systems;
1.  rpm -qa sendmail (checks the currently installed sendmail package)
2.  rpm -qi sendmail (lists all information about the sendmail package)
3.  rpm -qd sendmail (lists the documentation files, and their locations, of a particular installed package)
4.  rpm -ivh /mnt/cdrom/mail/sendmail.rpm (installs the package from the mounted CD-ROM if it is not already installed)
5.  rpm -Uvh /mnt/cdrom/mail/sendmail.rpm (upgrades the installed package to the newer version)
6.  rpm -e sendmail (removes/uninstalls the particular package)
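rpm exists only on RPM-based distributions (Debian-family systems use dpkg/apt instead), so a portable script should check for it first. A minimal sketch, using the real rpm -qf query (which package owns a file):

```shell
#!/bin/sh
# Hedged sketch: ask which package owns /bin/ls, but only when rpm
# exists; fall back gracefully on non-RPM systems.
if command -v rpm >/dev/null 2>&1; then
    owner=$(rpm -qf /bin/ls)     # -qf: query the package owning a file
else
    owner="rpm not available on this system"
fi
echo "$owner"
```

On a Red Hat-style system this prints something like the coreutils package name; elsewhere it prints the fallback message.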
System Services Configuration
Running Linux systems have a number of background processes executing at any time. These processes, also known as services or daemons, run as part of the OS or of an application: sshd is an OS service, while httpd is an application daemon. These services are supposed to run continuously to make sure that our websites, mail, databases, and other applications are always up. Furthermore, system administrators need these services to run without failing and to start automatically after the system crashes or reboots. A reboot can be a planned restart, e.g. for a patch update, or the result of unexpected system behavior; a crash may stop the process unexpectedly and leave the application unresponsive.
These Linux services can be made largely self-healing by changing the way they are handled by service management daemons, also known as init systems. The following commands check daemon status and make services start automatically after a crash or reboot;
1.  netstat -at (checks the status of daemons listening on different TCP ports)
2.  chkconfig --level 3 bind on (the bind service will start automatically at run level 3 when the system reboots)

Device Configuration
For hardware configuration you can use the following commands;
1.  sysreport (generates a hardware list of your computer and saves it in a tar file)
2.  kbdconfig (configures the keyboard layout)
3.  mouseconfig (configures the mouse)
4.  printconfig (configures printers)
5.  timeconfig (configures the time zone)
Hardware information
Like everything else in Linux, there are commands to check the hardware information of a system;
1.  lscpu (displays CPU information)
2.  lshw (displays detailed and brief information about different hardware units)
3.  lspci (displays details about all PCI buses and attached devices)
4.  lsscsi (lists SCSI/SATA drives and optical devices)
5.  lsusb (lists USB devices and details about connected devices)
6.  free -m (shows the amount of RAM used, free, and total, in megabytes)
User Management and File Permissions/Ownerships
The control of users and groups is a core element of Linux system administration. A user may be a human being or an account used by a specific application, and is identified by a unique number called a user ID (UID). A group is an organizational unit tying users together for a common purpose; users within a group can share read, write, and execute permissions, or any combination of them, on files owned by that group. Similar to the UID, each group is associated with a group ID (GID). Each file and directory on your system is assigned access rights for the owner of the file, the members of a group of related users, and everybody else. Rights can be assigned to read a file, to write a file, and to execute a file (i.e., run the file as a program). A user who creates a file is also the owner and primary group owner of that file. The following commands create users and groups;
1.  groupadd mailusers (creates a new group named mailusers)
2.  useradd -s /bin/false -g mailusers imran (creates a new user named imran with no login shell and makes him a member of group mailusers)
3.  passwd imran (changes the password of the newly created user imran)
4.  passwd -l imran (locks the password of the user imran)
5.  passwd -u imran (unlocks the password of the user imran)
6.  userdel -r imran (deletes the user named imran along with his home directory)
7.  chmod 770 myfile.txt (changes the permissions of the file myfile.txt to read, write, and execute for owner and group)
8.  chown -R imran:mailusers /project (recursively changes ownership of the directory /project to user imran and group mailusers)
Note: I will cover Permissions topic in detail in my next blog separately.
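Until then, here is a quick sketch of what the numeric mode 770 used above actually produces (the temp file is throwaway; the digits are octal: 7 = rwx for owner, 7 = rwx for group, 0 = nothing for others):

```shell
#!/bin/sh
# Demonstrate numeric (octal) permissions: 770 = rwx for owner and
# group, no access for others.
tmp=$(mktemp)                         # throwaway temp file
chmod 770 "$tmp"
perms=$(ls -l "$tmp" | cut -c1-10)    # first 10 chars = type + mode bits
echo "$perms"                         # -rwxrwx---
rm -f "$tmp"
```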

Modules Management
The Linux kernel allows drivers and features to be compiled as modules rather than as part of the kernel itself. This means that users can often change features in the kernel or add drivers without recompiling, and that the Linux kernel doesn't have to carry a lot of unnecessary baggage. Here are a few commands for listing, inserting, and removing modules;
1.  lsmod (lists the currently loaded modules)
2.  modprobe modulename (loads a module along with its dependencies)
3.  rmmod modulename (removes a module)
Routing table and naming
The routing table guides incoming and outgoing traffic between your system and the network. The Linux kernel supports multiple routing tables. The following commands work with routing tables and name resolution;
1.  route -n (checks the routing table, showing numeric addresses)
2.  route add default gw 192.168.1.1 (adds a default gateway to your system)
3.  route del default (deletes the default gateway)
4.  route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0 (adds a route to a specific network)
5.  route del -net 192.168.1.0 netmask 255.255.255.0 dev eth0 (deletes a route to a specific network)
6.  traceroute 192.168.1.1 (traces the number of hops to a particular host)
7.  mtr 192.168.1.1 (traces and pings a particular host at the same time)
8.  host imran.mydomain.com (finds a host and its IP address on the network)
9.  nslookup imran.mydomain.com (finds a host and its IP address on the network)
10. dig imran.mydomain.com (finds a host and its IP address on the network)
11. hostname (shows the host name of your system; hostname -f prints the FQDN)
Miscellaneous
Here are some miscellaneous commands;
1.  man snmp (finds help on a topic, i.e. snmp)
2.  info snmp (another command to find help on a topic, i.e. snmp)
3.  find /usr -name "*.doc" (finds *.doc files in the /usr directory)
4.  find /usr -group mailusers (finds files and directories owned by group mailusers in the /usr directory)
5.  find / -user imran -exec rm '{}' \; (finds and deletes files owned by user imran, e.g. for security reasons after he has left the organization)
6.  uptime (checks how long the system has been up)
7.  who -b (finds out when the system was last booted)
8.  service sendmail status (checks the status of a service)
9.  cat /etc/redhat-release (shows the Red Hat version)
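The find-by-name pattern is easy to try safely in a scratch directory before pointing it at /usr or / (the directory and file names below are made up for the demo):

```shell
#!/bin/sh
# Recreate the find -name pattern in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/report.doc" "$dir/notes.txt"
mkdir "$dir/sub" && touch "$dir/sub/plan.doc"
# -name takes a glob; quote it so the shell doesn't expand it first
found=$(find "$dir" -name "*.doc" | wc -l | tr -d ' ')
echo "found $found .doc files"      # found 2 .doc files
rm -rf "$dir"
```

Quoting the pattern matters: an unquoted `*.doc` would be expanded by the shell in the current directory before find ever sees it.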





Linux Basic Commands (Part-1)
Guys! I am going to start basic Linux how-tos. I would like to begin with a basic Linux command reference guide: just a brief description of each command, without much detail. The purpose of these how-tos is to introduce Linux (i.e. Red Hat, CentOS, Fedora, etc.) commands to IT professionals who use them day to day but sometimes let them slip out of mind. This brief guide will serve as a reference for those commands.
Directory listing
To list directory contents and information about files (the current directory by default) in Linux, use the ls command with the switches described below.
1.  ls -l (use a long listing format)
2.  ls -h (human readable: print sizes in human-readable format, i.e. 1K, 234M, 2G, usually combined with -l)
3.  ls -a (show all: do not hide entries starting with .)
4.  ls -r (reverse order while sorting)
5.  ls -t (sort files by modification time)
 
Note: ls commands have many more switches but most commonly used are mentioned here.
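The effect of -a is easy to see in a scratch directory (the file names below are just for the demo):

```shell
#!/bin/sh
# Show the difference -a makes: dotfiles are hidden by default.
dir=$(mktemp -d)
touch "$dir/visible.txt" "$dir/.hidden"
plain=$(ls "$dir" | wc -l | tr -d ' ')   # .hidden is not listed
all=$(ls -a "$dir" | grep -c hidden)     # -a reveals .hidden
echo "plain listing: $plain entry, -a finds .hidden: $all"
rm -rf "$dir"
```

Switches also combine freely, e.g. `ls -lhtr` for a long, human-readable listing sorted oldest-first.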

Disk usage
The du (disk usage) command is used to check the disk usage of directories and files. The du command also has many parameters that produce results in different formats. The most common are mentioned below;
1.  du /home/userdirectory (shows the disk usage summary of a user's directory tree and each of its subdirectories)
2.  du -h /home/userdirectory (shows sizes in human-readable format)
3.  du -sh /home/userdirectory (shows the total disk usage of a particular directory)
4.  du -a /home/userdirectory (shows the disk usage of all files and directories)
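A quick sketch of what the -s summary looks like, against a throwaway directory filled with a known amount of data:

```shell
#!/bin/sh
# du -s prints one summary line per argument: "<size> <path>".
dir=$(mktemp -d)
# create an 8 KB file so the directory has a known, nonzero size
dd if=/dev/zero of="$dir/file.bin" bs=1024 count=8 2>/dev/null
summary=$(du -s "$dir")
echo "$summary"
rm -rf "$dir"
```

The size unit depends on the system's default block size; add -h (as in the list above) to get it in K/M/G instead.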

Mounting / un-mounting directories
The mount command attaches file systems and, run without arguments, lists the partitions currently mounted on your machine;
1.  mount (lists the currently mounted partitions on the machine)
2.  mount /dev/cdrom /mnt/cdrom (mounts the cdrom at the mount point /mnt/cdrom)
3.  mount -t nfs 192.168.1.1:/usr/directoryname /mnt/cdrom (mounts an NFS share, i.e. /usr/directoryname from machine 192.168.1.1, on your local mount point /mnt/cdrom)
4.  cat /etc/mtab (shows the mounted file system table)
5.  umount /mnt/cdrom (unmounts the particular device or directory)

File copying / moving / renaming / deletion / compression and decompression
The following commands are used to copy, move, rename, delete, compress, or decompress files.
1.  cp -vr /home/userdirectory/* /mnt/dir1 (copies all files including subdirectories from /home/userdirectory to /mnt/dir1)
2.  mv /home/userdirectory/* /mnt/dir1 (moves all files including subdirectories from /home/userdirectory to /mnt/dir1)
3.  rm -fr /mnt/dir1/* (forcefully deletes all files including subdirectories in /mnt/dir1)
4.  tar -cvf /tmp/mytar.tar * (creates the tar file /tmp/mytar.tar from the files in the current directory)
5.  tar -xvf /tmp/mytar.tar (extracts the file /tmp/mytar.tar into the current directory)
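A full create-then-extract round trip is a good way to convince yourself an archive is sound (the paths below are throwaway temp directories; -C tells tar which directory to work relative to):

```shell
#!/bin/sh
# Round-trip: tar -c to create an archive, tar -x to extract it.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/a.txt"
tar -cf "$dst/mytar.tar" -C "$src" .   # archive the contents of $src
tar -xf "$dst/mytar.tar" -C "$dst"     # extract into $dst
restored=$(cat "$dst/a.txt")
echo "$restored"                       # hello
rm -rf "$src" "$dst"
```

Add v to either invocation (as in the list above) to see each file name as it is archived or extracted.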
Disk Checking and Formatting
fdisk is a text-based utility used in Linux distributions to manage disk partitions. Using fdisk you can create new partitions and delete or change existing partitions on the machine. You can create a maximum of 4 primary partitions and any number of logical partitions, depending on disk size. The following commands are used;
1.  fdisk -l (views all available partitions on the machine)
2.  fdisk -l /dev/sda (shows the partitions of a specific drive, i.e. sda)
3.  fdisk -s /dev/sda (shows the size of the drive in blocks)
4.  e2fsck /dev/sda1 (performs a file system check on a partition; run it only while the partition is unmounted)
5.  e2fsck -p /dev/sda1 (performs a file system check and automatically repairs problems)


Sunday 31 January 2016

Apache vs Nginx
Apache and Nginx are the two most common open source web servers in the world. Together, they are responsible for serving over 50% of traffic on the internet. Both solutions are capable of handling diverse workloads and working with other software to provide a complete web stack.
While Apache and Nginx share many qualities, they should not be thought of as entirely interchangeable. Each excels in its own way and it is important to understand the situations where you may need to reevaluate your web server of choice. This article will be devoted to a discussion of how each server stacks up in various areas.
General Overview
Before we dive into the differences between Apache and Nginx, let's take a quick look at the background of these two projects and their general characteristics.
Apache
The Apache HTTP Server was created by Robert McCool in 1995 and has been developed under the direction of the Apache Software Foundation since 1999. Since the HTTP web server is the foundation's original project and is by far their most popular piece of software, it is often referred to simply as "Apache".
The Apache web server has been the most popular server on the internet since 1996. Because of this popularity, Apache benefits from great documentation and integrated support from other software projects.
Apache is often chosen by administrators for its flexibility, power, and widespread support. It is extensible through a dynamically loadable module system and can process a large number of interpreted languages without connecting out to separate software.
Nginx
In 2002, Igor Sysoev began work on Nginx as an answer to the C10K problem, a challenge for web servers to handle ten thousand concurrent connections as a requirement for the modern web. The initial public release was made in 2004, meeting this goal by relying on an asynchronous, event-driven architecture.
Nginx has grown in popularity since its release due to its light-weight resource utilization and its ability to scale easily on minimal hardware. Nginx excels at serving static content quickly and is designed to pass dynamic requests off to other software that is better suited for those purposes.
Nginx is often selected by administrators for its resource efficiency and responsiveness under load. Advocates welcome Nginx's focus on core web server and proxy features.


Saturday 30 January 2016

Microsoft Windows Containers

Microsoft Windows Containers, also known as Windows Server Containers, are isolated environments in Windows Server 2016 that separate services or applications running on the same container host.
A container host can run one or more Windows Containers. Using a technique called namespace isolation, the host gives each container a virtualized namespace that grants the container access only to resources it should see. This restricted view prevents containers from accessing or interacting with resources outside their virtualized namespaces and makes each container believe it is the only application running on the system. The host also controls how much of its resources can be used by individual containers: it can limit CPU usage to a certain percentage that applications cannot exceed and allocate the remaining percentage to other containers or to itself.
Containers are deployed from images, which cannot be modified. When a container image is created, it can be stored in a local, public, or private repository. Containers can, however, be interconnected to create larger applications, which allows for a different, more scalable way of architecting them.
Windows Containers can integrate with existing Windows technologies like .NET and ASP.NET. They can be created and managed with either PowerShell or Docker, but containers created with one tool currently can't be managed with the other. Windows Containers can also be created and managed in Azure.
Windows Containers became available for the first time in the third technical preview of Windows Server 2016 and will be integrated into the final release in 2016. Nano Server, a lightweight installation method for Windows Server, is optimized for Windows Containers and Hyper-V Containers.



Microsoft Client Hyper-V

Microsoft Client Hyper-V is a type-1 hypervisor for the Windows 8.x and Windows 10 operating systems (OSes) that allows users to run multiple operating systems inside a virtual machine (VM).

Microsoft introduced Client Hyper-V in 2012 with the release of Windows 8 as a replacement for the type-2 hypervisor Windows Virtual PC.
Developers and IT professionals can use Client Hyper-V to build a test environment. A developer can create a VM hosted on a laptop and then export it to the Windows Server production environment once it has passed inspection. Client Hyper-V can also be used to test software on multiple OSes by creating separate VMs for each.
When a user enables Client Hyper-V, Hyper-V Manager is also installed. Hyper-V Manager creates and manages VMs; it also has switch capabilities to connect a VM to an external network connection.
There are some limitations to Client Hyper-V compared with the server version of Hyper-V. Client Hyper-V does not support Hyper-V Replica, Virtual Fibre Channel, VM live migration, SR-IOV networking, or RemoteFX capability.
Client Hyper-V can only be enabled on 64-bit versions of Windows 10 or on Windows 8.x Pro or Enterprise editions. For hardware, Client Hyper-V requires a 64-bit processor with second-level address translation (SLAT), CPU support for VM Monitor Mode Extension, and 4 GB of RAM.

Making the most of Hyper-V live migration

Hyper-V live migration delivers flexibility in a demanding environment, but administrators should be aware of ways to optimize the process.

One of the key benefits of virtualization with Hyper-V is the added flexibility it provides. This can be essential for an organization where important workloads cannot be down for any reason. As Microsoft's virtualization platform has matured, the company has buttressed its appeal to businesses by adding Hyper-V Replica for disaster recovery and Hyper-V live migration to allow virtual machines to continue to operate even while moving between hosts in a cluster. 
Hyper-V live migration debuted in Windows Server 2008 and was further refined in Windows Server 2012 R2, which allows VMs to migrate to other hosts without requiring shared storage. For a growing company that wants to add faster servers but needs to keep workloads available, Hyper-V live migration provides that capability with the added benefit of avoiding the expenses associated with shared storage. For a small to medium-sized company that needs to do maintenance on a cluster of Hyper-V hosts, live migration is invaluable for shifting a workload outside the cluster to keep it running on another host. 
How well do you know Microsoft's Hyper-V migration features? This guide can help illuminate some of the advanced features and best practices associated with live migrations.
Tweaks to keep VMs from decelerating
Pushing workloads to another host using Hyper-V live migration gives the IT staff a chance to complete maintenance and other tasks, but it's also important for mission-critical VMs to maintain a consistent level of performance no matter where they run. TCP chimney offload and processor compatibility mode are two features Hyper-V uses to help smooth out lags in the migration process.
Authentication choices for live migrations
Administrators who need to perform a Hyper-V live migration have two choices to authenticate a sign-on: Kerberos or Credential Security Support Provider (CredSSP). The size of your organization -- or whether there are several administrators who require remote access -- may determine the authentication protocol you decide to use. Kerberos works remotely and can be used in conjunction with remote management tools. While CredSSP is not complex to use, it requires a local login to the server where the migration will start.
To squeeze or not to squeeze?
Administrators can adjust data transfer performance during Hyper-V live migrations by using either compression or an uncompressed TCP/IP transfer mode. The advantage of compression is there is less data -- and fewer packets -- to transmit across the network when shuffling a VM between hosts. But the work required to perform this process means servers must use their CPU resources to both shrink and then expand the data to complete the migration. Using uncompressed TCP/IP transfer mode to copy the VM's memory space directly to the destination server can slow network traffic and affect connected systems.
Weighing performance versus bandwidth needs
When bouncing VMs to different hosts around your data center, this process can affect the rest of your environment if bandwidth is not being used efficiently. There are a few ways to limit the effects on the rest of the network such as using a dedicated network segment and regulating the number of live migrations occurring at the same time. In the advanced live migration feature settings, administrators can make adjustments to optimize both workload performance and bandwidth use.
Putting PowerShell cmdlets to use
PowerShell cmdlets continue to evolve and provide another avenue for administrators to perform tasks from the command line rather than a GUI interface. For those in IT who prefer to script certain workflows, such as a Hyper-V live migration, Microsoft has developed cmdlets tailored for this purpose.
How SMB made live migrations possible
One key development in Windows Server 2012 R2 was an upgrade to the Server Message Block (SMB) protocol. Better performance and enhanced bandwidth management in SMB 3.02 paved the way for Hyper-V live migrations. The RDMA network acceleration functionality in SMB moves VMs from one host to the next in a more timely fashion, making the process more seamless than before.

Tuesday 26 January 2016

Using PowerShell DSC to construct a Hyper-V host

Administrators versed in PowerShell can streamline the process of using a server as a Hyper-V host.

While Hyper-V lags behind VMware in market share, Microsoft's hypervisor has been slowly catching on. Organizations love the price point -- sometimes free -- and the ever-increasing Hyper-V platform feature set. These advances have spurred administrators to deploy an increasing number of Hyper-V hosts.
We can set up a Hyper-V host on an existing server running Windows Server 2012 R2 using automation to make the process repeatable. This tutorial will explain how to use Windows PowerShell, specifically PowerShell Desired State Configuration (DSC), to execute the deployment.
What does it take to build a Hyper-V host from an existing Windows server? For our purposes, it takes three things: a couple of Windows features, a directory to hold VMs and a switch to connect the VMs to the network.
Building the framework
When starting any PowerShell DSC project, begin with a single .ps1 PowerShell script file and a configuration block inside. We will call the configuration HyperVBuild.
Configuration HyperVBuild {

}
A configuration is similar to a function; it has code inside that can be executed at will, but it behaves a little differently. After the configuration keyword, what typically follows is a parameter block, just like in a PowerShell function. A common parameter is $NodeName; this parameter designates the computer to which the configuration will be applied. In this case, we default to localhost because the script resides on the server where we want to build the Hyper-V host.
Add the node block and assign the value of the $NodeName parameter in there. This tells Windows to create a configuration for the computer name specified in $NodeName.
Configuration HyperVBuild {
     param(
           [string]$NodeName = 'localhost'
     )

     node $NodeName {

     }
}
Add a dash of Hyper-V
After this framework is in place, we need to import a module called xHyper-V. Since the creation of a Hyper-V switch is required, this PowerShell DSC module is needed to get that functionality. The xHyper-V module doesn't come with Windows, but it can be downloaded from Microsoft's GitHub repository. Place it into the C:\Program Files\WindowsPowerShell\Modules directory; it will then be available to all subsequent scripts.
Use the Import-DscResource cmdlet and specify the module name xHyper-V.
Configuration HyperVBuild {
     param(
           [string]$NodeName = 'localhost'
     )
     Import-DscResource –ModuleName xHyper-V
     node $NodeName {

     }
}
We can now begin adding the necessary resources. First, add the Hyper-V Windows feature. This uses a PowerShell DSC resource that is built into Windows. Create a line starting with the resource name of WindowsFeature followed with a label to represent the resource. In this instance, we will call it Hyper-V.
Then, inside the block, use the Ensure attribute and set that to Present to install the Windows feature and also the name of the Windows feature to install when the configuration is run.
WindowsFeature 'Hyper-V' {
     Ensure = 'Present'
     Name = 'Hyper-V'
}
Next, we need to ensure the Hyper-V-PowerShell feature is installed, so we'll create another block.
WindowsFeature 'Hyper-V-Powershell' {
     Ensure = 'Present'
     Name = 'Hyper-V-Powershell'
}
Next, we need a folder to hold the VMs, so we need to ensure one is created at C:\VMs. This uses the File resource to create the folder; the File resource has a Type attribute to indicate either a file or a directory.
File VMsDirectory {
     Ensure = 'Present'
     Type = 'Directory'
     DestinationPath = "$($env:SystemDrive)\VMs"
}
For the last feature, use a resource from the xHyper-V module called xVMSwitch to create and configure Hyper-V switches. Make the Type Internal to create a network for the VMs that can't communicate outside of the host.
xVMSwitch LabSwitch {
     DependsOn = '[WindowsFeature]Hyper-V'
     Name = 'LabSwitch'
     Ensure = 'Present'
     Type = 'Internal'
}
Notice the DependsOn attribute. This is a common attribute across all resources that allows you to set the order in which resources are executed. In this example, we ensure the Hyper-V Windows feature is installed before attempting to create the switch.
You should now have a configuration that looks something like this:
configuration HyperVBuild
{
     param (
           [string]$NodeName = 'localhost'
     )
     Import-DscResource -ModuleName xHyper-V
     node $NodeName {
           WindowsFeature 'Hyper-V' {
                Ensure='Present'
                Name='Hyper-V'
           }
           WindowsFeature 'Hyper-V-Powershell' {
                Ensure='Present'
                Name='Hyper-V-Powershell'
           }
           File VMsDirectory
           {
                Ensure = 'Present'
                Type = 'Directory'
                DestinationPath = "$($env:SystemDrive)\VMs"
           }
           xVMSwitch LabSwitch {
                DependsOn = '[WindowsFeature]Hyper-V'
                Name = 'LabSwitch'
                Ensure = 'Present'
                Type = 'Internal'
            }
      }
}
Now that the configuration has been built, generate the MOF files that will be applied to the system when configuration starts. To do this, execute the configuration block just like a PowerShell function by calling it by the name HyperVBuild. This creates a folder of the same name that contains an MOF file called localhost.mof.
The final part is to apply the configuration to the local machine. To do that, use the Start-DscConfiguration cmdlet and use a few different parameters.
Start-DscConfiguration -Path .\HyperVBuild -Wait -Force
The first parameter is Path, which points Start-DscConfiguration to the folder with the MOF file. Next, the -Wait parameter makes Start-DscConfiguration wait for completion before releasing control to the console. Finally, the -Force parameter tells Start-DscConfiguration to "push" the configuration rather than pull it. The push/pull scenario is further explained in detail in this article.
Once Start-DscConfiguration starts, you may see some messages on the console as it progresses through the configuration. If everything goes well, a few minutes later you will have a brand new Hyper-V server ready for VMs.