Sunday 31 January 2016

Apache vs Nginx
Apache and Nginx are the two most common open source web servers in the world. Together, they are responsible for serving over 50% of traffic on the internet. Both solutions are capable of handling diverse workloads and working with other software to provide a complete web stack.
While Apache and Nginx share many qualities, they should not be thought of as entirely interchangeable. Each excels in its own way and it is important to understand the situations where you may need to reevaluate your web server of choice. This article will be devoted to a discussion of how each server stacks up in various areas.
General Overview
Before we dive into the differences between Apache and Nginx, let's take a quick look at the background of these two projects and their general characteristics.
Apache
The Apache HTTP Server was created by Robert McCool in 1995 and has been developed under the direction of the Apache Software Foundation since 1999. Since the HTTP web server is the foundation's original project and is by far their most popular piece of software, it is often referred to simply as "Apache".
The Apache web server has been the most popular server on the internet since 1996. Because of this popularity, Apache benefits from great documentation and integrated support from other software projects.
Apache is often chosen by administrators for its flexibility, power, and widespread support. It is extensible through a dynamically loadable module system and can process a large number of interpreted languages without connecting out to separate software.
Nginx
In 2002, Igor Sysoev began work on Nginx as an answer to the C10K problem: the challenge of getting web servers to handle ten thousand concurrent connections, a requirement of the modern web. The initial public release was made in 2004, meeting this goal by relying on an asynchronous, event-driven architecture.
Nginx has grown in popularity since its release due to its lightweight resource utilization and its ability to scale easily on minimal hardware. Nginx excels at serving static content quickly and is designed to pass dynamic requests off to other software that is better suited for those purposes.
Nginx is often selected by administrators for its resource efficiency and responsiveness under load. Advocates welcome Nginx's focus on core web server and proxy features.


Saturday 30 January 2016

Microsoft Windows Containers

Microsoft Windows Containers, also known as Windows Server Containers, are isolated environments in Windows Server 2016 that keep services or applications running on the same container host separate from one another.
A container host can run one or more Windows Containers. Using a technique called namespace isolation, the host gives each container a virtualized namespace that grants the container access only to the resources it should see. This restricted view prevents containers from accessing or interacting with resources outside their virtualized namespaces and makes each container believe it is the only application running on the system. The host also controls how much of its resources individual containers can use. For example, it can limit a container's CPU usage to a certain percentage that its applications cannot exceed and allocate the remaining percentage to other containers or to itself.
Containers are deployed from images, which cannot be modified. When a container image is created, it can be stored in a local, public or private repository. Containers can also be interconnected to create larger applications, which allows for a different, more scalable way of architecting applications.
Windows Containers can integrate with existing Windows technologies like .NET and ASP.NET. They can be created and managed with either PowerShell or Docker, but containers created with one tool currently can't be managed with the other. Windows Containers can also be created and managed in Azure.
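For a rough illustration of Docker-based management, the sketch below starts a container with capped resources. It assumes the Docker engine is running on the container host and that a base image named windowsservercore is already present; adjust both to your environment.

# Start an interactive container limited to a relative CPU share and 1 GB of RAM.
# The image name "windowsservercore" is an assumption; use a base image that
# actually exists on your container host.
docker run -it --cpu-shares 512 -m 1g --name demo windowsservercore cmd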
Windows Containers became available for the first time in the third technical preview of Windows Server 2016 and will be integrated into the final release in 2016. Nano Server, a lightweight installation method for Windows Server, is optimized for Windows Containers and Hyper-V Containers.



Microsoft Client Hyper-V

Microsoft Client Hyper-V is a type-1 hypervisor for the Windows 8.x and Windows 10 operating systems (OSes) that allows users to run multiple operating systems inside a virtual machine (VM).

Microsoft introduced Client Hyper-V in 2012 with the release of Windows 8 as a replacement for the type-2 hypervisor Windows Virtual PC.
Developers and IT professionals can use Client Hyper-V to build a test environment. A developer can create a VM hosted on a laptop and then export it to the Windows Server production environment once it has passed inspection. Client Hyper-V can also be used to test software on multiple OSes by creating a separate VM for each.
When a user enables Client Hyper-V, Hyper-V Manager is also installed. Hyper-V Manager creates and manages VMs; it also provides virtual switch capabilities to connect a VM to an external network.
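The same tasks can be scripted with the Hyper-V PowerShell module. The sketch below creates an external switch and a VM attached to it; the switch name, VM name, adapter name, paths and sizes are all placeholder values.

# Create an external virtual switch bound to a physical adapter,
# then create a VM connected to that switch.
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet'
New-VM -Name 'TestVM' -MemoryStartupBytes 2GB -SwitchName 'External' `
     -NewVHDPath 'C:\VMs\TestVM.vhdx' -NewVHDSizeBytes 40GB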
There are some limitations to Client Hyper-V as opposed to the server version of Hyper-V. Client Hyper-V does not support Hyper-V Replica, Virtual Fibre Channel, VM live migration, SR-IOV networking or RemoteFX capability.
Client Hyper-V can only be enabled on 64-bit versions of Windows 10, or Windows 8.x Pro or Enterprise editions. As for hardware, Client Hyper-V requires a 64-bit processor with second-level address translation (SLAT), CPU support for VM Monitor Mode Extension, and at least 4 GB of RAM.
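On a machine that meets these requirements, Client Hyper-V can be turned on from an elevated PowerShell prompt; a minimal sketch:

# Enable the Hyper-V feature (requires elevation and a reboot).
# The "Hyper-V Requirements" section of systeminfo.exe output can be used
# to verify hardware support beforehand.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All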

Making the most of Hyper-V live migration

Hyper-V live migration delivers flexibility in a demanding environment, but administrators should be aware of ways to optimize the process.

One of the key benefits to virtualization with Hyper-V is the added flexibility it provides. This can be essential for an organization where important workloads cannot be down for any reason. As Microsoft's virtualization platform has matured, the company has buttressed its appeal to businesses by adding Hyper-V Replica for disaster recovery and Hyper-V live migration to allow virtual machines to continue to operate even while moving between hosts in a cluster.
Hyper-V live migration debuted in Windows Server 2008 and was further refined in Windows Server 2012 and 2012 R2, which allow VMs to migrate to other hosts without requiring shared storage. For a growing company that wants to add faster servers but needs to keep workloads available, Hyper-V live migration provides that capability with the added benefit of avoiding the expenses associated with shared storage. For a small to medium-sized company that needs to do maintenance on a cluster of Hyper-V hosts, live migration is invaluable for shifting that workload outside the cluster to keep it running on another host.
How well do you know Microsoft's Hyper-V migration features? This guide can help illuminate some of the advanced features and best practices associated with live migrations.
Tweaks to keep VMs from decelerating
Pushing workloads to another host using Hyper-V live migration gives the IT staff a chance to complete maintenance and other tasks, but it's also important for mission-critical VMs to maintain a consistent level of performance no matter where they run. TCP chimney offload and processor compatibility mode are two features Hyper-V uses to help smooth out lags in the migration process.
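For example, processor compatibility mode can be toggled per VM with the Hyper-V PowerShell module; the VM name below is a placeholder, and the VM must be powered off when the setting is changed.

# Let the VM migrate between hosts with different CPU feature sets.
Set-VMProcessor -VMName 'SQL01' -CompatibilityForMigrationEnabled $true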
Authentication choices for live migrations
Administrators who need to perform a Hyper-V live migration have two choices to authenticate a sign-on: Kerberos or Credential Security Support Provider (CredSSP). The size of your organization -- or whether there are several administrators who require remote access -- may determine the authentication protocol you decide to use. Kerberos works remotely and can be used in conjunction with remote management tools. While CredSSP is not complex to use, it requires a local login to the server where the migration will start.
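As a sketch, Kerberos-based migrations can be configured per host with the Hyper-V module; note that Kerberos also requires constrained delegation to be set up in Active Directory.

# Turn on incoming and outgoing live migrations for this host,
# then select Kerberos so migrations can be initiated remotely.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos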
To squeeze or not to squeeze?
Administrators can adjust data transfer performance during Hyper-V live migrations by using either compression or an uncompressed TCP/IP transfer mode. The advantage of compression is there is less data -- and fewer packets -- to transmit across the network when shuffling a VM between hosts. But the work required to perform this process means servers must use their CPU resources to both shrink and then expand the data to complete the migration. Using uncompressed TCP/IP transfer mode to copy the VM's memory space directly to the destination server can slow network traffic and affect connected systems.
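The transfer mode is a host-level setting; a minimal sketch:

# Choose the live migration transport: 'TCPIP' (uncompressed),
# 'Compression' (the Windows Server 2012 R2 default) or 'SMB'.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression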
Weighing performance versus bandwidth needs
Bouncing VMs to different hosts around your data center can affect the rest of your environment if bandwidth is not used efficiently. There are a few ways to limit the effects on the rest of the network, such as using a dedicated network segment and regulating the number of live migrations occurring at the same time, as the sketch below shows. In the advanced live migration feature settings, administrators can make adjustments to optimize both workload performance and bandwidth use.
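Both adjustments are exposed through the Hyper-V module; the values here are examples only.

# Cap the number of simultaneous live migrations and restrict
# migration traffic to a dedicated subnet.
Set-VMHost -MaximumVirtualMachineMigrations 2
Add-VMMigrationNetwork 192.168.10.0/24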
Putting PowerShell cmdlets to use
PowerShell cmdlets continue to evolve and provide another avenue for administrators to perform tasks from the command line rather than a GUI. For those in IT who prefer to script certain workflows, such as a Hyper-V live migration, Microsoft has developed cmdlets tailored for this purpose.
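For instance, a live migration can be started with Move-VM; the VM name, host name and path below are placeholders.

# Move a running VM, and optionally its storage, to another host.
Move-VM -Name 'SQL01' -DestinationHost 'HV02' `
     -IncludeStorage -DestinationStoragePath 'D:\VMs\SQL01'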
How SMB made live migrations possible
One key development in Windows Server 2012 R2 was an upgrade to the Server Message Block (SMB) protocol. Better performance and enhanced bandwidth management in SMB 3.02 paved the way for faster Hyper-V live migrations. The RDMA network acceleration functionality in SMB moves VMs from one host to the next in a more timely fashion, making the process more seamless than before.
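To take this path, point the host at the SMB transport and confirm the network adapters support RDMA; a sketch:

# Use SMB as the live migration transport, then check which
# adapters are RDMA-capable.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
Get-NetAdapterRdma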

Tuesday 26 January 2016

Using PowerShell DSC to construct a Hyper-V host

Administrators versed in PowerShell can streamline the process of using a server as a Hyper-V host.

While Hyper-V lags behind VMware in market share, Microsoft's hypervisor has been slowly catching on. Organizations love the price point -- sometimes free -- and the ever-increasing Hyper-V platform feature set. These advances have spurred administrators to deploy an increasing number of Hyper-V hosts.
We can set up a Hyper-V host on an existing server running Windows Server 2012 R2 using automation to make the process repeatable. This tutorial will explain how to use Windows PowerShell, specifically PowerShell Desired State Configuration (DSC), to execute the deployment.
What does it take to build a Hyper-V host from an existing Windows server? For our purposes, it takes three things: a couple of Windows features, a directory to hold VMs and a switch to connect the network to the VMs.
Building the framework
When starting any PowerShell DSC project, begin with a single .ps1 script file and a configuration block inside. We will call the configuration HyperVBuild.
Configuration HyperVBuild {

}
A configuration is similar to a function; it has code inside that can be executed at will, but it behaves a little differently. Inside the configuration block, what typically follows first is a parameter, just like in a PowerShell function. A common parameter is $NodeName; this parameter designates the computer to which the configuration will be applied. In this case, we default to the local host because the script resides on the server where we want to build the Hyper-V host.
Add the node block and assign the value of the $NodeName parameter in there. This tells Windows to create a configuration for the computer name specified in $NodeName.
Configuration HyperVBuild {
     param(
           [string]$NodeName = 'localhost'
     )

     node $NodeName {

     }
}
Add a dash of Hyper-V
After this framework is in place, we need to import a module called xHyper-V. Since the creation of a Hyper-V switch is required, this PowerShell DSC module is needed to get that functionality. The xHyper-V module doesn't come with Windows, but it can be downloaded from Microsoft's GitHub repository. Place it into the C:\Program Files\WindowsPowerShell\Modules directory; it will then be available to all subsequent scripts.
Use the Import-DscResource cmdlet and specify the module name xHyper-V.
Configuration HyperVBuild {
     param(
           [string]$NodeName = 'localhost'
     )
     Import-DscResource –ModuleName xHyper-V
     node $NodeName {

     }
}
We can now begin adding the necessary resources. First, add the Hyper-V Windows feature. This uses a PowerShell DSC resource that is built into Windows. Create a line starting with the resource name WindowsFeature followed by a label to represent the resource. In this instance, we will call it Hyper-V.
Then, inside the block, set the Ensure attribute to Present so the Windows feature is installed when the configuration is run, and set the Name attribute to the name of the Windows feature to install.
WindowsFeature 'Hyper-V' {
     Ensure='Present'
     Name='Hyper-V'
}
Next, we need to ensure the Hyper-V-PowerShell feature is installed, so we'll create another block.
WindowsFeature 'Hyper-V-Powershell' {
     Ensure='Present'
     Name='Hyper-V-Powershell'
}
Next, we need a folder to hold the VMs, so we ensure one is created at C:\VMs. This uses the File resource to create the folder. The File resource has a Type attribute to indicate either a file or a directory.
File VMsDirectory {
     Ensure = 'Present'
     Type = 'Directory'
     DestinationPath = "$($env:SystemDrive)\VMs"
}
For the last resource, use one from the xHyper-V module called xVMSwitch to create and configure Hyper-V switches. Make the Type Internal to create a network for the VMs that can't communicate outside of the host.
xVMSwitch LabSwitch {
     DependsOn = '[WindowsFeature]Hyper-V'
     Name = 'LabSwitch'
     Ensure = 'Present'
     Type = 'Internal'
}
Notice the DependsOn attribute. This is a common attribute across all resources that allows you to set the order in which resources are executed. In this example, we ensure the Hyper-V Windows feature is installed before attempting to create the switch.
You should now have a configuration that looks something like this:
configuration HyperVBuild
{
     param (
           [string]$NodeName = 'localhost'
     )
     Import-DscResource -ModuleName xHyper-V
     node $NodeName {
           WindowsFeature 'Hyper-V' {
                Ensure='Present'
                Name='Hyper-V'
           }
           WindowsFeature 'Hyper-V-Powershell' {
                Ensure='Present'
                Name='Hyper-V-Powershell'
           }
           File VMsDirectory
           {
                Ensure = 'Present'
                Type = 'Directory'
                DestinationPath = "$($env:SystemDrive)\VMs"
           }
           xVMSwitch LabSwitch {
                DependsOn = '[WindowsFeature]Hyper-V'
                Name = 'LabSwitch'
                Ensure = 'Present'
                Type = 'Internal'
           }
     }
}
Now that the configuration has been built, generate the MOF file to apply to the system when configuration starts. To do this, execute the configuration block just like a PowerShell function by calling it by the name HyperVBuild. This creates a folder of the same name that contains a MOF file called localhost.mof.
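Assuming the script was saved as HyperVBuild.ps1 (the file name is an assumption), generating the MOF looks like this:

# Dot-source the script so the configuration is loaded, then call it.
# This writes .\HyperVBuild\localhost.mof to the current directory.
. .\HyperVBuild.ps1
HyperVBuild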
The final part is to apply the configuration to the local machine. To do that, use the Start-DscConfiguration cmdlet with a few parameters.
Start-DscConfiguration -Path .\HyperVBuild -Wait -Force
The first parameter is Path. This points Start-DscConfiguration to the folder with the MOF file. Next, the -Wait parameter ensures Start-DscConfiguration waits to complete before releasing control to the console. Finally, the -Force parameter tells Start-DscConfiguration to "push" the configuration rather than pull. The push/pull scenario is explained in detail in this article.
Once Start-DscConfiguration starts, you may see some messages on the console as it progresses through the configuration. If everything goes well, a few minutes later you will have a brand new Hyper-V server ready for VMs.

Wednesday 20 January 2016

Windows Server 2012 Active Directory Interview Questions
More and more companies are realizing the power of cloud services and networks. With the release of Office 365, cloud services, and employees working away from the office, collaboration is crucial. Keeping the networks that connect employees and provide access to an organization's documents and projects running smoothly is therefore critical to allow organizations to function efficiently. This means that the demand for good network administrators and system administrators who understand Active Directory is increasing.
1. What is Active Directory?
Active Directory (AD) is a directory service developed by Microsoft that stores objects such as users, computers, printers and network information. It makes it possible to manage a network effectively with multiple domain controllers in different locations sharing the AD database: changes made on any domain controller are replicated to all the other DCs. It provides centralized administration across multiple geographic locations and authenticates users and computers in a Windows domain.
2. Define Active Directory?
Active Directory is a database that stores data pertaining to the users within a network as well as the objects within the network. Active Directory allows the compilation of networks that connect with AD, as well as the management and administration thereof.
3. What is Active Directory Domain Services?
Active Directory Domain Services (AD DS) is Microsoft's directory server. It provides authentication and authorization mechanisms as well as a framework within which other related services can be deployed.
4. What is Active Directory Domain Controller (DC)?
A domain controller is the server that holds the AD database. All AD changes made on one DC are replicated to the other DCs, and vice versa.
5. What is a domain within Active Directory?
A domain represents a group of network resources that includes computers, printers, applications and other resources. Domains share a directory database, and each resource is represented by an address within that database. A domain is identified by a DNS-style name that generally looks like corp.example.com. A user can log into a domain to gain access to the resources that are listed as part of that domain.
6. What is the domain controller?
The server that responds to user requests for access to the domain is called the Domain Controller or DC. The Domain Controller allows a user to gain access to the resources within the domain through the use of a single username and password.
7. What is Tree?
A tree is a hierarchical arrangement of Windows domains that share a contiguous namespace.
8. What is Forest?
A forest consists of multiple domain trees. The domain trees in a forest do not form a contiguous namespace; however, they share a common schema and global catalog (GC).
9. Explain what domain trees and forests are?
Domains that share common schemas and configurations can be linked to form a contiguous namespace. Domains within the trees are linked together by creating special relationships between the domains based on trust.
Forests consist of a number of domain trees that are linked together within AD, based on various implicit trust relationships. Forests are generally created where a server setup includes a number of root DNS addresses. Trees within the forest do not share a contiguous namespace.
10. What is Schema?
The Active Directory schema is the set of definitions that specify the kinds of objects, and the types of information about those objects, that can be stored in Active Directory.
In other words, the schema is a collection of object classes and their attributes, for example:
Object Class = User
Attributes = first name, last name, email, and others
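To see these definitions first-hand, the schema partition can be queried with the ActiveDirectory PowerShell module; a hedged sketch (assumes the RSAT AD tools are installed):

# List the optional attributes declared directly on the "user" object class.
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject -SearchBase $schemaNC -Filter "lDAPDisplayName -eq 'user'" -Properties mayContain |
     Select-Object -ExpandProperty mayContain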
11. What is FSMO?
FSMO (flexible single master operations) is a specialized domain controller (DC) set of tasks, used where standard data transfer and update methods are inadequate. AD normally relies on multiple peer DCs, each with a copy of the AD database, being synchronized by multi-master replication.
12. Tell me about the FSMO roles?
Schema Master
Domain Naming Master
Infrastructure Master
RID Master
PDC
Schema Master
The schema is shared between every Tree and Domain in a forest and must be consistent between all objects. The schema master controls all updates and modifications to the schema.
Domain Naming Master
The Domain Naming Master FSMO role owner is the DC responsible for making changes to the forest-wide domain namespace of the directory in the Partitions container.
Infrastructure Master
The Infrastructure FSMO role is one of the three "per domain" Operations Masters. The infrastructure FSMO keeps its domain's references to objects in other domains up-to-date by comparing its data with information in the Global Catalog (GC).
RID Master
The RID master allocates pools of relative IDs (RIDs) to the DCs within a domain. When an object such as a user, group or computer is created in AD, it is given a SID. The SID consists of a domain SID (which is the same for all SIDs created in the domain) and a RID that is unique for each security principal SID created in the domain.
When moving objects between domains, you must start the move on the DC which is the RID master of the domain that currently holds the object.
PDC Emulator
The PDC emulator acts as a Windows NT PDC for backwards compatibility; it can process updates to a BDC. It is also responsible for time synchronization within a domain, and it is the password master (for want of a better term) for a domain: any password change is replicated to the PDC emulator as soon as is practical, and if a logon request fails due to a bad password, the logon request is passed to the PDC emulator to check the password before rejecting the login request.
Microsoft recommends the careful division of FSMO roles, with standby DCs ready to take over each role. The PDC emulator and the RID master should be on the same DC, if possible. The Schema Master and Domain Naming Master should also be on the same DC.
13. How to check which server holds which role?
Run Netdom query fsmo from a command prompt.
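With the ActiveDirectory PowerShell module, the same information is available from PowerShell:

# Forest-wide role owners are properties of Get-ADForest;
# domain-level role owners are properties of Get-ADDomain.
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object InfrastructureMaster, RIDMaster, PDCEmulator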
14. What is LDAP?
LDAP is an acronym for Lightweight Directory Access Protocol and it refers to the protocol used to access, query and modify the data stored within the AD directories. LDAP is an internet standard protocol that runs over TCP/IP.
15. Explain what intrasite and intersite replication is and how KCC facilitates replication?
The replication of DCs inside a single site is called intrasite replication, while the replication of DCs on different sites is called intersite replication. Intrasite replication occurs frequently, while intersite replication occurs on a schedule, mainly to conserve network bandwidth.
KCC is an acronym for the Knowledge Consistency Checker. The KCC is a process that runs on all of the domain controllers and generates the replication topology both within sites and between sites. Intrasite replication is done using remote procedure calls (RPC) over IP, while intersite replication can use either RPC over IP or SMTP.
16. Name a few of the tools available in Active Directory and which tool would you use to troubleshoot any replication issues?
Active Directory tools include:
• Dfsutil.exe
• Netdiag.exe
• Repadmin.exe
• Adsiedit.msc
• Netdom.exe
• Replmon.exe
Replmon.exe is a graphical tool designed to visually represent the AD replication. Due to its graphical nature, replmon.exe allows you to easily spot and deal with replication issues.
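On systems where Replmon is not available, Repadmin.exe covers similar ground from the command line; for example:

# Summarize replication health across all DCs, then show this DC's
# inbound replication partners and any errors.
repadmin /replsummary
repadmin /showrepl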
17. What tool would you use to edit AD?
Adsiedit.msc is a low-level editing tool for Active Directory. Adsiedit.msc is a Microsoft Management Console snap-in with a graphical user interface that allows administrators to accomplish simple tasks like adding, editing and deleting objects within a directory service. Adsiedit.msc uses Application Programming Interfaces to access Active Directory. Since Adsiedit.msc is a Microsoft Management Console snap-in, it requires access to MMC and a connection to an Active Directory environment to function correctly.
18. How would you manage trust relationships from the command prompt?
Netdom.exe is another program within Active Directory that allows administrators to manage the Active Directory. Netdom.exe is a command-line application that allows administrators to manage trust relationships within Active Directory from the command prompt. Netdom.exe allows for batch management of trusts. It allows administrators to join computers to domains. The application also allows administrators to verify trusts and secure Active Directory channels.
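For example, an existing trust can be verified from the command prompt; the domain names below are placeholders:

# Verify the trust between a trusting domain and a trusted domain.
netdom trust corp.example.com /d:partner.example.com /verify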
19. Where is the AD database held and how would you create a backup of the database?
The database is stored within the Windows NTDS directory (%SystemRoot%\NTDS). You can create a backup of the database by backing up the System State data using the default NTBACKUP tool provided by Windows or a product such as Symantec's NetBackup. The System State backup will include the local registry, the boot files, the COM+ class registration database, the NTDS.DIT file as well as the SYSVOL folder.
20. What is SYSVOL, and why is it important?
SYSVOL is a shared folder that exists on all domain controllers. It is the repository for the domain's public files and stores the important elements of Active Directory Group Policy. The File Replication Service (FRS) replicates the SYSVOL folder among domain controllers. Logon scripts and policies are delivered to each domain user via SYSVOL.
21. Briefly explain how Active Directory authentication works?
When a user logs into the network, the user provides a username and password. The user's computer runs the password through a one-way hashing function to derive the user's master key, which the computer uses to communicate with the KDC, the service that holds the master list of unique long-term keys for each user. The KDC creates a session key and a ticket-granting ticket and sends this data, encrypted, to the user's computer, which in turn enables the computer to request tickets from the KDC to access the resources of the domain.
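On a domain-joined client, the tickets issued during this exchange, including the ticket-granting ticket, can be inspected with the built-in klist utility:

# Show the Kerberos tickets held by the current logon session.
klist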