Thursday 17 December 2015

Why do I even need software-defined technologies?

Once you've figured out what software-defined technologies are, it's time to discover what they do and what benefits they can offer.

No matter what part of the data center you're talking about, the term software-defined continues to pop up. At this point, most admins and IT staff know what software-defined is and what it does. The next hurdle to overcome is figuring out whether you even need to venture down that path. Software-defined technology brings plenty of benefits with it.
Why do we even need "software-defined" technology, and what are its benefits?
Today's data centers are often plagued by static hardware configurations that are usually managed by disconnected silos of IT specialists. This complicates and slows new provisioning, sometimes introducing errors or unforeseen consequences that might need remediation before the new workload can enter production. This is an unacceptable mix of circumstances for any business seeking rapid, flexible provisioning with a high degree of automation. Many IT professionals see software-defined technologies as a means of overcoming these problems and achieving the speed and flexibility that new users (and workloads) require from IT in the role of a service provider.
Software-defined technologies promise numerous benefits. The most obvious and often-repeated benefits include provisioning speed and flexibility -- IT silos disappear and resources can be provided, changed and recovered for reuse in a matter of just a few mouse clicks. The close corollary here is the promise of automation, allowing end users to request and provision their own resources without direct IT involvement. This frees IT staff for more strategic endeavors that can yield greater benefit to the business than addressing user provisioning requests.
There is also strong potential for lower hardware costs. For example, a technology like software-defined networking provides a common traffic control schema for more efficient network traffic handling, yet results in simpler -- and potentially less expensive -- physical switches, since all that remains is the actual data plane. For network functions virtualization, the use of virtual appliances provides software-only network devices like firewalls, WAN accelerators and so on, which can be considerably cheaper and easier to deploy than the physical appliances that do the same jobs.
Integrated management can be more streamlined, with better insights into total resource availability and usage across the data center. This can help with capacity planning and ensure ample resources to meet expected demands. Since resources are abstracted from underlying hardware, there is less chance of inadvertent changes to individual device configurations which might be hard to spot or difficult to troubleshoot.
And software-defined initiatives will build momentum for more common APIs or protocols like OpenFlow. This translates to better software design and superior interoperability between different vendors' products.

What works for daily event log monitoring?

I have 80 Windows servers in the data center. What can I use for daily event log monitoring?


Windows event log files contain a treasure trove of information on server performance and operations. But they're tedious to trawl through on a regular basis, especially when you have more than a few servers to maintain in the data center.
Windows Server sorts event logs into Application, Security and System sections and saves the event log files locally on each server by default.
There is a plethora of event log monitoring tools available, both free and paid, so you'll need to decide which one best fits your needs. Whatever tool you pick, expect to do a lot of work at the beginning remediating or filtering out the errors it picks up from the log files. Once you remove that noise, what's left is a very valuable tool for maintenance and troubleshooting on Windows servers.
Here are a few options for log file monitoring, but due to the scale of offerings out there, please take this as a sampling only.
Free vs. paid log monitoring tools
At the free, low-end scale, try the subscriptions option in Microsoft Windows Event Viewer. You can create a central point to collect and read the event log files from multiple machines and apply filters, such as "Errors & Warnings." You can then review the files on a daily basis and remediate any errors. This is about as simple as log monitoring gets, so you will miss out on real-time error alerting and on easy results management, such as hiding or ignoring certain errors.
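If you prefer to script the daily review rather than click through Event Viewer, a short PowerShell pass over the same logs can do the job. The lines below are a minimal sketch rather than a monitoring solution; the server names are placeholders, and the log list and time window are assumptions to adjust for your environment.

# Minimal sketch: pull the last 24 hours of errors and warnings from a few servers.
# Server names, log names and the time window are placeholders -- adjust as needed.
$servers = 'SRV01', 'SRV02', 'SRV03'
$filter  = @{
    LogName   = 'Application', 'System'
    Level     = 2, 3                       # 2 = Error, 3 = Warning
    StartTime = (Get-Date).AddDays(-1)
}
foreach ($server in $servers) {
    Get-WinEvent -ComputerName $server -FilterHashtable $filter -ErrorAction SilentlyContinue |
        Select-Object MachineName, TimeCreated, LogName, LevelDisplayName, ProviderName, Id, Message
}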
Also free, but more feature-rich and complex, are syslog -- a standard for message logging, with many variants, builds and add-ons -- and the ELK stack, which includes Elasticsearch, Logstash and Kibana. These will collect and collate logs from the Windows Event Viewer tool as well as other sources. You can start by monitoring event logs, then collect application-specific logs from IIS, SQL Server or other applications outside of Windows Event Viewer.
At the paid end, two popular examples are SolarWinds Log & Event Manager and Splunk on-premises or as a service. These products are for the higher end of the market, and are not just plug and play.
Paid or enterprise versions of event log monitoring tools provide great amounts of information and alerting around all manner of logs, including Event Viewer logs. However, they may be too complex for a small IT team to maintain.
Look for vendors like Splunk and SolarWinds that offer demos, which will give you a feel for how the tool can help in your server environment.

What are some storage monitoring tools under Windows Server 2012 R2?

For Windows administrators who need ample warning to keep storage problems at bay, there are many choices available.

As most Windows administrators know, there is no single way to monitor storage or disk faults. There are countless management tools to choose from, and policies and procedures can vary between businesses. Also, the IT expertise available to monitor storage may be small compared to the many other technical demands of a data center. But organizations running Windows Server 2012 R2 can utilize some common storage monitoring tools and practices, such as those built around Storage Spaces.
First, update the tools for Storage Spaces under Windows Server 2012 R2. As an example, Microsoft System Center Operations Manager (SCOM) can be updated to support Storage Spaces using a downloadable management pack. This management pack allows SCOM to watch storage enclosures, storage pools and capacity, and track storage spaces, Cluster Shared Volume (CSV) file shares and disk failures. SCOM can also pass this monitoring data along to other management tools. Other Windows Server 2012 R2 patches -- such as hotfix 2913766 -- add support for JBOD enclosure awareness using the storage management API.
Second, consider adding monitoring tools from the storage array or enclosure vendor. Vendors can provide granular tools or SCOM management packs designed to offer details about specific storage subsystems, including status, performance, disk installation and conditions. Adding new tools to the environment may not be a preferred strategy, but point solutions can provide handy diagnostics for niche storage systems, and management packs can tackle detailed monitoring through existing SCOM deployments.
Third, try diagnostic PowerShell scripts. For example, scripts such as Test-StorageHealth.ps1 from the Microsoft Script Center can check failover clusters, CSVs, Server Message Block shares, Storage Spaces and data deduplication operations. The script can report these details, collect logs and reports from storage cluster nodes, and compile everything into a single compressed file for analysis.
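For a quick spot check without a full script, the Storage module cmdlets built into Windows Server 2012 R2 can report health directly. This is a minimal sketch rather than a replacement for Test-StorageHealth.ps1; tweak the filters for your own pools and enclosures.

# Minimal sketch: surface anything the Storage module considers unhealthy.
Get-StoragePool | Where-Object { -not $_.IsPrimordial } |
    Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne 'Healthy' } |
    Select-Object FriendlyName, SerialNumber, HealthStatus, OperationalStatus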
Remember that one size does not fit all. These options are not exclusive and can be used together in any combination needed to provide an adequate picture of storage health.


Tuesday 15 December 2015

How does the Hyper-V parent partition architecture work?

Microsoft's hypervisor relies on several modules and services to deploy and manage virtual machines. Do you know what they are and how they work?

Several modules operate together to make up the Microsoft Hyper-V hypervisor. Hyper-V implements a main partition, called the parent partition, which runs Hyper-V's main service: the Virtual Machine Management Service (VMMS). VMMS is the main module designed to control all aspects of Hyper-V server virtualization, but it also relies on several submodules, as explained below.
WMI Provider: This module acts as an interface between developers and VMs running in the child partitions. The Windows Management Instrumentation (WMI) Provider component implements the necessary WMI classes for developers to execute an action on the VMs running on a Hyper-V host. Microsoft implements root\virtualization as the core WMI Provider that contains networking, VM BIOS, storage and video classes to help you interact with Hyper-V VMs.
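To see the WMI Provider in action, you can query it directly from PowerShell. The lines below are a minimal sketch; on Windows Server 2012 R2 and later the provider lives in the root\virtualization\v2 namespace, while older releases used the root\virtualization namespace mentioned above.

# Minimal sketch: list the VMs the Hyper-V WMI Provider exposes on the local host.
# The Caption filter drops the entry that represents the host itself.
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_ComputerSystem |
    Where-Object { $_.Caption -eq 'Virtual Machine' } |
    Select-Object ElementName, EnabledState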
Hyper-V VSS Writer: Backups of Hyper-V VMs performed by a backup application are handled by the Volume Shadow Copy Service (VSS) Writer component. The Hyper-V VSS Writer backs up VMs without any downtime. The Hyper-V VSS Writer and the Hyper-V Volume Shadow Copy Requestor service running in a VM as part of Integration Services enable online backup functionality. Any VM backup requests are handled by the Hyper-V VSS Writer and then passed to the Hyper-V Volume Shadow Copy Requestor service.
Virtual Machine, Worker Process and Snapshot Managers: The Virtual Machine Manager component is responsible for managing VM states. When you open the Hyper-V Manager, VMMS.exe calls the Virtual Machine Manager component to refresh VM statuses. Worker Process Manager launches a VM worker process for each VM and keeps track of all worker processes running in the parent partition. Worker Process Manager also processes snapshots or checkpoints for running VMs. On the other hand, Snapshot Manager – as the name suggests – handles snapshots or checkpoints for VMs that are offline.
Single Port Listener for RDP: Remote Desktop Protocol (RDP) is used by the Virtual Machine Connection (VMConnect) tool to connect to a VM over network port 2179. VMMS.exe listens on network port 2179 for incoming RDP requests from the VMConnect.exe tool. When VMMS.exe receives an RDP request, it redirects the request to the Single Port Listener for RDP component, which, in turn, enables the RDP session to the VM.
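As a quick way to see this in practice, you can launch a console session from PowerShell and confirm the listener on port 2179. This is a minimal sketch; the host name HV01 and VM name WebVM are placeholders.

# Minimal sketch: open a VM console with VMConnect, then confirm that VMMS
# is listening for console connections on TCP port 2179 on the host.
vmconnect.exe HV01 WebVM
Get-NetTCPConnection -LocalPort 2179 -State Listen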
Cluster Resource Control: With the help of the Cluster Resource Control component, VMMS.exe enables high availability for VMs running in a Hyper-V cluster. Cluster Resource Control uses HVCLUSRES.DLL to interact with VM resources.

How does a roaming user profile work?

Although there are several newer tools available, Microsoft's roaming profiles remain a simple and time-tested way to manage a user's profile across physical and virtual desktop environments.


Microsoft's roaming profiles give IT administrators a basic option to provide users with their personal settings and data from any device or virtual desktop connected to the corporate network.
Windows maintains a profile for each user who logs into the OS.  The user's profile folder contains user-specific data and customizations such as application configuration data, browser history, documents, photos and much more. User profiles vary depending on which version of Windows an organization uses, but most Windows versions include a folder named C:\Users. A user's profile lives there in another folder usually titled with the user's name or an identifying number that IT assigns.
The problem with standard user profiles is that they are tied to an individual desktop. If a user logs in from a different physical desktop or virtual desktop, his profile data won't exist on that machine.
Microsoft designed roaming user profiles to solve this problem. If an organization uses Windows Server 2000 or newer, administrators can create roaming profiles, which live on a server and are accessible on any computer connected to the company network. With a roaming user profile, an employee's data follows him from device to device.
Roaming profiles work by storing the user's profile on a network server rather than on a desktop computer. Admins can configure Active Directory so that it associates the roaming user profile with the user's account. When the employee logs in, Windows copies the user's profile from the network to the local computer. When he logs off, Windows copies any updates he made to profile data from the desktop computer to the network copy of the profile. That process ensures that the roaming user profile remains up to date the next time the employee logs into a virtual desktop or PC.
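For admins who prefer PowerShell over the Active Directory Users and Computers console, the profile path can be assigned with the ActiveDirectory module. This is a minimal sketch; the user name and the \\FS01\Profiles$ share are placeholders, and the share must already exist with the correct permissions.

# Minimal sketch: point a user account at a roaming profile share, then confirm it.
Set-ADUser -Identity jsmith -ProfilePath '\\FS01\Profiles$\jsmith'
Get-ADUser -Identity jsmith -Properties ProfilePath | Select-Object Name, ProfilePath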
One of the problems with roaming profiles is that profiles can grow to be quite large. The logon or logoff duration increases according to the profile size because of the amount of data copied to or from the network. It has become common for organizations to use folder redirection in conjunction with roaming profiles to speed up the logon and logoff process and improve the user experience. Folder redirection allows folders such as Documents to remain centrally located -- usually on a file server -- rather than being copied to and from the desktop PC.
Roaming profiles have been a standard and cost-effective way to deliver user settings across physical and virtual desktops for more than a decade, although Microsoft also released a user experience management tool called User Experience Virtualization (UE-V) in 2012. Microsoft UE-V virtualizes users' operating system and application settings from a settings store on a file server. Roaming user profiles are still a good basic option to provide the same experience across PC and virtual desktop environments, but there are also third-party user profile management tools available for companies with specific needs.


The top Active Directory tools and techniques for backup and restore

A broken Active Directory can cripple a business that depends on this key piece of infrastructure. Learn how to plan an Active Directory backup and restoration.


Active Directory has become one of the most ubiquitous features of Windows Server over the last 15 years. Active Directory (AD), first introduced in Windows 2000, allows administrators to manage users and computers by implementing and enforcing security policies. It also provides admins with a centralized and hierarchical directory to manage all of the resources in a network.
This feature looks at several Active Directory tools and tips that can ease the backup and restoration process for this essential piece of your enterprise.  
Methods for backing up and restoring Active Directory
There is no one way to back up and restore AD, which can make it difficult for admins to know where to start. Options include backing up system state or critical volumes, performing a full server backup or a full restoration that can be either authoritative or nonauthoritative.
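As one illustration of the system state approach, the in-box Windows Server Backup command-line tool can capture a domain controller's system state. This is a minimal sketch, assuming the Windows Server Backup feature is installed and that E: is a suitable backup target in your environment.

# Minimal sketch: back up the system state (including AD) of a domain controller.
wbadmin start systemstatebackup -backupTarget:E: -quiet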
Guidelines for a painless Active Directory backup
Backing up AD doesn't have to be tedious. To make the process as pain-free as possible, admins should know which domain controllers need to be backed up and implement a regular backup schedule. To save storage space, admins should remove unnecessary backups.
Can Active Directory be restored to different hardware?
Restoring AD to different hardware comes with a few caveats. A backup taken from one server cannot simply be restored onto another server to solve a problem there. If an AD backup needs to be restored to a different hardware platform, a full server backup is necessary.
Third-party Active Directory tools
Although Windows Server comes with a built-in backup tool, admins may need additional capabilities, such as alerting and reporting features, to back up AD. Tools from Dell and Acronis offer automated and full server backups so admins never have to worry about losing data.
Why Active Directory functional levels are important
Functional levels determine which capabilities within an Active Directory Domain Services forest or domain are available, as well as which OSes can run on domain controllers. Higher functional levels may introduce new features that can improve the functionality of the directory service.
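A quick way to check where a forest and domain currently stand is with the ActiveDirectory PowerShell module, assuming it is installed on the management machine.

# Minimal sketch: report the current forest and domain functional levels.
(Get-ADForest).ForestMode
(Get-ADDomain).DomainMode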


Sunday 13 December 2015

Are there any new Windows 10 Group Policy settings?

Microsoft keeps a spreadsheet of all the Group Policy settings for the Windows OS, and you can filter it to see which ones are new in Windows 10.

Every new version of Windows brings new Group Policy settings you can use to gain tighter control over Windows endpoints, and Windows 10 is no exception. There are too many new Windows 10 Group Policy settings to list, but it's pretty easy to see what is new. Microsoft maintains spreadsheets of Windows Group Policy settings, and anyone can download them.
There are a couple of things you need to know about this spreadsheet, however. First, the download link provides spreadsheets for every Windows operating system since Vista. Needless to say, you should use the spreadsheet named Windows 10 ADMX Spreadsheet.xlsx to see Windows 10 Group Policy settings. Additionally, the spreadsheet only covers the Administrative Templates included in Windows; you won't find any Group Policy extensions in the spreadsheets.
It is also worth noting that the spreadsheet can be somewhat overwhelming to use. There are thousands of Group Policy settings listed in the spreadsheet. Some are new to Windows 10, but most existed in previous versions of Windows.
To see only the new Group Policy settings, open the spreadsheet in Excel and select the Supported On column. Next, click on the Sort & Filter button, then click Filter. You will now be able to filter the spreadsheet to show only the policy settings that are new to Windows 10. Keep in mind that filtering the list in this way omits Group Policy settings related to the Edge browser. There are separate policy settings for the browser.
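If you'd rather filter outside of Excel, the same result can be scripted once the relevant sheet is saved as a CSV file. This is a minimal sketch under that assumption; it relies on the spreadsheet's "Supported On" column named above, and the file names are arbitrary placeholders.

# Minimal sketch: assumes the ADMX sheet has been exported to GroupPolicy.csv.
Import-Csv -Path .\GroupPolicy.csv |
    Where-Object { $_.'Supported On' -match 'Windows 10' } |
    Export-Csv -Path .\New-Win10-Policies.csv -NoTypeInformation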


Learn how to configure Hyper-V Enhanced Session Mode

Windows Server 2012 R2's Hyper-V Enhanced Session Mode sets itself apart from standard sessions by offering users a richer overall experience.

One of the more underrated Hyper-V features introduced in Windows Server 2012 R2 is Enhanced Session Mode. This allows for a richer overall experience when remotely connecting to a Hyper-V virtual machine. This is made possible by the fact that the remote desktop connection can now use the Virtual Machine Bus, which is the mechanism used to enable communications between a VM and the host operating system. The end result is that the Hyper-V Manager console can behave similarly to a Remote Desktop Protocol session.
To illustrate the difference between a standard session and an enhanced session, take a look at Figure A. This screenshot shows Windows XP running in a standard Hyper-V Manager console. As you can see in the figure, there is nothing particularly remarkable about the console. It provides all of the usual controls, such as those used for changing the VM's state.
Windows XP running in a standard Hyper-V Manager console
Figure A: This is what a standard Hyper-V console looks like.
By way of comparison, Figure B shows the connection dialog box that Windows displays when you attempt to open the console on a Hyper-V VM that supports Enhanced Session Mode. As you can see in the figure, you have the ability to choose your screen resolution and to span multiple monitors.
 Users can choose a display configuration when using Hyper-V Enhanced Session Mode
Figure B: This dialog box is displayed for a VM that supports Enhanced Session Mode.
Clicking the "Show Options" button, shown at the bottom of the figure above, does two things: First, it expands the dialog box, revealing a checkbox that you can use to save your display settings for use with future connections to the VM. Second, clicking "Show Options" causes a secondary tab to be revealed. This tab, which you can see in figure C, allows you to choose which local resources you want to use in your remote session.
Users may redirect local resources when running a Hyper-V Enhanced Session Mode.
Figure C: The console allows you to redirect local resources for use with your remote session.
The Connect dialog box shown in the previous two figures is displayed automatically any time you attempt to open the console for a VM that supports Enhanced Session Mode. If you want to use Enhanced Session Mode, simply configure the session and click "Connect." Otherwise, close the dialog box and Hyper-V will automatically revert to a standard console.
As previously mentioned, the advantage to using Enhanced Session Mode is that it allows you to use local resources with remote sessions. More specifically, this means that you can redirect the following resources:
·         Audio
·         Display (with resolution control)
·         The Windows clipboard
·         USB devices
·         Plug-and-play devices
·         Smart cards
·         Printers
The end result is a console session that behaves much like a local session. This, of course, raises the question of what the system requirements are for using enhanced sessions.
As previously noted, enhanced sessions were introduced with Windows Server 2012 R2, so your Hyper-V servers will need to be running that operating system, or later versions. Additionally, the Hyper-V Enhanced Session Mode Policy must be configured to allow the use of Enhanced Session Mode, as shown in Figure D.
The Enhanced Session Mode requires console configuration.
Figure D: The Hyper-V Server must allow enhanced session mode to be used.
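If you prefer PowerShell to the Hyper-V Settings dialog shown in Figure D, the same host-level policy can be flipped with the Hyper-V module. This is a minimal sketch run on the Hyper-V host itself.

# Minimal sketch: allow enhanced sessions on the host, then confirm the setting.
Set-VMHost -EnableEnhancedSessionMode $true
Get-VMHost | Select-Object Name, EnableEnhancedSessionMode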
The VMs also play a role in determining which session types may be used. You already saw that the Windows Server 2012 R2 VM supported Enhanced Session Mode, while the Windows XP VM did not.
There are four criteria that a VM must meet in order to support Hyper-V Enhanced Session Mode. First, the VM must run a guest operating system that supports the use of Remote Desktop Services (RDS). For example, Windows Server 2012 R2 VMs are supported, as are VMs running the Pro and Enterprise editions of Windows 8.1 and Windows 10.
Second, you must complete the Out-of-Box Experience setup process for the guest OS. In other words, you won't be able to establish an enhanced session to a VM unless the guest OS is fully installed.
Third, the RDS service must be running on the guest OS. You don't actually need to enable remote access, but the RDS does have to be running.
The fourth and final requirement is that you must log on to the VM as a user who has either local administrator permissions -- at the guest level -- or has been granted access as a Remote Desktop User.
The Hyper-V Enhanced Session Mode provides an experience that rivals that of a local session. In order to use Enhanced Session Mode, however, there are a number of requirements that must be met at both the host and guest OS level.


Biometrics definition
Biometrics is the measurement and statistical analysis of people's physical and behavioral characteristics. The technology is mainly used for identification and access control, or for identifying individuals who are under surveillance. The basic premise of biometric authentication is that everyone is unique and an individual can be identified by his or her intrinsic physical or behavioral traits. (The term "biometrics" is derived from the Greek words "bio," meaning life, and "metric," meaning to measure.)
There are two main types of biometric identifiers:
1.     Physiological characteristics: The shape or composition of the body.
2.     Behavioral characteristics: The behavior of a person.
Examples of physiological characteristics used for biometric authentication include fingerprints; DNA; face, hand, retina or ear features; and odor. Behavioral characteristics are related to the pattern of a person's behavior, such as typing rhythm, gait, gestures and voice. Certain biometric identifiers, such as monitoring keystrokes or gait in real time, can be used to provide continuous authentication instead of a single one-off authentication check.
Other areas that are being explored in the quest to improve biometric authentication include brainwave signals, electronic tattoos, and a password pill that contains a microchip powered by the acid present in the stomach. Once swallowed, it creates a unique ID radio signal that can be sensed from outside the skin, turning the entire body into a password.
Biometric verification becoming common
Authentication by biometric verification is becoming increasingly common in corporate and public security systems, consumer electronics, and point-of-sale applications. In addition to security, the driving force behind biometric verification has been convenience, as there are no passwords to remember or security tokens to carry. Measuring someone's gait doesn't even require contact with the person.
Biometric devices, such as fingerprint readers, consist of:
·         A reader or scanning device.
·         Software that converts the scanned information into digital form and compares match points.
·         A database that stores the biometric data for comparison.
Accuracy of biometrics
The accuracy and cost of readers has, until recently, been a limiting factor in the adoption of biometric authentication. However, the presence of high-quality cameras, microphones and fingerprint readers in many of today's mobile devices means biometrics is likely to become a considerably more common method of authenticating users, particularly as the new FIDO specification makes two-factor authentication using biometrics cost-effective enough to roll out to the consumer market.
The quality of biometric readers is improving all the time, but they can still produce false negatives and false positives. One problem with fingerprints is that people inadvertently leave their fingerprints on many surfaces they touch, and it’s fairly easy to copy them and create a replica in silicone. People also leave DNA everywhere they go and someone’s voice is also easily captured. Dynamic biometrics like gestures and facial expressions can change, but they can be captured by HD cameras and copied. Also, whatever biometric is being measured, if the measurement data is exposed at any point during the authentication process, there is always the possibility it can be intercepted. This is a big problem, as people can’t change their physical attributes as they can a password. While limitations in biometric authentication schemes are real, biometrics is a great improvement over passwords as a means of authenticating an individual.



How to list partition labels in Linux

This is a small post on how to view the labels assigned to partitions in Linux. We sometimes assign meaningful labels to partitions so that we can refer to them by name in the /etc/fstab file for readability. For example, the boot partition is usually given the label "boot".
So how can we see the labels assigned to partitions in Linux? There are a couple of commands, such as e2label, that show the label assigned to a partition.
Through e2label command
e2label device-name
Example:
root@linuxnix-209:/home/linuxnix# e2label /dev/sda1
boot
root@linuxnix-209:/home/taggle#
Through blkid command
blkid
Output:
/dev/sda1: UUID="26299cdc-ea26-4fcd-a111-9af875d58a81" TYPE="ext4" LABEL="boot"
Through /dev folder mappings
ls -l /dev/disk/by-label/
Output:
root@linuxnix-209:/home/linuxnix# ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 Nov 18 13:08 boot -> ../../sda1
The /dev/disk/by-label directory and the blkid command are the most useful options for listing all assigned labels at once.



Saturday 12 December 2015

How to troubleshoot DNS server failures

DNS server failures are some of the most serious types of failures that can occur on a Windows network. If DNS is not working, then Active Directory will not work either. Furthermore, users may not be able to access resources on the local network or the Internet. If your clients experience these types of problems, they will most likely call on you for help. As a network solution provider, you need to be familiar with how DNS works and how to perform basic troubleshooting. In this article, I show you some simple techniques for troubleshooting a DNS server failure.
Is the DNS server really to blame?
I have fixed a number of DNS problems over the years and very few have actually been related to failures on the DNS server. More often than not, the problem existed on the machine that was trying to perform the DNS query, rather than on the DNS server itself. Fortunately, there are some quick tests that you can use to narrow down the problem.
First, confirm the DNS server's IP address and that the DNS server service is running. Once you verify these two things, you can get started with the process of troubleshooting the DNS server failure.
I like to start out by making sure that the client machine is pointed to the correct DNS server. The easiest way to do this is to open a command prompt window and enter the following command:
IPCONFIG /ALL
This command will list the computer's TCP/IP configuration. You can get the same information through the computer's network configuration screens, but I prefer to use this method because I have run into a couple of instances where the information that Windows showed did not match the configuration that Windows was actually using.
Upon displaying the machine's TCP/IP configuration, verify that the computer is pointed to the correct DNS server. For example, if you look at Figure A, you can see that my computer is pointed to a DNS server with an IP address of 147.100.100.34.
Verify that the machine's TCP/IP is configured to use the correct DNS server.
Assuming that the configuration is correct, the next thing I recommend doing is pinging the DNS server. This will verify that the client's machine is actually able to communicate with the DNS server. Keep in mind, though, that if the DNS server's firewall is configured to block ICMP traffic then the ping will not be successful.
Once you have verified that the client can communicate with the DNS server, it's time to see if the DNS server is able to resolve host names. The easiest way to do this is to look up a familiar host name whose IP address you already know. For example, I know that my website uses the IP address 24.235.10.4. Therefore, if I run the NSLOOKUP command against www.brienposey.com, my DNS server should resolve www.brienposey.com to 24.235.10.4, as shown in Figure B.
NSLOOKUP verified the IP address of my website.
One more important thing to notice in Figure B is that Windows also verifies the IP address of the DNS server that was used to resolve the domain name. This IP address should match the one that is shown in Figure A.
What happens if the NSLOOKUP command returns an incorrect IP address for the target domain? Well, there are a couple of things that could have happened. One possibility is that the domain's IP address has changed, but the change has not yet been replicated to the DNS server. Another possibility is that malware has modified the contents of the DNS cache. Once Windows has resolved a domain name to an IP address, the name resolution is cached and kept on hand for a while so that Windows does not have to repeat the query each time the domain name needs to be used. If there is an invalid entry in the cache, then Windows will not be able to access the domain correctly.
Fortunately, it is easy to flush the DNS cache. To do so, just enter the IPCONFIG command followed by the /FLUSHDNS switch. If you are running Windows Vista, then this operation will require elevated privileges. You can get these privileges by right-clicking on the Command Prompt menu option and choosing Run As Administrator from the resulting shortcut menu.
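On Windows 8 and Windows Server 2012 or later, the DnsClient PowerShell module offers equivalents to the commands above, which can be handy in scripts. This is a minimal sketch using the host name and DNS server address from the earlier examples.

# Minimal sketch: query a specific DNS server, inspect the local cache, then flush it.
Resolve-DnsName -Name www.brienposey.com -Server 147.100.100.34
Get-DnsClientCache | Where-Object { $_.Entry -like '*brienposey*' }
Clear-DnsClientCache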
Once you flush the DNS cache, try running NSLOOKUP once again. If the host name is still incorrect, then there are a couple of different possibilities. For example, the DNS server may have lost connectivity to a root-level server. Another possibility is that there is an incorrect entry in the LMHOSTS file or in the Windows registry. I show you how to deal with these types of issues in part 2 of this series.