Windows Server 2008/R2 Interview Questions Part 1
Difference between 2003 and 2008
1) Windows Server 2008 is a combination of Vista and Windows Server 2003 R2, and some new services are introduced in it:
1. RODC: a new type of domain controller is introduced, the Read-Only Domain Controller.
2. WDS (Windows Deployment Services) replaces RIS from Windows Server 2003.
3. Shadow copies are available for every folder.
4. The boot sequence has changed (the Windows Boot Manager and BCD store replace NTLDR and boot.ini).
5. The installation process is fully 32-bit, whereas in 2003 it was 16-bit as well as 32-bit; that is why installation of 2008 is faster.
6. Services are installed and managed as roles.
7. The Group Policy editor is available as a separate option in the Active Directory administrative tools.
2) The main difference between 2003 and 2008 is virtualization and management. 2008 has more inbuilt components and updated third-party drivers. Microsoft introduces a major new feature with 2k8: Hyper-V (V for Virtualization), but only on the 64-bit versions. More and more companies are seeing this as a way of reducing hardware costs by running several ‘virtual’ servers on one physical machine. If you want to use this technology, make sure that you buy an edition of Windows Server 2008 that includes Hyper-V, then launch Server Manager and add the role.
Windows Server 2008 new features
1. Virtualization with Hyper V
2. Server Core – provides the minimum installation required to carry out a
specific server role, such as for a DHCP, DNS or print server. From a security
standpoint, this is attractive: fewer applications and services on the server
make for a smaller attack surface. In theory, there should also be less
maintenance and management with fewer patches to install, and the whole server
could take up as little as 3 GB of disk space, according to Microsoft.
3. IIS 7
4. Role based installation –
rather than configuring a full server install for a particular role by
uninstalling unnecessary components (and installing needed extras), you simply
specify the role the server is to play, and Windows will install what’s
necessary — nothing more.
5. Read Only Domain Controllers
(RODC)
It’s hardly news that branch offices often lack skilled IT staff to administer their servers, but they also face another, less talked about problem. While corporate data centers are often physically secured, servers at branch offices rarely have the same physical security protecting them. This makes them a convenient launch pad for attacks back to the main corporate servers. RODC provides a way to make an Active Directory database read-only. Thus, any mischief carried out at the branch office cannot propagate its way back to poison the Active Directory system as a whole. It also reduces traffic on WAN links.
6. Enhanced terminal services
Terminal services has been beefed up in Server 2008 in a number of ways. TS RemoteApp enables remote users to access a centralized application (rather than an entire desktop) that appears to be running on the local computer’s hard drive. These apps can be accessed via a Web portal or directly by double-clicking on a correctly configured icon on the local machine. TS Gateway secures sessions, which are then tunnelled over https, so users don’t need to use a VPN to use RemoteApps securely over the Internet. Local printing has also been made significantly easier.
7. Network Access Protection
Microsoft’s system for ensuring that clients connecting to Server 2008 are patched, running a firewall and in compliance with corporate security policies — and that those that are not can be remediated — is useful. However, similar functionality has been and remains available from third parties.
8. Windows PowerShell
Microsoft’s new(ish) command line shell and scripting language has proved
popular with some server administrators, especially those used to working in
Linux environments. Included in Server 2008, PowerShell can make some jobs
quicker and easier to perform than going through the GUI. Although it might
seem like a step backward in terms of user-friendly operation, it’s one of
those features that, once you’ve gotten used to it, you’ll never want to give up.
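For example, a couple of one-liners of the kind an administrator might run from the Server 2008 PowerShell console (the log and filter values here are only illustrative):
# List all services that are currently stopped
Get-Service | Where-Object { $_.Status -eq 'Stopped' }
# Show the 10 most recent entries from the System event log
Get-EventLog -LogName System -Newest 10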
Restartable Active Directory Domain
Services: You can now perform many actions,
such as offline defragmentation of the database, simply by stopping Active
Directory. This reduces the number of instances in which you must restart the
server in Directory Services Restore Mode and thereby reduces the length of
time the domain controller is unavailable to serve requests from users and applications.
Enhancements to Group Policy: Microsoft has added many new policy settings. In particular,
these settings enhance the management of Windows Vista client computers. All
policy management is now handled by means of the Group Policy Management
Console (GPMC), which was an optional feature first added to Windows Server
2003 R2. In addition, Microsoft has added new auditing capabilities to Group
Policy and added a searchable database for locating policy settings from within
GPMC. In Windows Server 2008 R2, GPMC enables you to use a series of PowerShell
cmdlets to automate many of the tasks (such as maintenance and linking of GPOs)
that you would otherwise perform in the GUI. In addition, R2 adds new policy
settings that enhance the management of Windows 7 computers.
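A minimal sketch of those R2 Group Policy cmdlets (the GPO name, OU and backup path are made-up examples):
Import-Module GroupPolicy
# Create a GPO and link it to an OU in one pipeline
New-GPO -Name "Branch Security Policy" | New-GPLink -Target "OU=Branch,DC=contoso,DC=com"
# Back up all GPOs in the domain as part of routine maintenance
Backup-GPO -All -Path C:\GPOBackups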
Windows Server 2008 R2 new features:
- Active Directory Recycle Bin
- Windows PowerShell 2.0
- Active Directory Administrative Center (ADAC)
- Offline domain join
- Active Directory health check
- Active Directory Web Services
- Active Directory Management Pack
- Windows Server Migration Tools
- Managed Service Accounts
What is Server Core?
How do you configure and manage a Windows Server 2008 Server Core installation?
The Server Core installation option
is an option that you can use for installing Windows Server 2008 or
Windows Server 2008 R2. A Server Core installation provides a minimal
environment for running specific server roles, which reduces the maintenance
and management requirements and the attack surface for those server roles. A
server running a Server Core installation of Windows Server 2008 supports
the following server roles:
- Active Directory Domain Services (AD DS)
- Active Directory Lightweight Directory Services (AD LDS)
- DHCP Server
- DNS Server
- File Services
- Hyper-V
- Print Services
- Streaming Media Services
- Web Server (IIS)
A server running a Server Core
installation of Windows Server 2008 R2 supports the following server
roles:
- Active Directory Certificate Services
- Active Directory Domain Services
- Active Directory Lightweight Directory Services (AD LDS)
- DHCP Server
- DNS Server
- File Services (including File Server Resource Manager)
- Hyper-V
- Print and Document Services
- Streaming Media Services
- Web Server (including a subset of ASP.NET)
A Server Core installation does not
include the traditional full graphical user interface. Once you have configured
the server, you can manage it locally at a command prompt or remotely using a
Terminal Server connection. You can also manage the server remotely using the Microsoft
Management Console (MMC) or command-line tools that support remote use.
Benefits of a Server
Core installation
The Server Core installation option
of Windows Server 2008 or Windows Server 2008 R2 provides the
following benefits:
- Reduced maintenance. Because the Server Core installation option installs only what is required to have a manageable server for the supported roles, less maintenance is required than on a full installation of Windows Server 2008.
- Reduced attack surface. Because Server Core installations are minimal, there are fewer applications running on the server, which decreases the attack surface.
- Reduced management. Because fewer applications and services are installed on a server running the Server Core installation, there is less to manage.
- Less disk space required. A Server Core installation requires only about 3.5 gigabytes (GB) of disk space to install and approximately 3 GB for operations after the installation.
How do you promote a
Server Core to DC?
In order to install Active Directory
DS on your server core machine you will need to perform the following tasks:
1. Configure an unattend
text file, containing the instructions for the DCPROMO process. In this example
you will create an additional DC for a domain called petrilab.local:
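A minimal unattend file for this scenario might look like the following; the site name, credentials and passwords are placeholder examples you would replace with values from your own environment:
[DCInstall]
ReplicaOrNewDomain=Replica
ReplicaDomainDNSName=petrilab.local
SiteName=Default-First-Site-Name
InstallDNS=Yes
ConfirmGc=Yes
UserDomain=petrilab.local
UserName=administrator
Password=*
SafeModeAdminPassword=P@ssw0rd1
RebootOnCompletion=Yes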
2. Configure the right
server core settings
After that you need to make sure the
core machine is properly configured.
1.
Perform any configuration settings that you require (tasks such as changing
the computer name, changing and configuring the IP address, subnet mask,
default gateway, DNS address, firewall settings, configuring remote desktop
and so on); a command sketch for these tasks appears after the requirements list below.
2.
After changing the required server
configuration, make sure that for the task of creating it as a DC – you have
the following requirements in place:
- A partition formatted with NTFS (you should, it’s a server…)
- A network interface card, configured properly with the right driver
- A network cable plugged in
- The right IP address, subnet mask, default gateway
And most importantly, do not forget:
- The right DNS setting, in most cases, pointing to an existing internal DNS in your corporate network
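A sketch of the typical Server Core commands for these configuration tasks; the computer name, interface name and addresses below are example values only and should be adjusted to your environment:
:: Rename the computer
netdom renamecomputer %computername% /newname:CORE-DC01
:: Set a static IP address, gateway and DNS server
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1
netsh interface ipv4 add dnsserver name="Local Area Connection" address=192.168.1.5 index=1
:: Open the firewall for remote administration and enable Remote Desktop
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
cscript %windir%\system32\scregedit.wsf /ar 0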
3. Copy the unattend
file to the server core machine
Now you need to copy the unattend
file from wherever you’ve stored it. You can run it from a network location but
I prefer to have it locally on the core machine. You can use the NET USE
command on server core to map to a network path and copy the file to the local
drive. You can also use a regular server/workstation to graphically access the
core’s C$ drive (for example) and copy the file to that location.
4. Run the DCPROMO
process
Next you need to manually run
DCPROMO. To run the Active Directory Domain Services Installation Wizard in
unattended mode, use the following command at a command prompt:
dcpromo /unattend:<path to the unattend file>
5. Reboot the machine
In order to reboot the server core machine, type the following text in the command prompt and press Enter.
shutdown /r /t 0
What are RODCs? What
are advantages?
A read-only domain controller (RODC)
is a new type of domain controller in the Windows Server® 2008
operating system. With an RODC, organizations can easily deploy a domain controller
in locations where physical security cannot be guaranteed. An RODC hosts
read-only partitions of the Active Directory Domain Services
(AD DS) database.
Before the release of Windows
Server 2008, if users had to authenticate with a domain controller over a
wide area network (WAN), there was no real alternative. In many cases, this was
not an efficient solution. Branch offices often cannot provide the adequate
physical security that is required for a writable domain controller.
Furthermore, branch offices often have poor network bandwidth when they are
connected to a hub site. This can increase the amount of time that is required
to log on. It can also hamper access to network resources.
Beginning with Windows
Server 2008, an organization can deploy an RODC to address these problems.
As a result, users in this situation can receive the following benefits:
- Improved security
- Faster logon times
- More efficient access to resources on the network
What does an RODC do?
Inadequate physical security is the most
common reason to consider deploying an RODC. An RODC provides a way to deploy a
domain controller more securely in locations that require fast and reliable
authentication services but cannot ensure physical security for a writable
domain controller.
However, your organization may also
choose to deploy an RODC for special administrative requirements. For example,
a line-of-business (LOB) application may run successfully only if it is
installed on a domain controller. Or, the domain controller might be the only
server in the branch office, and it may have to host server applications.
In such cases, the LOB application
owner must often log on to the domain controller interactively or use Terminal
Services to configure and manage the application. This situation creates a
security risk that may be unacceptable on a writable domain controller.
An RODC provides a more secure
mechanism for deploying a domain controller in this scenario. You can grant a
non administrative domain user the right to log on to an RODC while minimizing
the security risk to the Active Directory forest.
You might also deploy an RODC in
other scenarios where local storage of all domain user passwords is a primary
threat, for example, in an extranet or application-facing role.
How do you install an
RODC?
1. Make sure you are a member of the Domain Admins group.
2. Ensure that the forest functional level is Windows Server 2003 or higher.
3. Run adprep /rodcprep.
4. Install a writable domain controller that runs Windows Server 2008 – an RODC must replicate domain
updates from a writable domain controller that runs Windows Server 2008.
Before you install an RODC, be sure to install a writable domain controller
that runs Windows Server 2008 in the same domain. The domain controller
can run either a full installation or a Server Core installation of Windows
Server 2008. In Windows Server 2008, the writable domain controller
does not have to hold the primary domain controller (PDC) emulator operations
master role.
5. You can install an RODC on either a full installation of Windows Server 2008 or on a Server Core
installation of Windows Server 2008. Follow the steps below (a scripted alternative is sketched after the list):
- Click Start, type dcpromo, and then press ENTER to start the Active Directory Domain Services Installation Wizard.
- On the Choose a Deployment Configuration page, click Existing forest, click Add a domain controller to an existing domain
- On the Network Credentials page, type the name of a domain in the forest where you plan to install the RODC. If necessary, also type a user name and password for a member of the Domain Admins group, and then click Next.
- Select the domain for the RODC, and then click Next.
- Click the Active Directory site for the RODC, and then click Next.
- Select the Read-only domain controller check box, as shown in the following illustration. By default, the DNS server check box is also selected. To run the DNS server on the RODC, another domain controller running Windows Server 2008 must be running in the domain and hosting the DNS domain zone. An Active Directory–integrated zone on an RODC is always a read-only copy of the zone file. Updates are sent to a DNS server in a hub site instead of being made locally on the RODC.
- To use the default folders that are specified for the Active Directory database, the log files, and SYSVOL, click Next.
- Type and then confirm a Directory Services Restore Mode password, and then click Next.
- Confirm the information that appears on the Summary page, and then click Next to start the AD DS installation. You can select the Reboot on completion check box to make the rest of the installation complete automatically.
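If you prefer to script the promotion rather than stepping through the wizard (the scripted alternative mentioned above), dcpromo also accepts these settings as command-line parameters. A sketch with example values for the domain, site and password:
dcpromo /unattend /ReplicaOrNewDomain:ReadOnlyReplica /ReplicaDomainDNSName:contoso.com /SiteName:BranchSite /InstallDNS:Yes /ConfirmGc:Yes /SafeModeAdminPassword:P@ssw0rd1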
What are the minimum requirements to install Windows Server 2008?
Talk about all the
AD-related roles in Windows Server 2008/R2.
Active Directory Domain Services
Active Directory Domain Services (AD
DS), formerly known as Active Directory Directory Services, is the central
location for configuration information, authentication requests, and
information about all of the objects that are stored within your forest. Using
Active Directory, you can efficiently manage users, computers, groups,
printers, applications, and other directory-enabled objects from one secure,
centralized location.
Benefits
- Lower costs of managing Windows networks.
- Simplify identity management by providing a single view of all user information.
- Boost security with the ability to enable multiple types of security mechanisms within a single network.
- Improve compliance by using Active Directory as a primary source for audit data.
Active Directory Rights Management
Services
Your organization’s intellectual
property needs to be safe and highly secure. Active Directory Rights Management
Services, a component of Windows Server 2008, is available to help make sure
that only those individuals who need to view a file can do so. AD RMS can
protect a file by identifying the rights that a user has to the file. Rights
can be configured to allow a user to open, modify, print, forward, or take
other actions with the rights-managed information. With AD RMS, you can now
safeguard data when it is distributed outside of your network.
Active Directory Federation Services
Active Directory Federation Services
is a highly secure, highly extensible, and Internet-scalable identity access
solution that allows organizations to authenticate users from partner
organizations. Using AD FS in Windows Server 2008, you can simply and very
securely grant external users access to your organization’s domain resources.
AD FS can also simplify integration between untrusted resources and domain
resources within your own organization.
Active Directory Certificate
Services
Most organizations use certificates
to prove the identity of users or computers, as well as to encrypt data during
transmission across unsecured network connections. Active Directory Certificate
Services (AD CS) enhances security by binding the identity of a person, device,
or service to their own private key. Storing the certificate and private key
within Active Directory helps securely protect the identity, and Active Directory
becomes the centralized location for retrieving the appropriate information
when an application places a request.
Active Directory Lightweight
Directory Services
Active Directory Lightweight
Directory Service (AD LDS), formerly known as Active Directory Application
Mode, can be used to provide directory services for directory-enabled
applications. Instead of using your organization’s AD DS database to store the
directory-enabled application data, AD LDS can be used to store the data. AD
LDS can be used in conjunction with AD DS so that you can have a central
location for security accounts (AD DS) and another location to support the
application configuration and directory data (AD LDS). Using AD LDS, you can
reduce the overhead associated with Active Directory replication, you do not
have to extend the Active Directory schema to support the application, and you
can partition the directory structure so that the AD LDS service is only
deployed to the servers that need to support the directory-enabled application.
What are the new
Domain and Forest Functional Levels in Windows Server 2008/R2?
Domain Function Levels
To activate a new domain function
level, all DCs in the domain must be running the right operating system. After
this requirement is met, the administrator can raise the domain functional
level. Here’s a list of the available domain function levels available in
Windows Server 2008:
Windows 2000 Native Mode
This is the default function level
for new Windows Server 2008 Active Directory domains.
Supported Domain controllers – Windows 2000, Windows Server 2003, Windows Server 2008.
Windows Server 2003 Mode
To activate the new domain features,
all domain controllers in the domain must be running Windows Server 2003. After
this requirement is met, the administrator can raise the domain functional
level to Windows Server 2003.
Supported Domain controllers – Windows Server 2003, Windows Server 2008.
Windows Server 2008 Mode
To activate the new domain features, all domain controllers in the domain must
be running Windows Server 2008. After this requirement is met, the
administrator can raise the domain functional level to Windows Server 2008.
Supported Domain controllers – Windows Server 2008.
Windows 2008 Forest
function levels
Forest functionality activates
features across all the domains in your forest. To activate a new forest
function level, all the domains in the forest must be running the right
operating system and be set to the right domain function level. After this requirement
is met, the administrator can raise the forest functional level. Here’s a list
of the available forest function levels available in Windows Server 2008:
Windows 2000 forest function level
This is the default setting for new
Windows Server 2008 Active Directory forests.
Supported Domain controllers in all
domains in the forest – Windows 2000, Windows Server
2003, Windows Server 2008.
Windows Server 2003 forest function
level
To activate new forest-wide
features, all domain controllers in the forest must be running Windows Server
2003.
Supported Domain controllers in all
domains in the forest – Windows Server 2003, Windows
Server 2008.
Windows Server 2008 forest function
level
To activate new forest-wide
features, all domain controllers in the forest must be running Windows Server
2008.
Supported Domain controllers in all
domains in the forest – Windows Server 2008.
When a child domain is created in the domain tree, what type of trust
relationship exists between the new child domain and the tree’s root domain?
Transitive and two way.
Which Windows Server 2008 tools make it easy to manage and configure a server’s roles and features?
The Server Manager window enables
you to view the roles and features installed on a server and also to quickly
access the tools used to manage these various roles and features. The Server
Manager can be used to add and remove roles and features as needed.
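Alongside the Server Manager GUI, Windows Server 2008 also ships a command-line counterpart, ServerManagerCmd.exe, which can be used for scripted role management; for example:
:: List the roles and features currently installed on the server
ServerManagerCmd -query
:: Install the Web Server (IIS) role
ServerManagerCmd -install Web-Server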
What is WDS? How is
WDS configured and managed on a server running Windows Server 2008?
Windows Deployment Services (WDS) is the updated and redesigned version of Remote Installation Services (RIS).
Windows Deployment Services enables you to deploy Windows operating systems,
particularly Windows Vista. You can use it to set up new computers by
using a network-based installation. This means that you do not have to install
each operating system directly from a CD or DVD.
Benefits of Windows Deployment
Services
Windows Deployment Services provides
organizations with the following benefits:
- Allows network-based installation of Windows operating systems, which reduces the complexity and cost when compared to manual installations.
- Deploys Windows images to computers without operating systems.
- Supports mixed environments that include Windows Vista, Microsoft Windows XP and Microsoft Windows Server 2003.
- Built on standard Windows Vista setup technologies including Windows PE, .wim files, and image-based setup.
Prerequisites for installing Windows
Deployment Services
Your computing environment must meet
the following technical requirements to install Windows Deployment Services:
- Active Directory. A Windows Deployment Services server must be either a member of an Active Directory domain or a domain controller for an Active Directory domain. The Active Directory domain and forest versions are irrelevant; all domain and forest configurations support Windows Deployment Services.
- DHCP. You must have a working Dynamic Host Configuration Protocol (DHCP) server with an active scope on the network because Windows Deployment Services uses PXE, which relies on DHCP for IP addressing.
- DNS. You must have a working Domain Name System (DNS) server on the network to run Windows Deployment Services.
- An NTFS partition. The server running Windows Deployment Services requires an NTFS file system volume for the image store.
- Credentials. To install the role, you must be a member of the Local Administrators group on the Windows Deployment Services server. To install an image, you must be a member of the Domain Users group.
- Windows Server 2003 SP1 or SP2 with RIS installed (this applies when installing Windows Deployment Services on a Windows Server 2003 machine; RIS does not have to be configured, but it must be installed).
Name some of the major
changes in GPO in Windows Server 2008.
Cost savings through power options
In Windows Server 2008, all
power options have been Group Policy enabled, providing a potentially
significant cost savings. Controlling power options through Group Policy could
save organizations a significant amount of money. You can modify specific power
options through individual Group Policy settings or build a custom power plan
that is deployable by using Group Policy.
Ability to block device installation
In Windows Server 2008, you can
centrally restrict devices from being installed on computers in your
organization. You will now be able to create policy settings to control access
to devices such as USB drives, CD-RW drives, DVD-RW drives, and other removable
media.
Improved security settings
In Windows Server 2008, the
firewall and IPsec Group Policy settings are combined to allow you to leverage
the advantages of both technologies, while eliminating the need to create and
maintain duplicate functionality. Some scenarios supported by these combined
firewall and IPsec policy settings are secure server-to-server communications
over the Internet, limiting access to domain resources based on trust
relationships or health of a computer, and protecting data communication to a
specific server to meet regulatory requirements for data privacy and security.
Expanded Internet Explorer settings
management
In Windows Server 2008, you can
open and edit Internet Explorer Group Policy settings without the risk of
inadvertently altering the state of the policy setting based on the
configuration of the administrative workstation. This change replaces earlier
behavior in which some Internet Explorer policy settings would change based on
the policy settings enabled on the administrative workstation used to view the
settings
Printer assignment based on location
The ability to assign printers based
on location in the organization or a geographic location is a new feature in
Windows Server 2008. In Windows Server 2008, you can assign printers
based on site location. When mobile users move to a different location, Group
Policy can update their printers for the new location. Mobile users returning
to their primary locations see their usual default printers.
Printer driver installation
delegated to users
In Windows Server 2008, administrators
can now delegate to users the ability to install printer drivers by using Group
Policy. This feature helps to maintain security by limiting distribution of
administrative credentials.
What is the AD Recycle
Bin? How do you use it?
Active Directory Recycle Bin
helps minimize directory service downtime by enhancing your ability to preserve
and restore accidentally deleted Active Directory objects without
restoring Active Directory data from backups, restarting
Active Directory Domain Services (AD DS), or rebooting domain
controllers.
When you enable
Active Directory Recycle Bin, all link-valued and non-link-valued
attributes of the deleted Active Directory objects are preserved and the
objects are restored in their entirety to the same consistent logical state
that they were in immediately before deletion. For example, restored user
accounts automatically regain all group memberships and corresponding access
rights that they had immediately before deletion, within and across domains.
Active Directory Recycle Bin is
functional for both AD DS and Active Directory Lightweight Directory
Services (AD LDS) environments.
By default, Active Directory
Recycle Bin in Windows Server 2008 R2 is disabled. To enable it, you must
first raise the forest functional level of your AD DS or AD LDS
environment to Windows Server 2008 R2, which in turn requires
all forest domain controllers or all servers that host instances of AD LDS
configuration sets to be running Windows Server 2008 R2.
To enable Active Directory
Recycle Bin using the Enable-ADOptionalFeature cmdlet
1. Click Start,
click Administrative Tools, right-click Active Directory Module for
Windows PowerShell, and then click Run as administrator.
2.
At the Active Directory module for Windows PowerShell command prompt, type
the following command, and then press ENTER:
Enable-ADOptionalFeature -Identity <ADOptionalFeature> -Scope <ADOptionalFeatureScope> -Target <ADEntity>
For example, to enable
Active Directory Recycle Bin for contoso.com, type the following command,
and then press ENTER:
Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=contoso,DC=com' -Scope ForestOrConfigurationSet -Target 'contoso.com'
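Once the feature is enabled, a deleted object can be located and restored with the Active Directory module cmdlets; a minimal sketch, where the account name used in the filter is only an example:
Import-Module ActiveDirectory
# Find the deleted account among deleted objects and restore it in place
Get-ADObject -Filter { samAccountName -eq "jsmith" } -IncludeDeletedObjects | Restore-ADObject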
What are AD Snapshots?
How do you use them?
A snapshot is a shadow copy—created
by the Volume Shadow Copy Service (VSS)—of the volumes that contain the Active
Directory database and log files. With Active Directory snapshots, you can view
the data inside such a snapshot on a domain controller without the need to
start the server in Directory Services Restore Mode.
Windows Server 2008 has a new
feature allowing administrators to create snapshots of the Active Directory
database for offline use. With AD snapshots you can mount a backup of AD DS
under a different set of ports and have read-only access to your backups
through LDAP.
There are quite a few scenarios for
using AD snapshots. For example, if someone has changed properties of AD
objects and you need to revert to their previous values, you can mount a copy
of a previous snapshot to an alternate port and easily export the required
attributes for every object that was changed. These values can then be imported
into the running instance of AD DS. You can also restore deleted objects or
simply view objects for diagnostic purposes.
It does not allow you to move or
copy items or information from the snapshot to the live database. In order to
do that you will need to manually export the relevant objects or attributes
from the snapshot, and manually import them back to the live AD database.
Steps for using Snapshot:
1. Create a snapshot:
open CMD.exe, Ntdsutil, activate
instance ntds, snapshot, create, list all.
2. Mounting an Active Directory
snapshot:
Before connecting to the snapshot we
need to mount it. By looking at the results of the List All command in above
step, identify the snapshot that you wish to mount, and note the number next to
it.
Type Ntdsutil, Snapshot, List all,
Mount 2. The snapshot gets mounted to c:\$SNAP_200901250030_VOLUMEC$. Now you
can refer to this path to see the objects in the snapshot.
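Put together, the create-and-mount sequence looks roughly like this at the command prompt; the snapshot index 2 and the mount path are just the values from this example and will differ on your server:
C:\>ntdsutil
ntdsutil: activate instance ntds
ntdsutil: snapshot
snapshot: create
snapshot: list all
snapshot: mount 2
snapshot: quit
ntdsutil: quit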
3. Connecting an Active Directory
snapshot:
In order to connect to the AD
snapshot you’ve mounted you will need to use the DSAMAIN command. DSAMAIN is a
command-line tool that is built into Windows Server 2008. It is available if
you have the Active Directory Domain Services (AD DS) or Active Directory
Lightweight Directory Services (AD LDS) server role installed.
After using DSAMAIN to expose the
information inside the AD snapshot, you can use any GUI tool that can connect
to the specified port, tools such as Active Directory Users and Computers
(DSA.msc), ADSIEDIT.msc, LDP.exe or others. You can also connect to it by using
command line tools such as LDIFDE or CSVDE, tools that allow you to export
information from that database.
dsamain -dbpath "c:\$SNAP_200901250030_VOLUMEC$\Windows\NTDS\ntds.dit" -ldapport 10289
The above command will allow you to
access the database using port 10289.
Now you can use LDP.exe tool to
connect to this mounted instance.
4. Disconnecting from the Active
Directory snapshot:
In order to disconnect from the AD
snapshot all you need to do is to type CTRL+C at the DSAMAIN command prompt
window. You’ll get a message indicating that the DS shut down successfully.
5. Unmounting the snapshot:
Run command, Ntdsutil, Snapshot,
List all, Unmount 2.
What is Offline Domain
Join? How do you use it?
You can use offline domain join to join computers to a
domain without contacting a domain controller over the network. You can join computers
to the domain when they first start up after an operating system installation.
No additional restart is necessary to complete the domain join. This helps
reduce the time and effort required to complete a large-scale computer
deployment in places such as datacenters.
For example, an organization might
need to deploy many virtual machines within a datacenter. Offline domain join
makes it possible for the virtual machines to be joined to the domain when they
initially start following the operating system installation. No additional
restart is required to complete the domain join. This can significantly reduce
the overall time required for wide-scale virtual machine deployments.
A domain join establishes a trust
relationship between a computer running a Windows operating system and an
Active Directory domain. This operation requires state changes to
AD DS and state changes on the computer that is joining the domain. To
complete a domain join in the past using previous Windows operating systems,
the computer that joined the domain had to be running and it had to have
network connectivity to contact a domain controller. Offline domain join
provides the following advantages over the previous requirements:
- The Active Directory state changes are completed without any network traffic to the computer.
- The computer state changes are completed without any network traffic to a domain controller.
- Each set of changes can be completed at a different time.
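Offline domain join is driven by the djoin.exe tool; a minimal sketch, where the domain, computer name and blob file path are example values:
:: On a machine that can reach a Windows Server 2008 R2 DC: pre-create the computer account and save the provisioning blob
djoin /provision /domain contoso.com /machine BRANCH-VM01 /savefile c:\blob.txt
:: On the new computer (no DC connectivity required): apply the blob to the local OS
djoin /requestODJ /loadfile c:\blob.txt /windowspath %SystemRoot% /localos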
What are Fine-Grained
Passwords? How do you use them?
You can use fine-grained password policies to specify multiple
password policies within a single domain. You can use fine-grained password
policies to apply different restrictions for password and account lockout
policies to different sets of users in a domain.
For example, you can apply stricter
settings to privileged accounts and less strict settings to the accounts of
other users. In other cases, you might want to apply a special password policy
for accounts whose passwords are synchronized with other data sources.
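On Windows Server 2008 R2 you can create and apply a fine-grained password policy (a Password Settings Object) with the Active Directory module cmdlets; on plain Windows Server 2008, PSOs are typically created with ADSI Edit instead. A sketch, where the policy name, precedence, values and target group are examples:
Import-Module ActiveDirectory
# Create a stricter password policy for privileged accounts
New-ADFineGrainedPasswordPolicy -Name "AdminsPSO" -Precedence 10 -MinPasswordLength 14 -LockoutThreshold 5 -ComplexityEnabled $true
# Apply the policy to the Domain Admins group
Add-ADFineGrainedPasswordPolicySubject -Identity "AdminsPSO" -Subjects "Domain Admins"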
Talk about Restartable
Active Directory Domain Services in Windows Server 2008/R2. What is this
feature good for?
Restartable AD DS is a feature
in Windows Server 2008 that you can use to perform routine maintenance
tasks on a domain controller, such as applying updates or performing offline
defragmentation, without restarting the server.
While AD DS is running, a
domain controller running Windows Server 2008 behaves the same way as a
domain controller running Microsoft® Windows® 2000 Server or
Windows Server 2003.
While AD DS is stopped, you can
continue to log on to the domain by using a domain account if other domain
controllers are available to service the logon request. You can also log on to
the domain with a domain account while the domain controller is started in
Directory Services Restore Mode (DSRM) if other domain controllers are
available to service the logon request.
If no other domain controller is
available, you can log on to the domain controller where AD DS is stopped
in Directory Services Restore Mode (DSRM) only by using the DSRM Administrator
account and password by default, as in Windows 2000 Server
Active Directory or Windows Server 2003 Active Directory.
Benefits of
restartable AD DS
Restartable AD DS reduces the
time that is required to perform offline operations such as offline
defragmentation. It also improves the availability of other services that run
on a domain controller by keeping them running when AD DS is stopped. In
combination with the Server Core installation option of Windows
Server 2008, restartable AD DS reduces the overall servicing
requirements of a domain controller.
In Windows 2000 Server
Active Directory and Windows Server 2003 Active Directory,
you must restart the domain controller in DSRM when you perform offline
defragmentation of the database or apply security updates. In contrast, you can
stop Windows Server 2008 AD DS as you stop other services that are
running locally on the server. This makes it possible to perform offline
AD DS operations more quickly than you could with Windows 2000 Server
and Windows Server 2003.
You can use Microsoft Management
Console (MMC) snap-ins, or the Net.exe command-line tool, to stop or restart
Active Directory® Domain Services (AD DS) in the
Windows Server® 2008 operating system. You can stop AD DS to
perform tasks, such as offline defragmentation of the AD DS database,
without restarting the domain controller. Other services that run on the
server, but that do not depend on AD DS to function, are available to
service client requests while AD DS is stopped. An example of such a
service is Dynamic Host Configuration Protocol (DHCP).
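For example, stopping AD DS for an offline defragmentation and starting it again can be done from the command line:
net stop ntds
:: net stop ntds also offers to stop dependent services such as the KDC and DNS Server
:: ...perform the offline task here, e.g. database compaction with ntdsutil...
net start ntds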
Windows 2008 Server Interview Questions Part II
October 19, 2011
1. What are the Important Windows port numbers:
RDP – 3389 – (windows rdp port number and remote desktop port number)
FTP – 21 – (file transfer protocol)
TFTP – 69 – ( tftp port number )
Telnet – 23 – ( telnet port number)
SMTP – 25 – ( SMTP port number)
DNS – 53 – ( dns port number and Domain Name System port number)
DHCP – 67/68 – ( Dynamic Host Configuration Protocol: UDP 67 server, UDP 68 client )
POP3 – 110 – ( post office Protocol 3 port )
HTTP – 80 – (http port number)
HTTPS – 443 – (https port number)
NNTP – 119 – ( Network News Transfer Protocol Port number )
NTP – 123 – (ntp port number and network Time Protocol and SNTP port number )
IMAP – 143 – (Internet Message Access Protocol port number)
SSMTP – 465 – ( SMTP over SSL )
SIMAP – 993 – ( IMAP over SSL )
SPOP3 – 995 – ( POP3 over SSL )
Time – 37 – ( Time protocol port number )
NetBios – 137 – ( Name Service )
NetBios – 138 – ( Datagram Service )
NetBios – 139 – ( Session Service )
DHCPv6 Client – 546 – ( DHCPv6 Client port number )
DHCPv6 Server – 547 – ( DHCPv6 Server port number )
Global Catalog – 3268 – (Global Catalog port number)
LDAP – 389 – ( LDAP port number and Lightweight Directory Access Protocol port number )
RPC – 135 – (remote procedure call Port number)
Kerberos – 88 – ( Kerberos Port Number)
SSH – 22 – ( ssh port number and Secure Shell port number)
2. How to check tombstone lifetime value in your Forest
The tombstone lifetime value differs from OS to OS. For Windows Server 2000/2003 it is 60 days; in Windows Server 2003 SP1 the default tombstone lifetime (TSL) value was increased from 60 days to 180 days; in Windows Server 2003 R2 the TSL value went back to 60 days; and in Windows Server 2003 R2 SP2 and Windows Server 2008 it is 180 days.
If you are migrating a Windows 2003 environment to Windows 2008, it stays at 60 days.
You can use the command below to check/view the current tombstone lifetime value for your domain/forest:
dsquery * "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,<forestDN>" -scope base -attr tombstoneLifetime
Replace <forestDN> with your forest root domain DN; for domainname.com the DN would be dc=domainname,dc=com
Source: http://technet.microsoft.com/en-us/library/cc784932(WS.10).aspx
3. How to find the domain controller that contains the lingering object
If Strict Replication Consistency is enabled:
Lingering objects are not present on domain controllers that log Event ID 1988; the source domain controller named in the event contains the lingering object.
If Strict Replication Consistency is not enabled:
Lingering objects are not present on domain controllers that log Event ID 1388; the domain controller that does not log Event ID 1388 is the one that contains the lingering object.
If you have 100 domain controllers without Strict Replication Consistency enabled, you will get Event ID 1388 on 99 of them, all except the one that contains the lingering object.
You need to remove the lingering objects from the affected domain controller, or decommission that domain controller.
You can use the Event Comb tool (Eventcombmt.exe), a multi-threaded tool that can be used to gather specific events from the Event Viewer logs of different computers at the same time.
You can download these tools from the following location:
http://www.microsoft.com/downloads/details.aspx?FamilyID=9d467a69-57ff-4ae7-96ee-b18c4790cffd&DisplayLang=en
4. What are Active Directory ports:
List of Active Directory ports for Active Directory replication and Active Directory authentication; these ports can be used when configuring a firewall.
Active Directory replication- There is no defined port for Active Directory replication, Active Directory replication remote procedure calls (RPC) occur dynamically over an available port through RPCSS (RPC Endpoint Mapper) by using port 135
File Replication Services (FRS)- There is no defined port for FRS, FRS replication over remote procedure calls (RPCs) occurs dynamically over an available port by using RPCSS (RPC Endpoint Mapper ) on port 135
Other required ports for Active Directory
TCP 53 – DNS (zone transfers)
UDP 53 – DNS (queries)
TCP 42- WINS
UDP 42- WINS
TCP 3389- RDP (Remote Desktop)
TCP 135 – MS-RPC
TCP 1025 & 1026 – AD Login & replication
TCP 389 – LDAP
TCP 636 – LDAP over SSL/TLS
TCP 3268 -Global Catalog
TCP 3269 – Global Catalog over SSL/TLS
UDP 137 & 138 – NetBIOS related
UDP 88 – Kerberos v5
TCP 445 – SMB , Microsoft-ds
TCP 139 – SMB
5. How to do active directory health checks?
As an administrator you have to check your Active Directory health regularly to reduce Active Directory related issues. What happens if you are not monitoring the health of your Active Directory?
Let’s say one of the domain controllers fails to replicate. On the first day you will not have any issue, but if this continues you will start to see logon problems, and object changes and new objects created on the other domain controllers will not appear on it; this leads to further issues.
If the domain controller has not replicated for more than 60 days, it will lead to lingering objects.
Command to check replication to all the DCs (through this we can check Active Directory health):
Repadmin /replsum /bysrc /bydest /sort:delta
You can also save the command output to a text file by using the command below:
Repadmin /replsum /bysrc /bydest /sort:delta >>c:\replication_report.txt
This will list the domain controllers that are failing to replicate, along with the delta value.
You can run this daily to check your Active Directory health.
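In addition to repadmin, the dcdiag tool is commonly used for a broader health check (this complements the commands above rather than replacing them; the output path is an example):
:: Run all tests against every DC in the enterprise and write a verbose report to a file
dcdiag /e /c /v /f:c:\dcdiag_report.txt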
6. GPRESULT failed with access denied error:
You are unable to get a result from gpresult on a Windows 2003 server; gpresult returns access denied errors, although you are able to update Group Policy without issue.
To resolve the access denied error when running gpresult, run the following commands to re-register userenv.dll and recompile the RSoP MOF file:
1. Open a command prompt
2. Re-register userenv.dll:
Regsvr32 /n /I c:\winnt\system32\userenv.dll
3. CD c:\windows\system32\wbem
4. Mofcomp scersop.mof
5. Gpupdate /force
6. Gpresult
Now you are able to run gpresult without error, and a server reboot is not required for this procedure.
7. What is the command to find out the site name for a given DC?
dsquery server NYDC01 -site
domain controller name = NYDC01
8. Command to find all DCs in the given site
Command to find all the Domain Controllers in the “Default-First-Site-Name” site
dsquery server -o rdn -site Default-First-Site-Name
Site name = Default-First-Site-Name
9. How many types of queries does DNS have?
Iterative Query
Recursive Query
Iterative Query
In this query the client asks the name server for the best possible answer; the name server checks its cache and the zones for which it is authoritative and returns the best possible answer to the client, which may be the full answer (such as an IP address) or a referral to another name server.
Recursive Query
The client demands either a full answer or an error message (for example, that the record or domain name does not exist).
Client machines always send recursive queries to the DNS server; if the DNS server does not have the requested information, it sends iterative queries to other name servers (through forwarders or root hints) until it gets the information, or until the name query fails.
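You can see the difference with nslookup: by default it sends recursive queries, and with recursion turned off the server returns a referral rather than chasing the full answer (the host name below is only an example):
nslookup
> set norecurse
> www.example.com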
Windows Server 2008 Active Directory
Interview Questions Part 1
October 20, 2011
Q. What is Active Directory?
Active Directory is the directory service introduced with Windows 2000 and used
by later versions of Windows Server. A directory service is a centralized,
hierarchical database that contains information about users and resources on a
network. In Windows 2000, this database is called the Active Directory data
store. The Active Directory data store contains information about various types
of network objects, including printers, shared folders, user accounts, groups,
and computers. In a Windows 2000 domain, a read/write copy of the Active
Directory data store is physically located on each domain controller in the
domain.
Three primary purposes of Active
Directory are:
- To provide user logon and authentication services
- To enable administrators to organize and manage user accounts, groups, and network resources
- To enable authorized users to easily locate network resources, regardless of where they are located on the network
A directory service consists of two
parts—a centralized, hierarchical database that contains information about
users and resources on a network, and a service that manages the database and
enables users of computers on the network to access the database. In Windows
2008, the database is called the Active Directory data store, or sometimes just
the directory. The Active Directory data store contains information about
various types of network objects, including printers, shared folders, user
accounts, groups, and computers. Windows Server computers that have a copy
of the Active Directory data store, and that run Active Directory, are called
domain controllers. In a Windows 2008 domain, a read/write copy of the Active
Directory data store is physically located on each domain controller in the
domain.
Q. What are the logical and physical components of Active Directory?
Logical Components of Active
Directory
In creating the hierarchical
database structure of Active Directory, Microsoft facilitated locating
resources such as folders and printers by name rather than by physical
location. These logical building blocks include domains, trees, forests, and
OUs. The physical location of objects within Active Directory is represented by
including all objects in a given location in its own site. Because a domain is
the basic unit on which Active Directory is built, the domain is introduced
first; followed by trees and forests (in which domains are located); and then
OUs, which are containers located within a domain.
Domain:
A domain is a logical grouping of
networked computers in which one or more of the computers has one or more
shared resources, such as a shared folder or a shared printer, and in which all
of the computers share a common central domain directory database that contains
user account security information. One distinct advantage of using a domain,
particularly on a large network, is that administration of user account
security for the entire network can be managed from a centralized location. In
a domain, a user has only one user account, which is stored in the domain
directory database. This user account enables the user to access shared
resources (that the user has permissions to access) located on any computer in
the domain
Active Directory domains can hold
millions of objects, as opposed to the Windows NT domain structure, which was
limited to approximately 40,000 objects. As in previous versions of Active
Directory, the Active Directory database file (ntds.dit) defines the domain.
Each domain has its own ntds.dit file, which is stored on (and replicated
among) all domain controllers by a process called multimaster replication. The
domain controllers manage the configuration of domain security and store the
directory services database. This arrangement permits central administration of
domain account privileges, security, and network resources. Networked devices
and users belonging to a domain validate with a domain controller at startup.
All computers that refer to a specific set of domain controllers make up the
domain. In addition, group accounts such as global groups and domain local
groups are defined on a domain-wide basis.
Trees
A tree is a group of domains that
shares a contiguous namespace. In other words, a tree consists of a parent
domain plus one or more sets of child domains whose names reflect that of the parent.
For example, a parent domain named examcram.com can include child domains with
names such as products.examcram.com, sales.examcram.com, and
manufacturing.examcram.com. Furthermore, the tree structure can contain
grandchild domains such as america.sales.examcram.com or
europe.sales.examcram.com, and so on, as shown in Figure 1-2. A domain called
que.com would not belong to the same tree. Following the inverted tree concept
originated by X.500, the tree is structured with the parent domain at the top and
child domains beneath it. All domains in a tree are linked with two-way,
transitive trust relationships; in other words, accounts in any one domain can
access resources in another domain and vice versa.
Forests
A forest is a grouping or
hierarchical arrangement of one or more separate, completely independent domain
trees. As such, forests have the following characteristics:
- All domains in a forest share a common schema.
- All domains in a forest share a common global catalog.
- All domains in a forest are linked by implicit two-way transitive trusts.
Trees in a forest have different
naming structures, according to their domains. Domains in a forest operate
independently, but the forest enables communication across the entire
organization.
Organizational Unit:
An organizational unit (OU) is a
container used to organize objects within one domain into logical
administrative groups. An OU can contain objects such as user accounts, groups,
computers, printers, applications, shared folders, and other OUs from the same
domain. OUs are represented by a folder icon with a book inside. The Domain
Controllers OU is created by default when Active Directory is installed to hold
new Microsoft Windows Server 2003 domain controllers. OUs can be added to other
OUs to form a hierarchical structure; this process is known as nesting OUs.
Each domain has its own OU structure—the OU structure within a domain is
independent of the OU structures of other domains.
There are three reasons for defining
an OU:
- To delegate administration – In the Windows Server 2003 operating system, you can delegate administration for the contents of an OU (all users, computers, or resource objects in the OU) by granting administrators specific permissions for an OU on the OU’s access control list.
- To administer Group Policy
- To hide objects
Physical Components of Active
Directory
There are two physical components of
Active Directory:
- Domain Controllers
- Sites
Domain Controllers
Any server on which you have
installed Active Directory is a domain controller. These servers authenticate
all users logging on to the domain in which they are located, and they also
serve as centers from which you can administer Active Directory in Windows
Server 2008. A domain controller stores a complete copy of all objects
contained within the domain, plus the schema and configuration information
relevant to the forest in which the domain is located. Unlike Windows NT, there
are no primary or backup domain controllers. Similar to Windows 2000 and
Windows Server 2003, all domain controllers hold a master, editable copy of the
Active Directory database.
Every domain must have at least one
DC. A domain may have more than one DC; having more than one DC provides the
following benefits:
- Fault tolerance: If one domain controller goes down, another one is available to authenticate logon requests and locate resources through the directory.
- Load balancing: All domain controllers within a site participate equally in domain activities, thus spreading out the load over several servers. This configuration optimizes the speed at which requests are serviced.
Sites
By contrast to the logical grouping
of Active Directory into forests, trees, domains, and OUs, Microsoft includes
the concept of sites to group together resources within a forest according to
their physical location and/or subnet. A site is a set of one or more IP subnets,
which are connected by a high-speed, always available local area network (LAN)
link. Figure 1-5 shows an example with two sites, one located in Chicago and
the other in New York. A site can contain objects from more than one tree or
domain within a single forest, and individual trees and domains can encompass
more than one site. The use of sites enables you to control the replication of
data within the Active Directory database as well as to apply policies to all
users and computers or delegate administrative control to these objects within
a single physical location. In addition, sites enable users to be authenticated
by domain controllers in the same physical location rather than a distant
location as often as possible. You should configure a single site for all work
locations connected within a high-speed, always available LAN link and
designate additional sites for locations separated from each other by a slower
wide area network (WAN) link. Using sites permits you to configure Active
Directory replication to take advantage
of the high-speed connection. It
also enables users to connect to a domain controller using a reliable,
high-speed connection.
Q. What are the components of Active Directory?
Object:
An object is any specific item that
can be cataloged in Active Directory. Examples of objects include users,
computers, printers, folders, and files. These items are classified by a
distinct set of characteristics, known as attributes. For example, a user can
be characterized by the username, full name, telephone number, email address, and
so on. Note that, in general, objects in the same container have the same types
of attributes but are characterized by different values of these attributes.
The Active Directory schema defines the extent of attributes that can be
specified for any object.
Classes
The Active Directory service, in
turn, classifies objects into classes. These classes are logical groupings of
similar objects, such as users. Each class is a series of attributes that
define the characteristics of the object.
Schemas
The schema is a set of rules that
define the classes of objects and their attributes that can be created in
Active Directory. It defines what attributes can be held by objects of various
types, which of the various classes can exist, and what object class can be a parent
of the current object class. For example, the User class can contain user
account objects and possess attributes such as password, group membership, home
folder, and so on.
When you first install Active
Directory on a server, a default schema is created, containing definitions of
commonly used objects and properties such as users, computers, and groups. This
default schema also contains definitions of objects and properties needed for
the functioning of Active Directory.
Global catalog
A global catalog server is a domain
controller that has an additional duty—it maintains a global catalog. A global
catalog is a master, searchable database that contains information about every
object in every domain in a forest. The global catalog contains a complete replica
of all objects in Active Directory for its host domain, and contains a partial
replica of all objects in Active Directory for every other domain in the
forest.
A global catalog server performs two important functions:
- Provides group membership information during logon and authentication
- Helps users locate resources in Active Directory
Q. What are the protocols used by
AD?
Because Active Directory is based on
standard directory access protocols, such as Lightweight Directory Access
Protocol (LDAP) version 3, and the Name Service Provider Interface (NSPI), it
can interoperate with other directory services employing these protocols.
LDAP is the directory access
protocol used to query and retrieve information from Active Directory. Because
it is an industry-standard directory service protocol, programs can be
developed using LDAP to share Active Directory information with other directory
services that also support LDAP.
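For example, the dsquery command-line tool accepts raw LDAP filters against Active Directory; the domain name contoso.com below is just a placeholder:
dsquery * "dc=contoso,dc=com" -filter "(&(objectCategory=person)(objectClass=user))" -attr sAMAccountName displayName
This returns the logon name and display name of every user object found under the specified search base.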
The NSPI protocol, which is used by
Microsoft Exchange 4.0 and 5.x clients, is supported by Active Directory to
provide compatibility with the Exchange directory.
Q. Minimum requirement to install
Win 2008 AD?
1. An NTFS partition with enough free space
2. An Administrator’s username and password
3. The correct operating system version
4. A NIC
5. Properly configured TCP/IP (IP address, subnet mask and – optional – default gateway)
6. A network connection (to a hub or to another computer via a crossover cable)
7. An operational DNS server (which can be installed on the DC itself)
8. A Domain name that you want to use
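Once these prerequisites are in place, the promotion itself is started with the Active Directory Installation Wizard, for example from Start > Run or a command prompt:
dcpromo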
Q. How do you verify whether the AD
installation is proper?
1. Default containers: These are created automatically when the first domain is created. Open Active Directory Users and Computers, and then verify that the following containers are present: Computers, Users, and ForeignSecurityPrincipals.
2. Default Domain Controllers organizational unit: Open Active Directory Users and Computers, and then verify this organizational unit.
3. Default-First-Site-Name
4. Active Directory database: The Active Directory database is your Ntds.dit file. Verify its existence in the %Systemroot%\Ntds folder.
5. Global catalog server: The first domain controller becomes a global catalog server, by default. To verify this item:
- a. Click Start, point to Programs, click Administrative Tools, and then click Active Directory Sites and Services.
- b. Double-click Sites to expand it, expand Servers, and then select your domain controller.
- c. Double-click the domain controller to expand the server contents.
- d. Below the server, an NTDS Settings object is displayed. Right-click the object, and then click Properties.
- e. On the General tab, you can observe a global catalog check box, which should be selected, by default.
6. Root domain: The forest root is
created when the first domain controller is installed. Verify your computer
network identification in My Computer. The Domain Name System (DNS) suffix of
your computer should match the domain name that the domain controller belongs
to. Also, ensure that your computer registers the proper computer role. To
verify this role, use the net accounts command. The computer role should say
“primary” or “backup” depending on whether it is the first domain controller in
the domain.
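For example, a quick way to check the role from a command prompt (findstr is only used here to shorten the output; the exact wording of the line can vary by version):
net accounts | findstr /i "role"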
7. Shared system volume: A Windows 2000
domain controller should have a shared system volume located in the
%Systemroot%\Sysvol\Sysvol folder. To verify this item, use the net share
command. The Active Directory also creates two standard policies during the
installation process: The Default Domain policy and the Default Domain
Controllers policy (located in the %Systemroot%\Sysvol\Domain\Policies folder).
These policies are displayed as the following globally unique identifiers
(GUIDs):
{31B2F340-016D-11D2-945F-00C04FB984F9}
representing the Default Domain policy
{6AC1786C-016F-11D2-945F-00C04fB984F9} representing the Default Domain Controllers policy
8. SRV resource records: You must have
a DNS server installed and configured for Active Directory and the associated
client software to function correctly. Microsoft recommends that you use
Microsoft DNS server, which is supplied with Windows 2000 Server as your DNS
server. However, Microsoft DNS server is not required. The DNS server that you
use must support the Service Resource Record (SRV RR) Requests for Comments
(RFC) 2052, and the dynamic update protocol (RFC 2136). Use the DNS Manager
Microsoft Management Console (MMC) snap-in to verify that the appropriate zones
and resource records are created for each DNS zone. Active Directory creates
its SRV RRs in the following folders:
- _Msdcs/Dc/_Sites/Default-first-site-name/_Tcp
- _Msdcs/Dc/_Tcp
In these locations, an SRV RR is
displayed for the following services:
- _kerberos
- _ldap
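You can also check for the SRV records from the command line; replace contoso.com with your own domain name:
nslookup -type=SRV _ldap._tcp.dc._msdcs.contoso.com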
Q. What is LDAP?
Short for Lightweight Directory
Access Protocol, a set of protocols for accessing information directories. LDAP
is based on the standards contained within the X.500 standard, but is
significantly simpler. And unlike X.500, LDAP supports TCP/IP, which is
necessary for any type of Internet access. Because it’s a simpler version of
X.500, LDAP is sometimes called X.500-lite.
Q. What is FRS (File replication
services)?
The File Replication Service (FRS)
replicates specific files using the same multi-master model that Active
Directory uses. It is used by the Distributed File System for replication of
DFS trees that are designated as domain root replicas. It is also used by
Active Directory to synchronize content of the SYSVOL volume automatically
across domain controllers. The reason the FRS service replicates contents of
the SYSVOL folder is so clients will always get a consistent logon environment when
logging on to the domain, no matter which domain controller actually handles
the request. When a client submits a logon request, he or she submits that
request for authentication to the SYSVOL directory. A subfolder of this
directory, called \scripts, is shared on the network as the netlogon share. Any
logon scripts contained in the netlogon share are processed at logon time.
Therefore, the FRS is responsible for all domain controllers providing the same
logon directory structure to clients throughout the domain.
Q. Can you connect Active Directory
to other 3rd-party Directory Services? Name a few options.
Yes, you can connect Active Directory to other third-party directory services, such as the directories used by SAP, Domino, etc., with the help of MIIS (Microsoft Identity Integration Server).
You can also use DirXML or LDAP to connect to other directories (e.g., eDirectory from Novell).
Q. Where is the AD database held?
What other folders are related to AD?
The AD database is saved in %systemroot%\NTDS. You can see other files also in this folder; these are the main files controlling the AD structure:
- ntds.dit
- edb.log
- res1.log
- res2.log
- edb.chk
When a change is made to the Win2K
database, triggering a write operation, Win2K records the transaction in the
log file (edb.log). Once written to the log file, the change is then written to
the AD database. System performance determines how fast the system writes the
data to the AD database from the log file. Any time the system is shut down,
all transactions are saved to the database.
During the installation of AD,
Windows creates two files: res1.log and res2.log. The initial size of each is
10MB. These files are used to ensure that changes can be written to disk should
the system run out of free disk space. The checkpoint file (edb.chk) records
transactions committed to the AD database (ntds.dit). During shutdown, a
“shutdown” statement is written to the edb.chk file. Then, during a reboot, AD
determines that all transactions in the edb.log file have been committed to the
AD database. If, for some reason, the edb.chk file doesn’t exist on reboot or
the shutdown statement isn’t present, AD will use the edb.log file to update
the AD database.
The last file in our list of files to know is the AD database itself, ntds.dit. By default, the file is located in %systemroot%\NTDS, along with the other files we’ve discussed.
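If you want to see where these files live and how large they are on a particular DC, ntdsutil can report on them; a minimal sketch of an interactive session on Windows Server 2008 (where the NTDS instance must be activated first) is:
ntdsutil
activate instance ntds
files
info
quit
quit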
Q. What is the SYSVOL folder?
The SYSVOL folder is critical
because it contains the domain’s public files. This directory is shared out (as
SYSVOL), and any files kept in the SYSVOL folder are replicated to all other
domain controllers in the domain using the File Replication Service (FRS)—and
yes, that’s important to know on the exam.
The SYSVOL folder also contains the
following items:
- The NETLOGON share, which is the location where domain logon requests are submitted for processing, and where logon scripts can be stored for client processing at logon time.
- Windows Group Policies
- FRS folders and files that must be available and synchronized between domain controllers if the FRS is in use. Distributed File System (DFS), for example, uses the FRS to keep shared data consistent between replicas.
You can go to the SYSVOL folder by typing %systemroot%\SYSVOL on a DC.
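You can also confirm that the SYSVOL and NETLOGON shares are published by running:
net share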
Q. Name the AD NCs and replication
issues for each NC
Schema NC, Configuration NC, Domain NC
Schema NC: This NC is replicated to
every other domain controller in the forest. It contains information about the
Active Directory schema, which in turn defines the different object classes and
attributes within Active Directory.
Configuration NC: Also replicated to
every other DC in the forest, this NC contains forest-wide configuration
information pertaining to the physical layout of Active Directory, as well as
information about display specifiers and forest-wide Active Directory quotas.
Domain NC: This NC is replicated to
every other DC within a single Active Directory domain. This is the NC that
contains the most commonly-accessed Active Directory data: the actual users,
groups, computers, and other objects that reside within a particular Active
Directory domain.
Q. What are application partitions?
When do I use them?
A1) Application Directory Partition
is a partition space in Active Directory which an application can use to store
that application specific data. This partition is then replicated only to some
specific domain controllers.
The application directory partition can contain any type of data except security principals (users, computers, groups).
A2) These are specific to Windows Server 2003 and later domains.
An application directory partition is a directory partition that is replicated only to specific domain controllers. A domain controller that participates in the replication of a particular application directory partition hosts a replica of that partition. Only domain controllers running Windows Server 2003 can host a replica of an application directory partition.
Q. How do you create a new
application partition?
The DnsCmd command is used to create a new application directory partition. For example, to create a partition named “NewPartition” on the domain controller DC1.contoso.com, log on to the domain controller and type the following command:
DnsCmd DC1 /CreateDirectoryPartition NewPartition.contoso.com
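To confirm the partition was created, you can list the directory partitions the DNS server knows about (DC1 is the same placeholder server name used above):
DnsCmd DC1 /EnumDirectoryPartitions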
Q. How do you view replication
properties for AD partitions and DCs?
By using Replication Monitor: go to Start > Run and type replmon.
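Note that Replmon is a Windows Server 2003 Support Tool; on Windows Server 2008 the usual command-line way to view replication status is repadmin, for example:
repadmin /showrepl
repadmin /replsummary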
Q. What is the Global Catalog?
The global catalog is the central
repository of information about objects in a tree or forest. By default, a
global catalog is created automatically on the initial domain controller in the
first domain in the forest. A domain controller that holds a copy of the global
catalog is called a global catalog server. You can designate any domain
controller in the forest as a global catalog server. Active Directory uses
multimaster replication to replicate the global catalog information between
global catalog servers in other domains. It stores a full replica of all object
attributes in the directory for its host domain and a partial replica of all
object attributes contained in the directory for every domain in the forest.
The partial replica stores attributes most frequently used in search operations
(such as a user’s first and last names, logon name, and so on). Attributes are
marked or unmarked for replication in the global catalog when they are defined
in the Active Directory schema. Object attributes replicated to the global
catalog inherit the same permissions as in source domains, ensuring that data
in the global catalog is secure.
Another Definition of Global
Catalog:
Global Catalog Server
A global catalog server is a domain
controller that has an additional duty—it maintains a global catalog. A global
catalog is a master, searchable database that contains information about every
object in every domain in a forest. The global catalog contains a complete
replica of all objects in Active Directory for its host domain, and contains a
partial replica of all objects in Active Directory for every other domain in
the forest.
A global catalog server performs two important functions:
- Provides group membership information during logon and authentication
- Helps users locate resources in Active Directory
Q. What is schema?
The Active Directory schema defines
objects that can be stored in Active Directory. The schema is a list of
definitions that determines the kinds of objects and the types of information
about those objects that can be stored in Active Directory. Because the schema
definitions themselves are stored as objects, they can be administered in the
same manner as the rest of the objects in Active Directory. The schema is
defined by two types of objects: schema class objects (also referred to as
schema classes) and schema attribute objects (also referred to as schema
attributes).
Q. GC and infrastructure master
should not be on same server, why?
Unless your domain consists of only
one domain controller, the infrastructure master should not be assigned to a
domain controller that’s also a Global Catalog server. If the infrastructure
master and Global Catalog are stored on the same domain controller, the
infrastructure master will not function because it will never find data that is
out of date. It therefore won’t ever replicate changes to the other domain
controllers in the domain. There are two exceptions:
- If all your domain controllers are Global Catalog servers, it won’t matter because all servers will have the latest changes to the Global Catalog.
- If you are implementing a single Active Directory domain, no other domains exist in the forest to keep track of, so in effect, the infrastructure master is out of a job
Q. Why not make all DCs in a large
forest as GCs?
If all the DCs become GCs, global catalog replication traffic increases, and the Infrastructure Master should not be kept on the same DC as a GC; so at least one DC should act without holding the GC role.
Q. Trying to look at the Schema, how
can I do that?
Register schmmgmt.dll with the regsvr32 command, then add the Active Directory Schema snap-in to an MMC console.
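For example, from a command prompt on a DC (or a machine with the admin tools installed):
regsvr32 schmmgmt.dll
Then run mmc.exe, choose Add/Remove Snap-in, and add the Active Directory Schema snap-in.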
Q. What are the Support Tools? Why
do I need them?
The Support Tools are additional Microsoft tools used for advanced administration and troubleshooting tasks that the standard consoles do not cover. On Windows Server 2003 they are installed separately from the \Support\Tools folder on the installation media; examples include LDP, Replmon, ADSIEdit, NETDOM, and REPADMIN, which are discussed in the next question.
Q. What is LDP? What is REPLMON?
What is ADSIEDIT? What is NETDOM? What is REPADMIN?
LDP – LDP.exe is a graphical LDAP client included with the Support Tools. It lets you connect and bind to Active Directory over LDAP and then search, view, add, modify, and delete directory objects and their metadata, which is useful for low-level troubleshooting.
Replmon – Replmon displays
information about Active Directory Replication.
ADSIEDIT – ADSIEdit is a Microsoft
Management Console (MMC) snap-in that acts as a low-level editor for Active
Directory. It is a Graphical User Interface (GUI) tool. Network administrators
can use it for common administrative tasks such as adding, deleting, and moving
objects with a directory service. The attributes for each object can be edited
or deleted by using this tool. ADSIEdit uses the ADSI application programming
interfaces (APIs) to access Active Directory. The following are the required
files for using this tool: ADSIEDIT.DLL ADSIEDIT.MSC
NETDOM - NETDOM is a command-line
tool that allows management of Windows domains and trust relationships. It is
used for batch management of trusts, joining computers to domains, verifying
trusts, and secure channels.
REPADMIN – REPADMIN is a built-in
Windows diagnostic command-line utility that works at the Active Directory
level. Although specific to Windows, it is also useful for diagnosing some
Exchange replication problems, since Exchange Server is Active Directory based.
REPADMIN doesn’t actually fix replication problems for you. But, you can use it
to help determine the source of a malfunction.
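Two commonly used examples, run on a domain controller:
repadmin /showrepl
repadmin /replsummary
The first shows the inbound replication status for the partitions held by the local DC; the second summarizes replication health and failures across all DCs.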
Q. What are the Naming Conventions
used in AD?
Within Active Directory, each object
has a name. When you create an object in Active Directory, such as a user or a
computer, you assign the object a name. This name must be unique within the
domain—you can’t assign an object the same name as any other object (regardless
of its type) in that domain.
At the same time that you create an
object, not only do you assign a name to the object, but Active Directory also
assigns identifiers to the object. Active Directory assigns every object a
globally unique identifier (GUID), and assigns many objects a security
identifier (SID). A GUID is typically a 32-digit hexadecimal number that
uniquely identifies an object within Active Directory. A SID is a unique number
created by the Windows 2000 Security subsystem that is assigned only to
security principal objects (users, groups, and computers) when they are
created. Windows 2000 uses SIDs to grant or deny a security principal object
access to other objects and network resources.
Active Directory uses a hierarchical
naming convention that is based on Lightweight Directory Access Protocol (LDAP)
and DNS standards.
Objects in Active Directory can be
referenced by using one of three Active Directory name types:
- Relative distinguished name (RDN)
- Distinguished name (DN)
- User principal name (UPN)
A relative distinguished name (RDN)
is the name that is assigned to the object by the administrator when the object
is created. For example, when
I create a user named AlanC, the RDN
of that user is AlanC. The RDN only identifies an object—it doesn’t identify
the object’s location within Active Directory. The RDN is the simplest of the
three Active Directory name types, and is sometimes called the common name of
the object.
A distinguished name (DN) consists of an object’s RDN, plus the object’s location in Active Directory. The DN supplies the complete path to the object. An object’s DN includes its RDN, the name of the organizational unit(s) that contains the object (if any), and the domain. For example, suppose that I create a user named AlanC in an organizational unit called US in a domain named Exportsinc.com. The DN of this user would be: CN=AlanC,OU=US,DC=Exportsinc,DC=com
A user principal name (UPN) is a
shortened version of the DN that is typically used for logon and e-mail
purposes. A UPN consists of the RDN plus the FQDN of the domain. Using my
previous example, the UPN for the user named AlanC would be:
AlanC@Exportsinc.com
Another way you can think of a UPN
is as a DN stripped of all organizational unit references.
Q. What are sites? What are they
used for?
A site consists of one or more
TCP/IP subnets, which are specified by an administrator. Additionally, if a
site contains more than one subnet, the subnets should be connected by
high-speed, reliable links. Sites do not correspond to domains: you can have two
or more sites within a single domain, or you can have multiple domains in a
single site. A site is solely a grouping based on IP addresses. Figure 2-7 shows
two sites connected by a slow WAN link.
The purpose of sites is to enable
servers that regularly copy data to other servers (such as Active Directory
replication data) to distinguish between servers in their own site (which are
connected by high-speed links) and servers in another site (which are connected
by slower-speed WAN links). Replication between domain controllers in the same
site is fast, and typically administrators can permit Windows 2000 to
automatically perform this task. Replication between a domain controller in one
site and domain controllers in other sites is slower (because it takes place
over a slow WAN link) and often should be scheduled by the administrator so
that use of network bandwidth for replication is minimized during the network’s
peak-activity hours.
Sites and Active Directory
replication can be configured by using Active Directory Sites and Services.
Uses of site:
Sites are primarily used to control
replication traffic. Domain controllers within a site are pretty much free to
replicate changes to the Active Directory database whenever changes are made.
Domain controllers in different sites compress the replication traffic and
operate based on a defined schedule, both of which are intended to cut down on
network traffic.
More specifically, sites are used to
control the following:
- Workstation logon traffic
- Replication traffic
- Distributed File System (DFS)
Q. What’s the difference between a site link’s schedule and interval?
A site link is the connection object on which the inter-site replication transport depends; it represents the communication mechanism used to transfer data between different sites. The site link schedule defines when replication is allowed to take place across the link, while the interval defines how often replication occurs within that schedule (for example, every 180 minutes during the allowed hours).
Q. What is replication? How it
occurs in AD? What is KCC and ISTG
Each domain controller stores a complete copy of the Active Directory data for its domain. Domain
controllers in a domain automatically replicate directory information for all
objects in the domain to each other. When you perform an action that causes an
update to Active Directory, you are actually making the change at one of the
domain controllers. That domain controller then replicates the change to all
other domain controllers within the domain. You can control replication of
traffic between domain controllers in the network by specifying how often
replication occurs and the amount of data that each domain controller
replicates at one time. Domain controllers immediately replicate certain
important updates, such as the disabling of a user account.
Active Directory uses multimaster
replication, in which no one domain controller is the master domain controller.
Instead, all domain controllers within a domain are peers, and each domain
controller contains a copy of the directory database that can be written to.
Domain controllers can hold different information for short periods of time
until all domain controllers have synchronized changes to Active Directory.
Although Active Directory supports
multimaster replication, some changes are impractical to perform in multimaster
fashion. One or more domain controllers can be assigned to perform
single-master replication (operations not permitted to occur at different
places in a network at the same time). Operations master roles are special
roles assigned to one or more domain controllers in a domain to perform
single-master replication.
Domain controllers detect collisions, which can occur when an attribute is modified on a domain controller before a change to the same attribute on another domain controller is completely propagated. Collisions are detected by comparing each attribute’s property version number, a number specific to an attribute that is initialized upon creation of the attribute. Active Directory resolves the collision by replicating the changed attribute with the higher property version number.
The Knowledge Consistency Checker (KCC) is a built-in process that runs on every domain controller and automatically generates the replication topology, creating the connection objects between DCs both within a site and between sites. The Inter-Site Topology Generator (ISTG) is the single domain controller in each site that the KCC designates to build the inter-site portion of the replication topology for that site.
Q. What can you do to promote a
server to DC if you’re in a remote location with slow WAN link?
Install from Media: In Windows Server 2003 a new feature was added, and this time it is one that actually makes our lives easier: you can promote a domain controller using files backed up from a source domain controller.
This feature is called “Install from Media” and it is available by running DCPROMO with the /adv switch. It is not a replacement for network replication, we still need network connectivity, but now we can use a System State copy from another Windows Server 2003 DC, copy it to our future DC, and have the first and basic replication take place from the media instead of across the network, thus saving valuable time and network resources.
What you basically have to do is back up the System State of an existing domain controller, restore that backup onto your replica candidate, and then use DCPromo /adv to tell it to source from the local media rather than from a network source.
This also works for global catalogs.
If we perform a backup of a global catalog server, then we can create a new
global catalog server by performing DCPromo from that restored media.
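On Windows Server 2008 the installation media is usually generated with ntdsutil rather than from a System State backup; a minimal sketch, assuming the folder C:\IFM as the output path, is:
ntdsutil
activate instance ntds
ifm
create full C:\IFM
quit
quit
The resulting folder is then copied to the future DC and referenced when running dcpromo /adv.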
IFM Limitations
It only works for the same domain,
so you cannot back up a domain controller in domain A and create a new domain B
using that media.
It’s only useful up to the tombstone
lifetime with a default of 60 days. So if you have an old backup, then you
cannot create a new domain controller using that, because you’ll run into the
problem of reanimating deleted objects.
Q. How can you forcibly remove AD
from a server, and what do you do later?
Demoting Windows Server 2003 DCs:
DCPROMO (Active Directory Installation Wizard) is a toggle switch, which allows
you to either install or remove Active Directory DCs. To forcibly demote a
Windows Server 2003 DC, run the following command either at the Start, Run, or
at the command prompt:
dcpromo /forceremoval
Note: If you’re running Certificate
Services on the DC, you must first remove Certificate Services before
continuing. If you specify the /forceremoval switch on a server that doesn’t
have Active Directory installed, the switch is ignored and the wizard behaves as if you want to install Active Directory on that server.
Once the wizard starts, you will be
prompted for the Administrator password that you want to assign to the local
administrator in the SAM database. If you have Windows Server 2003 Service Pack
1 installed on the DC, you’ll benefit from a few enhancements. The wizard will
automatically run certain checks and will prompt you to take appropriate
actions. For example, if the DC is a Global Catalog server or a DNS server, you
will be prompted. You will also be prompted to take an action if your DC is
hosting any of the operations master roles.
Demoting Windows 2000 DCs: On a
Windows 2000 domain controller, forced demotion is supported with Service Pack
2 and later. The rest of the procedure is similar to the procedure I described
for Windows Server 2003. Just make sure that while running the wizard, you
clear the “This server is the last domain controller in the domain” check box.
On Windows 2000 Servers you won’t benefit from the enhancements in Windows
Server 2003 SP1, so if the DC you are demoting is a Global Catalog server, you
may have to manually promote some other DC to a Global Catalog server.
Cleaning the Metadata on a Surviving
DC : Once you’ve successfully demoted the DC, your job is not quite done yet.
Now you must clean up the Active Directory metadata. You may be wondering why you need to clean the metadata manually. The metadata for the demoted DC is not
deleted from the surviving DCs because you forced the demotion. When you force
a demotion, Active Directory basically ignores other DCs and does its own
thing. Because the other DCs are not aware that you removed the demoted DC from
the domain, the references to the demoted DC need to be removed from the
domain.
Although Active Directory has made
numerous improvements over the years, one of the biggest criticisms of Active
Directory is that it doesn’t clean up the mess very well. This is obvious in
most cases but, in other cases, you won’t know it unless you start digging deep
into Active Directory database.
To clean up the metadata you use
NTDSUTIL. The following procedure describes how to clean up metadata on a
Windows Server 2003 SP1. According to Microsoft, the version of NTDSUTIL in SP1
has been enhanced considerably and does a much better job of clean-up, which
obviously means that the earlier versions didn’t do a very good job. For
Windows 2000 DCs, you might want to check out Microsoft Knowledge Base article
216498, “How to remove data in Active Directory after an unsuccessful domain
controller demotion.”
Here’s the step-by-step procedure
for cleaning metadata on Windows Server 2003 DCs:
1. Log on to the DC as a Domain Administrator.
2. At the command prompt, type ntdsutil.
3. Type metadata cleanup.
4. Type connections.
5. Type connect to server servername, where servername is the name of the server you want to connect to.
6. Type quit or q to go one level up. You should be at the Metadata Cleanup prompt.
7. Type select operation target.
8. Type list domains. You will see a list of domains in the forest, each with a different number.
9. Type select domain number, where number is the number associated with the domain of your server.
10. Type list sites.
11. Type select site number, where number is the number associated with the site of your server.
12. Type list servers in site.
13. Type select server number, where number is the number associated with the server you want to remove.
14. Type quit to go to the Metadata Cleanup prompt.
15. Type remove selected server. You should see a confirmation that the removal completed successfully.
16. Type quit to exit ntdsutil.
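Put together, a session might look like the following sketch; the server name DC2 and the index numbers are placeholders for whatever the list commands actually return in your forest:
ntdsutil
metadata cleanup
connections
connect to server DC2
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 1
quit
remove selected server
quit
quit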
You might also want to clean up the DNS database by deleting all DNS records related to the server.
In general, you will have better luck using forced demotion on Windows Server 2003, because the naming contexts
and other objects don’t get cleaned as quickly on Windows 2000 Global Catalog
servers, especially servers running Windows 2000 SP3 or earlier. Due to the
nature of forced demotion and the fact that it’s meant to be used only as a
last resort, there are additional things that you should know about forced
demotion.
Even after you’ve used NTDSUTIL to
clean the metadata, you may still need to do additional cleaning manually using
ADSIEdit or other such tools.
Q. Can I get user passwords from the
AD database?
To my knowledge there is no supported way to extract user passwords from the AD database. There is, however, a tool called cachedump, which can extract the cached domain credentials from a Windows XP machine that is joined to a domain.
Q. Name some OU design
considerations.
- Design OU structure based on Active Directory business requirements
- NT Resource domains may fold up into OUs
- Create nested OUs to hide objects
- Objects easily moved between OUs
- Departments, Geographic Region, Job Function, Object Type
Q. What is tombstone lifetime
attribute?
The number of days before a deleted
object is removed from the directory services. This assists in removing objects
from replicated servers and preventing restores from reintroducing a deleted
object. This value is in the Directory Service object in the configuration NC.
Q. How would you find all users that
have not logged on since last month?
If you are using windows 2003 domain
environment, then goto Active Directory Users and Computers, select the Saved
Queries, right click it and select new query, then using the custom common
queries and define query there is one which shows days since last logon
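From the command line, a rough equivalent (4 weeks being an approximation of “last month”) is:
dsquery user -inactive 4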
Q. What are the DS* commands?
The DS* commands are the directory service command-line tools included with Windows Server 2003 and later: DSADD (create objects), DSGET (display object properties), DSMOD (modify objects), DSMOVE (move or rename objects), DSQUERY (find objects), and DSRM (delete objects).
Q. What’s the difference between LDIFDE and CSVDE? Usage considerations?
CSVDE is a command that can be used
to import and export objects to and from the AD into a CSV-formatted file. A
CSV (Comma Separated Value) file is a file easily readable in Excel. I will not
go to length into this powerful command, but I will show you some basic samples
of how to import a large number of users into your AD. Of course, as with the
DSADD command, CSVDE can do more than just import users. Consult your help file
for more info. Like CSVDE, LDIFDE is a command that can be used to import and
export objects to and from the AD into a LDIF-formatted file. A LDIF (LDAP Data
Interchange Format) file is a file easily readable in any text editor; however
it is not readable in programs like Excel. The major difference between CSVDE
and LDIFDE (besides the file format) is the fact that LDIFDE can be used to
edit and delete existing AD objects (not just users), while CSVDE can only
import and export objects
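Some basic examples, with arbitrary file names:
- csvde -f users.csv (export objects to a CSV file)
- csvde -i -f users.csv (import objects from the CSV file)
- ldifde -f export.ldf (export objects to an LDIF file)
- ldifde -i -f changes.ldf (import, modify, or delete objects from the LDIF file)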
Q. What is DFS?
The Distributed File System is used
to build a hierarchical view of multiple file servers and shares on the
network. Instead of having to think of a specific machine name for each set of
files, the user will only have to remember one name; which will be the ‘key’ to
a list of shares found on multiple servers on the network. Think of it as the
home of all file shares with links that point to one or more servers that
actually host those shares.
DFS has the capability of routing a
client to the closest available file server by using Active Directory site
metrics. It can also be installed on a cluster for even better performance and
reliability.
It is important to understand the new concepts that are part of DFS. Below is a definition of each of them.
Dfs root: You can think of this as a
share that is visible on the network, and in this share you can have additional
files and folders.
Dfs link: A link is another share
somewhere on the network that goes under the root. When a user opens this link
they will be redirected to a shared folder.
Dfs target (or replica): This can be
referred to as either a root or a link. If you have two identical shares,
normally stored on different servers, you can group them together as Dfs
Targets under the same link.
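As a hypothetical example, a domain-based namespace path such as \\contoso.com\Public\Projects (root Public, link Projects) could have two targets, \\Server1\Projects and \\Server2\Projects; clients are transparently referred to whichever target is available or closest.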
The image below shows the actual folder structure of what the user sees when using DFS and load balancing.
Q. What are the types of replication
in DFS?
There are two types of replication:
- Automatic – which is only available for Domain DFS
- Manual – which is available for stand-alone DFS and requires all files to be replicated manually.
Q. Which service is responsible for
replicating files in SYSVOL folder?
File Replication Service (FRS)
Windows Server 2008 Failover Clustering
Introduction
A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.
Windows Server Failover Clustering (WSFC) is a feature that can help ensure that an organization’s critical applications and services, such as e-mail, databases, or line-of-business applications, are available whenever they are needed. Clustering can help build redundancy into an infrastructure and eliminate single points of failure. This, in turn, helps reduce downtime, guards against data loss, and increases the return on investment.
Failover clusters provide support for mission-critical applications—such as databases, messaging systems, file and print services, and virtualized workloads—that require high availability, scalability, and reliability.
What is a Cluster?
Cluster is a group of machines acting as a single entity to provide resources and services to the network. In time of failure, a failover will occur to a system in that group that will maintain availability of those resources to the network.
How Failover Clusters Work?
A failover cluster is a group of
independent computers, or nodes, that are physically connected by a local-area
network (LAN) or a wide-area network (WAN) and that are programmatically
connected by cluster software. The group of nodes is managed as a single system
and shares a common namespace. The group usually includes multiple network
connections and data storage connected to the nodes via storage area networks
(SANs). The failover cluster operates by moving resources between nodes to
provide service if system components fail.
Normally, if a server that is running a particular application crashes, the application will be unavailable until the server is fixed. Failover clustering addresses this situation by detecting hardware or software faults and immediately restarting the application on another node without requiring administrative intervention—a process known as failover. Users can continue to access the service and may be completely unaware that it is now being provided from a different server

Figure: Failover clustering
Failover Clustering Terminology
1. Failover and Failback Clustering
Failover is the act of another server in the cluster group taking over where the failed server left off. An example of a failover system can be seen in below Figure. If you have a two-node cluster for file access and one fails, the service will failover to another server in the cluster. Failback is the capability of the failed server to come back online and take the load back from the node the original server failed over to.

2. Active/Passive cluster model:
Active/Passive is defined as a cluster group where one server is handling the entire load and, in case of failure and disaster, a Passive node is standing by waiting for failover.
One node in the failover cluster typically sits idle until a failover occurs. After a failover, this passive node becomes active and provides services to clients. Because it was passive, it presumably has enough capacity to serve the failed-over application without performance degradation.

3. Active/Active failover cluster model
All nodes in the failover cluster are functioning and serving clients. If a node fails, the resource will move to another node and continue to function normally, assuming that the new server has enough capacity to handle the additional workload.

4. Resource. A hardware or software component in a failover cluster (such as a disk, an IP address, or a network name).
5. Resource group.
A combination of resources that are managed as a unit of failover. Resource groups are logical collections of cluster resources. Typically a resource group is made up of logically related resources such as applications and their associated peripherals and data. However, resource groups can contain cluster entities that are related only by administrative needs, such as an administrative collection of virtual server names and IP addresses. A resource group can be owned by only one node at a time and individual resources within a group must exist on the node that currently owns the group. At any given instance, different servers in the cluster cannot own different resources in the same resource group.
6. Dependency. An alliance between two or more resources in the cluster architecture.

7. Heartbeat.
The cluster’s health-monitoring mechanism between cluster nodes. This health checking allows nodes to detect failures of other servers in the failover cluster by sending packets to each other’s network interfaces. The heartbeat exchange enables each node to check the availability of other nodes and their applications. If a server fails to respond to a heartbeat exchange, the surviving servers initiate failover processes including ownership arbitration for resources and applications owned by the failed server.
The heartbeat is simply packets sent from the Passive node to the Active node. When the Passive node doesn’t see the Active node anymore, it comes up online

8. Membership. The orderly addition and removal of nodes to and from the cluster.
9. Global update. The propagation of cluster configuration changes to all cluster members.
10. Cluster registry. The cluster database, stored on each node and on the quorum resource, maintains configuration information (including resources and parameters) for each member of the cluster.
11. Virtual server.
A combination of configuration information and cluster resources, such as an IP address, a network name, and application resources.
Applications and services running on a server cluster can be exposed to users and workstations as virtual servers. To users and clients, connecting to an application or service running as a clustered virtual server appears to be the same process as connecting to a single, physical server. In fact, the connection to a virtual server can be hosted by any node in the cluster. The user or client application will not know which node is actually hosting the virtual server.


12. Shared storage.
All nodes in the failover cluster must be able to access data on shared storage. The highly available workloads write their data to this shared storage. Therefore, if a node fails, when the resource is restarted on another node, the new node can read the same data from the shared storage that the previous node was accessing. Shared storage can be created with iSCSI, Serial Attached SCSI, or Fibre Channel, provided that it supports persistent reservations.
13. LUN
LUN stands for Logical Unit Number. A LUN is used to identify a disk or a disk volume that is presented to a host server or multiple hosts by a shared storage array or a SAN. LUNs provided by shared storage arrays and SANs must meet many requirements before they can be used with failover clusters but when they do, all active nodes in the cluster must have exclusive access to these LUNs.
Storage volumes or logical unit numbers (LUNs) exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. The following diagram illustrates this.

14. Services and Applications group
Cluster resources are contained within a cluster in a logical set called a Services and Applications group or historically referred to as a cluster group. Services and Applications groups are the units of failover within the cluster. When a cluster resource fails and cannot be restarted automatically, the Services and Applications group this resource is a part of will be taken offline, moved to another node in the cluster, and the group will be brought back online.
15. Quorum
The cluster quorum maintains the definitive cluster configuration data and the current state of each node, each Services and Applications group, and each resource and network in the cluster. Furthermore, when each node reads the quorum data, depending on the information retrieved, the node determines if it should remain available, shut down the cluster, or activate any particular Services and Applications groups on the local node. To extend this even further, failover clusters can be configured to use one of four different cluster quorum models and essentially the quorum type chosen for a cluster defines the cluster. For example, a cluster that utilizes the Node and Disk Majority Quorum can be called a Node and Disk Majority cluster.
A quorum is simply a configuration database for Microsoft Cluster Service, and is stored in the quorum log file. A standard quorum uses a quorum log file that is located on a disk hosted on a shared storage interconnect that is accessible by all members of the cluster
Why quorum is necessary
When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network, but might not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5 are a minority and stop running as a cluster, which prevents the problems of a “split” situation. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.
There are four quorum modes:
- Node Majority: only the nodes have votes; recommended for clusters with an odd number of nodes.
- Node and Disk Majority: the nodes plus a witness disk have votes; recommended for clusters with an even number of nodes.
- Node and File Share Majority: the nodes plus a file share witness have votes; typically used for multi-site clusters.
- No Majority: Disk Only: only the quorum disk matters, so the cluster can run as long as one node and that disk are online; this is a single point of failure and is generally not recommended.
Failover is the act of another server in the cluster group taking over where the failed server left off. An example of a failover system can be seen in below Figure. If you have a two-node cluster for file access and one fails, the service will failover to another server in the cluster. Failback is the capability of the failed server to come back online and take the load back from the node the original server failed over to.

2. Active/Passive cluster model:
Active/Passive is defined as a cluster group where one server is handling the entire load and, in case of failure and disaster, a Passive node is standing by waiting for failover.
· One node in the failover cluster typically sits idle until a failover occurs. After a failover, this passive node becomes active and provides services to clients. Because it was passive, it presumably has enough capacity to serve the failed-over application without performance degradation.

3. Active/Active failover cluster model
All nodes in the failover cluster are functioning and serving clients. If a node fails, the resource will move to another node and continue to function normally, assuming that the new server has enough capacity to handle the additional workload.

4. Resource. A hardware or software component in a failover cluster (such as a disk, an IP address, or a network name).
5. Resource group.
A combination of resources that are managed as a unit of failover. Resource groups are logical collections of cluster resources. Typically a resource group is made up of logically related resources such as applications and their associated peripherals and data. However, resource groups can contain cluster entities that are related only by administrative needs, such as an administrative collection of virtual server names and IP addresses. A resource group can be owned by only one node at a time and individual resources within a group must exist on the node that currently owns the group. At any given instance, different servers in the cluster cannot own different resources in the same resource group.
6. Dependency. An alliance between two or more resources in the cluster architecture.

7. Heartbeat.
The cluster’s health-monitoring mechanism between cluster nodes. This health checking allows nodes to detect failures of other servers in the failover cluster by sending packets to each other’s network interfaces. The heartbeat exchange enables each node to check the availability of other nodes and their applications. If a server fails to respond to a heartbeat exchange, the surviving servers initiate failover processes including ownership arbitration for resources and applications owned by the failed server.
The heartbeat is simply packets exchanged between the passive and active nodes. When the passive node can no longer see the active node, it brings the clustered resources online itself.

8. Membership. The orderly addition and removal of nodes to and from the cluster.
9. Global update. The propagation of cluster configuration changes to all cluster members.
10. Cluster registry. The cluster database, stored on each node and on the quorum resource, maintains configuration information (including resources and parameters) for each member of the cluster.
11. Virtual server.
A combination of configuration information and cluster resources, such as an IP address, a network name, and application resources.
Applications and services running on a server cluster can be exposed to users and workstations as virtual servers. To users and clients, connecting to an application or service running as a clustered virtual server appears to be the same process as connecting to a single, physical server. In fact, the connection to a virtual server can be hosted by any node in the cluster. The user or client application will not know which node is actually hosting the virtual server.


12. Shared storage.
All nodes in the failover cluster must be able to access data on shared storage. The highly available workloads write their data to this shared storage. Therefore, if a node fails, when the resource is restarted on another node, the new node can read the same data from the shared storage that the previous node was accessing. Shared storage can be created with iSCSI, Serial Attached SCSI, or Fibre Channel, provided that it supports persistent reservations.
13. LUN
LUN stands for Logical Unit Number. A LUN is used to identify a disk or a disk volume that is presented to a host server or multiple hosts by a shared storage array or a SAN. LUNs provided by shared storage arrays and SANs must meet many requirements before they can be used with failover clusters but when they do, all active nodes in the cluster must have exclusive access to these LUNs.
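Once the array or SAN has presented a LUN to a node, the disk still has to be brought online and formatted before the cluster can use it. The diskpart script below is a minimal sketch; the disk number, volume label, and drive letter are assumptions used only for illustration.

```
:: Save as clusterdisk.txt and run with: diskpart /s clusterdisk.txt
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs quick label="ClusterDisk"
assign letter=Q
```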
Storage volumes or logical unit numbers (LUNs) exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. The following diagram illustrates this.

14. Services and Applications group
Cluster resources are contained within a cluster in a logical set called a Services and Applications group or historically referred to as a cluster group. Services and Applications groups are the units of failover within the cluster. When a cluster resource fails and cannot be restarted automatically, the Services and Applications group this resource is a part of will be taken offline, moved to another node in the cluster, and the group will be brought back online.
15. Quorum
The cluster quorum maintains the definitive cluster configuration data and the current state of each node, each Services and Applications group, and each resource and network in the cluster. Furthermore, when each node reads the quorum data, depending on the information retrieved, the node determines if it should remain available, shut down the cluster, or activate any particular Services and Applications groups on the local node. To extend this even further, failover clusters can be configured to use one of four different cluster quorum models and essentially the quorum type chosen for a cluster defines the cluster. For example, a cluster that utilizes the Node and Disk Majority Quorum can be called a Node and Disk Majority cluster.
A quorum is simply a configuration database for Microsoft Cluster Service, and is stored in the quorum log file. A standard quorum uses a quorum log file that is located on a disk hosted on a shared storage interconnect that is accessible by all members of the cluster
When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network, but might not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5 are a minority and stop running as a cluster, which prevents the problems of a “split” situation. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.
There are four quorum modes:
- Node Majority: Each node that is available and in communication can vote. The cluster functions only with a majority of the votes, that is, more than half.
- Node and Disk Majority: Each node plus a designated disk in the cluster storage (the “disk witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
- Node and File Share Majority: Each node plus a designated file share created by the administrator (the “file share witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
- No Majority: Disk Only. The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage. Only the nodes that are also in communication with that disk can join the cluster. This is equivalent to the quorum disk in Windows Server 2003. The disk is a single point of failure, so only select scenarios should implement this quorum mode.
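If you want to verify which quorum configuration a cluster is actually using, the legacy cluster.exe tool can report it from the command line. This is only a minimal sketch: the cluster name MYCLUSTER is the one created later in this article, and the exact behaviour of the /quorum switch on your build is an assumption, so check cluster /? if it differs.

```
:: Display the current quorum resource/configuration (switch behaviour may vary by build)
cluster /cluster:MYCLUSTER /quorum
```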
16. Witness Disk – The witness disk is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. (A witness disk is part of some, not all, quorum configurations.)
Configuration of a two-node Failover Cluster and Quorum Configuration:
A multi-site cluster is a disaster recovery solution and a high availability solution rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With the introduction of Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible thanks to cross-subnet failover and support for high-latency network communications.
Which editions include failover clustering?
The failover cluster feature is available in Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The feature is not available in Windows Web Server 2008 R2 or Windows Server 2008 R2 Standard.
Network Considerations
All Microsoft failover clusters must have redundant network communication paths. This ensures that a failure of any one communication path will not result in a false failover and ensures that your cluster remains highly available. A multi-site cluster has this requirement as well, so you will want to plan your network with that in mind. There are generally two things that will have to travel between nodes: replication traffic and cluster heartbeats. In addition to that, you will also need to consider client connectivity and cluster management activity
Quorum model:
For a 2-node multi-site cluster configuration, the Microsoft recommended configuration is a Node and File Share Majority quorum
Step 1 – Configure the Cluster
Add the Failover Clustering feature to both nodes of your cluster. Follow the steps below (a command-line equivalent is sketched after step 7):
1. Click Start, click Administrative Tools, and then click Server Manager. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)
2. In Server Manager, under Features Summary, click Add Features. Select Failover Clustering, and then click Install

3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.
4. Repeat the process for each server that you want to include in the cluster.
5. Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.
In the properties of the cluster (private) network adapter, clear the “Register this connection’s addresses in DNS” check box.

6. Next, go to Advanced Settings of your Network Connections (hit Alt to see Advanced Settings menu) of each server and make sure the Public network (LAN) is first in the list:

7. Your private network should only contain an IP address and Subnet mask. No Default Gateway or DNS servers should be defined. Your nodes need to be able to communicate across this network, so make sure the servers can communicate across this network; add static routes if necessary.
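For reference, the same preparation can be scripted from an elevated command prompt on each node. This is a minimal sketch: the adapter name "Private", the 10.10.10.x/10.10.20.x addressing, and the remote-site subnet are assumptions used only for illustration.

```
:: Install the Failover Clustering feature on this node
servermanagercmd -install Failover-Clustering

:: Give the private (heartbeat) adapter a static address with no default gateway or DNS
netsh interface ipv4 set address name="Private" static 10.10.10.1 255.255.255.0

:: If the other node's private network sits on a different subnet, add a persistent static route
route -p add 10.10.20.0 mask 255.255.255.0 10.10.10.254
```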

Step 2 – Validate the Cluster Configuration:
1. Open up the Failover Cluster Manager and click on Validate a Configuration.

2. The Validation Wizard launches and presents you the first screen as shown below. Add the two servers in your cluster and click Next to continue.

3. We need this cluster to be supported, so we must run all the required tests.

4. Select Run all tests (recommended).

5. Click Next until the wizard produces a report like the one below.

When you click View Report, it will display a report similar to the one below:

Step 3 – Create a Cluster:
In the Failover Cluster Manager, click on Create a Cluster.

Next, you must choose a name for this cluster and an IP address for administering it. This will be the name that you use to administer the cluster, not the name of the SQL cluster resource, which you will create later. Enter a unique name and IP address and click Next.
Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.

Confirm your choices and click Next.

Click Next until Finish; the wizard will create the cluster with the name MYCLUSTER.
Step 4 – Implementing a Node and File Share Majority quorum
First, we need to identify the server that will hold our file share witness. This file share witness should be located in a third location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would. In my case, I created a share called MYCLUSTER on a server named NYDC01.
The key thing to remember about this share is that you must give the cluster computer account read/write permissions at both the share level and the NTFS level on the MYCLUSTER share.
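A minimal sketch of creating that witness share and granting the cluster computer account the required rights from the command line; the folder path D:\MYCLUSTER and the domain name GKTRAIN are assumptions, and MYCLUSTER$ stands in for the cluster name object's computer account.

```
:: Create the folder and share it, granting the cluster computer account full share-level access
md D:\MYCLUSTER
net share MYCLUSTER=D:\MYCLUSTER /GRANT:GKTRAIN\MYCLUSTER$,FULL

:: Grant the same account modify rights at the NTFS level
icacls D:\MYCLUSTER /grant GKTRAIN\MYCLUSTER$:(OI)(CI)M
```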


Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.

On the next screen choose Node and File Share Majority and click Next.

In this screen, enter the path to the file share you previously created and click Next.

Confirm that the information is correct, click Next to the summary page, and then click Finish.
Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.

The steps I have outlined up until this point apply to any multi-site cluster, whether it is a SQL, Exchange, File Server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster
What are NLB clusters?
A single computer running Windows can provide a limited level of server reliability and scalable performance. However, by combining the resources of two or more computers running one of the products in Windows Server 2008 into a single virtual cluster, NLB can deliver the reliability and performance that Web servers and other mission-critical servers need.
Each host runs a separate copy of the desired server applications (such as applications for Web, FTP, and Telnet servers). NLB distributes incoming client requests across the hosts in the cluster. The load weight to be handled by each host can be configured as necessary. You can also add hosts dynamically to the cluster to handle increased load. In addition, NLB can direct all traffic to a designated single host, which is called the default host.
NLB allows all of the computers in the cluster to be addressed by the same set of cluster IP addresses, and it maintains a set of unique, dedicated IP addresses for each host. For load-balanced applications, when a host fails or goes offline, the load is automatically redistributed among the computers that are still operating. When a computer fails or goes offline unexpectedly, active connections to the failed or offline server are lost.
Network Load Balancing is a way to configure a pool of machines so they take turns responding to requests. It’s most commonly seen implemented in server farms: identically configured machines that spread out the load for a web site, or maybe a Terminal Server farm. You could also use it for a firewall (ISA) farm, VPN access points, or really any time you have TCP/IP traffic that has become too much load for a single machine but you still want it to appear as a single machine for access purposes.
Below is a scenario that distributes the load of IIS servers using NLB: when a user hits http://www.nuggetlab.com, the request goes to the cluster, and the cluster directs it to whichever IIS server has free resources.

Where to use NLB?
Front-end Web servers, virtual private networks (VPNs), File Transfer Protocol (FTP) servers, and firewall and proxy servers typically use Network Load Balancing. NLB is not recommended for services that use changing data, such as SQL Server or file servers, because there is a chance of losing data. In such scenarios failover clustering should be used.
How does it work?
It’s pretty straightforward. After you install NLB on a server, you add two or more machines to an NLB cluster. The machines are configured with two IP addresses: their own private, unique one, and a second one that is shared by all the machines in the cluster. The machines all run an algorithm that determines whose turn is next at responding to requests. They also exchange heartbeats with one another, so they all know if one server goes down and won’t allocate any more requests to it. You can have up to 32 machines in a cluster.
All of the servers within an NLB cluster communicate with each other using heartbeat and convergence.
Convergence
The process of stabilizing a system after changes occur in the network. For routing, if a route becomes unavailable, routers send update messages throughout the network, reestablishing information about preferred routes.
For Network Load Balancing, a process by which hosts exchange messages to determine a new, consistent state of the cluster and to elect the default host. During convergence, a new load distribution is determined for hosts that share the handling of network traffic for specific Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) ports.
Heartbeat
A message that is sent at regular intervals by one computer on a Network Load Balancing cluster or server cluster to another computer within the cluster to detect communication failures.
Network Load Balancing initiates convergence when it fails to receive heartbeat messages from another host or when it receives a heartbeat message from a new host.
Hardware and software considerations for NLB clusters:
- NLB is installed as a standard Windows networking driver component.
- NLB requires no hardware changes to enable and run.
- NLB Manager enables you to create new NLB clusters and to configure and manage clusters and all of the cluster’s hosts from a single remote or local computer.
- NLB lets clients access the cluster by using a single, logical Internet name and virtual IP address—known as the cluster IP address (it retains individual names for each computer). NLB allows multiple virtual IP addresses for multihomed servers
Installing the NLB feature:
- From Server Manager, just click Add Feature and then select Network Load Balancing
- From a command line, type “ocsetup NetworkLoadBalancingFullServer”
- Use ServerManagerCmd! From a command line, type “servermanagercmd -install nlb”
Managing NLB:
Server roles and features are managed by using Microsoft Management Console (MMC) snap-ins. To open the Network Load Balancing Manager snap-in, click Start, click Administrative Tools, and then click Network Load Balancing Manager. You can also open Network Load Balancing Manager by typing Nlbmgr at a command prompt.
How to create an NLB Cluster:
There are three key steps to creating an NLB cluster in Windows Server 2008:
1. Install the NLB feature into Server 2008 on each host that you will add to an NLB cluster.
2. Use the New Cluster wizard to create the cluster and add the first host.
3. Use the Add Host wizard to add one or more nodes to your cluster.
Important – In this example, we are creating a cluster named NLB.gktrain.net and we will add two servers, nlb1.gktrain.net and nlb2.gktrain.net, as part of this cluster.
Before creating this cluster, we need to add the following DNS records:
1. CNAME record – Let’s say users will access the application/site by typing http://www.gktrain.net, since users don’t know (or care) what NLB is. In this case, go to DNS Manager and add a new CNAME record as below:

Next, add a host (A) record for the cluster IP address, in this case 172.16.100.210, as below:
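If you prefer to script these records, dnscmd can add them on the DNS server; this is a minimal sketch, and NYDC01 as the server hosting the gktrain.net zone is an assumption.

```
:: Host (A) record for the NLB cluster IP
dnscmd NYDC01 /recordadd gktrain.net nlb A 172.16.100.210

:: CNAME so users can reach the cluster as www.gktrain.net
dnscmd NYDC01 /recordadd gktrain.net www CNAME nlb.gktrain.net
```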

Now we are ready to install the NLB feature and configure the cluster.
Installing NLB in Server 2008:
This is extremely easy. First you run Server Manager, and select Features from the tree and click on Add Features:

From the Add Features wizard just select Network Load Balancing and click Next:

Finally from the Confirm Installation Selections click Install:

After a few seconds, NLB will be installed on the first host. You need to repeat this installation on each host you plan to add to your NLB Cluster. Installing the NLB feature does NOT require a reboot but removing does.
Creating NLB Cluster:
Once NLB is installed, you create your cluster by first opening Network Load Balancing Manager from Administrative Tools.

Then right-click the Network Load Balancing Clusters node in the left pane and select New Cluster:

This brings up the first page of the New Cluster Wizard – here you select the first member of your cluster by adding the IP address or DNS name into the Host box:

This shows you the interfaces on this host that you can use for configuring a new cluster. Choose the interface and click Next:

This brings up the Host Parameters page, where you confirm the dedicated IP address for this host; then you click Next:

This brings up the Cluster IP address page. Here you add the address used by clients to connect to nodes in the cluster:

After adding the IP address, and clicking next, the New Cluster wizard displays the Cluster Parameters where you specify the cluster IP configuration, Cluster Name and the cluster operation mode (and click next). See bottom of this article to know about difference between Unicast and Multicast.

Finally, the wizard displays the port rules. In this case, we specify that the cluster should handle TCP port 80 (and click Finish):

The Wizard then does the necessary configuration, resulting in a single node NLB cluster, shown in the NLB manager like this:

Adding Additional Nodes
A single-node cluster is of little value, so you next need to add an additional node (or nodes). To add a node, right-click your newly created cluster and select Add Host To Cluster:

This brings up the Add Host to Cluster: Connect page where you specify the host to add, and the interface used in the cluster:

Next you specify the new host’s parameters:

Then you can update, if needed, the cluster’s port rules:

Clicking Next completes the wizard, resulting in a second host in the cluster, as seen in NLB Manager:

Testing
- Go to the command prompt and type “wlbs query”; if you see that NLB1 and NLB2 have converged successfully on the cluster, things are working well (see the sketch after this list).
- Ping each server locally and remotely
- Ping the virtual IP locally and remotely – you should do this three times from each location. If you cannot ping remotely you may need to add a static ARP entry in your switches and/or routers where the host machines reside
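Pulled together, these checks might look like the minimal sketch below, run from one of the hosts; the host names and cluster IP come from the earlier gktrain.net example, and nlb.exe being the newer name for wlbs.exe on Server 2008 is noted as an assumption rather than something this article states.

```
:: Check convergence state (nlb.exe is the newer equivalent of wlbs.exe on Server 2008)
wlbs query

:: Ping each host and the virtual (cluster) IP, locally and from a remote subnet
ping nlb1.gktrain.net
ping nlb2.gktrain.net
ping 172.16.100.210
```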
Testing from user’s side:
Since we are assuming that NLB1 and NLB2 are IIS servers, when a user types http://www.gktrain.net in a browser, they will get the same web page, and the load will be handled by either NLB1 or NLB2, whichever is free.
Unicast vs Multicast
Unicast/Multicast is the way the MAC address for the virtual IP is presented to the routers. In my experience I have almost always used multicast; if you use it, you should enter a persistent ARP entry on all upstream switches or you will not be able to ping the servers remotely.
In the unicast method:
- The cluster adapters for all cluster hosts are assigned the same unicast MAC address.
- The outgoing MAC address for each packet is modified, based on the cluster host’s priority setting, to prevent upstream switches from discovering that all cluster hosts have the same MAC address.
In the multicast method:
- The cluster adapter for each cluster host retains the original hardware unicast MAC address (as specified by the hardware manufacturer of the network adapter).
- The cluster adapters for all cluster hosts are assigned a multicast MAC address.
- The multicast MAC is derived from the cluster’s IP address.
- Communication between cluster hosts is not affected, because each cluster host retains a unique MAC address.
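To make the “derived from the cluster’s IP address” point concrete: in multicast mode NLB builds the MAC as 03-BF followed by the four octets of the cluster IP. Using the 172.16.100.210 address from the earlier example, a static ARP entry on a Windows host that cannot resolve the cluster IP is sketched below; whether such an entry is needed at all depends on your switches and routers.

```
:: 172.16.100.210 in hex is AC.10.64.D2, so the multicast MAC becomes 03-BF-AC-10-64-D2
arp -s 172.16.100.210 03-bf-ac-10-64-d2
```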
Selecting the Unicast or Multicast Method of Distributing Incoming Requests
http://technet.microsoft.com/en-us/library/cc782694.aspx
In the next article I will discuss Windows clustering and also show its configuration.
- High availability uses a combination of redundancy and fault tolerance in order to provide a level of operational continuity.
- Redundancy means that there is more than one instance of resources available.
- FT means resources will be available in case of a hardware failure.
- For redundancy, Network Load Balancing (NLB) is used in Windows Server 2008
- For FT, Server Clustering is used.
- DNS round robin and NLB are used for services and applications which maintain an internal data store. Front-end Web servers, virtual private networks (VPNs), File Transfer Protocol (FTP) servers, and firewall and proxy servers typically use Network Load Balancing
- Failover clustering is used for applications which use an external and /or shared data store. For ex, SQL server or exchange server.
In this article, we will discuss clustering with DNS Round Robin.
DNS Round Robin:
Round robin is a load-balancing mechanism used by DNS servers to share and distribute network resource loads. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. For example, a company has one domain name and three identical home pages residing on three servers with three different IP addresses. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and so forth.
- DNS round robin is used to provide more than one IP address to a single hostname
- Each IP address represents a different physical host and request will be sent to each of the hosts in a rotation order
- Netmask ordering can be used to help send requests from clients to the host closest to them
- This is suitable for a smaller environment, for a large environment NLB is preferred.
- By default Round robin is enabled in DNS.
To configure round robin, go to DNS and add multiple host (A) records for the same name, each pointing to a different server (a dnscmd sketch follows the checklist below).
- In the DNS Mgmt application on your DNS server, right click the server name in the tree in the left pane, select Properties, select the Advanced tab, ensure that ‘Enable round robin’ is selected.
- Add HOST(A) records in the appropriate forward lookup zone, pointing to the servers to be covered.
- If you want a little poor-man’s fault tolerance, ensure that the TTL of each record is set to a short period of time, i.e. 15 seconds. This ensures that if one of the servers fails, repeated attempts to connect will soon hit another server.
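The same round robin setup can be scripted with dnscmd; this is a minimal sketch in which the server name DNS01, the two web-server addresses, and the 15-second TTL are assumptions.

```
:: Make sure round robin is enabled on the DNS server (it is on by default)
dnscmd DNS01 /config /RoundRobin 1

:: Two A records with the same name and a short TTL, each pointing at a different web server
dnscmd DNS01 /recordadd gktrain.net www 15 A 172.16.100.21
dnscmd DNS01 /recordadd gktrain.net www 15 A 172.16.100.22
```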
For example:

In the next article, I will discuss Network Load Balancing (NLB) and show how to configure NLB in detail.
Windows Server 2008 Backup tools



How do you backup AD?
Active Directory is backed up as part of system state, a collection of system components that depend on each other. You must back up and restore system state components together.
Components that comprise the system state on a domain controller include:
- System Start-up Files (boot files). These are the files required for Windows 2000 Server to start.
- System registry.
- Class registration database of Component Services. The Component Object Model (COM) is a binary standard for writing component software in a distributed systems environment.
- SYSVOL. The system volume provides a default Active Directory location for files that must be shared for common access throughout a domain. The SYSVOL folder on a domain controller contains:
- NETLOGON shared folders. These usually host user logon scripts and Group Policy objects (GPOs) for non-Windows 2000-based network clients.
- User logon scripts for Windows 2000 Professional-based clients and clients that are running Windows 95, Windows 98, or Windows NT 4.0.
- Windows 2000 GPOs.
- File system junctions.
- File Replication service (FRS) staging directories and files that are required to be available and synchronized between domain controllers.
- Active Directory. Active Directory includes:
- Ntds.dit: The Active Directory database (schema table, link table, data table).
- Edb.chk: The checkpoint file.
- Edb*.log: The transaction logs, each 10 megabytes (MB) in size.
- Res1.log and Res2.log: Reserved transaction logs.
Note: If you use Active Directory-integrated DNS, then the zone data is backed up as part of the Active Directory database. If you do not use Active Directory-integrated DNS, you must explicitly back up the zone files. However, if you back up the system disk along with the system state, zone data is backed up as part of the system disk. If you installed Windows Clustering or Certificate Services on your domain controller, they are also backed up as part of system state.
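Because Active Directory travels with system state, a system state backup is what you normally schedule on a domain controller. A minimal sketch with the Windows Server Backup command-line tool, assuming E: as the target volume:

```
:: Back up system state (AD database, SYSVOL, registry, boot files) to a local volume
wbadmin start systemstatebackup -backuptarget:E: -quiet
```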
Difference between Authoritative vs. Non-Authoritative restore
The term “authoritative” is used to describe a restore in which the domain controller being restored has the master, or authoritative, copy of Active Directory. A non-authoritative restore is a domain controller being restored that does not have an authoritative copy of Active Directory. When a domain controller is started, replication occurs during the boot phase, and Active Directory is synchronized. Whether the restore is authoritative or non-authoritative then specifies the direction of replication. An authoritative restore pushes Active Directory out to other domain controllers, and a non-authoritative restore synchronizes changes to the domain controller being booted.
NOTE Domain controllers use Universal Sequence Numbers (USNs) to keep track of Active Directory data and to determine if an update is available. Each domain controller keeps its own USN, and checks its USN with the USN of other domain controllers on a regular basis. If the USN of the other domain controller is higher, that indicates an update is available, and replication is started. If the USN of the other domain controller is the same or lower, replication is not started. Using USNs is a more accurate method than using time stamps.
To explain further, let’s suppose that a domain controller fails due to hardware failure. It takes several days to obtain a replacement part for the machine and to repair the domain controller. During this time, other domain controllers have continued to function normally, and various changes in the network and Active Directory have taken place. When the failed domain controller is started for the first time after completing the recovery process, replication occurs and the changes in Active Directory are replicated to the previously failed computer. The domain controller is brought up to date with the rest of the network. This is a non-authoritative restore. Now let’s suppose that the failure you suffered was due to human error, and an administrator deletes significant portions of Active Directory. If you follow the normal procedure of restoring Active Directory from yesterday’s backup and rebooting the server, replication will occur, and all the changes and deletions made by the administrator will be replicated back to the domain controller. Performing a normal restore would not bring back the deleted objects. To recover your lost users and OUs, you must perform an authoritative restore and specify the objects that you want to replicate to the rest of the network.
How to run a non-authoritative restore:
Just go to Windows Server Backup and click Recover. Use the most recent backup set that was created before the deletion.
This restore is useful in a scenario where, say, a disk failed: once we restore the backup after replacing the disk, the AD database is brought back up to date through normal replication with the other domain controllers.
If a user or OU was accidentally deleted, go ahead with an authoritative restore instead. The reason is that with a normal (non-authoritative) restore the restored objects would simply be overwritten again by the newer deletion during replication; an authoritative restore increments the version numbers of the restored objects (by 100,000 per day since the backup, by default), so other domain controllers treat this server’s copy as the most recent and the restored objects are replicated back out to all domain controllers.
How to run authoritative restore:
Let’s assume an OU was deleted from the AD database. Perform the steps below to recover the OU. You must have a system state backup before performing these steps.
1. Restart the DC into Directory Services Restore Mode (press F8 during boot)
2. Log in with .\administrator and the Directory Services Restore Mode password you set up while running Dcpromo
3. Type wbadmin get versions from a command prompt

4. This lists all available backups; figure out which version you want to restore

5. Type wbadmin start systemstaterecovery -version:<version identifier> -backuptarget:<backup location>

In the above command, since the backup is stored locally on disk, we haven’t specified a network location; but if the backup is on a SAN or on another server, we need to specify the UNC path in the -backuptarget switch.
6. After the restore, open ntdsutil and type activate instance ntds
7. Type authoritative restore to get into the right NTDSUTIL context

8. Type restore object “distinguishedName” for a single account or restore subtree “distinguishedName” if you are restoring an entire OU.


9. Reboot normally
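Putting steps 6 through 8 together, an authoritative restore session in ntdsutil looks roughly like the sketch below; the distinguished name of the Sales OU is a hypothetical example.

```
:: Run from the DSRM command prompt after the wbadmin system state restore has completed
ntdsutil
activate instance ntds
authoritative restore
restore subtree "OU=Sales,DC=gktrain,DC=net"
quit
quit
```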
Terminal Services:
Terminal Services is an extension of Remote Desktop. Using TS, a client can access a full session on a terminal server using the Remote Desktop client.
The main difference between Terminal Services and plain Remote Desktop is licensing: Remote Desktop allows only a limited number of connections (two), whereas Terminal Services allows as many connections as you have CALs.
Remote Desktop and TS Similarities:
- Full access to desktop
- Permission required
- Use port 3389
- Uses same remote desktop client
- Client configuration is same.
Remote Desktop and TS differences:
| Remote Desktop | Terminal Services |
| --- | --- |
| Can have max 2 connections | Unlimited connections unless we specify otherwise |
| Full desktop only | Full desktop and can add remote applications |
| RDP client only | RDP client and TS Web Access |
| Limited to a single computer | Multiple servers hosting Terminal Services, made available using techniques such as round robin, NLB, or TS Session Broker |
| Firewall, VPN issues | TS Gateway |
| No extra license required | CALs required; can be used for 120 days without CALs |

Below are the new names for the Terminal Services roles in Windows Server 2008 R2:

Different ways to configure terminal server session behavior:
1. On the user properties -> Sessions tab in Active Directory Users and Computers; these settings apply only to the specific users for whom they are configured.
2. Through Group Policy – if we configure session settings in the Default Domain Policy, they apply to users in the entire domain. The Group Policy path is User Configuration -> Policies -> Administrative Templates -> Windows Components -> Terminal Services -> Terminal Server -> Session Time Limits.
How to override user session settings that were configured under user properties:
Go to the console below on the terminal server:

Redirecting terminal services user profile:
Any user who logs on to a terminal server gets a profile created under Documents and Settings, and the user’s configuration is stored in the ntuser.dat file.
To redirect the profile folder to a location other than the terminal server:
1. Go to Active Directory Users and Computers, select the user account, and go to the Terminal Services Profile tab as below:

Here, the profiles will be saved on a shared folder named profiles on server fileserver1
2. Use Group Policy – we can apply the setting in the Default Domain Policy so that all users’ profiles are redirected to a file server instead of being stored on the terminal server.
Computer Configuration -> Policies -> Administrative Templates -> Windows Components -> Terminal Services -> Terminal Server -> Profiles

Terminal server licensing:
When TS is installed for the first time, it can be used for free for 120 days. Before the 120 days are up, the admin has to obtain the appropriate CALs.
Two types of CALs – Per device and per user
E.g., if there are 10 computers and 100 factory employees who use these 10 computers when they get time, per-device licensing can be used.
On the other hand, if each user in an organization has both a laptop and a desktop computer (two devices per user), per-user licensing can be used, since one CAL then covers that user regardless of which device they connect from.
In order to use terminal server licensing, first activate the terminal server licensing server. There is no charge for activating the licensing server.
After activation, purchased TS CAL license can be installed from licensing manager.
Go to Terminal Services Configuration and open the Terminal Services licensing mode setting:

From the licensing tab, select per device or per user.
Terminal Service Remote App and Gateway:
Terminal Services application facts:
- Applications must be TS compatible
- Must be multi-user, for example MS Office
To install an application on a terminal server, use the following commands (a typical sequence is sketched after this list):
- change user /query
- change user /install
- change user /execute
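A typical install-mode session on the terminal server might look like this minimal sketch; setup.exe is just a placeholder for whatever installer the application actually uses.

```
:: Confirm the current mode (install or execute)
change user /query
:: Put the session into install mode before launching the installer
change user /install
:: ...run the application's setup program here (e.g. setup.exe)...
:: Return to execute mode once installation finishes
change user /execute
```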
Alternatively, go to Control Panel and select Install Application on Terminal Server to use the GUI.
What TS RemoteApp does:
Users can run applications from the terminal server by using a web page or an RDP file.
Benefits:
- Useful for roaming users who switch from one desktop to another, for example on a factory floor.
- Client HW insufficient or OS incompatible
- No IT support in branch
- Minimize software deployment cost
How to distribute RemoteApps to users:
1. Configure and provide an RDP file for users to run RemoteApps – the RDP file can be distributed using SMS/SCCM, placed on a file server, sent by e-mail, etc.
Can specify port 3389 or TS gateway
2. Distribute apps using an MSI file – with this approach, users have to install the .MSI themselves and must have local admin privileges.
MSI files can also be deployed to users through a GPO without admin privileges.
How to access Remote Apps using a web browser:
By default, RemoteApp web access is installed as part of IIS. After installing an application in TS, open an IE browser and type
http://<servername>/ts
In Windows 2008 R2, the URL is https://<servername>/RDWeb

How to create an RDP file to distribute:
From RemoteApp Manager, go to the RemoteApp Programs list at the bottom, right-click the application, and select Create .rdp File.

Once the file is created, give it to users.
To avoid a security warning, attach a certificate while creating the RDP file.
How to create a Windows Installer package (MSI):
From RemoteApp Manager, go to the RemoteApp Programs list at the bottom, right-click the application, and select Create Windows Installer Package.

Select the certificate and the options below:

Once the MSI file is created, publish or assign it using a GPO. The MSI will place a shortcut to the application on the user's desktop.
TS Gateway:
- Allows clients to connect to a terminal server behind a firewall
- Uses only port 443 and allows the use of SSL and HTTPS
- No need for a VPN
- Provides a secure, encrypted connection via SSL
- The gateway runs on IIS

If a user tries to access a terminal server from outside the network, he has to provide the TS gateway information in his RDP settings, such as:

This connection uses TCP port 443 to reach the TS gateway. The TS gateway receives the request as HTTPS, unwraps it, and forwards it to the terminal server on port 3389.
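As a rough sketch, the gateway-related entries in such an .rdp file might look like the following (both server names are placeholders; gatewayusagemethod:i:1 tells the client to always use the gateway):
full address:s:termserver1.corp.local
gatewayhostname:s:tsgateway.example.com
gatewayusagemethod:i:1
gatewaycredentialssource:i:0
With these settings the client tunnels the RDP session to tsgateway.example.com over HTTPS (port 443), and the gateway forwards it to termserver1.corp.local on port 3389.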
Terminal Services Gateway Components:
Certificates – trusted 3rd party, self-signed (for testing/lab purposes), or trusted local CA (within an organization)
TS connection authorization policy (TSCAP) – Identifies who can use TS gateway
TS Resource authorization policy (TSRAP) – Identifies which terminal server we can use
Monitor with TS gateway manager – monitor gateway connections.
Configure RDP clients
ILO (Integrated Lights-Out) is a very useful feature that allows you to manage a server remotely. The idea is that you don't have to be physically in the data center to manage servers. The ILO interface provides exactly the same view you would get with a monitor, keyboard, and mouse connected to each individual server.
Having the ability to remotely access HP servers from POST to OS is an invaluable tool. Standard ILO features include remote shutdown and startup, virtual media, text mode console redirect and access to hardware logs, status and diagnostic tools. Full graphical remote console redirection is available with the advanced license.

How to configure and access ILO on a fresh, out-of-the-box ProLiant ML350 G5 server:
First, connect the ILO designated network port to your switch or management network.

Most brand new HP servers come with an information tag attached. Printed on the tag is the server serial number and Integrated Lights Out access information including factory set username and password.

The easiest way to access the ILO configuration utility is during the POST by pressing F8 when prompted.

The menu is straightforward and self explanatory. Use the arrow keys to navigate. Select Enter while the Set Defaults option is highlighted to revert back to factory settings.
First, access the Network menu, disable DHCP and change the DNS name

Then configure your static IP settings

Next, set the Administrator password or create a new user.

Note that the username and password are both case sensitive. Select Exit to save and reset ILO with the new settings. Test access to the ILO web interface.

Checking DHCP leases and configuring from the server OS are alternate setup options if your server is already in production and the ILO settings were not configured beforehand. If DHCP is reachable from the network the ILO interface is connected to, check the leases for the DNS name printed on the tag. Use the leased IP to access the web interface and log in with the factory username and password. All the same settings from the POST utility can be configured through the ILO web interface.
How to recover the ILO password
In the worst-case scenario where you have forgotten the user ID/password for the ILO login, the only way to reset the password is to connect physically to the box. Make sure a monitor and keyboard are connected to the box and boot the machine.
Press F8 to enter the ILO configuration. Then go to Users -> Modify user and change the ILO admin password, which lets you use ILO again.

Troubleshooting ILO DNS Name
But for some reasons if you haven’t configured ILO dns correctly, then you may not be able to access the ILO web interface. In this case to debug the configuration, you need to connect that machine physically.
Following are steps:
- Connect a monitor to the machine (the front port is easiest) and a keyboard at the back.
- Power on the machine. Once the system starts booting you will see a white screen displaying “HP ProLiant Servers …”. Keep pressing the F8 key to get into the ILO configuration screen.
- Go to Network -> DHCP
* Make sure DHCP is set to OFF (use the spacebar to change the setting)
* Verify the ILO name has the correct value.
- Go to Network -> TCP/IP and then to the IP address selection. You can’t change these settings while DHCP is ON. Update the IP address to match the new ILO DNS entry, and also enter correct values for the subnet mask and default gateway.
Save the settings (F10) and exit.
Now you should be able to log in to the ILO interface using the new DNS name, for example http://newdnsname
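If the OS is already running, the same network settings can also be scripted from Windows using HP's HPONCFG utility with a RIBCL XML file. Below is a rough sketch only; the addresses, DNS name, and credentials are placeholders, and exact tag support depends on the ILO firmware version:
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="factory-password">
    <RIB_INFO MODE="write">
      <MOD_NETWORK_SETTINGS>
        <DHCP_ENABLE VALUE="No"/>
        <IP_ADDRESS VALUE="192.168.1.50"/>
        <SUBNET_MASK VALUE="255.255.255.0"/>
        <GATEWAY_IP_ADDRESS VALUE="192.168.1.1"/>
        <DNS_NAME VALUE="ilo-ml350-01"/>
      </MOD_NETWORK_SETTINGS>
    </RIB_INFO>
  </LOGIN>
</RIBCL>
Save the script as ilo_network.xml (a placeholder name) and apply it with: hponcfg /f ilo_network.xml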
Installation
You can install a DNS server from
the Control Panel or when promoting a member server to a domain controller (DC)
(Figure A). During the promotion, if a DNS server is not found, you will
have the option of installing it.
Figure A

To install a DNS server from the
Control Panel, follow these steps:
- From the Start menu, select Control Panel | Administrative Tools | Server Manager.
- Expand and click Roles (Figure B).
- Choose Add Roles and follow the wizard by selecting the DNS role (Figure C).
- Click Install to install DNS in Windows Server 2008 (Figure D).
Figure B

Expand and click Roles
Figure C

Select DNS role
Figure D

Install DNS
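As an alternative to the Server Manager GUI, the DNS role can also be installed from the command line. A minimal sketch, run from an elevated prompt on the local server:
ServerManagerCmd.exe -install DNS
On Windows Server 2008 R2 the PowerShell equivalent is:
Import-Module ServerManager
Add-WindowsFeature DNS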
DNS console and
configuration
After installing DNS, you can find
the DNS console from Start | All Programs | Administrative Tools | DNS. Windows
2008 provides a wizard to help configure DNS.
When configuring your DNS server,
you must be familiar with the following concepts:
- Forward lookup zone
- Reverse lookup zone
- Zone types
A forward lookup zone is simply a
way to resolve host names to IP addresses. A reverse lookup zone allows a DNS
server to discover the DNS name of the host. Basically, it is the exact
opposite of a forward lookup zone. A reverse lookup zone is not required, but
it is easy to configure and will allow for your Windows Server 2008 Server to
have full DNS functionality.
When selecting a DNS zone type, you
have the following options: Active Directory (AD) Integrated, Standard Primary,
and Standard Secondary. AD Integrated stores the database information in AD and
allows for secure updates to the database file. This option will appear only if
AD is configured. If it is configured and you select this option, AD will store
and replicate your zone files.
A Standard Primary zone stores the
database in a text file. This text file can be shared with other DNS servers
that store their information in a text file. Finally, a Standard Secondary zone
simply creates a copy of the existing database from another DNS server. This is
primarily used for load balancing.
To open the DNS server configuration
tool:
- Select DNS from the Administrative Tools folder to open the DNS console.
- Highlight your computer name and choose Action | Configure a DNS Server… to launch the Configure DNS Server Wizard.
- Click Next and choose to configure the following: forward lookup zone, forward and reverse lookup zone, root hints only (Figure E).
- Click Next and then click Yes to create a forward lookup zone (Figure F).
- Select the appropriate radio button to install the desired Zone Type (Figure G).
- Click Next and type the name of the zone you are creating.
- Click Next and then click Yes to create a reverse lookup zone.
- Repeat Step 5.
- Choose whether you want an IPv4 or IPv6 Reverse Lookup Zone (Figure H).
- Click Next and enter the information to identify the reverse lookup zone (Figure I).
- You can choose to create a new file or use an existing DNS file (Figure J).
- On the Dynamic Update window, specify how DNS accepts secure, nonsecure, or no dynamic updates.
- If you need to apply a DNS forwarder, you can apply it on the Forwarders window. (Figure K).
- Click Finish (Figure L).
Figure E

Configure
Figure F

Forward lookup zone
Figure G

Desired zone
Figure H

IPv4 or IPv6
Figure I

Reverse lookup zone
Figure J

Choose new or existing DNS file
Figure K

Forwarders window
Figure L

Finish
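The same zones can also be created from the command line with dnscmd, run on the DNS server itself. A minimal sketch using example names (mcity.us and the 192.168.1.0/24 subnet are placeholders):
dnscmd /ZoneAdd mcity.us /Primary /file mcity.us.dns
dnscmd /ZoneAdd 1.168.192.in-addr.arpa /Primary /file 1.168.192.in-addr.arpa.dns
The first command creates a standard primary forward lookup zone; the second creates the matching reverse lookup zone.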
Managing DNS records
You have now installed and
configured your first DNS server, and you’re ready to add records to the
zone(s) you created. There are various types of DNS records available. Many of
them you will never use. We’ll be looking at these commonly used DNS records:
- Start of Authority (SOA)
- Name Servers
- Host (A)
- Pointer (PTR)
- Canonical Name (CNAME) or Alias
- Mail Exchange (MX)
Start of Authority (SOA) record
The Start of Authority (SOA)
resource record is always first in any standard zone. The Start of Authority
(SOA) tab allows you to make any adjustments necessary. You can change the
primary server that holds the SOA record, and you can change the person
responsible for managing the SOA. Finally, one of the most important features
of Windows Server 2008 is that you can change your DNS server configuration without
deleting your zones and having to re-create them from scratch (Figure M).
Figure M

Change configuration
Name Servers
Name Servers specify all name
servers for a particular domain. You set up all primary and secondary name
servers through this record.
To create a Name Server, follow
these steps:
1. Select DNS from the Administrative Tools folder to open the DNS console.
2. Expand the Forward Lookup Zone.
3. Right-click on the appropriate domain and choose Properties (Figure N).
4. Select the Name Servers tab and click Add.
5. Enter the appropriate FQDN Server name and IP address of the DNS server you want to add.
Figure N

Host (A) records
A Host (A) record maps a host name
to an IP address. These records help you easily identify another server in a
forward lookup zone. Host records improve query performance in multiple-zone
environments, and you can also create a Pointer (PTR) record at the same time.
A PTR record resolves an IP address to a host name.
To create a Host record:
1. Select DNS from the Administrative Tools folder to open the DNS console.
2. Expand the Forward Lookup Zone and click on the folder representing your domain.
3. From the Action menu, select New Host.
4. Enter the Name and IP Address of the host you are creating (Figure O).
5. Select the Create Associated Pointer (PTR) Record check box if you want to create the PTR record at the same time. Otherwise, you can create it later.
6. Click the Add Host button.
Figure O

A Host (A) record
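The equivalent record can also be added with dnscmd on the DNS server; a minimal sketch with placeholder names and addresses, adding the A record and its matching PTR record:
dnscmd /RecordAdd mcity.us server1 A 192.168.1.10
dnscmd /RecordAdd 1.168.192.in-addr.arpa 10 PTR server1.mcity.us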
Pointer (PTR) records
A Pointer (PTR) record creates the
appropriate entry in the reverse lookup zone for reverse queries. As you saw in
Figure H, you have the option of creating a PTR record when creating a Host
record. If you did not choose to create your PTR record at that time, you can
do it at any point.
To create a PTR record:
1. Select DNS from the Administrative Tools folder to open the DNS console.
2. Choose the reverse lookup zone where you want your PTR record created.
3. From the Action menu, select New Pointer (Figure P).
4. Enter the Host IP Number and Host Name.
5. Click OK.
Figure P

New Pointer
Canonical Name (CNAME) or Alias
records
A Canonical Name (CNAME) or Alias
record allows a DNS server to have multiple names for a single host. For
example, an Alias record can have several records that point to a single server
in your environment. This is a common approach if you have both your Web server
and your mail server running on the same machine.
To create a DNS Alias:
1. Select DNS from the Administrative Tools folder to open the DNS console.
2. Expand the Forward Lookup Zone and highlight the folder representing your domain.
3. From the Action menu, select New Alias.
4. Enter your Alias Name (Figure Q).
5. Enter the fully qualified domain name (FQDN).
6. Click OK.
Figure Q

Mail Exchange (MX) records
Mail Exchange records help you
identify mail servers within a zone in your DNS database. With this feature,
you can prioritize which mail servers will receive the highest priority.
Creating MX records will help you keep track of the location of all of your
mail servers.
To create a Mail Exchange (MX)
record:
1. Select DNS from the Administrative Tools folder to open the DNS console.
2. Expand the Forward Lookup Zone and highlight the folder representing your domain.
3. From the Action menu, select New Mail Exchanger.
4. Enter the Host Or Domain (Figure R).
5. Enter the Mail Server and Mail Server Priority.
6. Click OK.
Figure R

Host or Domain
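CNAME and MX records can likewise be added from the command line; a minimal dnscmd sketch with placeholder names (the 10 in the MX command is the mail server priority, and @ refers to the zone root):
dnscmd /RecordAdd mcity.us www CNAME server1.mcity.us
dnscmd /RecordAdd mcity.us @ MX 10 mail.mcity.us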
Other new records
You can create many other types of
records. For a complete description, choose Action | Other New Records from the
DNS console (Figure S). Select the record of your choice and view the
description.
Figure S

Create records from the DNS console
Troubleshooting DNS
servers
When troubleshooting DNS servers,
the nslookup utility will become your best friend. This utility is easy
to use and very versatile. It’s a command-line utility that is included within
Windows 2008. With nslookup, you can perform query testing of your DNS servers.
This information is useful in troubleshooting name resolution problems and
debugging other server-related problems. You can access nslookup (Figure T)
right from the DNS console.
Figure T

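A few typical nslookup queries, using placeholder names and addresses: the first performs a forward lookup against the default DNS server, the second a reverse lookup (which requires a PTR record), and the third queries MX records against a specific DNS server:
nslookup server1.mcity.us
nslookup 192.168.1.10
nslookup -type=MX mcity.us 192.168.1.5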
What is DNS:
DNS provides name registration and name-to-address resolution capabilities. DNS drastically lowers the need to remember numeric IP addresses when accessing hosts on the Internet or any other TCP/IP-based network.
Before DNS, the practice of mapping friendly host or computer names to IP addresses was handled via host files. Host files are easy to understand. These are static ASCII text files that simply map a host name to an IP address in a table-like format. Windows ships with a HOSTS file in the %systemroot%\system32\drivers\etc subdirectory.
The fundamental problem with host files was that they were labor intensive. A host file is modified manually, and it is typically centrally administered.
The DNS system consists of three components: DNS data (called resource records), servers (called name servers), and Internet protocols for fetching data from the servers.
What is DNS namespace?
A DNS name consists of two or more parts separated by periods, or "dots" (.). The last (rightmost) part of the name is called the top-level domain (TLD). Other parts of the name are subdomains of the TLD or another subdomain. The names of the TLDs are either functional or geographical. Subdomains usually refer to the organization that owns the domain name.

Functional TLD – Typically used by …
.com – Commercial entities, such as corporations, to register DNS domain names
.edu – Educational institutions, such as colleges, and public and private schools
.gov – Government entities, such as federal, state, and local governments
.net – Organizations that provide Internet services, such as Internet service providers (ISPs)
.org – Private, nonprofit organizations
How DNS resolves a host name to IP address:
DNS clients called resolvers submit queries to DNS servers to be resolved into IP addresses. Assuming, for example, that you want to connect to http://www.microsoft.com, www is the host name (or an alias to a different host name), and Microsoft.com is the domain name. The resolver on your client computer prepares a DNS query for http://www.microsoft.com and submits it to the DNS server identified in your client computer’s TCP/IP settings, which in this case we assume is a DNS server on your LAN. The DNS server checks its local cache (which stores results of previous queries) and database and finds that it has no records for http://www.microsoft.com. Therefore, the DNS server submits a query to the root server for the .com domain. The root server looks up the Microsoft.com domain and responds with the IP address(es) of the name servers for the domain. Your DNS server then submits a query to the specified DNS server for Microsoft.com, which responds with the IP address of the host www. Your DNS server in turn provides this information to your resolver, which passes the data to your client application (in this case, a Web browser), and suddenly the http://www.microsoft.com site pops up on your browser. Mapping a host name or alias to its address in this way is called forward lookup.

What is a Zone:
In most cases, a given name server manages all the records for some portion of the DNS namespace called a zone. The terms ‘‘zone’’ and ‘‘domain’’ are generally synonymous, but not always. A zone comprises all the data for a domain, with the exception of parts of the domain delegated to other name servers. A zone is the part of the domain hosted on a particular name server. The domain comprises the whole of the domain, wherever its components reside. Whenever the entire domain resides on a single name server, zone and domain are synonymous.
Each zone contains records that define hosts and other elements of the domain or a portion of the domain contained within the zone. These records are stored collectively in a zone file on the DNS server. A zone file is a text file that uses a special format to store DNS records. The default name for a zone file is domain.dns, where domain is the name of the domain hosted by the zone, such as mcity.us.dns. Windows Server 2008 stores zone files in
%systemroot%\System32\Dns and provides an MMC console to enable you to manage the contents of the zone files with a graphical interface.
What is Authoritative Server:
A name server that has full information about a given zone is said to be authoritative or has authority for the zone. A given name server can be authoritative for any number of zones and can be both authoritative for some and nonauthoritative for others.
What are DNS Zone types:
Each DNS server provides for several different types of zones, including primary, secondary, stub, and Active Directory–integrated. You can have forward and reverse lookup zones in each of these zone types. A forward lookup zone resolves a computer’s fully qualified domain name (FQDN) to its IP address, whereas a reverse lookup zone resolves an IP address to the corresponding FQDN.
Primary Zones
A name server can be either a primary master or a secondary master. A primary master maintains locally the records for those domains for which it is authoritative. The system administrator for a primary master can add new records, modify existing records, and so on, on the primary master.
A primary zone is a master copy of zone data hosted on a DNS server that is the primary source of information for records found in this zone. This server is considered to be authoritative for this zone, and you can update zone data directly on this server. It is also known as a master server. If the zone data is not integrated with AD DS, the server holds this data in a local file named <zone_name.dns> that is located in the %systemroot%\system32\DNS folder.
Secondary Zones
A secondary master for a zone pulls its records for the zone from a primary master through a process called a zone transfer. The secondary master maintains the zone records as a read-only copy and periodically performs zone transfers to refresh the data from the primary master. You control the frequency of the zone transfers according to the requirements of the domain
Active Directory–Integrated Zones
An Active Directory–integrated zone stores its data in one or more application directory partitions that are replicated along with other AD DS directory partitions. This helps to ensure that zone data remains up-to-date on all domain controllers hosting DNS in the domain. Using Active Directory–integrated zones also provides the following benefits:
- It promotes fault tolerance because data is always available and can always be updated even if one of the servers fails. If a DNS server hosting a primary zone outside of AD DS fails, it is not possible to update its data because no mechanism exists for promoting a secondary DNS zone to primary.
- Each writable domain controller on which DNS is installed acts as a master server and allows updates to the zones in which they are authoritative; no separate DNS zone transfer topology is needed.
- Security is enhanced because you can configure dynamic updates to be secured; by contrast, zone data not integrated with AD DS is stored in plain-text files that unauthorized users could access, modify, or delete. Either primary or stub zones can be integrated with AD DS. It is not possible to create an Active Directory–integrated secondary zone.
Stub Zone – A stub zone is a copy of the primary zone that only
contains resource records for the authoritative DNS servers for that zone. A
server hosting a stub zone must download the zone data and ongoing updates to
the data from another server hosting the same zone. When properly implemented
stub zones can improve name resolution efficiency by allowing DNS servers to
complete recursive queries without having to query the Internet or internal
root servers. Stub zones also tend to be less processor intensive than
conditional forwarding.
A stub zone consists of:
- The start of authority (SOA) resource record, name server (NS) resource records, and the glue A resource records for the delegated zone.
- The IP address of one or more master servers that can be used to update the stub zone.
What are DNS Resolvers:
A DNS resolver is a service that uses the DNS protocol to query for information from DNS servers. DNS resolvers communicate with either remote DNS servers or the DNS server program running on the local computer. In Windows Server 2003, the function of the DNS resolver is performed by the DNS Client service. Besides acting as a DNS resolver, the DNS Client service provides the added function of caching DNS mappings.
What are Resource Records:
Resource records are DNS database entries that are used to answer DNS client queries. Each DNS server contains the resource records it needs to answer queries for its portion of the DNS namespace. Resource records are each described as a specific record type, such as host address (A), alias (CNAME), and mail exchanger (MX).
- Host (A) resource records: this type of record maps a hostname to a 32-bit IPv4 address.
- AAAA resource records: these map a hostname to a 128-bit IPv6 address.
- Name Service (NS) records: this kind of record maps a domain name to a list of DNS servers authoritative for the domain.
- Service location (SRV) resource records: this type maps a DNS domain name to a list of computers that provide a service, for example, an SRV RR is required for computers to locate Active Directory domain controllers.
- Mail exchange (MX) resource records: this kind of record maps a DNS domain name to the name of a mail exchange computer for the domain.
- Alias (CNAME) resource records: also called canonical name records, these allow you to configure multiple DNS names to resolve to a single host.
- Pointer (PTR) resource records: this type of record is used for the reverse lookup process
What is Reverse Lookup zone:
Forward lookup maps names to addresses, enabling a resolver to query a name server with a host name and receive an address in response. A reverse query, also called reverse lookup, does just the opposite — it maps an IP address to a name. The client knows the IP address but needs to know the host name associated with that IP address. Reverse lookup is most commonly used to apply security based on the connecting host name, but it is also useful if you’re working with a range of IP addresses and gathering information about them.
What is Recursion:
If the queried name does not find a matched answer at its preferred server—either from its cache or zone information—the query process continues in a manner dependent on the DNS server configuration. In the default configuration, the DNS server performs recursion to resolve the name. In general, recursion in DNS refers to the process of a DNS server querying other DNS servers on behalf of an original querying client. This process, in effect, turns the original DNS server into a DNS client.
If recursion is disabled on the DNS server, the client performs iterative queries by using root hint referrals from the DNS server. Iteration refers to the process of a DNS client making repeated queries to different DNS servers.
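For illustration, recursion can be disabled on a Windows DNS server with dnscmd, and a client can request a non-recursive answer with nslookup; the host name below is a placeholder:
dnscmd /Config /NoRecursion 1
nslookup -norecurse server1.mcity.us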
What is Server Forwarding:
A forwarder is a DNS server that forwards queries for external DNS names to DNS servers outside of the network. You use forwarders to manage DNS traffic sent from your internal network to the Internet. Conditional forwarders forward queries for specific domain names to certain servers; for example, you may want to configure conditional forwarding to more quickly resolve hostnames for your organization’s most important business partners.
To configure forwarders, you configure the network’s firewalls to block outbound DNS traffic from all DNS servers except the forwarders. Then you specify the IP addresses of the forwarders on the other DNS servers in your network. You define the list of forwarders in DNS Manager from the Forwarders tab in the Properties dialog box for the DNS server by clicking Edit and entering the list of IP addresses in the Edit Forwarders dialog box. To define a conditional forwarder, select a DNS domain name before entering the IP address of the DNS server.
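For illustration, forwarders and a conditional forwarder can also be set with dnscmd on the DNS server (the IP addresses and the partner domain are placeholders):
dnscmd /ResetForwarders 192.168.1.10 192.168.1.11
dnscmd /ZoneAdd partner.example.com /Forwarder 10.0.0.53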
What are Root Hints:
As discussed previously, DNS servers
use the list of root hint servers to locate authoritative name servers for
domains at a higher level or in other subtrees of the DNS namespace. When you
add the DNS server role a file called cache.dns is written to
%systemroot%\System32\dns, this file includes the NS and A resource records for
the Internet’s root servers. If you are using DNS in a network that is
not connected to the Internet you may wish to replace this list of root hints
with your own. You can modify the list in DNS Manager by doing the following:
1. Right-click on the server and select Properties.
2. Click the Root Hints tab.
3. Modify the list as appropriate, as shown in figure 10:
   1. Click Add… to create a new record.
   2. Select a record and click Edit… to modify an existing record.
   3. Select a record and click Remove to delete an existing record.
   4. Click Copy from Server and then specify the IP address to retrieve the list of root hints from another DNS server. This action will not overwrite any existing root hints.

In the next article we will discuss step-by-step installation and configuration of DNS in Windows Server 2008.
How to resolve Windows Blue Screen Errors
The blue screen (or blue screen of death, blue screen of doom, or BSOD) is properly known as a “Windows Stop Message”. It is displayed when the Windows kernel or a driver running in kernel mode encounters an error which cannot be handled. This error could be something like a process or driver trying to access a memory address which it did not have permission to access, or trying to write to a section of memory which is marked read-only.
More to the point, Stop messages don’t occur without a reason; they are an indication that the system has a problem somewhere – hardware, software, or device drivers can all be the cause of the fault. Often a simple reboot will get the system up and running again, but if the underlying problem is not solved, the blue screen will probably come back again.
Let’s look into some methodical approach to troubleshooting stop messages, with a few simple steps which can take a lot of the guesswork out, and could get your system back up and running more quickly and easily than reinstalling the operating system.
Step 1 – Read the message
It may sound obvious, but the first
step is simply to read the message displayed on screen. Often there is
enough information displayed to point you to the cause – if the stop error is
caused by a kernel-mode driver, the driver image name is generally shown in the
message.

Figure 1 – Windows Stop Message Caused By myfault.sys
Figure 1 is an example of a fairly common stop message – “DRIVER_IRQL_NOT_LESS_OR_EQUAL”. This stop error is caused when a kernel mode driver attempts an illegal memory access. The “Technical information” section shows the STOP code, and also lists the specific driver which caused the fault – in this case it’s “myfault.sys”, which is the driver installed by the Sysinternals utility NotMyFault.exe. In a real-world crash, the driver image name could be any kernel-mode driver installed on the system, but once you know the name of the driver it can be located on disk, and the vendor found by checking the file properties.

Figure 2 –Properties of myfault.sys Driver File
In terms of finding quick solutions to the problem, the vendor may have an updated driver you can try, or could have a knowledge base you can search for a resolution. However, not every stop message will make it that easy – sometimes there is little more than a STOP code.

Figure 3 – Windows Stop Message 0x0000007B
Although it looks fairly Spartan, there is still some useful information in this message – the “Technical Information” section includes the STOP code (0x0000007B in Figure 3), and occasionally that can be enough to get started with troubleshooting. However, unless you already know what error the stop code translates to, this is where we move to step 2: Searching.

Step 2 – Search
If the stop message hasn’t given
enough information to start troubleshooting, the next step is to search for
more details. Again, this may sound obvious, but in my interviews I was
also surprised by the number of people who did not mention that they would use
the Microsoft Support knowledge base, Microsoft TechNet, MSDN, or some other
on-line resources when troubleshooting blue screen errors.
For example, a quick search of MSDN or TechNet will reveal that the stop code shown in Figure 3, 0x0000007B, translates as INACCESSIBLE_BOOT_DEVICE, which means that the operating system failed to initialize the storage device it is attempting to boot from during the I/O system initialization. This generally indicates a storage driver problem, and knowing that the problem is caused by the storage subsystem helps to focus troubleshooting to a specific area, which should make the error easier to diagnose.
There are many, many websites offering help with troubleshooting stop errors. My preference is always to start with Microsoft sites or hardware vendor sites, then broaden my searching to other sites and forums if I can’t find what I need. In most cases, someone else will have experienced the same problem, and there may be documented solutions or workarounds offered.
Of course, both steps one and two rely on one crucial thing – that you’ve witnessed and/or recorded the stop message. If you haven’t seen the stop message occur, then you can find the stop error and parameters in the System event log, but unfortunately there are no additional details such as the stack trace. Nevertheless, even with the details of the stop message, there still may not be enough information for a conclusive diagnosis, and at this point we need to move on to step three: Crash dump analysis.
Step 3 – Analyze
The third and final method in my
approach is to perform basic analysis on the crash dump file, which all Windows
systems are configured by default to create. There are three types of
crash dump file, and the settings for controlling which type of files are
created can be found on the Advanced tab in the System Properties
dialogue box.

Figure 4 – Startup and Recovery options

Complete Memory Dump (registry settings under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl)
A complete memory dump contains all
the data which was in physical memory at the time of the crash. Complete
dump files require that a page file exists on the system volume, and that it is
at least the size of physical memory plus 1MB. Because complete
memory dumps can be very large, they are automatically hidden from the UI on
systems with more than 2GB of physical RAM, although this can be overridden
with a registry change (which I won’t discuss here).
Kernel Memory Dump
A kernel memory dump contains the
kernel-mode read/write pages which were in physical memory at the time of the
crash. The dump file also contains a list of running processes, the stack
of the current thread, and the list of loaded device drivers. Kernel
memory dumps are the default on Windows Server 2008 and Windows 7.
Small Memory Dump
A small memory dump (sometimes also
called a mini-dump) contains the stop error code and parameters as well as a
list of loaded device drivers, and a small amount of other data. Small
memory dumps must be analysed on a system which has access to exactly the same
images as the system which generated the dump file, meaning that it can be
difficult to analyse the dump file on a system other than the one on which it
was created.
For basic crash analysis, a kernel memory dump is usually adequate and, as shown in Figure 4, the default location for its creation is %SystemRoot%\MEMORY.DMP. The tool required for analysing the crash dump file is WinDbg, the Microsoft Windows Debugger, which can be downloaded from Microsoft’s website.
After installation, WinDbg needs to be configured to use the Microsoft Symbol Server. Once symbols are configured, click the File menu, choose Open Crash Dump, and select the crash dump file you want to analyze. The output from WinDbg will look like this:

Figure 5 – Windows Debugger Analysis
The second to last line, which starts “Probably caused by” indicates the debugger’s best guess at the cause of the crash. In the example in Figure 5 the debugger is correct – this crash was caused by NotMyFault. Other information in the analysis indicates that the crash dump file is a kernel memory dump, and that symbol files could not be loaded for myfault.sys (because it is a third party driver, and the symbols are not available on the Microsoft Symbol Server).
More information can be gleaned from the dump file by executing verbose analysis, using the debugger command !analyze -v.

Figure 5 – Verbose Windows Debugger Analysis
The verbose output shows the description of the stop message, which will save you having to search for it, and will also include the stack trace of the thread which was executing at the time of the crash, which could also prove useful if further debugging is needed.
Basic crash dump analysis is easy, the tools are readily available, and a lot of information about the crash can be found in just a few seconds. If basic analysis doesn’t help to solve the problem, there are many excellent resources available which give much more detailed information about the Windows Debugger and its use, and can provide in-depth guides on how to extract and interpret the data using advanced analysis technique
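A minimal sketch of that WinDbg workflow, assuming the default dump location and Microsoft's public symbol server (myfault is the example driver from the figures above):
.sympath srv*C:\symbols*http://msdl.microsoft.com/download/symbols
.reload
!analyze -v
lm vm myfault
The .sympath and .reload commands set up and load symbols, !analyze -v produces the verbose analysis shown above, and lm vm lists details of the suspect driver module.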