Data centers are under relentless pressure to improve application
availability, and are striving for 100 percent availability for mission-critical
applications. Due to the increasing cost of downtime, organizations are focusing
on the various causes of outages and adopting a systematic approach to reduce
the risk of downtime.
Not only do the typical infrastructure issues need to be addressed, but the
people and process issues also have to be addressed with a plan in place to
quickly recover from unforeseen disasters. Events such as 9/11, Hurricane
Katrina, and the tsunami in Indonesia have caused many organizations to
reconsider their approach to risk management and disaster planning.
Organizations recognize that disasters and disruptions will keep occurring,
but the focus has shifted from disaster avoidance to disaster recovery. There
are two categories of application downtime: planned and unplanned.
Improved application availability and speedy disaster recovery require a
storage architecture that protects against all planned and unplanned downtime
and allows quick recovery when downtime does occur. All storage vendors deliver
availability, but not all focus on delivering solutions for the most frequent
causes of downtime: application and operational failure.
Site and natural disasters are less likely than operator error, but they can
have a much greater impact. Data centers require a flexible and cost-effective
disaster recovery (DR) solution, making it affordable to cover all application
tiers under a single DR plan and a solution that puts the DR site to active
business use.
[Image: Data centers need to implement a plan to ensure that their data storage and applications are highly resilient and optimized]
Application and database administrators need application-integrated solutions
that perform frequent and non-disruptive backups in a matter of seconds to
ensure recovery time objectives and recovery point objectives are met.
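As a rough illustration of the recovery point objective (RPO) side of that requirement, the check reduces to comparing the age of the newest recovery point against the allowed window. The function and values below are hypothetical, not any vendor's API:

```python
from datetime import datetime, timedelta

def rpo_met(last_snapshot: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the newest recovery point is within the RPO window,
    i.e. a failure right now would lose at most `rpo` worth of data."""
    return (now - last_snapshot) <= rpo

# With a 15-minute RPO, the most recent snapshot must be under 15 minutes old.
now = datetime(2024, 1, 1, 12, 0)
print(rpo_met(datetime(2024, 1, 1, 11, 50), now, timedelta(minutes=15)))  # True
print(rpo_met(datetime(2024, 1, 1, 11, 30), now, timedelta(minutes=15)))  # False
```

Frequent, non-disruptive snapshots matter precisely because they shrink the left-hand side of this comparison.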
Unified storage
A unified pool of storage would have higher storage utilization, a single
data recovery solution, a single data management model, and greater leverage of
IT staff and skills. A unified storage platform would use just one set of
software and processes across all tiers of storage. Unified storage systems
should abstract and virtualize the specifics of SAN and NAS into a common form
that can be allocated and managed using the same set of tools and skills. All of
the internal workings required to deal with the specifics of each networked
storage approach (FC SAN, NAS, and IP SAN) need to be transparent to the user.
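The abstraction described above can be sketched as a common provisioning interface with protocol-specific backends hidden behind it. This is a minimal, hypothetical illustration of the idea, not a real storage management API; all class and method names are invented:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Common interface: callers never see protocol-specific details."""
    @abstractmethod
    def provision(self, name: str, size_gb: int) -> str: ...

class FcSanBackend(StorageBackend):
    def provision(self, name, size_gb):
        # Internally this would create a LUN and map it to an initiator group.
        return f"fc-lun:{name}:{size_gb}GB"

class NasBackend(StorageBackend):
    def provision(self, name, size_gb):
        # Internally this would create a file system and export it (NFS/CIFS).
        return f"nfs-export:/{name} ({size_gb}GB)"

def allocate(backend: StorageBackend, name: str, size_gb: int) -> str:
    # One call and one skill set, whether the target is FC SAN, NAS, or IP SAN.
    return backend.provision(name, size_gb)
```

The administrator's workflow is the single `allocate` call; only the backend implementations know which networked storage approach is underneath.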
For data centers in which three (or more) different management paradigms are
at play (for example, NAS, midrange IP SAN storage, and high-end FC SAN
storage), the ability to combine all approaches under a single unified storage
architecture is becoming extremely attractive.
Security
The advantages of networked data storage technologies such as NAS and SAN
are well established, but storing an organization's data on a network creates
significant security risks. Aggregated storage is not designed to
compartmentalize the data it contains, and data from different departments or
divisions becomes co-mingled.
Data replication, backup, off-site mirroring, and other disaster recovery
techniques increase the risk of unauthorized access from people both inside and
outside the enterprise. Partner access through firewalls and other legitimate
business needs also create undesirable security risks.
With storage networks, a single security breach can threaten the data assets
of an entire organization. Technologies such as firewalls, Intrusion Detection
Systems (IDS), and Virtual Private Networks (VPN) seek to secure data assets by
protecting the perimeter of the network. While important in their own right,
these targeted approaches do not adequately secure storage. Consequently, they
leave data at the core dangerously open to both internal and external attacks.
Once these barriers are breached (via stolen passwords, uncaught viruses, or
simple misconfiguration), data assets are fully exposed.
Providing wire-speed encryption and protecting data at rest with secure
access controls, authentication, and secure logging simplifies the security model
for networked storage. Security appliances must also be deployed transparently
within the data center without changes to applications, servers, desktops, and
storage.
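One piece of the secure-logging requirement can be sketched with a hash chain: each log record carries a MAC computed over the record plus the previous MAC, so tampering with or deleting any record invalidates every later one. This is a generic illustration in Python, not any particular security appliance's mechanism:

```python
import hashlib
import hmac

SEED = b"\x00" * 32  # fixed starting value for the chain

def append_macs(entries, key: bytes):
    """Chain an HMAC through the log; each MAC covers the entry
    plus the previous MAC."""
    prev = SEED
    out = []
    for entry in entries:
        mac = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        out.append((entry, mac))
        prev = mac
    return out

def verify(log, key: bytes) -> bool:
    """Recompute the chain; any altered or reordered record fails."""
    prev = SEED
    for entry, mac in log:
        expected = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev = mac
    return True
```

A verifier holding the key can detect modification of any earlier record, which is the tamper-evidence property secure logging is after; confidentiality of the data itself would still require encryption on top.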
Integrated data management
In traditional data center IT organizations, application, database, system,
and storage administrators each focus narrowly on only one part of the
data/storage management problem. Each has different and distinct areas of
responsibility and accountability.
As a result, end-to-end data management depends upon manual communication
between the data administrator and storage administrator and manual mapping of
data to storage. This is a disruptive and potentially error-prone process that
results in critical errors and lost productivity.
Traditional approaches have left a gap between the management of data and
the management of storage. This has resulted in inefficient operations, with
considerable duplication of effort and frequent interruptions to the activities
of highly interdependent administrative groups.
An integrated data management approach would simplify data management by
encompassing both the management of storage devices and the data that
resides on those devices. By creating linkages between application requirements
and storage management processes in a controlled environment, system,
application, and database administrators could control their data in a language
that they understand, without the need for extensive storage management skills.
Because the data owners can perform certain data management tasks, their
ability to respond to changing business conditions is enhanced. In addition, the
use of process automation, role-based access, and policy-based management
enables business-centric control of data and reduces the interdependencies
between storage and data administrators to deliver dramatic productivity and
flexibility gains.
The storage and system administrators still have all of the tools and
capabilities they always did, but can now create policies that control capacity
allocation, protection levels, performance requirements, and replicas. For
example, storage managers can set up policies for different classes of
application. A tier-1 application can have up to 2TB of capacity, snapshot
policies of once a day, remote replication to a specific data center, and a
nightly backup to a VTL target. A tier-2 application that requires a lot of
capacity can have up to 10TB of capacity, snapshot policies of once a week, no
remote replication, and monthly backups to tape.
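The two tier policies above can be expressed as simple policy records that a management layer enforces. The data structures and function names here are hypothetical, intended only to show how policy-based management encodes the example:

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """A policy record a storage administrator might define per tier."""
    max_capacity_tb: int
    snapshot_interval: str   # e.g. "daily", "weekly"
    remote_replication: bool
    backup_schedule: str     # e.g. "nightly to VTL", "monthly to tape"

# The tier-1 and tier-2 examples from the text, as policy records.
POLICIES = {
    "tier-1": StoragePolicy(2, "daily", True, "nightly to VTL"),
    "tier-2": StoragePolicy(10, "weekly", False, "monthly to tape"),
}

def request_capacity(tier: str, requested_tb: int) -> bool:
    # Data owners ask in business terms; the policy enforces the limit,
    # so no storage administrator is needed for an in-policy request.
    return requested_tb <= POLICIES[tier].max_capacity_tb
```

Because protection level, replication, and backup targets travel with the policy, an application administrator only has to pick a tier; everything else follows automatically.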
Tight integration with popular data center business applications, allowing
application and server administrators to manage data without special storage
management skills and freeing storage administrators from help-desk requests,
will allow data centers to be more efficient and cost-effective.
Summary
Continued Fibre Channel dominance in the data center, implementing tiers of
storage across multiple storage systems, increasing storage utilization with
thin provisioning, improving storage system resiliency and application
availability, combining SAN and NAS under a unified storage
architecture, encrypting sensitive data, and creating linkages between
application requirements and storage management are the key trends
addressing the challenges data centers face today.
With the explosion of data storage capacity and the increasing importance of
data to organizations, data centers need to implement a plan to ensure that
their data storage and applications are highly resilient and optimized.
An organization's health depends on instant data availability, infallible
data security, and the ability to quickly respond to changes. Simplifying data
and storage management is a key priority for data centers now and in the future.
Courtesy: Mike McNamara
Network Appliance Inc