Customers have become increasingly intolerant of computer failures while
placing ever-greater demands on the technology they use. Given the complexities
of technology, solution providers and IT managers are under pressure to make
decisions in seconds.
You hear it from customers and experience it yourself every day: computing
is becoming too complicated. The growing size and complexity of information
technology makes troubleshooting a computer problem like finding a needle in a
haystack. And the time spent finding that problem can cost an enterprise
customer thousands, even lakhs, of rupees in downtime.
For instance, an international bank can lose as much as Rs 1 crore per hour
due to downtime resulting from security incidents, and brokerage firms can lose
double this amount per hour for the same problem, according to a Yankee Group
report. Troubleshooting computer crashes and other problems can tie up
systems and staff for days while negatively impacting business performance. And
that's not because operators aren't well trained or don't have the right
capabilities.
It's because the complexities of technology are difficult to figure out, and
solution providers and IT managers are under pressure to make decisions in
seconds.
SELF-HEALING COMPUTERS
Since no one is close to building defect-free software or hardware, what we
need are computers capable of running themselves with far greater levels of
intelligence built right into the technology itself.
The implications of this 'autonomic' approach are immediately evident: a
network of organized, smart computing components that gives us what we need,
when we need it, without conscious mental or even physical effort.
Today, companies rely on new technology to remain competitive and respond to
customers and partners more quickly. Yet, each time they add new hardware,
applications and devices, it actually reduces the productivity of their IT
staff, which now has a more complex infrastructure to manage.
TECHNOLOGY THAT BOOSTS PRODUCTIVITY
Businesses can't just roll in processors and storage fast enough to avoid
meltdowns when usage spikes, or fend off viruses and hacker attacks, or manage
the different operating systems, on laptops, palmtops and desktops alike, that
must plug in and access information. The workload and complexity become
overwhelming.
Take logs, for example. Today, even a simple e-business solution may contain
as many as 25 to 40 different log files. These log files contain a variety of
content in differing formats because solutions are built using disparate pieces
and parts, often with products from multiple vendors.
Most of the logging done today is product-centric, focusing on reporting data
that a product's developer considers important for debugging that product, rather
than providing data to debug a solution. This disparity in both the format and
the content that is made available by products makes it more difficult to write
management tools that might ease the complexity issues.
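The disparity is easy to picture. The sketch below normalizes two vendor log formats into one common record shape; both formats and all field names are invented for illustration, but the exercise is roughly what a management tool must do today for each of those 25 to 40 files.

```python
import re

# Two invented vendor log formats, standing in for the disparate
# styles a multi-vendor solution produces (both are hypothetical).
APP_SERVER_RE = re.compile(
    r"\[(?P<time>[^\]]+)\] (?P<level>\w+) (?P<msg>.*)")
DB_RE = re.compile(
    r"(?P<time>\S+) \| sev=(?P<level>\d) \| (?P<msg>.*)")

# The database product uses numeric severities; map them to words.
DB_SEVERITY = {"1": "ERROR", "2": "WARN", "3": "INFO"}

def normalize(line, source):
    """Map one raw log line into a single common record shape."""
    if source == "appserver":
        m = APP_SERVER_RE.match(line)
        if m:
            return {"time": m["time"], "level": m["level"],
                    "source": source, "message": m["msg"]}
    elif source == "db":
        m = DB_RE.match(line)
        if m:
            return {"time": m["time"],
                    "level": DB_SEVERITY.get(m["level"], "INFO"),
                    "source": source, "message": m["msg"]}
    return None  # unrecognized line

rec = normalize("[2003-06-01 10:00:02] ERROR connection pool exhausted",
                "appserver")
```

Every new product in the solution means another parser like these two; a shared logging format would make this translation layer unnecessary.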
WHY IT MAKES SENSE
Customers want compute power on demand; they want storage on demand; they
want instant access to computing services. They want their business policies
translated into IT policies with a minimum of buttons to press and knobs to
turn.
The good news is the industry is already moving in the right direction when
it comes to improving problem determination. New technologies are being
developed that will enable systems to detect, analyze and resolve problems -
and automatically diagnose the root cause. Another key to success is adopting
open, non-proprietary computing standards.
Today, when a problem occurs, there isn't an industry standard approach for
classifying it. This means that whoever is responsible for resolving the problem
must understand specific, individual messages from each of the products involved
in the problem.
Often, a team of specialists is called in for tedious, time-consuming work.
Standards will ensure a consistent format for how data and content are logged,
essentially establishing a common language for handling systems problems.
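What such a common language might look like, in miniature: the record below is purely illustrative, with assumed field names rather than any published specification, but it shows how a shared shape lets a management tool classify and route a problem without understanding each product's individual messages.

```python
from dataclasses import dataclass, asdict

# Illustrative only: these field names are assumptions, not a standard.
@dataclass
class ProblemEvent:
    timestamp: str   # when the symptom was observed
    component: str   # which product or subsystem reported it
    situation: str   # standardized category, e.g. "STOP", "CONNECT"
    severity: int    # 0 (info) .. 5 (fatal), on a common scale
    message: str     # free-text detail for humans

event = ProblemEvent(
    timestamp="2003-06-01T10:00:02Z",
    component="order-db",
    situation="CONNECT",
    severity=4,
    message="connection pool exhausted",
)

# A tool can now decide on the shared fields alone, regardless of
# which vendor's product raised the event.
needs_escalation = event.severity >= 4
```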
THE BOTTOM LINE
The goal of a successful IT infrastructure is to increase the amount of
automation in the customer's business with minimal mistakes. The more you can
get human error out of the loop, the more efficient the business will be,
whether it's a financial institution, a shipping company or an online
retailer. The beauty of it is that all of the complexity gets hidden from the
user.
The logic is compelling: relief from the headaches of identifying and fixing
problems; an improved balance sheet; and much greater flexibility in meeting the
demands of running a business. Autonomic computing will enable you to be
more responsive to your customers and environmental changes.
R Dhamodaran is VP and Country
Executive, Software Group and Developer Relations, IBM India Ltd
EXAMPLE OF AUTONOMIC COMPUTING
A computer at your customer site freezes up intermittently. No customer
transactions are processed for several seconds, costing the customer thousands
of rupees in business, along with customer confidence and loyalty.
Today, the IT support staff might not even find out about the problem for
more than a day. When they do, it can take a couple of days to figure out what
is happening.
With an autonomic system in place, the freeze-up is detected the first time
it happens and matched against historical problem data. The settings are reset
automatically, averting any real loss of revenue and customer loyalty.
A report that the administrator reads the next day shows the action that was
taken.
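The scenario above boils down to a monitor-analyze-act loop. The sketch below is a toy version; the symptom signature, the metrics and the remedy are all invented for illustration, but the shape of the loop matches the steps just described.

```python
# Historical problem data: symptom signatures mapped to remediations
# that worked before (entries are invented for this sketch).
KNOWN_PROBLEMS = {
    "transaction_stall": "reset_connection_settings",
}

actions_log = []  # record of actions, for the admin's next-day report

def apply_fix(action):
    """Stand-in for actually changing system settings."""
    actions_log.append(action)

def autonomic_cycle(metrics):
    """Detect a freeze-up, match it to history, self-correct, report."""
    # Monitor: a stall shows up as zero throughput with queued work.
    if metrics["tx_per_sec"] == 0 and metrics["queue_depth"] > 0:
        symptom = "transaction_stall"
        # Analyze: match the symptom against historical problem data.
        remedy = KNOWN_PROBLEMS.get(symptom)
        if remedy:
            apply_fix(remedy)                      # Act: reset settings
            return f"{symptom}: applied {remedy}"  # entry for the report
        return f"{symptom}: no known remedy, escalating"
    return "healthy"

report = autonomic_cycle({"tx_per_sec": 0, "queue_depth": 42})
```

In a real system the loop would run continuously and the problem database would grow with each incident; the point here is only that detection, diagnosis and repair happen without a human in the loop, with the report produced afterwards.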