Are all your eggs in a basket up there?
Remember, the cloud is not yet a failsafe fortress for all your data
On December 10 and 11, when Google and Facebook went offline, users must have felt like Tom Hanks in Cast Away,
abandoned alone on an island. Although the outages lasted only 20
minutes in the case of Google and a few hours in the case of Facebook,
panic and frustration were palpable on Twitter, with #googledown gaining
traction within minutes of the outages. These were not part of any
orchestrated ‘anonymous’ attacks, or, as a few witty tweeters put it,
symptoms of the apocalypse, or of the ‘world going down’ action due
later this December.
Google’s outage led to some of its widely used non-search applications
such as mail, chat and other cloud-based services going offline.
Facebook’s followed, going down for a few hours. Separate infrastructure
glitches caused these outages: a buggy software update in the case of
Google, and a Web address translation problem associated with the DNS
(Domain Name System) in the case of Facebook.
Although these glitches were not catastrophic in terms of loss of data,
they served Internet users another warning against blindly succumbing to
the pious platitudes and wholesale endorsements of cloud computing as
the next big wave on the Internet.
The hardware hurdle
Soon after the outage, Google promptly reported on the cause of the
problem and the actions that it took, in a report published on the
Google Apps dashboard. A bug in a “load balancing” software update
caused the temporary outage of Google services, it said.
When more than one server caters to user requests, which is
certainly the case when Google serves content from thousands of
servers in hundreds of locations, the load (user requests) must be
balanced across the servers. Exposing a single server to a bombardment
of requests while the others sit idle would be poor network design. The
arbitration mechanism that balances the load between servers and server
farms, using intelligent programmes running on powerful switching
computers, is called “server load balancing”.
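The idea can be illustrated with a minimal sketch of the simplest balancing strategy, round-robin, in Python. The server names are purely illustrative, and real load balancers weigh server health and capacity rather than rotating blindly:

```python
from itertools import cycle

# Hypothetical pool of back-end servers; the names are illustrative only.
SERVERS = ["server-a", "server-b", "server-c"]

def round_robin(servers):
    """Yield servers one after another, spreading requests evenly."""
    return cycle(servers)

balancer = round_robin(SERVERS)

# Nine incoming requests get handed out in rotation,
# so no single server is bombarded while others sit idle.
assignments = [next(balancer) for _ in range(9)]
```

After nine requests, each of the three servers has been assigned exactly three of them. A bug in the software implementing this arbitration, as in Google's case, can misroute or drop requests across the entire pool at once.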
Software updates need to run regularly to accommodate
modifications in the server infrastructure. Google engineers traced the
18-minute service outage to a bug in the software that ran on some
Google applications. The glitch affected Google Mail, Chat, Google
Drive and the Chrome browser. But Google’s kingpin, its search engine,
was immune to the problem.
Facebook, however, ran into a more mundane problem that websites often
face. A Facebook spokesperson cited Domain Name System infrastructure
changes as the reason for the temporary unavailability of the social
networking site.
DNS is the equivalent of a telephone directory look-up, translating
textual Web addresses into numeric server addresses (IP addresses). It
is the first, and most important, step in allowing clients to reach
servers. When this look-up fails, the website remains inaccessible. The
DNS server reconfiguration seems to have affected the desktop version
of Facebook; the mobile version escaped the problem.
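The directory look-up, and what happens when it fails, can be seen with a few lines of Python using the standard library's resolver. The unresolvable hostname below is a made-up example on the reserved `.invalid` domain:

```python
import socket

def resolve(hostname):
    """Translate a textual Web address into a numeric IP address,
    the look-up that DNS performs before any connection is made."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # The look-up failed: the site is unreachable for this client
        # even if its servers are running perfectly well.
        return None

print(resolve("localhost"))             # a loopback address such as 127.0.0.1
print(resolve("no-such-host.invalid"))  # None: the DNS look-up fails
```

This is why a DNS misconfiguration takes a site down for users even when nothing is wrong with the site's own servers: the client never learns which address to contact.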
When users put all their eggs in the cloud basket, as they increasingly do,
these infrastructural “inconsistencies” can have a dramatic cascading
effect. Although Google, Facebook and most cloud service providers claim
that they have built in multiple levels of redundancies into their
infrastructure, the latest outage is yet another example of how millions
of users’ data remains vulnerable to the vicissitudes of global
networks.
Of course, it would be wrong to attribute the outages at Google or
Facebook to technical incompetence. But they are surely pointers to the
risks of putting everything on the cloud, and nowhere else.
Security and privacy
Reliability adds to the concerns over widespread adoption of the
cloud. There is already a raging debate over issues such as security and
privacy on the cloud. Critics of the wholesale movement to the cloud
argue that the world has not yet reached a stage where cloud computing
is inevitable.
Regular incidents of user accounts being wiped out, or hijacked owing to
weaknesses in the authentication mechanisms of major cloud service
providers, are certainly discouraging users from betting solely on the cloud.
Security threats in cloud-based services stem not just from lacunae in
the cloud infrastructure, but also from the hesitation of service
providers to admit that their security is not foolproof. Without such
candour, users tend to set easy-to-guess passwords, or reuse passwords
across multiple accounts, aggravating already-existing security flaws.
On conventional websites that are not cloud-based, if unauthorised
access is gained to a user’s account, the ‘cracker’ would in most cases
reach a dead end, unable to invade the user’s other accounts; it would
also be virtually impossible for him to reach the user’s local drives.
The daisy-chaining of accounts on the cloud, however, results in a
lower level of protection.
The cloud’s USP is built on the promise: “Take your data wherever you
go.” But the downside is that, like the domino effect, a breach can
bring down much more than its terra firma counterpart. Most cloud-based
storage applications, for instance, allow applications to directly
access user data on the hard drive and synchronise it across multiple
devices such as smartphones, tablets and desktop computers. Breaking
into one of these devices implies access to all the other ‘connected’
devices.
By signing user agreements, users consent to being continuously
tracked, and to logs being maintained by the cloud-service providers.
Privacy is simply absent on the cloud.
The hype about the “revolutionary” potential of the cloud has been
tempered by the realisation that its reliability is still not iron-clad.
If anything, the recent outages highlight the need for users to spread
their risks across multiple service providers instead of putting
everything on a single cloud.