Definition of Cloud Utility
The cloud utility model of computing represents a fundamental shift in computing economics. Under this model, computing resources are accessed from remote locations on demand. Several factors have prompted the model's emergence: the growth of Internet applications, the proliferation of mobile and other hardware devices, and the need for processing and energy efficiency. Because resources are delivered remotely as they are needed, the model eliminates the need to have them reside on a desktop, an organization's servers, a laptop, or any other mobile device (Fong, 2011).
Background on How and Why Cloud Adoption Is Being Pursued
As a process through which the Internet is used to deliver computing power to consumers and companies, cloud utility has transformed software into services for which users are charged according to how much they use. Cloud computing thus provides storage space and computing power when they are needed and allows these resources to be scaled up as required. Citing improved revenue streams, several companies have rushed to deploy cloud technologies in order to gain a competitive advantage in the market. The deployment follows the identification of some of the main advantages of cloud computing, first among them the ability to avoid capital expenditure on, say, servers, software licenses, and maintenance (Fong, 2011).
Cloud utility relieves companies of the difficulties associated with commissioning their own servers and other installations. It therefore offers such advantages as cost effectiveness, responsiveness to mobile demands, and cloud spotting, as well as a cultural shift, among others. The global economic downturn has accelerated the rate of cloud adoption. With customers' budgets squeezed, several organizations have been reluctant to invest in capital-intensive ventures. Their managements have therefore opted to adopt cloud technology, as it promises to replace massive capital expenditure with moderate operational expenditure (Fingar, 2009).
Nevertheless, although the utilization of cloud facilities proves cost-effective, there are associated standardization costs. Most organizations opt for off-the-shelf software solutions because of their long-term advantages. By taking advantage of off-the-shelf solutions, managements avoid the need to purchase pricey software packages, instead training the IT staff to select cheaper, smaller applications whose functionality suits the task at hand (Fingar, 2009).
As explained in the earlier sections, cloud utility is meant to deliver efficient networking and computing resources in real time. This helps to solve the challenges that accompany an unexpectedly large number of Internet connections. Cloud technology is viewed as an enabler of economies of scale, as it helps organizations save the resources that would otherwise be required to secure numerous units of computer hardware and expanded data center space (Fingar, 2009).
Cloud computing proves superior to traditional computing strategies due to its unique features: affordability, pro-rated billing, and scalability. Budgetary limitations often make other forms of computing take longer than anticipated. Organizations therefore consult service providers, who present them with pre-configured sets of operating systems, database servers, sophisticated auto-scaling systems, and web servers. This enables client organizations to build load-balanced systems in which threshold values are specified so that server arrays shrink and grow according to the clients' needs. This not only facilitates the updating of information but also improves the automatic scaling of the infrastructure (Chee & Franklin, 2009).
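The threshold-driven scaling behavior described above can be sketched in a few lines. This is a hedged illustration rather than any provider's actual API; the function name, thresholds, and server limits are all assumptions chosen for the example:

```python
def desired_server_count(current_count, avg_cpu_load,
                         scale_up_at=0.75, scale_down_at=0.25,
                         min_servers=2, max_servers=20):
    """Grow or shrink a server array based on average CPU load.

    Illustrative thresholds: grow above 75% load, shrink below 25%,
    and never leave the [min_servers, max_servers] range.
    """
    if avg_cpu_load > scale_up_at and current_count < max_servers:
        return current_count + 1   # grow the array under heavy load
    if avg_cpu_load < scale_down_at and current_count > min_servers:
        return current_count - 1   # shrink it when demand falls
    return current_count           # within thresholds: no change
```

A provisioning loop would call such a function periodically and compare its result with the current array size, which is how the "shrink and grow" behavior the client specifies is actually realized.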
There are various cloud designs, and it is up to the management of an organization to seek the design that best suits the demands of its enterprise. The most common design involves public cloud solutions. Their prevalence stems from the fact that staff members and customers can use them without needing to know how the services are delivered. However, public solutions have been found to be less safe, especially where client firms handle sensitive information. Therefore, although public cloud solutions are cheap environments in which to conduct business, most organizations reject them after conducting extensive cost-benefit analyses (Chee & Franklin, 2009).
Many client firms therefore consider starting off with commercial cloud solutions, as these enable them to retain control of their IT services. Private clouds enable organizations to locate their data and software with precision. They also present opportunities for hybrid clouds, in which private cloud solutions are closely intertwined with the organizations' legacy systems. In recent years, many IT specialists have predicted that as organizations phase out their old systems, they will adopt the cloud as their prime architecture for delivering IT solutions to their workforce. Indeed, some advocates of ditching legacy systems term them "sunk costs". They argue that there is no point in organizations supporting their old mainframes, as this tendency hurts business operations, at least strategically.
Distributed Models for Disaster Recovery
Disaster recovery is all about planning for business continuity. It involves the procedures, processes, and policies that prepare the technological infrastructure for continuity, especially where that infrastructure is critical to the functioning of an organization. Disaster recovery is perceived as an integral part of business continuity and, as such, involves planning that keeps the main functions of an enterprise running even in the midst of disruptive events. In this regard, disaster recovery focuses on information technology and the technological systems that support the functioning of an organization (Essvale Corporation Limited, 2008).
Before settling on a recovery strategy, disaster recovery planners ought to refer to the organization's business continuity plan in order to be in a position to use the key metrics of recovery time objective (RTO) and recovery point objective (RPO). These metrics are then applied to the most important business processes, such as running payroll or generating orders, and ought to be mapped onto the IT infrastructure and systems that underlie those processes (Chee & Franklin, 2009). Mapping the RPO and RTO to the IT infrastructure enables the disaster recovery planners to determine the best recovery strategy for each section of a distributed model. Nevertheless, as the business enterprise sets up its IT budget, the strategists ought to ensure that the RPO and RTO fit within the available resources. This is because, as much as an organization may strive for zero downtime and zero data loss, such levels of protection are prohibitively expensive. Such protection is also likely to impair the levels of availability that are attained when rigid protection strategies are avoided.
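The mapping of RTO and RPO targets onto candidate recovery strategies can be made concrete with a small sketch. The process names, strategy names, and hour figures below are illustrative assumptions, not figures from any real plan:

```python
# Each business process carries its own recovery targets (in hours).
PROCESS_TARGETS = {
    "payroll":          {"rto_hours": 24, "rpo_hours": 4},
    "order_generation": {"rto_hours": 2,  "rpo_hours": 0.5},
}

# Each candidate strategy has a typical restore time (maps to RTO)
# and a typical window of data loss (maps to RPO).
STRATEGIES = {
    "tape_backup": {"restore_hours": 48, "data_loss_hours": 24},
    "replication": {"restore_hours": 1,  "data_loss_hours": 0.1},
}

def strategy_meets_targets(process, strategy):
    """True if the strategy satisfies both the RTO and RPO of the process."""
    target = PROCESS_TARGETS[process]
    offered = STRATEGIES[strategy]
    return (offered["restore_hours"] <= target["rto_hours"]
            and offered["data_loss_hours"] <= target["rpo_hours"])
```

Such a check makes the budget trade-off visible: replication satisfies even tight targets but costs more, while tape backup fails any process whose RTO is shorter than the restore window.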
Distributed models for disaster recovery necessitate the adoption of various data protection strategies. These include backing up to tape and sending the backups off-site on a regular basis. Cloud utility facilitates further protective strategies, including data replication to off-site locations, which removes the need to restore data after a system failure. In fact, only the systems need to be restored after a failure, and this restoration can be facilitated through storage area network (SAN) technologies. Other strategies incorporate highly available systems that keep data and systems replicated off-site, giving the organization continued access to that data and those systems (Essvale Corporation Limited, 2008).
Cloud utility supports distributed models for disaster recovery by providing stand-by systems and sites even as remote facilities remain in use. When an organization prepares its recovery systems, it ought to implement a set of precautionary measures that enable it to avert disaster in the first place. These measures include surge protectors and uninterruptible power supplies, as well as such security measures as anti-virus software. Technologies such as RAID help mirror systems and data in a manner that ensures disk protection. In this regard, the idea behind cloud utility has enabled disaster recovery planning to attain an acceptable level of preventive, detective, and corrective measures. With proper documentation and testing, cloud utility will indeed become the best strategy for facilitating distributed disaster recovery (Baroudi & Reinhold, 2009).
Cloud utility has enabled organizations to benefit from reduced costs, improved service delivery, and increased agility. These benefits are best realized when organizations manage to stitch multiple management systems together, as this provides a unified view of the environment. By offering an enhanced capacity to exchange data, the consolidated systems enable organizations to achieve the goal of cloud utility computing. This section states some of the specifications that help deliver the goals of an organization by avoiding some of the most expensive and error-prone chores that had to be performed in the past (Baroudi & Reinhold, 2009).
Use case one: Ensuring that the provisioned servers are constantly monitored.
Communication within an organization relies on the capacity to follow processes that may break down as a result of, say, lost emails, forgetfulness, or other occurrences. Problems arise when servers go unmonitored. In most instances, it is the customers who first discover that there is a problem. Such problems are embarrassing and, in fact, result in lost revenue and customer dissatisfaction (Baroudi & Reinhold, 2009).
For success to be guaranteed, servers should never be provisioned before proper monitoring has been instituted. Monitoring ensures that the organization in question identifies the problem early enough so that corrective measures can be undertaken in good time.
Such problems can be solved by linking the monitoring and provisioning systems.
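The linkage between provisioning and monitoring can be sketched as follows. The class and method names are assumptions made for illustration; the point is only the ordering: a server is registered with monitoring before it is brought live, so no server ever serves traffic unmonitored.

```python
class Monitoring:
    """Toy monitoring system that tracks which hosts are watched."""

    def __init__(self):
        self.watched = set()

    def watch(self, host):
        self.watched.add(host)

    def is_watched(self, host):
        return host in self.watched


class Provisioner:
    """Toy provisioner that refuses to go live before monitoring is set up."""

    def __init__(self, monitoring):
        self.monitoring = monitoring
        self.live = []

    def provision(self, host):
        # Register with monitoring first, then bring the server live.
        self.monitoring.watch(host)
        self.live.append(host)
```

With this coupling, the invariant "every live server is monitored" holds by construction, which is exactly what the use case demands.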
Use case two: Asset management organization.
Clients use asset management systems to keep device inventories at their proper levels. These management applications require constant synchronization with the asset management systems and ought to have the capacity to query them for the latest lists of devices in the inventory (Baroudi & Reinhold, 2009).
Clients should configure their own monitoring systems to keep track of changes in the synchronization schedules and the inventory.
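The core of such an inventory synchronization step is a comparison between the last known device list and the latest one returned by the asset management system. A minimal sketch, with purely illustrative device names:

```python
def diff_inventory(known_devices, latest_devices):
    """Compare two inventory snapshots.

    Returns (added, removed): devices that appeared since the last
    synchronization and devices that were retired from the inventory.
    """
    known, latest = set(known_devices), set(latest_devices)
    added = latest - known
    removed = known - latest
    return added, removed
```

A monitoring system would run this on each synchronization cycle and begin watching the added devices while dropping the removed ones.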
Use case three: Mirroring servers.
There should be a process that alerts support engineers when storage devices are on the verge of failure. This would enable the engineers to examine the relevant configurations in order to repair the system (Ekins et al., 2011).
Support engineers would then have the capacity to retrieve stored inventory information so as to acquire drives to replace the defective ones.
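The two halves of this use case, flagging failing drives and looking up replacements in inventory, can be sketched together. The error threshold, record fields, and model names are assumptions for illustration, not values from any real monitoring product:

```python
def drives_needing_replacement(drives, max_errors=10):
    """Flag drives whose error count suggests imminent failure.

    Each drive record is assumed to carry an "id" and an error counter
    (here called "realloc_sectors" by analogy with SMART-style metrics).
    """
    return [d["id"] for d in drives if d["realloc_sectors"] > max_errors]

def find_replacement(inventory, model):
    """Pick a spare drive of the same model from stored inventory, if any."""
    for item in inventory:
        if item["model"] == model and item["status"] == "spare":
            return item["id"]
    return None
```

An alerting process would run the first function periodically and, for each flagged drive, call the second to tell the engineer which spare to pull.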
Use case four: Addition of server capacity.
System administrators may find it necessary to increase the number of web servers in order to handle increased site traffic.
Newly installed servers ought to have the necessary software installed so that they are configured as per the blueprint.
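Blueprint-driven configuration can be illustrated with a small sketch. The blueprint contents, package names, and port numbers below are illustrative assumptions; the point is that every added server derives its configuration from the same blueprint rather than being set up by hand:

```python
# A hypothetical blueprint describing what every web server must carry.
WEB_SERVER_BLUEPRINT = {
    "packages": ["nginx", "app-runtime", "monitoring-agent"],
    "open_ports": [80, 443],
}

def configure_from_blueprint(hostname, blueprint):
    """Return the configuration record a provisioning step would apply."""
    return {
        "host": hostname,
        "installed": list(blueprint["packages"]),
        "firewall_open": list(blueprint["open_ports"]),
    }
```

Because each new server is stamped from the same blueprint, capacity can be added without configuration drift between the old servers and the new ones.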
Three major characteristics distinguish cloud utility from traditional hosting: cloud utility sells services on demand; cloud utility is elastic; and the service is fully managed by the provider. In most instances, organizations establish the level of application management required to achieve a satisfactory level of data security. As such, there ought to be a way of differentiating between sensitive and non-sensitive data so that such regulatory issues as privacy, compliance, and audits can be handled in the most convenient manner. Effective mapping of IT needs facilitates the selection of the right cloud strategy, one that enables organizations to place their workloads with ease (Fong, 2011).
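The mapping from data sensitivity to a cloud strategy can be expressed as a simple rule set. The rules below are a hedged sketch of the reasoning described above (sensitive or regulated data stays private, legacy-coupled workloads go hybrid, everything else may go public), not a prescription:

```python
def choose_cloud(workload):
    """Pick a cloud strategy from illustrative workload attributes.

    The attribute names ("sensitive", "regulated", "uses_legacy_system")
    are assumptions made for this example.
    """
    if workload.get("sensitive") or workload.get("regulated"):
        return "private"   # keep sensitive or audited data under direct control
    if workload.get("uses_legacy_system"):
        return "hybrid"    # intertwine private cloud with the legacy system
    return "public"        # non-sensitive workloads can use the cheap option
```

Classifying each workload this way before migration is one concrete form of the "effective mapping of IT needs" that the selection of a cloud strategy depends on.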
In order to give organizations an enhanced level of capacity management, service providers are expected to disclose data only as directed by the management of the concerned organization or as required by law. The provider should also give the organization prior notice of all legally compelled disclosures, and disclosures that go beyond what the law permits should not be allowed. It is necessary for the service provider to maintain robust, reliable, and internationally accredited security management systems; specifically, these systems ought to be certified against such standards as ISO/IEC 27001, as this assures customers that their data are protected. Additionally, the provider ought to employ strategies that enable third-party auditors to evaluate compliance with security principles, thereby ensuring adherence to the standardized security management systems (Fong, 2011).
Charge Back Models
Enterprises are considering the implementation of cloud computing models that can foster agility while improving time to market for new services. Attaining cloud benefits, however, necessitates enhanced data traffic and the unification of server, storage, network, and application management. In this regard, disparate varieties of management ought to be integrated in a manner that supports the different cloud approaches: private, public, and hybrid. Therefore, although public cloud solutions are cheap environments in which to conduct business, most organizations opt to drop them, especially after conducting an extensive cost-benefit analysis, as has already been mentioned above (Fingar, 2009).
There are various cloud designs, and the IT directors have the duty to seek the design that best suits their organization's demands. Public cloud utilities prove attractive in particular because staff members and customers can use them without needing to know how the services are delivered. However, these solutions have been found to be potentially unsafe, especially where organizations deal with sensitive information. Commercial/private cloud utilities enable clients to retain control of their IT services. Private clouds facilitate locating data and software with precision. They also provide an opportunity to utilize hybrid clouds, in which private cloud utilities are closely intertwined with the organization's legacy systems (Essvale Corporation Limited, 2008).
Utility Cloud as a Service
According to Jeff Bezos, the CEO of Amazon, a cloud service provider, cloud utility relieves companies of the difficulties associated with commissioning their own servers and other installations. As pointed out in earlier sections, these advantages include cost effectiveness, support for mobile demands, and cloud spotting. Strained customer budgets have made several organizations reluctant to invest in capital-intensive ventures (Ekins et al., 2011).
This paper has explicated the gains that can be achieved by sourcing the services of cloud utility providers. Cloud utility as a service has been identified as the enabler of storing software and files in remote locations instead of on hard drives and servers at an office premise. Organizations, especially multinationals, stand to gain from the flexibility offered by cloud utility. Studies have demonstrated how staff and clients gain access to data and files even as they work from remote locations. Being cheaper to install, less labor-intensive, and easier to run makes cloud utility services the best option for resolving the challenges that customers encounter as they seek services from remote locations. Moreover, cloud utility offers virtually unlimited storage, which is beneficial considering the limitations presented by the servers and hard drives of traditional hosting alternatives (Chee & Franklin, 2009).
In spite of its numerous advantages, cloud utility is associated with a number of drawbacks. For instance, although online backup and storage reduce the instances of data loss and destruction, cloud solutions prompt security concerns, especially when public clouds are used by organizations that handle sensitive data. Most organizations manage to overcome the security challenges by proceeding cautiously as they bring their services into line with the current market situation (Ekins et al., 2011). Companies weigh the risks, and, bearing in mind that even traditional servers have been hacked, the issue of security does not always deter efforts to bring organizational services into conformity with market demands. In this context, the hybrid cloud utility has been identified as the best strategy for addressing issues such as reliability concerns (Baroudi & Reinhold, 2009). Through workload diversity, economies of scale, and flexibility in power management, cloud utility solutions present the best options for various organizations, since they enable even organizations with limited resources to implement cloud solutions in a timely manner.