
The Basics of Cloud Computing - Report Example

Summary
"The Basics of Cloud Computing" paper contains technical solutions advantages and benefits including an action plan, which our company would under, take to resolve the technical problems which Synergy Sol Solution is experiencing. The proposal further explains the importance of log audit security…


Address: 1 Macquarie Park, Talavera Road
Website: http://www.superhero.com.au
Email: sales@superhero.com.au
Phone: +61298998888

Consultancy Proposal
Date
Name of the Institution
Name of the Student

Table of Contents
EXECUTIVE SUMMARY
PROPOSAL I
1.0 Introduction
1.1 Concept of physical database design and storage
1.2 Problem diagnosis and solutions
1.2.1 Virtual storage
1.2.2 Application bottlenecks
1.2.3 I/O response time
1.2.4 Poor storage design
1.2.5 Mapping drive types against performance
1.2.6 Matching the workload and RAID type
1.2.7 Upgrade to a larger cache
1.2.8 Allocate storage based on performance
1.3 Recommendations
PROPOSAL II (TECHNICAL)
2.0 Introduction
2.1 Problem diagnosis and solution
2.1.1 Remote backup (cloud storage)
2.1.2 Internal hard disk drives
2.1.3 Removable storage media
2.2 How the company may choose the best backup option
LIST OF ASSUMPTIONS
ACRONYMS
References

To the Chief Information Officer
Synergy Sol
P.O. Box 3, Macquarie Park, Talavera Road

Dear Sir/Madam,

Introduction letter for an IT solution for your company

Following the problems you have been facing with your database since the upgrade of your system, I hereby introduce our services to your company. Superhero Computer Solutions Limited is a private company dealing in IT and software solutions within Sydney and across Australia. Four directors established the company in 2002; initially it dealt with software development and system networking, but through continuous growth it has expanded and currently also handles database management, computer hardware and physical infrastructure, as well as the supply of computer-related accessories. We have a team of experts in various sections with varied qualifications in the field of IT. Our technical director holds a master's degree in computer science from Oxford University and previously worked with the FBI as a security analyst. The company has over 10 years of experience in IT-related matters, including security system development and software and database maintenance, among other related areas.

Following acknowledgement and receipt of the problems your company has been experiencing over the past few months, our company offers its services to assist your company in solving them. Attached are two proposals explaining the causes of the technical problems you have been facing, with brief explanations of the benefits you would derive from our experts and their advice. I believe that our proposal will be accepted.

Yours faithfully,
S. T. Smith
Company Director

EXECUTIVE SUMMARY

This proposal sets out the advantages and benefits of the proposed technical solutions, including an action plan our company would undertake to resolve the technical problems Synergy Sol is experiencing. Physical database design can be defined as the process of producing a precise description of the implementation of the database on secondary storage. It describes the file organization, base relations, the indexes used to achieve efficient access to the data, and any associated integrity constraints and security measures. Measured against this definition, the company's current setup meets none of these criteria, and its desktop computers are not effective or efficient at secondary data storage. Synergy Sol has been experiencing slowdowns with its computers.
Some of the problems may be the result of the virtual environment, which can be described as a complex set of interdependent resources. Applications that are I/O-intensive are often very sensitive to storage latency issues; when response times in the storage layer increase, I/O can become a bottleneck, and where requests queue up in the storage I/O path one would generally see an increase in latency, among other symptoms. The proposal gives practical steps on how this can be rectified and fixed in a lasting manner.

The proposal further explains the importance of audit log security. Logs can be defined as the records of events and activities happening within an organization's systems and networks. They are composed of log entries, and each entry contains information relating to a particular event that has occurred within the system or network. Logs were initially used for troubleshooting purposes, but they are now used for many activities, including system optimization and network performance improvement. The proposal also explains data storage methods and backup practices that can help fix the company's problems.

Financial Report: Cost of performing the activities

Item description   | No. of units | Cost per unit | Total cost
Problem diagnosis  | 1            | $2,000        | $2,000
Consultation fees  | 1            | $2,500        | $2,500
OS                 | 2            | $1,500        | $3,000
Cables             | 6            | $450          | $2,700
Other costs        | 1            | $1,200        | $1,200
Subtotal           |              |               | $11,400
Tax (10%)          |              |               | $1,140
Total cost         |              |               | $12,540

PROPOSAL I

1.0 Introduction

Some of the performance symptoms experienced by the company include Word documents taking a long time to open, general server problems, and the system hanging for several minutes at a time (Apple Inc. 2008). Users have had to restart and reboot their brand-new desktops. The problem may be attributable to the design of the physical database. Physical database design can be defined as the process of producing a precise description of the implementation of the database on secondary storage (Apple Inc. 2008). It describes the file organization, base relations, the indexes used to achieve efficient access to the data, and any associated integrity constraints and security measures. Measured against this definition, the company's current setup meets none of these criteria, and its desktop computers are not effective or efficient at secondary data storage.

1.1 Concept of physical database design and storage

A physical design plan should contain enough detail that, if someone else were to use it to build the database, the result would be similar to the one you built (Akao 2000). In short, the plan should be simple, straightforward and easy for any professional user to follow, as this makes it easy to troubleshoot (Akao 2000). In the case of Synergy Sol Ltd, the experts could only instruct the users to restart their computers, which is so basic that it indicates they could not follow the physical design used in building the system. It should be noted that several physical database design decisions are implicit in the technology adopted during development. Some organizations also have specific standards and an information architecture that permit only particular operating systems, DBMSs and data access languages (Hauser & Clausing 2008). This kind of architecture constrains the range of possible physical implementations. A brief illustration of such physical design decisions follows.
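To make these physical design decisions concrete, the sketch below shows the kinds of choices the proposal describes: compact column types, an integrity constraint, and an index placed on the column that the usage description says is queried most often. It is a minimal illustration using Python's built-in sqlite3 module; the table and column names are hypothetical and are not taken from Synergy Sol's actual schema.

```python
import sqlite3

# Illustrative schema only; Synergy Sol's real tables are not known.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Physical design decisions: compact column types, an integrity
# constraint, and an index on the most frequently queried column.
cur.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,   -- compact surrogate key
        customer_id INTEGER NOT NULL,      -- integrity: must be present
        status      TEXT CHECK (status IN ('open', 'shipped', 'closed')),
        created_at  TEXT NOT NULL          -- ISO-8601 date string
    )
""")

# Index chosen from the usage description: orders are mostly retrieved
# per customer, so those reads should not scan the whole table.
cur.execute("CREATE INDEX idx_order_customer ON customer_order (customer_id)")

cur.execute(
    "INSERT INTO customer_order (customer_id, status, created_at) VALUES (?, ?, ?)",
    (42, "open", "2014-05-01"),
)
conn.commit()

# The index lets this lookup avoid a full table scan.
print(cur.execute(
    "SELECT order_id, status FROM customer_order WHERE customer_id = ?", (42,)
).fetchall())
```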
There are two major design stages, the conceptual and the logical design, which traditionally were carried out independently of physical considerations (Hauser & Clausing 2008). In the case of Synergy Sol, a rigid operating system is in use, and the upgrade did not take these physical considerations into account. The basic goal of any physical database design is efficiency of data processing, and it requires information gathered in the earlier stages of data design. The basic information required includes:

i. Normalized relations and size estimates for each relation
ii. A clear definition of each attribute
iii. A description of when and where the data will be used, including how data is retrieved, entered, deleted and updated, and how frequently
iv. Expected response-time requirements, and requirements for data security, backups, recovery, integrity and retention

To help the client understand the problem they are experiencing, which stems from a physical database design whose primary aim of efficient data processing is not being met, it is important to explain some of the underlying concepts. The basic concerns include:

i. Storage allocation for data and indexes
ii. Record descriptions and the stored sizes of the actual data
iii. Record placement
iv. Data compression and encryption

Other critical decisions that must be made include:

i. The storage format to be used
ii. Physical record composition
iii. Data arrangement
iv. Indexes
v. Query optimization and performance tuning

In choosing the storage format, the database designer should select a storage format for each attribute. The DBMS provides a set of data types that can be used for the physical storage of fields in the database (Hauser & Clausing 2008). The data format is chosen to minimize storage space while maximizing data integrity. The importance of this step is, first, to minimize storage space; secondly, to improve data integrity; and thirdly, to support all required data manipulation.

1.2 Problem diagnosis and solutions

Factors that could have led to the problems the firm is experiencing with its computers include the following.

1.2.1 Virtual storage

A virtual environment can be described as a complex set of interdependent resources. It has several components, including the virtual machines, the applications running on each VM, and the data stores attached to the VM hosts, among others (Lowe & Ridgway 2001). Having several VMs that run on a single storage LUN from one data store may often lead to bottlenecks at the storage level, as seen in the company's operations. Furthermore, if VMs run I/O-intensive applications, they put a lot of pressure on the disks, and other applications associated with the same storage may experience efficiency problems as a result of resource contention, as is currently being experienced in the company (Lowe & Ridgway 2001). Another common aspect is memory ballooning and swapping, which is associated with disk performance. Whenever a VM runs out of physical memory, it typically begins to page to disk, which causes further problems: as it squeezes its data through to disk, storage I/O is greatly affected because the array cannot keep up with the requests (Hauser & Clausing 2008). It should be kept in mind that bottlenecks in storage performance can affect overall system performance; a quick check for this kind of memory pressure is sketched below.
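A quick way to check, from inside a guest machine, whether memory pressure is already spilling over into storage I/O is to compare physical memory use with swap use. The sketch below is a minimal illustration and assumes the third-party psutil package is installed; the warning thresholds are placeholders to be tuned, not recommended values.

```python
import psutil

# Illustrative thresholds only; tune them for the actual environment.
MEMORY_WARN_PERCENT = 90.0
SWAP_WARN_PERCENT = 10.0

mem = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"physical memory in use: {mem.percent:.1f}%")
print(f"swap in use:            {swap.percent:.1f}%")

# When physical memory is nearly exhausted and swap is in use, the guest
# is paging to disk, which turns memory pressure into extra storage I/O.
if mem.percent > MEMORY_WARN_PERCENT and swap.percent > SWAP_WARN_PERCENT:
    print("warning: guest is swapping; expect extra load on the storage layer")
```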
Storage issues normally slow down the VMs, their guest operating systems and the applications running in them (Hauser & Clausing 2008). They can cause VMs to experience storage timeouts and can even cause VMs to freeze and crash.

1.2.2 Application bottlenecks

Some applications are by themselves very heavy and frequently cause bottlenecks; applications that are I/O-intensive are often very sensitive to storage latency issues (Hauser & Clausing 2008). Whenever a large user base tries to access such applications, slowdowns tend to occur. In situations like that, IT experts should begin troubleshooting from the server layer all the way down to the storage drives. It is not easy to identify a storage issue just by looking at the symptom; full troubleshooting is required (Hauser & Clausing 2008). The problem is best diagnosed level by level, so that one can establish whether it is a storage matter, an application issue or a virtualization bottleneck. Simply restarting the computers, as is currently done, cannot solve a problem of this size; full troubleshooting of every level is advisable to diagnose the actual problem the system is experiencing. If it is not an application issue, then it is a storage concern; this could be caused by too few storage drives servicing the I/O (Lowe & Ridgway 2001). The problem can also result from a shortage of bandwidth at the array's front-end ports, or from an issue with the array controller. One of the best approaches to solving this problem is to find a balance between the most frequently used applications, which serve a large user base, and applications that are not so heavy on storage I/O bandwidth (Lowe & Ridgway 2001). A clear distinction helps in tuning the performance level either for good storage throughput or for IOPS optimization (Anderson & Wincoop 2003). In rectifying the situation, it should also be considered that bottlenecks can occur when multiple busy applications use the same data store, because none of the applications will then have optimum performance, which causes both storage and application bottlenecks (Anderson & Wincoop 2003).

1.2.3 I/O response time

When response times in the storage layer increase, I/O sometimes becomes a bottleneck. Where requests queue up in the storage I/O path, one would generally see an increase in latency (Anderson & Wincoop 2003). When the storage drives take too long to respond to I/O requests, this indicates the presence of a bottleneck in the storage layer. A busy storage device can also be one of the reasons for slow response times. In addition, as more workload is added on top of existing I/O bottlenecks, the response time will continue to worsen. One of the solutions that can be applied to the system is to add more disk volumes to improve performance (Lowe & Ridgway 2001). Before adding volumes, however, it is important to monitor the networks, servers and applications properly before attributing the issues to the storage system.

1.2.4 Poor storage design

One of the primary reasons storage bottlenecks persist is a poorly designed storage system. The effect of design choices such as spindle count and RAID level on deliverable performance can be estimated before any hardware is changed, as sketched below.
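A simple way to see how such design choices translate into deliverable performance is to estimate the effective IOPS of a RAID group once the write penalty of the RAID level is taken into account. The per-spindle IOPS figure, the read/write mix and the penalty values below are common rule-of-thumb numbers used purely for illustration, not measurements from Synergy Sol's hardware.

```python
# Rule-of-thumb write penalties per RAID level (writes cost extra back-end I/Os).
RAID_WRITE_PENALTY = {"RAID 0": 1, "RAID 1/10": 2, "RAID 5": 4, "RAID 6": 6}

def effective_iops(spindles, iops_per_spindle, read_fraction, raid_level):
    """Estimate host-visible IOPS for a RAID group and a given workload mix."""
    raw = spindles * iops_per_spindle
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_fraction = 1.0 - read_fraction
    # Reads cost one back-end I/O each; writes cost `penalty` back-end I/Os.
    return raw / (read_fraction + write_fraction * penalty)

# Illustrative comparison: 8 spindles of ~150 IOPS each,
# with a fairly write-heavy 60/40 read/write workload.
for level in ("RAID 1/10", "RAID 5", "RAID 6"):
    print(f"{level:9s}: ~{effective_iops(8, 150, 0.6, level):.0f} IOPS")
```

Even these rough figures show why a write-heavy workload placed on a RAID level with a high write penalty can starve an application that would be comfortable on a mirrored layout.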
It is quite evident when a storage system cannot process the quantity of work users demand of it, ultimately causing all kinds of bottlenecks and I/O performance issues (Lowe & Ridgway 2001). Apart from the design flaw pointed out above, other storage design flaws that might exist include:

- Too few spindles in a RAID group trying to take on the entire workload
- Underperforming SATA drives with low RPMs in use
- Drives and processors that are not load-balanced
- Use of a RAID level with a high write penalty
- Use of a small array cache

Any of the above issues affects the storage performance of the system. A proper understanding of how the physical storage, the underlying LUNs, the RAID groups and the disks should be aligned with each other is essential to eliminating performance bottlenecks (Lowe & Ridgway 2001).

Best practices the company can adopt to avoid I/O bottlenecks

I/O bottlenecks are among the most common issues affecting the performance of a computer system's storage, as seen in this scenario (Hauser & Clausing 2008). One needs to dig deep to discover what is troubling the system, including the storage system. Managers and IT experts should not be reactive in such circumstances but proactive. To avoid bottlenecks in the system, the following simple steps, as suggested by Hauser & Clausing (2008), should be followed.

1.2.5 Mapping drive types against performance

It is very important to consider virtual environments, where the storage configuration can change quickly. Where organizations have critical applications whose failure would be detrimental, it is better to have well-performing storage drives rather than simply upgrading them. It is important that no single critical application is assigned to one specific disk, and the organization needs anti-collocation policies for critical, I/O-heavy applications (Hauser & Clausing 2008).

1.2.6 Matching the workload and RAID type

Faster application performance can be realized when the RAID type is matched with the workload, as illustrated in the estimate above. Application availability also improves, since the RAID controller can recreate lost data from parity information.

1.2.7 Upgrade to a larger cache

A larger cache means the disks will have improved read and write operations and fewer I/O performance bottlenecks. Placing solid-state drives in front of the disks to act as a cache also greatly improves IOPS capacity (Lowe & Ridgway 2001).

1.2.8 Allocate storage based on performance

It is important to have a full understanding of the critical applications and processes that cannot tolerate downtime. Storage should then be allocated according to those applications and processes, and the underlying drives should have enough throughput and IOPS (Lowe & Ridgway 2001).

1.3 Recommendations

End-to-end storage monitoring is one of the measures the company should consider putting in place.
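A lightweight way to begin is to sample the operating system's own disk counters at a fixed interval and derive read/write IOPS and throughput from the deltas. The sketch below assumes the third-party psutil package is installed, with a five-second sampling interval chosen purely for illustration; a dedicated tool such as the one recommended next goes much further by correlating these figures across the array, the LUNs and the VMs.

```python
import time
import psutil

INTERVAL_SECONDS = 5  # illustrative sampling interval

before = psutil.disk_io_counters()
time.sleep(INTERVAL_SECONDS)
after = psutil.disk_io_counters()

# Derive rates from the counter deltas over the sampling window.
read_iops = (after.read_count - before.read_count) / INTERVAL_SECONDS
write_iops = (after.write_count - before.write_count) / INTERVAL_SECONDS
read_mb_s = (after.read_bytes - before.read_bytes) / INTERVAL_SECONDS / 1_000_000
write_mb_s = (after.write_bytes - before.write_bytes) / INTERVAL_SECONDS / 1_000_000

print(f"read:  {read_iops:6.1f} IOPS, {read_mb_s:6.2f} MB/s")
print(f"write: {write_iops:6.1f} IOPS, {write_mb_s:6.2f} MB/s")
```

Logging these figures over a working day makes it possible to tell whether the reported slowdowns coincide with I/O saturation or with something else entirely.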
Once the company has deployed a storage performance-monitoring tool, the following should always be monitored:

- IOPS, looking separately at read and write IOPS
- Storage network throughput
- Disk read and write queue depths and wait times
- Response time for each request being processed

SolarWinds Storage Manager is one of the tools I recommend for monitoring storage I/O bottlenecks in the organization. It helps in monitoring the performance and capacity of the end-to-end physical and virtual storage infrastructure, from the storage arrays through to the VMs connected to them (Lowe & Ridgway 2001). The physical design can also be changed and redesigned from scratch if the above process fails to deliver results, as this will give the system a new outlook and a new specification.

PROPOSAL II (TECHNICAL)

2.0 Introduction

Huth & Cebula (2012) define logs as the records of events and activities that happen within an organization's systems and networks. They are composed of log entries, and each entry contains information relating to a particular event that has occurred within the system or network. Logs were initially used for troubleshooting purposes, but things have changed and they are currently used for many activities, including system optimization and network performance improvement (Krogh 2009). Owing to the widespread deployment of network servers, workstations and other computing devices, coupled with ever-increasing threats against networks and systems, the number of computer logs has increased and the need for computer security logging has risen. The company's users have put critical documents at risk by letting the audit logs and the database reside on the same hardware (Huth & Cebula 2012).

2.1 Problem diagnosis and solution

A log management infrastructure consists of the hardware, network, software and media used to generate, transmit, analyze and dispose of log data (Lewis 2010). In the log generation tier, a log server receives log data, or copies of it, from the hosts. The data and its copies are transferred either in real time, in near real time, or in occasional batches, based on a schedule or on the amount of data waiting to be transferred. The audit log should not be placed on the same infrastructure as the database, as this places the organization's critical documents at risk (Lewis 2010). The log data may be stored on the log servers themselves or on a separate database server for security reasons, depending on the capacity of the database. Because the database security landscape has changed, attackers nowadays target the database itself, where records can be harvested in bulk. Perimeter security measures are essential, though not sufficient. To improve data security, it is important for the company to follow these steps (Huth & Cebula 2012):

- Scan all database applications to assess security strengths and database vulnerabilities and to build an application discovery inventory.
- Monitor the logs in real time, defending against misuse, deception and mistreatment by both internal and external users.
- Avoid keeping both the logs and the database on one infrastructure; a minimal sketch of such separation follows.
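One low-cost way to keep copies of audit log entries off the database host is to forward each entry to a separate log server as it is written. The sketch below uses Python's standard logging module with a remote syslog handler; the log server address and the example audit events are assumptions for illustration, not details of Synergy Sol's actual infrastructure.

```python
import logging
import logging.handlers

# Hypothetical log server; replace with the organization's own host.
LOG_SERVER = ("logserver.example.internal", 514)

audit_logger = logging.getLogger("db.audit")
audit_logger.setLevel(logging.INFO)

# Each audit entry is sent to a separate machine, so a compromise of the
# database host does not also compromise the audit trail.
audit_logger.addHandler(logging.handlers.SysLogHandler(address=LOG_SERVER))

# Example audit events of the kind listed in the assumptions section.
audit_logger.info("user account created: jsmith by admin")
audit_logger.info("privilege escalation: jsmith granted DBA role")
```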
As noted in Proposal I, heavy, I/O-intensive applications are very sensitive to storage latency, and when a large user base accesses them, slowdowns tend to occur (Ashford 2012). In such situations IT experts should troubleshoot level by level, from the server layer down to the storage drives, rather than simply restarting the computers, so as to establish whether the fault is a storage, application or virtualization bottleneck (Krogh 2009; Huth & Cebula 2012). Running ad hoc applications on the live database also hinders the efficient performance of the database system and should be avoided (Ashford 2012). If it is not an application issue, then it is a storage concern; this could be caused by too few storage drives servicing the I/O or by a lack of bandwidth at the array's front-end ports, and the balancing and tuning measures described in sections 1.2.2 and 1.2.3 apply here as well (Clarke 2009).

Computer users, both professional and non-professional, should always back up critical information and data on their computers, servers and even their mobile devices, to help protect against data loss or corruption (Huth & Cebula 2012). The company and its users should know that saving just one backup copy might not be enough to safeguard essential information. To increase the chances of recovering data when a file becomes corrupt, Huth & Cebula (2012) advise the company to follow the 3-2-1 rule for data backups:

3 – keep three copies of any important file, that is, one primary copy and two backups;
2 – keep the files on at least two different media types, to defend against different types of perils; and
1 – store one copy offsite, that is, outside the premises, for example at home.

A minimal sketch of this rule in practice follows the overview of cloud storage below.

Ways in which the company may back up its data

2.1.1 Remote backup (cloud storage)

The expansion of broadband internet services has made cloud storage available to many computer users (Strebinger & Traiblmaier 2006). Here the customer uses the internet to access a shared pool of computing resources, including networks, storage facilities, servers, applications and services, that are owned by cloud service providers (Strebinger & Traiblmaier 2006). One advantage of remote backups is that they can help protect company data against worst-case scenarios such as natural disasters beyond human control, as well as serious failures of local devices due to malware. Furthermore, cloud services let users access their data at any time, and the applications can be reached anywhere there is an internet connection (Clarke 2009).
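A minimal sketch of the 3-2-1 rule in practice is shown below: the primary file stays where it is, one copy goes to a second local medium, and one copy goes to an offsite destination, represented here by a mounted remote path. All of the paths are illustrative assumptions rather than the company's real drives.

```python
import shutil
from pathlib import Path

# Illustrative paths only: a primary file, a second local medium,
# and an offsite destination (e.g. a mounted cloud or remote share).
PRIMARY = Path("C:/data/accounts.db")
LOCAL_COPY = Path("D:/backups/accounts.db")    # second media type, same site
OFFSITE_COPY = Path("Z:/offsite/accounts.db")  # stored outside the premises

def backup_3_2_1(primary: Path) -> None:
    """Keep 3 copies (1 primary + 2 backups) on 2 media, 1 of them offsite."""
    for destination in (LOCAL_COPY, OFFSITE_COPY):
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(primary, destination)  # copy2 preserves timestamps

if __name__ == "__main__":
    backup_3_2_1(PRIMARY)
```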
The company can purchase one or more cloud services as needed, and the service provider will transparently manage the organization's resources and usage as it grows and shrinks. Some providers can also ensure regulatory compliance in the way sensitive data is handled, which is an advantage for the business. I therefore propose that the data should be backed up by all means (Clarke 2009). One of the demerits of remote data storage is that a cloud that depends entirely on the internet can delay communication between the user and the cloud. Furthermore, cloud storage services have no universal standards, languages or platforms, so one can become locked into a single provider; if that provider collapses, the data may be lost with it. The physical distribution of cloud data over several geographically dispersed servers can cause organizations, especially those that handle sensitive data, significant problems with jurisdiction and fair information practices (Huth & Cebula 2012). In most cases, cloud customers have little if any knowledge of their service provider's infrastructure or its reliability; this forces users to cede most of their control to the service provider, so companies lose control of security. Huth & Cebula (2012) state that cloud computing service providers are capable of encrypting data, which makes it much harder for attackers to access critical information. However, this is undermined by the fact that cloud users have very little, and sometimes no, direct control over their own data and have little knowledge of their cloud service provider's security practices (Clarke 2009). It is therefore highly advisable, before entrusting critical data to any cloud provider, to cross-check the service agreement for security practices carefully, and to look for a cloud service provider capable of encrypting the data with established encryption algorithms (Clarke 2009).

2.1.2 Internal hard disk drives

The internal hard disk is another way in which the company can choose to back up its data. Hard disk drives store data on spinning magnetic platters read by a moving read/write head, and desktop and laptop computers normally have an internal hard drive on which most data and information is stored (Norris 2006). A wide range of hard drives of different capacities is available for the company to buy. Since hard drives are rewritable, they can be used to perform rolling backups, a technique that automatically and periodically updates the backup files with the most current versions of the primary files (Norris 2006), as sketched below. One merit of keeping backup files and original files on one internal hard drive is that users can update the backup files quickly and maintain a simple file structure, all without purchasing any other storage device (Clarke 2009). However, rolling backups can slowly spread any corruption or malware in the main files to the backup files. Furthermore, because the computer constantly uses the internal hard drive, the more backup files there are, the less space the computer has to operate with, which can further reduce its performance. Lastly, the lifespan of drives varies, and installing new hard drives requires technical expertise the company may not have, which means additional cost (Norris 2006). In terms of security, backup files stored on internal hard drives are as vulnerable to damage and corruption as the primary files.
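The rolling backups mentioned above can be sketched very simply: each run writes a new time-stamped copy of the primary file and prunes the oldest copies beyond a retention limit, which is also what keeps the backups from consuming the internal drive's working space. The paths and the retention count below are illustrative assumptions.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Illustrative locations and retention count; adjust to the real setup.
PRIMARY = Path("C:/data/accounts.db")
BACKUP_DIR = Path("C:/backups")
KEEP = 5  # number of rolling copies to retain

def rolling_backup(primary: Path, backup_dir: Path, keep: int) -> None:
    """Write a new time-stamped copy and prune the oldest ones."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copy2(primary, backup_dir / f"{primary.stem}-{stamp}{primary.suffix}")

    # Prune: keep only the newest `keep` copies so backups do not slowly
    # eat the space the computer needs in order to operate.
    copies = sorted(backup_dir.glob(f"{primary.stem}-*{primary.suffix}"))
    for old in copies[:-keep]:
        old.unlink()

if __name__ == "__main__":
    rolling_backup(PRIMARY, BACKUP_DIR, KEEP)
```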
In addition, internal hard drives are only as physically secure as the computers that house them. One way to ensure data security on internal hard drives is to prevent unauthorized access to the stored data through encryption (Norris 2006).

2.1.3 Removable storage media

Storage media that can be connected to and disconnected from the computer are another, more versatile backup option the company can use instead of the internal hard drive. Physically separating the backups from the computer helps the company keep the data safe, both from online attackers and from power surges (Northcutt 2010). One advantage is that removable disks are a flexible data storage alternative, since most of them are portable and work on most computers they are connected to. They are also readily available and reusable (Strebinger & Traiblmaier 2006). The portability that makes them convenient, however, also makes them vulnerable to theft and misplacement, and rolling backups may still spread file corruption and malware from the primary files to the backups (Strebinger & Traiblmaier 2006).

2.2 How the company may choose the best backup option

Before choosing a data backup option, it is very important for the company to assess the risk each option brings (Strebinger & Traiblmaier 2006), along with its financial resources and needs. A large business should consider keeping one backup onsite and another backup offsite, either through a separate data service, on the organization's own offsite servers, or in a cloud system. I would therefore recommend either cloud data backups or external data storage (Northcutt 2010). Whichever option the company adopts, it is crucial to follow the 3-2-1 rule of backups, where:

3 – keep three copies of any important file, that is, one primary copy and two backups;
2 – keep the files on at least two different media types, to defend against different types of perils; and
1 – store one copy offsite, that is, outside the premises, for example at home.

This will prove very important in the running and management of the organization's information and data. The management should therefore consider using either of the backup options while following this golden rule.

Proposed architectural design

Figure 1: Architectural design

LIST OF ASSUMPTIONS

1. The cost of offering services will remain constant throughout the contract period.
2. Tax will remain constant.
3. There will be no inflation.
4. The consultancy will be awarded to our company.
5. The initial problem diagnosis is accurate.
6. Log: a record of the events occurring within an organization's systems and networks.
7. Database activity logging is assumed to include the following:
8. User account additions, modifications, suspensions and deletions.
9. Changes to user account rights (the authorization rights of an account).
10. Escalation of privileges.
11. Object ownership changes.

ACRONYMS

1. EPS – Events Per Second
2. IP – Internet Protocol
3. OS – Operating System

References

Huth, A. & Cebula, J. 2012. The Basics of Cloud Computing. US-CERT. http://www.us-cert.gov/reading_room/USCERT-CloudComputingHuthCebula.pdf

Krogh, P. 2009. The DAM Book: Digital Asset Management for Photographers, 2nd edition. O'Reilly Media.

Lewis, G. 2010. Basics About Cloud Computing. Software Engineering Institute, Carnegie Mellon University. http://www.sei.cmu.edu/library/abstracts/whitepapers/cloudcomputingbasics.cfm

Ashford, W. 2012. SQL injection attacks rise sharply in second quarter of 2012. ComputerWeekly.com. Retrieved August 1, 2012, from http://www.computerweekly.com/news/2240160266/SQL-injection-attacks-risesharply

Clarke, J. 2009. SQL Injection Attacks and Defense. Burlington, MA: Syngress.

Northcutt, S. 2010. Management 404 - Fundamentals of Information Security Policy. Bethesda, Maryland: The SANS Institute.

Strebinger, A. & Traiblmaier, H. 2006. "The Impact of Business to Consumer E-Commerce on Organizational Structure, Brand Architecture, IT Structure and their Interrelation." Schmalenbach Business Review, Vol. 58, No. 1, pp. 81-113.

Norris, G. 2006. Social Impacts in Product Life Cycles: Towards Life Cycle Attribute Assessment. Int J LCA 11, Special Issue 1, pp. 97-104.

Apple Inc. 2008. Security Framework Reference. Retrieved from http://developer.apple.com/library/ios/documentation/Security/Reference/SecurityFrameworkReference/SecurityFrameworkReference.pdf

Akao, Y. 2000. Quality Function Deployment. Cambridge, MA: Productivity Press.

Becker Associates Inc. http://www.becker-associates.com/thehouse.HTM and http://www.becker-associates.com/qfdwhatis.htm

Hauser, J. & Clausing, D. 2008. "The House of Quality." The Harvard Business Review, May-June, No. 3, pp. 63-73.

Lowe, A. & Ridgway, K. 2001. Quality Function Deployment. University of Sheffield. http://www.shef.ac.uk/~ibberson/qfd.html

Anderson, J. & Wincoop, E. 2003. "Gravity with Gravitas: A Solution to the Border Puzzle." The American Economic Review, Nashville.