Cloud Storage - Where It Is and How Secure It Is

“Cloud” data is stored on hard drives (much the way data is usually stored). And yes, it’s probably more secure than conventionally stored data.

What makes cloud storage different? Instead of being stored directly on your own personal device (the hard drive on your laptop, for example, or your phone), cloud-based data is stored elsewhere — on servers owned by big companies, usually — and is made accessible to you via the internet.

When people think of cloud computing, they often think of internet-connected public clouds run by the likes of Amazon, Microsoft and Google. (If you use Gmail, Dropbox or Microsoft’s Office 365, you are using a cloud service.) There are also consumer clouds that, for example, hold your pictures and social media posts (think of Facebook or Twitter), or store your music and email (think of Apple or Google).

Each of these companies has cloud computing systems — computer servers and storage devices, connected with computer networking equipment — that span the globe. (Facebook’s systems can allow more than one billion people to interact with them.) Your data is in their computers, usually stored in a regional data center close to where you live.

Individual companies can also have their own clouds, called private clouds, that employees and customers access over the internet or over their own private networks.

Storage aside, computing clouds can also process information differently; they have special software that enables workloads to be shared among different machines. Your Facebook photos, for example, don’t have a permanent home on a specific chip, but may move among computers.

That is a big deal. When workloads are shared, computers can run closer to full capacity, with several programs going at the same time. It’s much more efficient than stand-alone computers running one job at a time.

For the people running the computers, it doesn’t really matter where the data or the programs are at any one moment: The stuff is running inside a “cloud” of computing capability. Ideally, if one machine fails, the operation moves over to another part of the system with little downtime.

Nowadays, computing clouds are everywhere — which is one reason people worry about their security. We hear more and more often about hackers coming over the internet and looting the data of thousands of people.

Most of those attacks hit traditional servers, though. None of the most catastrophic hacks have been on the big public clouds.

The same way that your money is probably safer mixed up with other people’s money in a bank vault than it is sitting alone in your dresser drawer, your data may actually be safer in the cloud: It’s got more protection from bad guys.

In the case of the big public clouds, the protection is the work of some of the world’s best computer scientists, hired out of places like the National Security Agency and Stanford University to think hard about security, data encryption and the latest online fraud.

Hollywood Cloud Computing

Gradually, more and more industries are seeing the benefits of cloud over traditional computing. A growing list of advantages includes dynamic scalability, security, cost-effectiveness and speed. Speed in particular has driven perhaps the biggest (and possibly most surprising) industry to make the transition to the cloud in recent years: Hollywood.

The Cloud in big-budget productions

With production deadlines and film release dates more set in stone than The Ten Commandments, any delays can cost millions of dollars. One way that production companies try to speed up the output process is by investing in the cloud for the rendering/exporting of the completed movie. Especially in high-detail, high-frame-rate productions like the animated masterpieces that Pixar develops, rendering a movie used to be a process that would take years. Each frame of a Pixar animation can take between 10 and 100 hours of CPU time to render. Multiply that by between 24 and 60 frames per second, and then again for a circa ninety-minute movie, and the theoretical render time on a single machine works out at somewhere between 1.3 and 32 million processing hours (roughly 150 to 3,700 years). No wonder we had to wait 13 years for a sequel to Finding Nemo.

This theoretical one-machine render time is cut down by creating a supercomputer cluster of machines that render simultaneously. What takes one machine 100 hours to render, 100 machines can do in one hour. That kind of thing. That’s why Hollywood is migrating to the cloud: with render farms built on the cloud, studios can connect thousands of virtual machines to scale processing times down to months instead of millennia.
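The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The figures are the ones quoted in the text; the perfect-parallelism assumption is an idealisation, since real render farms lose some time to scheduling and data movement.

```python
# Back-of-the-envelope render-time estimate using the figures quoted above.
SECONDS_PER_MINUTE = 60

def render_hours(movie_minutes, fps, cpu_hours_per_frame, machines=1):
    """Total wall-clock hours to render a movie, assuming perfect
    parallelism across `machines` (an idealised assumption)."""
    frames = movie_minutes * SECONDS_PER_MINUTE * fps
    return frames * cpu_hours_per_frame / machines

# Single machine, 90-minute film at 24 fps, 10 CPU-hours per frame:
single = render_hours(90, 24, 10)               # 1,296,000 hours (~148 years)

# The same job spread over a hypothetical 2,000-machine cloud render farm:
farm = render_hours(90, 24, 10, machines=2000)  # 648 hours (~27 days)
```

At the high end (60 fps, 100 CPU-hours per frame) the single-machine figure climbs to 32.4 million hours, which is where the millennia come from.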

One other benefit of Hollywood using the cloud is that when they’ve finished rendering using the cluster of vCPUs they can just turn it all off. If you wanted to do the same process using physical processors then when the film premiered you’d still have tens of thousands of machines whirring away in a building out in California. The functionality of the cloud allows you to spin up the hypervisor environment when you need it, and then turn it off until the sequel.


This benefit also extends to other industries. Take UCAS, for example. Once a year, on a (probably) wet and miserable day in August, 400,000 college students log on to UCAS to find out their A-Level results, and subsequently how those results affect their university applications. For that one day UCAS becomes the most visited site in the UK. At the start of August UCAS scales up its cloud infrastructure in anticipation of the impending onslaught. On results day the website and its servers have to handle traffic of 180 logins per second as students flock frantically to find out the fate of their futures. But two weeks later UCAS turns off most of its infrastructure for another year. UCAS doesn’t need to handle that amount of traffic all year round; instead it turns the capacity on for a fortnight and is done for the year.

Similarly, the DVLA website sees a spike in traffic at the beginning of March and September as people register their new cars and 15/65/16/66 number plates. The cloud allows the DVLA to scale up its infrastructure when it needs more power, and scale it back down when it doesn’t.

R3 Data Recovery has, over the last 4 years, brought together industry-leading engineers and bought several other DR companies’ equipment, donor drives and websites as part of an expansion plan to bring the very best affordable data recovery to the UK.

8 in 10 UK companies are cloud users

More than eight out of ten UK companies currently store some or all of their data in the cloud, according to a new survey from the Cloud Industry Forum (CIF).

Published on May 12th, the research found that 84 per cent of firms have now adopted one or more cloud services - up from 78 per cent in June 2014 and an increase of 75 per cent since 2010.

Furthermore, CIF forecast that this momentum will be maintained over the summer thanks to the end of support for Windows Server 2003, which 58 per cent of its respondents were still using.

Almost four-fifths (77 per cent) reported that their cloud deployment decisions were driven by infrastructure refresh cycles.

“While first-time adoption is likely to slow somewhat, penetration of cloud services within organisations ... will continue unencumbered,” said CIF chief executive Alex Hilton.

As much as this will cut costs and boost agility for many companies, increased cloud adoption may also give rise to more complex data recovery scenarios than those that occur within on-premises IT.

Complex data recovery requires true experts. Speak to us, the data recovery experts at R3 Data Recovery, for free advice and our ‘no data, no fee’ guarantee to recover from any data loss type, system or cause.

Virtual Discs and Cloud-Based Data

One of our engineers took this shot from our lab window and shared it. I commented: “the edge of this virtual disc is located in the cloud and has speed-of-light data transfer capability.”

This reminded me that 10 years ago the same car park was under 3ft to 5ft of water; my wife’s car was washed away and we were very busy. We recover flood-damaged servers and computer drives most years, but this winter has been different.
It may not be flooding, but for each cloud there does seem to be a silver lining: more data is lost each month than ever before, despite cloud backups, VMs, SANs and replication.

Data disasters range from an Excel sheet of a few MB, through databases of a few GB, to data stores of tens of terabytes holding thousands of email users’ mail stores. Large numbers of VMs on hosted servers in data centres seem to be the most vulnerable if not monitored.
A recent call involved 40TB lost from dual 1PB arrays (240 x 4TB drives each) before anyone realised, with no way to shut down or take a block-level copy.

For any VM data recovery speak to the R3 team for free and impartial advice, and get the UK’s best data recovery company restoring your data. Call us FREE on 0800 999 3282 or use the quick contact form and we will get right back in touch with you.

Has a storm hit your cloud?

“For every cloud there is a silver lining” comes to mind in this age of virtual servers and cloud storage.
R3 has grown much faster than anticipated over the last 5 years, and not because of hard drive failures. In reality hard drives are not at great risk of failure from manufacturer-level design faults; it is failures in protecting hardware and in backing up data that keep a data recovery team like R3 at Security House busy 7 days per week.
But the actual reason turnover has tripled in the last 6 years is the workload being put on servers and NAS boxes, plus the inevitable human errors.
A virtual server or drive may appear to be a single data file, but it is also a file system and an environment that can communicate with other virtual and physical connected devices. What we are seeing in the lab is quite disturbing: dozens of virtual servers a month are going offline without being properly backed up.

Whilst it may appear to be a bonanza time, in reality a storm could be developing as more businesses move to cloud-based storage without knowing the implications in the event of a failure. Consider that most RAID5 storage arrays are recoverable even with multiple disk failures, but the damage to VMs can be catastrophic.

Cloud Data Management and Disaster Recovery Readiness

Cloud computing has become an integral part of business. New levels of virtualisation, content delivery, and user access are allowing organisations to be truly agile in today’s fast-paced market. Still, the increase in cloud utilisation has also greatly increased the modern organisation’s dependency on this very technology. This means that outages and downtime are much more costly as well. Consider this: the average cost per minute of unplanned downtime is now £5,200, up a staggering 41 per cent from £3,700 per minute in previous years, according to a recent survey from the Ponemon Institute. Our reliance on the data centre and the cloud ecosystem it supports is continuing to increase, and that increase is picking up pace.
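A quick sanity check of those figures, using only the numbers quoted above:

```python
# Sanity-check the downtime cost figures quoted above.
previous = 3700   # £ per minute, previous years
current = 5200    # £ per minute, now

increase = (current - previous) / previous * 100
print(f"Increase: {increase:.0f}%")                 # -> Increase: 41%

# What a single hour-long outage costs at the current rate:
print(f"One hour of downtime: £{current * 60:,}")   # -> £312,000
```

Scaled up like this, even a short outage dwarfs the cost of planning and testing a recovery strategy in advance.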

With all of this in mind – new data and cloud control methodologies aim to ease WAN configurations, create better data management systems, and even improve disaster recovery capabilities. Let’s examine some new methodologies around creating a good DR plan and ways to improve data management in the cloud.

Replication and data migration methodologies
When creating a cloud and data recovery plan, administrators should plan out how their data is accessed, backed up and, of course, replicated. There will be times when it is necessary to move data over the WAN, between cloud data centres, as part of a replication or migration policy. It’s important to work with tools that give an IT team the ability to move data over the WAN during set hours and in defined bursts, so as to prevent bandwidth saturation. Site-to-site replication on a cloud-based backup system is a feature that comes with many enterprise solutions and should be enabled where the function is needed. There are now many more interconnectivity points using APIs that allow for this type of cross-cloud communication. Remember, we’re not only replicating or migrating data: in many environments we are also working with snapshots of VMs and data sets. Today’s data centre and cloud ecosystem is highly virtualised.
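As one illustration of the “set hours” idea, here is a minimal Python sketch of a replication window check. The window times and the job-runner shape are illustrative assumptions, not any particular vendor’s API; real tools typically also throttle bandwidth within the window.

```python
# Minimal sketch: only run WAN replication jobs inside an approved
# overnight window, so bulk transfers don't saturate bandwidth during
# business hours. Window times below are illustrative assumptions.
from datetime import datetime, time

REPLICATION_WINDOW = (time(22, 0), time(5, 0))  # 22:00 -> 05:00, overnight

def in_window(now, window=REPLICATION_WINDOW):
    """True if `now` falls inside the window (handles midnight wrap)."""
    start, end = window
    if start <= end:
        return start <= now.time() <= end
    return now.time() >= start or now.time() <= end

def maybe_replicate(jobs, now=None):
    """Run queued replication jobs only inside the approved window;
    otherwise defer them until the window opens."""
    now = now or datetime.now()
    if not in_window(now):
        return []
    return [job() for job in jobs]
```

In practice the deferral would hand jobs back to a scheduler queue rather than simply returning an empty list, but the gating logic is the same.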

As mentioned earlier, it’s important to work with a cloud backup system capable of not only taking and backing up VM snapshots – but allowing the administrator to restore from those snapshots as well. In some DR cases, those VMs might need to be brought up in a different cloud environment entirely. This is something to consider when creating a cloud DR and backup system to help eliminate single points of failure.

Creating a cloud-ready DR plan
During the cloud disaster recovery plan creation process, numerous teams will be involved to ensure the proper design of the solution. The team in charge of the cloud-based backup solution must know and understand their function should a disaster occur. There are features which can be enabled to ensure that data is relocated back to a downed site once it is restored, and that proper logs are sent out to the right people. Remember, you’re not just trying to recover a data set as quickly as possible. You’re also creating automation and intelligence around your entire cloud backup strategy. The most important concept to take note of here is this: as part of the DR plan, administrators must know what actions they will take should an emergency occur. Even the best cloud backup architecture will serve little purpose if no one knows how to quickly restore the environment.

“Future-proofing” the environment
First of all, there’s really no way to completely “future proof” an entire cloud and data centre environment. However, there are ways to come close. Remember, an intelligent cloud infrastructure is a flexible one. This means rolling out a solution which can support the environment both now and in the planned future. With cloud computing being deployed within many organisations, a major consideration for an IT group may very well be the effectiveness of data replication between cloud environments. Why? Cloud is a powerful tool which can help abstract physical resources and allow you to become truly agile. Cloud ecosystems have become the closest thing to a technology you can use to “future-proof” a business. You can innovate on the fly and let cloud software help you respond quickly. With all of this in mind, IT managers can plan out their backup and DR strategies, and purchase more resources as required.

Once a cloud management and disaster recovery plan is in place and all product features are configured, it’s important to ensure that these processes are all working well. Testing is a big part of this, and conducting occasional restores or other testing functions is crucial to the health of the actual data recovery plan. As more data is pushed through an environment, it becomes increasingly difficult to work with and manage this information. This is why proactive testing, monitoring and management are all essential to keeping your cloud environment up and running well.
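A minimal sketch of what an occasional restore test might look like, assuming file-level backups restored to a scratch directory; the paths and layout are hypothetical, and snapshot- or VM-level restores would need their own verification steps:

```python
# Minimal restore-test sketch: restore a backup to a scratch location,
# then verify every file's checksum against the live source.
# Directory layout here is an illustrative assumption.
import hashlib
from pathlib import Path

def checksums(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    root = Path(root)
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def verify_restore(source_dir, restored_dir):
    """True only if the restored tree matches the source exactly
    (same files, same contents)."""
    return checksums(source_dir) == checksums(restored_dir)
```

A test like this catches both silent corruption and files that the backup job quietly skipped, which is exactly the kind of problem that only surfaces during a real disaster if restores are never rehearsed.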

Cloud delivery for enterprise

Cloud delivery of enterprise applications is rapidly transforming the software industry, IT organisations, and the modern data centre. The as-a-service model for delivering advanced software functionality has moved into mainstream acceptance. IDC projects the cloud software market to grow to $151.6 billion by 2020 with a five-year compound annual growth rate (CAGR) of 18.6% - far exceeding the growth of traditional software.
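Working the quoted projection backwards shows what that growth rate implies for the starting market size; the calculation below is derived purely from the figures above:

```python
# What a five-year 18.6% CAGR to $151.6bn implies, working backwards
# from the IDC projection quoted above.
target = 151.6   # $bn, projected 2020 cloud software market
cagr = 0.186     # compound annual growth rate
years = 5

implied_base = target / (1 + cagr) ** years
print(f"Implied starting market: ${implied_base:.1f}bn")  # ~ $64.6bn
```

In other words, the projection has the market more than doubling over five years, which is the scale of shift behind the data centre changes described below.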

Enterprise IT organisations are embracing cloud software for good reason. SaaS versions of enterprise applications introduce new innovations faster than traditional software. Faster public networks, pervasive mobile devices, and modern development paradigms enable excellent user experience. Powerful cloud-based data centres serve up sophisticated real-time analytics functionality to users accessing applications from a variety of devices. And probably most significantly, the adoption of SaaS frees up IT resources and simplifies enterprise data centres.

The adoption of SaaS and cloud-based software is a cross-industry phenomenon. Health tech, fintech and retail tech offerings are emerging alongside enterprise applications-as-a-service (e.g. ERP, CRM, HR, accounting). SaaS versions of enterprise applications are now available to smaller businesses that never had the option to host complicated applications on premises. Looking ahead, the offerings of IoT-enabled businesses - including smart industry, connected health, and smart city oriented companies - will also be delivered as a service. These applications all share common infrastructure requirements, and this growing demand is giving rise to a new generation of data centre technologies that address the needs of as-a-service businesses.

Infrastructure needs for ‘as-a-service’ businesses

Modern SaaS and cloud-based software place new demands on data centre infrastructure. Some demands are simple evolutions of the requirements of traditional enterprise application infrastructure. Others are unique to the as-a-service model, where a single application instance supports many different organisations.

Modern SaaS businesses compete on user experience and functionality. Database platforms, server, and storage infrastructure must provide the performance needed to deliver a positive user experience. Many modern applications incorporate sophisticated real-time analytics functionality. For instance, eCommerce sites may want to make real-time recommendations as part of a personalised shopping experience. Real-time analytics place complicated workload requirements on underlying infrastructure and require unique approaches to maintaining consistently high performance.

Part of staying competitive on user experience and functionality is the ability to constantly introduce new versions of a software offering. One of the benefits of cloud software is the ability to roll out new software versions to all users extremely efficiently. Modern software development organisations have embraced this concept with Agile development methodologies and by incorporating DevOps thinking into their organisations. As new functionality is rolled out continually, the underlying data centre infrastructure must be as agile as the application. Rigid management paradigms and complexity have no place in the modern data centre.

Probably the most significant requirement for as-a-service infrastructure is the need for scalability. Modern SaaS business models are based on the principle of scale. Scalable infrastructure strategies are critical as underlying business growth drives growth in users, in devices, in transactions, and in raw data. Infrastructure solutions that cost-effectively deliver performance, simplicity, and reliability for a single enterprise often do not support the scalability needs of SaaS and cloud software.

Transformative technologies for the ‘as-a-service’ world

Server virtualisation transformed the economics of data centre management, paving the way for the as-a-service world. Beyond that, several infrastructure technologies are continuing to transform the modern data centre.

Solid state storage: Enterprise storage platforms built for solid state disk technologies are rapidly taking share from traditional hard-drive based arrays. All-flash arrays that leverage the latest solid-state technologies and incorporate efficient data reduction technologies enable a step-change in performance with costs competitive with traditional storage.

Software-defined data centres: Across the data centre stack (including servers, networking, and storage), software-defined principles enable a new class of highly flexible, highly cost-efficient infrastructure solutions. Custom hardware-based solutions are unable to innovate at the rate of software-defined solutions that can ride the commodity hardware curves.

Convergence: High-speed data centre networking technologies, including NVMe over Fabrics (NVMe-oF), will transform the notion of shared storage infrastructure. By connecting server resources to shared storage over this extremely low-latency interconnect, data centres will achieve a new level of performance and flexibility.

New database paradigms: As the server and storage layers of the data centre transform, database and application development principles will shift to gain full advantage. Traditional RDBMS technologies will be augmented or supplanted with alternative database technologies like NoSQL/NewSQL to more efficiently deliver advanced functionality like real-time analytics.

‘As-a-service’ data centre winners and losers

The rapid growth of SaaS and cloud software is not reducing the market for data centre technologies; it is driving a massive shift in who is buying these solutions and what their buying criteria are. Data centre technologies that have been successful on premises will not necessarily have the same success in the as-a-service world. Legacy solutions saddled with rigid hardware-based architectures are unlikely to keep pace with more agile, software-based solutions. Technologies with proprietary interfaces will lose to those that offer developers access through open standards. Complex technologies requiring specialist support resources will give way to simpler solutions that can be managed by generalists.

The as-a-service data centre will be architected for scale, for simplicity, and for the applications that will drive the future of digital business and digital lifestyle.