Data Disaster Recovery

Data can be restored immediately to either the original server or an alternate server. Restores can also be redirected to an alternate physical location if the original office is no longer available. Server disaster recovery means having the devices and technologies in place to restart vital information systems within a shorter period than the estimated critical window. These mechanisms, together with all the associated plans of action, are known as data disaster recovery.

Why should I have a data backup and recovery system?

By not having a data center disaster recovery plan you are, quite simply, putting your business at risk; don’t take chances with your vital data. Would you leave any other aspect of your business to chance? Of course not. Always be prepared for a natural calamity or any other disaster: leave no ifs and buts, and no stone unturned, when it comes to backing up your important data.

Care should be taken when selecting a data disaster recovery service, because your computer backup system is the core piece of every data center disaster recovery plan. That is why your backup system must possess the ability to accommodate online computer backups for the most efficient protection.

Make backing up your important programs, files, and applications your priority. For the best protection it is imperative that your computer is backed up, ideally to an online server. What will your software systems do to get your users back online automatically and quickly? This is where data disaster recovery plays its vital role.

How do I find the best data recovery services?

You can begin the work of finding the best data recovery expert by asking around. Check with your colleagues to find out who their data recovery services provider is. If you have already short-listed some experts, the next step is to check their experience in the field. Prefer a service provider with a minimum of five years’ experience in the line of work, and look into their track record for performance.

Always identify the risks to critical business information that may not be addressed by current disaster recovery plans, and optimize the value of the current backup and recovery infrastructure. New or improved processes to enhance overall disaster recovery capability are advisable. Choose a reliable online data disaster recovery service, and bank on cost-effective remote backup and data disaster recovery solutions.

Sound Shock Shuts Down Bank’s Data Center

ING Bank’s main data center in Bucharest, Romania, was severely damaged over the weekend during a fire extinguishing test. In what is a very rare but known phenomenon, it was the loud sound of inert gas being released that destroyed dozens of hard drives. The site is currently offline and the bank is relying solely on its backup data center, located a couple of miles away.

“The drill went as designed, but we had collateral damage,” ING’s spokeswoman in Romania told me, confirming the inert gas issue. Local clients were unable to use debit cards or perform online banking operations on Saturday between 1PM and 11PM because of the test. “Our team is investigating the incident,” she said.

The purpose of the drill was to see how the data center’s fire suppression system worked. Data centers typically rely on inert gas to protect the equipment in the event of a fire, as the substance does not chemically damage electronics, and the gas only slightly decreases the temperature within the data center.

The gas is stored in cylinders and is released at high velocity out of nozzles uniformly spread across the data center. According to people familiar with the system, the pressure at ING Bank’s data center was higher than expected, and produced a loud sound when rapidly expelled through the tiny holes (think of the noise a steam engine makes when venting steam).

The bank monitored the sound and it was very loud, a source familiar with the system told us. “It was as high as their equipment could monitor, over 130dB”.

Sound means vibration, and this is what damaged the hard drives. The HDD cases started to vibrate, and the vibration was transmitted to the read/write heads, causing them to go off the data tracks.

“The inert gas deployment procedure has severely and surprisingly affected several servers and our storage equipment,” ING said in a press release.

There is still very little known about how sound can cause hard drive failure. One of the first such experiments was performed by engineer Brendan Gregg in 2008, while he was working for Sun’s Fishworks team. He recorded a video in which he explains how shouting in a data center can make hard drives malfunction.

In ING Bank’s case, it was “like putting a storage system next to a [running] jet engine,” a source told me.

Researchers at IBM are also investigating data center sound-related inert gas issues. “[T]he HDD can tolerate less than 1/1,000,000 of an inch offset from the center of the data track—any more than that will halt reads and writes”, experts Brian P. Rawson and Kent C. Green wrote in a paper. “Early disk storage had much greater spacing between data tracks because they held less data, which is a likely reason why this issue was not apparent until recently.”

Siemens also published a white paper a year ago saying that its tests show that “excessive noise can have a negative impact on HDD performance”. Researchers said this negative impact may even begin at levels below 110dB.

“It can now be established with a high degree of certainty that the faults in storage systems as a result of an inert gas extinguishing systems discharge were caused by the impact of high noise levels on the hard disk drives,” according to Siemens.

The bank said it required 10 hours to restart its operations due to the magnitude and complexity of the damage. A cold start of the systems in the disaster recovery site was needed. “Moreover, to ensure full integrity of the data, we’ve made an additional copy of our database before restoring the system,” ING’s press release reads.

Over the next few weeks, every single piece of equipment will need to be assessed. ING Bank’s main data center is compromised “for the most part”, a source told us.

See our news story regarding Gas Drops and Air Shock effects here: Gas Drops and Air Shock

Source: http://motherboard.vice.com/read/a-loud-sound-just-shut-down-a-banks-data-center-for-10-hours

Four tips for disaster recovery

I bet you didn’t know that one simple command resulted in the deletion of most of the production files for Pixar’s Toy Story 2 from a studio server back in 1998. The studio had been creating daily backups of its production files; however, it wasn’t until an attempt was made to restore the lost files that anyone realised the backup solution hadn’t been working.

This event happened when backup solutions were immensely complex and difficult, if not impossible, to “test.” According to the story, the studio had seemingly been prepared for this unimaginable situation; ultimately, however, it had to rely on blind luck to recover the lost files.

An employee just happened to have a copy of the movie that she had taken home the week before, and that became the de facto backup file.

Today, cloud-based disaster recovery solutions are quickly gaining enterprise-wide adoption as organisations seek to reduce hardware costs and improve flexibility in responding to unplanned downtime events.

Disaster Recovery as a Service (DRaaS) not only allows organisations to quickly and easily recover data but, more importantly, enables them to resume operations seamlessly during a disaster. Advances in cloud-based DR solutions allow IT administrators to determine the level of protection at the server level. Mission-critical servers can be set to recover instantly, while servers with less critical data might be set to recover within a longer Recovery Time Objective (RTO).
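As a rough illustration, such per-server protection levels might be expressed as a simple recovery policy. The sketch below (in Python, with hypothetical server names and RTO values, not any particular DRaaS product’s configuration) brings up the shortest-RTO servers first:

    from datetime import timedelta

    # Hypothetical per-server recovery policy: mission-critical systems get a
    # near-instant RTO, while less critical systems tolerate a longer one.
    RECOVERY_POLICY = {
        "erp-db-01": {"tier": "mission-critical", "rto": timedelta(minutes=5)},
        "mail-01": {"tier": "important", "rto": timedelta(hours=4)},
        "archive-01": {"tier": "low-priority", "rto": timedelta(hours=24)},
    }

    def recovery_order(policy: dict) -> list[str]:
        # Bring up the shortest-RTO servers first.
        return sorted(policy, key=lambda name: policy[name]["rto"])

    print(recovery_order(RECOVERY_POLICY))  # ['erp-db-01', 'mail-01', 'archive-01']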

Despite the benefits of cloud-based DR over traditional solutions, a DR program can only be successful if it is consistently tested. Regular scheduled testing must include communications, data recovery, and application recovery. DR testing in these areas is required to conduct planned maintenance and train staff in disaster recovery procedures.

Traditionally, DR tests have been complex, disruptive and consequently unpopular. Too often, testing focuses on backing up instead of recovery. While this approach ensures you have a copy, it does little to make the data, server, or application easy to reinstate. To further complicate efforts, many of the systems used in the testing are needed to run day-to-day operations. To have those systems down during testing is unacceptable.

A hybrid-cloud approach to DR has changed the testing landscape for the better, combining public cloud and SaaS automation software to make continuity planning easier. Companies gain data backup, fail-over of servers and the ability to have a secondary data centre at a different site to allow for regional disaster recovery.

Here are four suggestions to make your DRaaS testing more efficient and productive.

Plan Ahead and Plan Often

The problem with disasters is that they are, by definition, unplanned and unexpected. If you’re not testing your DR frequently, you might find yourself hung out to dry when lightning strikes. DR tests can be done frequently because DRaaS doesn’t carry the physical infrastructure and configuration-synchronisation burden associated with traditional disaster recovery.

With an automated DRaaS solution, you don’t need to schedule IT personnel to manually check system configurations. Recent innovations make it easy to create an on-demand recovery node that you can test quickly. Unlike a typical backup-only cloud storage solution, hybrid DRaaS solutions can maintain up-to-date, ready-to-run virtual machine clones of your critical systems that can run on an appliance or in the cloud.

Test Your DRaaS in a Sandbox

With DRaaS solutions, standby computing capacity is available to recover applications in the event of a disaster. This can be easily tested without impacting your production servers or unsettling the daily business routine. A sandbox copy is created in the cloud, which is accessible only by the system administrator. These copies are created on demand, paid for only while in use, and deleted once the test is complete. This approach makes testing simple and cost-effective, and it does not disrupt business operations. You can test DR and applications every day without missing a beat, assuming you have the right DRaaS provider.

Test cases can be performed against the recovery nodes in as little as 15 minutes, depending on the application, often with no incremental costs. Applications and services are immediately available for other uses, enabling businesses to effectively adopt cloud infrastructure or speed time to production for new applications or initiatives.
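In outline, the sandbox lifecycle described above might look like the following (a minimal sketch in Python; the client class and its methods are hypothetical stand-ins, not a real DRaaS API):

    # Hypothetical sandbox DR test lifecycle: create an isolated clone, run the
    # test cases, and always delete the clone so costs stop when the test ends.
    class SandboxClient:
        """Stand-in for a DRaaS provider's API; every method here is hypothetical."""

        def create_clone(self, server: str) -> str:
            print(f"Cloning {server} into an isolated sandbox network...")
            return f"sandbox-{server}"

        def run_test_cases(self, clone_id: str) -> bool:
            print(f"Running application test cases against {clone_id}...")
            return True

        def delete_clone(self, clone_id: str) -> None:
            print(f"Deleting {clone_id}; billing for the sandbox stops here.")

    def sandbox_dr_test(client: SandboxClient, server: str) -> bool:
        clone_id = client.create_clone(server)
        try:
            return client.run_test_cases(clone_id)
        finally:
            # Delete the clone even when a test fails, mirroring the
            # "created on demand, paid for while in use, deleted after" model.
            client.delete_clone(clone_id)

    print("Test passed:", sandbox_dr_test(SandboxClient(), "erp-db-01"))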

Take Advantage of a Sliding Scale

There are financial benefits to cloud-based testing. Service providers regularly offer sliding scales for DR testing. Putting your DR solution in the cloud also means there isn’t a redundant in-house infrastructure that is sitting unused most of the time.

The cloud gives small and medium-sized businesses the same capabilities as larger organisations. With a level playing field, SMBs have greater access to DR solutions and the ability to test frequently.

Entice Regular Employee Participation

In traditional DR settings, employees may consider testing time-consuming and a distraction from their already busy schedules. However, according to a survey by market research company Enterprise Strategy Group, respondents using cloud-based DR services were four times more likely to perform weekly DR tests than those hosting their own BC/DR solution.

People learn by repetition, so just like fire drills, we have to create and practice DR drills, which are critical to a DR plan. Companies that fail to conduct regular drills shouldn’t be shocked when their employees panic during a disaster.

As you consider these steps, you might find yourself among the skeptics who think drills are unnecessary and that the chances of disaster striking are relatively slim. But according to a May 2014 study by the Aberdeen Group, the average number of unplanned downtime events in the US is 1.7 per SMB per year, with an average downtime of 6.7 hours per event. The average cost of downtime is estimated at $8,600 (£5,500) per hour; at 1.7 events of 6.7 hours each, that works out at roughly $98,000, or about $100,000 (£64,000), per year.

Unplanned downtime events, whether caused by a natural disaster, human error, or hardware failure, can have immediate and long-term negative impacts. Take steps to ensure your business can quickly and easily recover its IT infrastructure and data, and minimise the impact by being prepared rather than just relying on luck.


Flood and disaster recovery advice and support from the team at R3 Data Recovery

R3 Data Recovery assists with flood-damaged servers, computers and hard drives.

Those affected by the floods in Cumbria, Keswick, Lancaster and the surrounding areas who require data recovery should not put their data, or anyone else’s, at risk by succumbing to the temptation of offers of free data recovery.

In reality, most situations are covered by insurance, and where they are not we can assist with staged payments.

It is important to react quickly and seal servers, computers and hard drives in strong sealable plastic bags ready for collection. Be sure not to force-dry any electronic items with excessive heat, and keep the items stored in cool conditions. As well as the water damage itself, which causes electrolytic and chemical reactions, there is a serious health risk from biologically contaminated flood water.

Consult the disaster recovery team at R3 for help and fast action. Andy and the team at R3 have the experience and facilities to recover data from flood damaged storage devices.

Examples of recoveries carried out at the Security House lab in Sheffield include hurricane flooding in Turks and Caicos, St Kitts and Grand Cayman, as well as flood and landslide cases from France, Italy and the UK, including the floods in Sheffield.

Call 0800 999 3282 or 079 3282 4264. If lines are busy, leave a voicemail; duty staff receive text and email notifications. You can, of course, email us at enquiries@r3datarecovery.com or use the contact form.

Has a storm hit your cloud?

“Every cloud has a silver lining” comes to mind in this age of virtual servers and cloud storage.

R3 has grown much faster than anticipated over the last five years, and not because of hard drive failures. In reality, hard drives are not at great risk of failure from manufacturer-level design faults. It is the protection of hardware and the backing up of data that is the real reason a data recovery team like R3 at Security House is kept busy seven days per week.

The actual reason turnover has tripled in the last six years is the workload being put on servers and NAS boxes, plus the inevitable human errors.

A virtual server or drive may appear to be a single data file, but it is also a file system and an environment that can communicate with other virtual and physically connected devices. What we are seeing in the lab is quite disturbing: dozens of virtual servers a month are going offline without having been properly backed up.

Whilst this may appear to be a bonanza time, in reality it is a storm that could be developing as more businesses move to cloud-based storage without knowing the implications in the event of a failure. Consider that most RAID 5 storage arrays are recoverable even with multiple disk failures, but the damage to VMs can be catastrophic.
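For readers wondering why a failed disk in a RAID 5 array is routinely recoverable: the array stores XOR parity alongside the data, so any one missing block can be rebuilt from the rest. Below is a minimal Python sketch of the parity principle, with a hypothetical three-disk layout; this is an illustration, not R3’s recovery tooling:

    # RAID 5 parity sketch: parity is the XOR of the data blocks, so any one
    # missing block can be rebuilt from the surviving blocks.
    data_disk_1 = bytes([0x10, 0x20, 0x30])
    data_disk_2 = bytes([0x0A, 0x0B, 0x0C])
    parity_disk = bytes(a ^ b for a, b in zip(data_disk_1, data_disk_2))

    # Simulate losing data_disk_2, then rebuild it from disk 1 and the parity.
    rebuilt = bytes(a ^ p for a, p in zip(data_disk_1, parity_disk))
    assert rebuilt == data_disk_2  # XOR is self-inverse, so the block is recovered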

Cloud Data Management and Disaster Recovery Readiness

Cloud computing has become an integral part of business. New levels of virtualization, content delivery, and user access are allowing organizations to be truly agile in today’s fast-paced market. Still, the increase in cloud utilization has also greatly increased the modern organization’s dependency on this very technology, which means that outages and downtime are much more costly as well. Consider this: the average cost per minute of unplanned downtime is now £5,200, up a staggering 41 percent from £3,700 per minute in previous years, according to a recent survey from the Ponemon Institute. Our reliance on the data centre and the cloud ecosystem it supports continues to increase, and that increase is picking up pace.

With all of this in mind, new data and cloud control methodologies aim to ease WAN configurations, create better data management systems, and even improve disaster recovery capabilities. Let’s examine some new methodologies for creating a good DR plan and ways to improve data management in the cloud.

Replication and data migration methodologies
When creating a cloud and data recovery plan, administrators should plan out how their data is being accessed, backed up and, of course, replicated. There will be times when it is necessary to move data over the WAN, between cloud data centres, as part of a replication or migration policy. It’s important to work with tools that give an IT team the ability to move data over the WAN during set hours and in designated bursts, so as to prevent bandwidth saturation. Site-to-site replication on a cloud-based backup system is a feature that comes with many enterprise solutions and should be enabled where the function is needed. There are now many more interconnectivity points, using APIs, which allow for this type of cross-cloud communication. Remember, we’re not only replicating or migrating data; in many environments we are also working with snapshots of VMs and data sets. Today’s data centre and cloud ecosystem is highly virtualized.
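As a rough sketch of the “set hours and designated bursts” idea, a replication job can refuse to run outside an agreed window and cap its own bandwidth. The window, cap and paths below are hypothetical; rsync’s --bwlimit option does the throttling:

    import datetime
    import subprocess

    # Hypothetical replication window and bandwidth cap; real values would come
    # from the organisation's DR policy.
    WINDOW_START = datetime.time(22, 0)   # 10pm, after business hours
    WINDOW_END = datetime.time(5, 0)      # 5am
    BANDWIDTH_KBPS = 20_000               # roughly 20 MB/s, to spare the WAN link

    def in_replication_window(now: datetime.time) -> bool:
        # The window wraps past midnight, hence the "or" rather than "and".
        return now >= WINDOW_START or now <= WINDOW_END

    def replicate(source: str, destination: str) -> None:
        if not in_replication_window(datetime.datetime.now().time()):
            print("Outside the replication window; deferring transfer.")
            return
        # rsync's --bwlimit throttles the transfer so replication traffic
        # cannot saturate the site-to-site link.
        subprocess.run(
            ["rsync", "-a", "--bwlimit", str(BANDWIDTH_KBPS), source, destination],
            check=True,
        )

    replicate("/srv/backups/", "dr-site:/srv/backups/")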

As mentioned earlier, it’s important to work with a cloud backup system capable not only of taking and backing up VM snapshots, but also of allowing the administrator to restore from those snapshots. In some DR cases, those VMs might need to be brought up in a different cloud environment entirely. This is something to consider when creating a cloud DR and backup system, to help eliminate single points of failure.

Creating a cloud-ready DR plan
During the creation of a cloud disaster recovery plan, numerous teams will be involved to ensure the proper design of the solution. The team in charge of the cloud-based backup solution must know and understand their function should a disaster occur. There are features that can be enabled to ensure that data is relocated back to a downed site once it is restored, and that proper logs are sent out to the right people. Remember, you’re not just trying to recover a data set as quickly as possible; you’re also creating automation and intelligence around your entire cloud backup strategy. The most important point here is this: as part of the DR plan, administrators must know what actions they will take should an emergency occur. Even the best cloud backup architecture will serve little purpose if no one knows how to quickly restore the environment.
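One way to make those actions explicit is to encode the plan as an ordered runbook with a notification at each step (a minimal sketch in Python; the step names and notification mechanism are hypothetical, not any specific product’s feature):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class RunbookStep:
        name: str
        action: Callable[[], None]

    def notify(message: str) -> None:
        # Placeholder: a real plan would page on-call staff and write audit logs.
        print(f"[DR notification] {message}")

    def run_runbook(steps: list[RunbookStep]) -> None:
        for step in steps:
            notify(f"Starting step: {step.name}")
            try:
                step.action()
            except Exception as exc:
                notify(f"Step failed: {step.name} ({exc}); halting for manual review.")
                raise
            notify(f"Completed step: {step.name}")

    # Example ordering: fail over first, verify, then relocate data back once
    # the downed site is restored, as described above.
    runbook = [
        RunbookStep("Promote standby replicas at the DR site", lambda: None),
        RunbookStep("Verify application health checks", lambda: None),
        RunbookStep("Replicate data back to the restored primary site", lambda: None),
    ]
    run_runbook(runbook)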

“Future-proofing” the environment
First of all, there is really no way to completely “future-proof” an entire cloud and data centre environment. However, there are ways to come close. Remember, an intelligent cloud infrastructure is a flexible one. This means rolling out a solution that can support the environment both now and into the planned future. With cloud computing being deployed within many organizations, a major consideration for an IT group may well be the effectiveness of data replication between cloud environments. Why? Cloud is a powerful tool that can abstract physical resources and allow you to become truly agile. The cloud ecosystem is the closest thing we have to a technology that can “future-proof” a business: you can innovate on the fly and let cloud software help you respond quickly. With all of this in mind, IT managers can plan out their backup and DR strategies and purchase more resources as required.

Once a cloud management and disaster recovery plan is in place and all product features are configured, it’s important to ensure that these processes are all working well. Testing is a big part of this, and conducting occasional restores or other testing functions is crucial to the health of the actual data recovery plan. As more data is pushed through an environment, it becomes increasingly difficult to work with and manage this information. That is why proactive testing, monitoring and management are all important to keeping your cloud environment up and running well.
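One simple form of such a test restore is to recover a sample file from the backup set and verify it is byte-identical to the original by checksum (a minimal sketch in Python; the file paths are hypothetical):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Stream the file in chunks so large backups don't exhaust memory.
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_restore(original: Path, restored: Path) -> bool:
        # A restore only counts as tested if the recovered copy matches exactly.
        return sha256_of(original) == sha256_of(restored)

    # Hypothetical test: restore one sample file from last night's backup set,
    # then compare it against the live copy (or a recorded checksum manifest).
    if verify_restore(Path("/data/reports/q3.xlsx"), Path("/restore-test/q3.xlsx")):
        print("Restore test passed")
    else:
        print("Restore test FAILED - investigate the backup chain")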

Protect flood-damaged drives from further risk

The recent weather and flooding have affected many areas over the last month, and one thing we don’t expect is for the hard drives in our computers to be recoverable. Yet flood-damaged drives are recoverable, provided they are protected from further risk of damage and not dumped along with other biohazard goods.

Water ingress is generally through the pressure equalisation valve. Thankfully most drives have already stopped spinning when power is cut or the PCB shorts out.

It is important that any flood-affected storage device is not powered up, even after drying off the PCB. Many will still have water inside, and powering up can cause platter damage, making the recovery more time-consuming.

The R3 team is currently assisting with over 50 other emergency cases this week, ranging from SSDs and hard drives from PCs and MacBooks to servers, NAS units and industrial production equipment such as computer-controlled CNC machines.

Parliament seeks better disaster and data recovery

The Australian Department of Parliamentary Services has said it needs to improve its disaster recovery and data management capability following a power outage in the Parliament House data centre last week.

The issue was highlighted by the department’s acting secretary Dr Dianne Heriot in a Budget Estimates hearing on Monday.

In the Budget, the agency has been given AU$3.031 million to improve IT security in parliament, and an additional AU$7.7 million for improved network and IT security for the electorate offices of members of parliament.

Heriot said that in addition to that, the department was now looking at funding for disaster recovery management.

“Last week’s power outage in the Parliament House ICT data centre has highlighted the need for better practice data recovery capability for parliamentary IT services,” she said.

“We are looking as a matter of priority within our current budget as to what we can do to improve our disaster recovery capability.”

A spokesperson for the department told ZDNet that the power outage occurred on Friday night during scheduled maintenance.

“No data was lost during the outage. It occurred during a scheduled maintenance of the Uninterruptible Power Supply (UPS) system and all Parliamentary Computer Network (PCN) users had been advised of a planned service disruption to 3am. All core ICT services were restored within two hours,” the spokesperson said.

DPS has responsibility not only for the IT in parliament but also the IT for members and senators in their electorate offices.

We can support and advise you on creating a disaster and data recovery plan for your business. Please don’t hesitate to contact us on 0800 999 3282 for a free, no-obligation quote!