Source-code hub GitLab.com is in meltdown after losing production data and discovering, too late, that its backups were ineffectual.
On Tuesday evening, Pacific Time, the startup issued a sobering series of tweets. Behind the scenes, a tired sysadmin, working late at night in the Netherlands, had accidentally deleted a directory on the wrong server during a frustrating database replication process: he wiped a folder containing 300GB of live production data that was due to be replicated.
Just 4.5GB remained by the time he canceled the rm -rf command. The last potentially viable backup was taken six hours beforehand.
That Google Doc mentioned in the last tweet notes: “This incident affected the database (including issues and merge requests) but not the git repos (repositories and wikis).”
So some solace there for users because not all is lost. But the document concludes with the following:
So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.
The world doesn’t contain enough faces and palms to even begin to offer a reaction to that sentence. Or, perhaps, to summarise the mistakes the startup candidly details as follows:
- LVM snapshots are by default only taken once every 24 hours. YP happened to run one manually about 6 hours prior to the outage
- Regular backups seem to also only be taken once per 24 hours, though YP has not yet been able to figure out where they are stored. According to JN these don’t appear to be working, producing files only a few bytes in size.
- SH: It looks like pg_dump may be failing because PostgreSQL 9.2 binaries are being run instead of 9.6 binaries. This happens because omnibus only uses Pg 9.6 if data/PG_VERSION is set to 9.6, but on workers this file does not exist. As a result it defaults to 9.2, failing silently. No SQL dumps were made as a result. Fog gem may have cleaned out older backups.
- Disk snapshots in Azure are enabled for the NFS server, but not for the DB servers.
- The synchronisation process removes webhooks once it has synchronised data to staging. Unless we can pull these from a regular backup from the past 24 hours they will be lost
- The replication procedure is super fragile, prone to error, relies on a handful of random shell scripts, and is badly documented
- Our backups to S3 apparently don’t work either: the bucket is empty
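Two of the failures above — the wrong pg_dump binary running silently, and dump files only a few bytes in size sitting unnoticed for days — are exactly the kind of thing a trivial nightly sanity check would surface. A minimal sketch, assuming hypothetical paths and thresholds (this is not GitLab's actual tooling):

```shell
# check_latest_dump DIR MIN_BYTES
# Succeeds only if the newest *.sql.gz dump in DIR is at least MIN_BYTES.
# A few-bytes-long "backup" (GitLab's failure mode) trips the alert.
check_latest_dump() {
    dir=$1
    min=$2
    # Newest dump file by modification time, if any exist at all.
    latest=$(ls -t "$dir"/*.sql.gz 2>/dev/null | head -n 1)
    if [ -z "$latest" ]; then
        echo "ALERT: no dump files found in $dir" >&2
        return 1
    fi
    size=$(wc -c < "$latest")
    if [ "$size" -lt "$min" ]; then
        echo "ALERT: $latest is only $size bytes" >&2
        return 1
    fi
    echo "OK: $latest ($size bytes)"
}

# Separately: print which pg_dump major version actually runs, so a silent
# 9.2-vs-9.6 binary mismatch shows up in the cron mail instead of in an outage.
if command -v pg_dump >/dev/null; then
    pg_dump --version
fi
```

The point is not the script itself but that it actually runs and actually alerts someone: a backup job that has never been observed producing a restorable artifact is, as GitLab learned, not a backup.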
Making matters worse is the fact that GitLab last year decreed it had outgrown the cloud and would build and operate its own Ceph clusters. GitLab’s infrastructure lead Pablo Carranza said the decision to roll its own infrastructure “will make GitLab more efficient, consistent, and reliable as we will have more ownership of the entire infrastructure.”
At the time of writing, GitLab says it has no estimated restore time but is working to restore from a staging server that may be “without webhooks” but is “the only available snapshot.” That source is six hours old, so there will be some data loss.
Last year, GitLab, founded in 2014, scored US$20m of venture funding. Those investors may just be a little more ticked off than its users right now.
“On Tuesday, GitLab experienced an outage for one of its products, the online service GitLab.com,” a spokesperson for the San Francisco-based biz told The Register in an email, adding: “This outage did not affect our Enterprise customers.”
“We have been working around the clock to resume service on the affected product, and set up long-term measures to prevent this from happening again,” the spinner said. “We will continue to keep our community updated through Twitter, our blog and other channels.”
Meanwhile, the sysadmin who accidentally nuked the live data reckons “it’s best for him not to run anything with sudo any more today.”