cybersecuritynews.com/google-cloud-accidentally-deletes/
Google Cloud Accidentally Deletes $125 Billion Pension Fund’s Online Account
Give that manager who forced through the backup IT wanted for business security a raise. And also the IT too.
It's essential to have at least one backup at a different location in case of a catastrophic disaster at one of the locations.
That includes the vendor.
At least one copy of the backup must be held with a different vendor.
I agree it is essential. But given the cost-cutting measures companies take, it would not have surprised me to learn that they were out of business after the Excel sheet that holds the company together was deleted (yes, I am aware, or at least hope, that it wasn't an Excel sheet)
I had an employer who needed to save money desperately and ran everything possible on AWS spot instances. They used a lot of one type of instance for speed (simulation runs would last days).
One Monday morning, every single instance of that type had been force-terminated, despite our bids matching the reserved price.
Management demanded to know how to prevent it from happening again. They really didn't like my explanation or the CTO's. I tried the analogy that if you choose to fly standby to save money, you can't guarantee you'll actually get to fly, but they seemed convinced that they could somehow get a nearly free service with no risk.
That's why in the original post I specifically called out the manager who forced the backup to be present. Some managers know you have to have a fail-safe even if you never use it, and they should be rewarded when they have one.
Management don't care and don't understand tech. And they don't need to. It's better to define redundancy and backups as insurance policies, which is something they do understand. If they don't wanna spend money on that theft insurance because they think they're safe that's fine, but then you can't expect to receive any payout if a thief actually breaks in and steals stuff.
don’t care and don’t understand
I’ve shared the story many times on Reddit, but TL;DR: a tech executive once signed off on a physical construction material with a 5% failure rate, which in business and IT is some voodoo math for “low but not impossible” risk masquerading as science, but in materials science is 1 in 20. Well, he had 100 things built and was shocked when 5 failed.
Which to be fair, 3, 4, 6, or 7 could have failed within a normal variance, too. But that wasn’t why he was shocked.
(Bonus round, he had to be shown the memo he had signed accepting 5% risk for his 9 figure budget project, wtf)
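(For the curious, the "3, 4, 6, or 7 within normal variance" point is easy to check with the binomial distribution. A quick sketch, assuming the failures are independent at p = 0.05 across 100 builds:)

```python
from math import comb

def binom_pmf(k, n=100, p=0.05):
    """Probability of exactly k failures in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Expected failures: n * p = 5, but the neighboring counts are nearly as likely.
for k in range(3, 8):
    print(f"{k} failures: {binom_pmf(k):.3f}")

# Chance of landing anywhere in the 3-7 range:
print(f"P(3..7) = {sum(binom_pmf(k) for k in range(3, 8)):.2f}")  # about 0.75
```

So even "5% risk, 100 units" leaves roughly a 1-in-4 chance of seeing something outside 3-7 failures. The executive had no business being shocked by exactly 5.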
a tech executive once signed off on a physical construction material with a 5% failure rate,
Anyone with any knowledge of DnD or any other d20-based TTRPG cringed at reading the above, I assure you :D
which in business and IT is some voodoo math for “low but not impossible” risk masquerading as science.
I've had execs before who thought negative statistics go away if you reinterpret them hard enough. Worst people to work with.
1/20 failure rate. Well, he had 100 things built and was shocked when 5 failed
Hm, don't let that guy ever play XCOM, or go to Vegas.
which in business and IT is some voodoo math for “low but not impossible” risk masquerading as science
Ah, yes. MTBF. Math tortured beyond fact.
I bet the current management at that company will take tech seriously moving forward. Imagine facing the prospect that you lost data for over 100 billion in investment accounts. That would give anyone a heart attack they'd never forget.
Financial institutions should absolutely be required to have multiple safeguards like this.
Agreed. I don't know Australia's laws, but perhaps they require it. Either way, their IT department deserves kudos for being on top of it.
but regulation BAD!
I bet the current management at that company will take tech seriously moving forward.
The current management will. But wait until the C-suite changes over and they start looking for ways to "save money". I have seen first hand that they try to cut perceived redundancies right out of the gate.
That's why one prints out examples like these and tapes them to their office door, with the caption "this could be us".
A lot easier for the C-Suite to understand "if this goes bye-bye so does this company" lol
Backups are not an IT decision. They are a Risk Management decision. IT doesn't make risk management decisions in most companies. All an IT person can do is make their recommendations to the people who decide risk and go from there. And, obviously, get their decision in writing, print it out, and frame it, because when it happens (and it will), you want to CYA and have something for your next employer to laugh at.
As my veterinarian reminds me every time I pay her bill after bringing in another free rescue, "no such thing as free".
I had an employer who needed to save money desperately
Should have just told them "well, you were desperate to save the money." Enough apparently to risk the whole business.
I get it, these people never want to be told to their faces that they messed up. It can't ever be that they misunderstood the risks and made a bad call; there must be another explanation.
They were panicky and whiny that half a dozen people couldn't work, asking what would have happened if I hadn't been there to start up new servers.
I pointed out that the process was well documented and other people had the necessary privileges even if they weren't totally familiar with the process. Some engineers agreed that my documentation was excellent, even if they didn't fully understand it.
The reason for the management attitude became clear a week later, when I was made redundant, to the dismay of the developers and the (quite junior) desktop support guy who were given my jobs. And the build system stopped working when they failed to renew the certificates, exactly as I predicted at my exit interview, though nobody took any notice at the time.
Lol
Don't put all your eggs in one basket... and yes, you're going to have to pay rent on the extra basket.
Fun story that will be vague, For Reasons -
After a newsworthy failure that could have been avoided for the low, low cost of virtually nothing, the executives of [thing] declared they would replace all of [failed thing] with the more reliable technology that was also old as dinosaurs. There may have been a huge lawsuit involved.
But! As a certain educator (and I’m sure others) had argued, “Never let a good crisis go to waste,” the executives seized upon the opportunity to also do the long overdue “upgrade” of deploying redundancies.
Allow me to clarify/assert, as an expert: my critique of the above is that it required a crisis, and that these were already best practices. That aside...
Now we enter the fun part. The vendors, of whom there were multiple (because national is as national does), found out they were deploying the same thing in the same place. You know, literally a redundancy: one fails, the other takes over. Wellllllllllll, each vendor, being a rocket surgeon, made a deal to pay for right of use of the other vendor’s equipment.
And they charged the whole rate to us, as if they’d built a whole facility. Think of the glorious profits!!
We’d poll the equipment and it’d say Vendor A, then (test) fail over and the equipment would answer Vendor B. Which, to be clear, was exactly the same, singular set of equipment.
They got caught when one of our techs, walking 1000 ft from one of our facilities, thought it looked really weird that Vendor A and Vendor B techs were huddled together at one facility where there should have been two. It did not take long from that moment to a multi-million dollar lawsuit, which, I believe, never made it beyond the "counsel are discussing" exercise before the vendors realized building the correct number of facilities would be ideal.
And a “our tech is coming to your facility and unplugging it” got added to the failover acceptance criteria.
And my dad wonders why I have such a low opinion of MBAs.
So, you're saying the company built one server/toothbrush/whatever, went to one customer and said "we made this for you, pay us for the whole thing!", and then took the same toothbrush to the next customer and said "we made this for you, pay us for the whole thing!"?
Fucking christ.
To take a completely unrelated example, say you’re a taxi company, and you pay NotHertz and NotEnterprise to keep a spare car at every airport for you, just in case. It’s very important to you that when you need a car at the airport, it is ready to go, so if one fails to start, you’re literally hopping in the next car over. No time to futz with the oil or anything. Maybe life or death important.
And if there were only 200 airports… NotHertz buys 100 cars, NotEnterprise buys 100 cars, and NotHertz rents NotEnterprise’s 100 cars, and vice versa, so instead of 400 cars, every airport with 2, there are 200.
And yes, they charged for 400 cars.
The dirty secret is most of the civilized world is held up by Excel.
In the beginning there was Windows XP running Excel 2003
2003? My sweet summer child... I've worked with an Excel spreadsheet that should have been a SQL database that was older than me. I'm old enough to remember 9/11.
I'm old enough to remember 9/11.
I do not like this age descriptor
Lotus 1-2-3 anyone?
No, don't you dare speak those cursed words.
or at least hope
ALL HAIL THE 6 GB EXCEL FILE
That crashes Excel after 10 minutes of trying to open the file and reaching 95%.
Yep, I wrote a batch script that just repeatedly reopens the file whenever it detects it has closed. I usually run it when I arrive at work, then spend 45 minutes taking a shit (on company time, of course).
By the time I come back it's usually opened properly. Usually. Sometimes I just have to go take a second shit, y'know? One time I even had to take a third shit! My phone's battery was at like 30% and it was only 10am!
LMFAO.
That was fucking hilarious.
Less cost-cutting measures and more greed. Over the last year we've had so many vendors fully drop the on-prem deployment of their systems in favor of a monthly cloud subscription, usually doubling the cost of the system. We just changed from on-prem Microsoft to M365 and the cost nearly tripled with licensing; a few of the accounts we needed that didn't use on-prem licensing now need M365 licensing to make our stuff work (each of our licenses is around $600 per user per year).
Fun fact, the UK government lost some Covid data because it was stored in a spreadsheet and they ran out of columns. They weren't even using the latest version of Excel which would have had more column space available.
yes, I am aware, or at least hope, that it wasn't an Excel sheet
UK government has entered the chat
Financial services license holders don't get the option to cut all the corners, so to maintain a license you need to carry a lot of expenses for just such occasions.
And mandatory audits for compliance.
In some industries it’s mandated by regulation
Also, if you don't regularly (say, annually) test that you can restore from a backup, you don't have a backup.
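A restore test doesn't have to be elaborate. A minimal sketch of the idea: restore the backup into a scratch directory and compare checksums against the live copies (the directory layout and the plain `copytree` "restore step" here are invented for illustration; swap in your real restore tooling):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large files don't blow out memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(live_dir: Path, backup_dir: Path) -> list[str]:
    """Restore the backup into a scratch dir and report files that are
    missing or differ from the live copies. Empty list means a good restore."""
    problems = []
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup_dir, restored)  # stand-in for your real restore step
        for live_file in live_dir.rglob("*"):
            if not live_file.is_file():
                continue
            candidate = restored / live_file.relative_to(live_dir)
            if not candidate.is_file():
                problems.append(f"missing: {live_file}")
            elif sha256_of(candidate) != sha256_of(live_file):
                problems.append(f"differs: {live_file}")
    return problems
```

Schedule something like this once a year (or quarter) and an unrestorable backup gets caught while it's an inconvenience, not a headline.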
So many people don't understand that the 'cloud' is just someone else's server.
Well large cloud providers are supposed to maintain data parity & backup across geographic borders already.
Yes, and that's why a single cloud provider is enough to meet 2 out of 3.
However, that's still a single vendor.
To get up to 3 out of 3, you need a second vendor, to be able to recover on a catastrophic issue with the vendor.
Umm... Have you read the terms and conditions?
Yes, I'm a software engineer and formerly worked on a team within AWS. There are many storage options for different specializations based on needs. Data reliability is one of them.
And within AWS or G Cloud you can make use of multiple different storage options since these are owned by fully different organizations within the company. They sometimes share the same data center so a geographic event could disrupt both of them but a system issue like a bad rollback can't.
Generally I think most people assume catastrophic issues to be Yellowstone erupting, a solar flare that hit one half of the Earth, maybe a meteor hitting Earth.
Not someone at Google Cloud overwriting the live version and the backup version during a regular operation. I imagine Google paid a secret settlement for the two weeks and the tons of man-hours put into restoring the company's cloud structure.
I work for a software company in a field where many of our customers prefer to host their own versions of the software. It’s a data driven industry, specifically.
Despite data security being probably the most important aspect of this industry, I’m aware of customers/vendors who keep no backups whatsoever.
None. Nada. Nothing. It’s a nightmare. I couldn’t imagine living like that.
Thank you for including IT.
I can’t tell you how much money my team has saved our company and we still get treated like little dust rats that can be laid off at any moment.
IT deserves the raise always. The specific manager that made sure the company securing project actually got funding rather than looking only to the next quarter deserves it too
Based on how few IT employees a large company can get by with, and how much damage can occur from having your three IT guys be underpaid, inexperienced dweebs...
It's insane that a company would not have three well-paid, experienced IT guys.
Do the three of you work in a basement with a pale-skinned goth hiding in a closet?
Here, it's Cradle of Filth. It got me through some pretty bleak times. Try Track 4, Coffin Fodder. It sounds horrible but it's actually quite beautiful.
You've described most IT departments, yes.
When I started my last position, I did a voluntary audit of mobile device plans and found twice my pay per month in unused lines. The accounting department was issuing devices before I came on, and wasn't deactivating them when people quit. Still got fired because someone else fucked up their job and I got thrown under the bus, even though I cost them negative money to be there...
IT's lament:
"Why do we even pay you guys? You don't do anything!"
"Why do we even pay you guys? Everything is fucked up!"
imagine how much begging and groveling it took too lol
"sir, i beg you, this is part of essential infrastructure i assure you"
"idk, 1 back up seems like it would be ok, we may never need to use it"
"please sir, think of the emplo- think of the money you will save if something goes wrong"
I cannot imagine someone begging for this. I can imagine that the IT people involved kept a very good backup of the emails in which they warned the execs about this risk :)
This isn't something new. I used to be a lab manager and when we moved off-site to AWS we created an in-house backup solution. I know most major companies practice this in some form or another.
Backup to S3 AND Wasabi.
Best we can do is cutting half the team.
And also the IT too.
But IT doesn't bring in revenue! Better to just give their entire budget to the sales department.
Maintenance is a cost sink, and without it, the company is sunk.
How many 9s of guarantee does GCP provide again? Bezos and Satya just got such good advertisement for free.
Having worked for a few small wealth managers, I would be seriously surprised if any person at the board level were against having a backup. The whole industry is based on controlling (financial) risk and trying to mitigate it. A pension fund of this size definitely wants to have backups of everything. You do not want to be the one holding the biggest bag of excrement if the music stops and you do not have crucial data on hand.
Google will ask that IT operations be reduced to "streamline operations" and get into "growth" mode.
I work in IT. Even though this is on Google's shoulders, we'd get blamed, forced to work overtime (salary so it's "free" overtime for them), and then someone would get fired once we got everything back up and running.
Don't ever go into the IT field.
It never gets a raise.
Best they can do is pizza party.
Yup. Just like the unauthorized copy of Toy Story 2 that ended up saving the day. They got SUPER lucky. I sent that same info to my IT team asking if we have redundant & independent backup storage. I prefer to learn from other people's mistakes where possible.
What’s the story?
Toy Story 2 data was lost during production. Fortunately a producer or whatever on maternity leave had a full copy of the raw data at home.
Wow. That’s crazy. Thanks for sharing.
Then they laid* her off sometime later even though she saved a multi million dollar project. These hoes ain't loyal
*Edit: wrong laid lol
Galyn Susman; Disney went on to make $500,000,000 on that film.
They laid her off last year.
It’s about Toys. It’s streaming on Disney+.
What do you mean super lucky? They planned for this by having a backup outside Google Cloud. It's not even close to the Toy Story story lol...
According to UniSuper's daily emails, the banking data was not affected, only the interface used by customers. Hence, there was no danger to them or the Australian superannuation industry.
Big difference between losing the key to the front door and the key to the filing cabinet.
Or between losing the mat you stand on while opening the file cabinet and losing the file cabinet itself...
This incident has damaged both of their reputations despite service being restored within 2 weeks; what do you honestly think would have happened if the backup did not exist?
This is a silly line of thinking, a contingency was put in place specifically to stop freak issues like this from being catastrophic. That contingency worked. It's not like the data was saved by complete chance or something.
The website is just a poster on your wall, while you have your money in the bank. Someone threw away the poster. Thank God it wasn't customer data.
Guess they call it the cloud because it can just disappear.
Ctrl-Z Ctrl-Z Ctrl-Z ... Awww crap
And the other backup wasn't actually a backup. It was with a 3rd party for some evaluation.
What a frustrating article.
What exactly is the "major mistake in setup" being mentioned?
I feel like there's multiple bugs here.
Like, why is a deletion triggered immediately when a subscription is cancelled?
There needs to be a grace period.
Because, you know.
MISTAKES HAPPEN
and engineering that doesn't allow for that, is bad engineering.
Google Cloud Engineer here. They definitely don't start deletions right away. I think there are a lot of details being left out of the story.
I would certainly like to know the whole story.
Google needs to be more transparent, because it looks pretty bad right now.
Yes, from a business perspective if nothing else. CTOs, even the smart ones who are keeping redundant backups, would be looking at that statement and going "Why would I want to risk my business on that infrastructure again?"
If you're a small company/team, wouldn't you expect Google to be the one to have backups? I get that this wasn't a small customer for Google, but what are those companies and orgs with 5-50 employees/people going to do? Maintain two cloud infrastructures?
Paying for the actual level of Tech Support you need is expensive. It's not cheap to run a business properly.
I'm guessing everyone involved fucked up in some way and no one wants to say anything about how dumb they all were
Yeah, pretty much my entire business exists on Google Workspace. They need to give a fucking full story ASAP or I'm going to need to look at alternatives.
It's likely they're taking their time to ensure they can disclose the details safely, and that the bugs have been completely fixed and can't be exploited by malicious parties.
If I had to guess based on the extremely limited information available, I'd imagine something like UniSuper submitted a config change, possibly an incorrectly written one, and then the GCP server software hit some sort of bug triggering perma deletion rather than handling it gracefully
This is just my best speculation based on what they said and I wish there were more info available
The immediate perma-delete feels very "why do we even have that lever?"
The nature of software bugs is that it might not have even been an explicit lever - maybe the lever was "relocate elsewhere then delete the current copy" and then the relocation step didn't go through due to a bug but the delete part did work
You need that lever, legally. There are various laws that, quite reasonably, say that when a customer demands you delete their data, you must scrub it from your systems permanently - sometimes with short time windows (and you always want the system to do it faster than the "maximum" time window, to leave a safety buffer). And this typically includes backups.
As a google cloud engineer, you should be aware that there is a data retention period, and outside of a CATASTROPHIC bug in production, there is literally no other way to delete the data without it being extreme incompetence, malice, or a major security breach.
CONSPIRACY THEORY:
Ever since I read the press release from google I felt like this could've been a state actor that got access to some of the funds that were being held by UniSuper and to mitigate a potential run on the bank they've coordinated with Google to put this out as a press release. Normally when you see an issue like this from google they're fairly transparent about what took place but "a 1-off misconfiguration" is incredibly non-descript and actually provides no technical explanation at all, and doesn't ascribe blame to a team or an individual for this misconfiguration. While they provide assurance that it won't recur, without details about the nature of the issue, the consumer has no idea of what it would look like if it did recur.
The whole thing kinda smells fishy from an opsec standpoint.
I think you're right about their vagueness; "misconfiguration" reads as exploit. Although, my money is on a disgruntled tech.
I too, as a disgruntled tech, jumped to that conclusion, but OP above is right: from a security standpoint it makes the most sense. It would not look too good if Google admitted there was a bad actor and an exploit involved. Stock and public trust would plummet drastically overnight.
It does, doesn't it
I'd guess they overwrote or corrupted their encryption keys somehow, which is effectively the same as deletion but can happen very quickly if Google's key management code had a bug.
I would assume that accounts this size have Account Representatives of some sort?
Yeah, however they generally are in more of a reactive role rather than proactive with unforeseeable (?) issues like this. In circumstances like this they are most helpful to expedite a resolution.
Like, why is a deletion triggered immediately when a subscription is cancelled?
Why does an account of this size not have dedicated liaison personnel?
And why is any automation of account status allowed on the account without intervention?
This is a technical and social (HR) fuck up.
Under no circumstances should it have even been considered for deletion without having to go thru several people/approvals first.
They 100% do have a dedicated account team.
Everything else you said is spot-on. There's no way this should be possible, but one of Google's biggest failings over the years has been to automate as much as possible, even things that shouldn't be automated.
That is a bug of legendary status!
The sheer number of places I've been asked to evaluate where they replicated deletes without snapshots is insane. This configuration is ridiculously common because people just don't take the time to wonder, "What if it's human error on the first site and not just the server crashing?"
"We replicated the corruption" is also another common thing that happens with replication DR.
When asked if they agree to the terms and services they accidentally clicked no instead of yes
Yeah the article and the public statements are so ambiguous that it's not even clear whether the fault lies with Google cloud and not the customer.
Translation: They forgot to make sure the power cord was fully seated in the wall socket and the cord came out.
From the other articles and public statements, it sounds like Google just straight up screwed up and accidentally deleted the subscription, and because it was deleted in one region it was automatically deleted in the redundant region too.
It straight up sounds like a Google screw-up, and they are releasing a very vague statement to avoid providing any details and just promise that it will never, ever happen again.
This is going to be devastating to their cloud business if they can't provide real clarity.
I could see both sides of the story: either Google rolled out a broken configuration that their systems should normally have caught in advance, or UniSuper horribly misconfigured their cloud account and Google is essentially saving them from an enormous PR nightmare by being vague as to who caused it, or possibly a mix of both.
That never-before-seen bug could mean just about anything, like automated systems meant to detect configuration mistakes not setting off alarms or preventing an action from going through. Keep in mind that Meta/Facebook essentially nuking their entire BGP was also a "never before seen bug" in their tool meant to catch bad commands from being run.
There is no way the Google ceo would be on the record with a joint statement if it was purely the customer’s error.
The statement is quite vague, stating:
inadvertent misconfiguration during provisioning of UniSuper’s Private Cloud services ultimately resulted in the deletion of UniSuper’s Private Cloud subscription
It doesn't say who misconfigured it or how. With this wording, I could see this being fully Google's fault, or I could see it being something UniSuper misconfigured and believes that Google shouldn't allow them to configure in such a manner. Or somewhere in between.
It's also not clear if it was an automated deletion (indicating a potential software bug) or a manual deletion (indicating a process issue which stemmed from how the account was configured).
Being so vague leaves the interpretation open enough that both parties can save face a bit. This makes me suspect that either UniSuper had some role in the initial incorrect configuration which set the series of events into motion, or Google is paying a fair amount of money as a settlement, with the condition that the joint statement be worded in such a manner.
I doubt we will ever know the details, but I would love to have been a fly on the wall when they figured out what happened.
Most likely scenario: UniSuper was allowed to configure things in a way that is not normal, which caused a failure that Google could technically have prevented but never expected to see in production. Both companies likely made a series of errors that compounded on each other, and both legal teams agreed to save face together with this vague statement.
After reading another article, it sounds more like Google made it too easy for them to misconfigure things, and Google shares in the blame for basically having a metaphorical "easy button" that let them delete everything.
Also I'm not too familiar with Google's private cloud... If that's some sort of on-premise offering, I would guess that they don't have the same intense focus as they do for their pure cloud.
Yeah, I feel like the fact that neither party has taken the full blame and neither is blaming the other (despite really bad PR being at stake) makes it likely that whatever UniSuper configured should have set off alarm bells for both of them; this being a "one of a kind configuration error" that has never happened before implies their automated systems didn't catch it in time.
I don't know, reading between the lines of the joint statement, the only party "taking measures to ensure this does not happen again" is Google Cloud. Throughout all of the communication over the past couple of weeks (I'm a client), Google Cloud has taken the full brunt of the blame. Given the ramifications for GCP's reputation, I don't think they would be quite so willing to do so if it had been Unisuper's fuck-up in some way.
Yeah, I think Google realizes they made it far too easy to delete everything and should have had more protections in place. I'm also guessing they couldn't recover anything and they realized how bad it looks that a customer makes a seemingly minor mistake and loses everything and Google can't do anything to help.
Google Cloud, of all 3 big providers, easily has the worst UI; every little thing is hidden behind a different bullshit tab.
Something as simple as creating and accessing a VM instance is already a maze of twists and turns, accessing the network interface and applying rules is an even bigger headache, and then there's not confusing a dedicated network added onto the VM with the default configuration that comes standard.
AWS and Azure make it so much more clear-cut, and even Azure's PowerShell cmdlets are much more intuitive than the weird bullshit Google uses, since everything is done in-browser as well (although Google's in-browser SSH is fire).
“Google essentially saving them an enormous PR nightmare by being vague as to who caused it” there’s 0 chance of this. 0, 0.
A couple of years ago I received an email from a company we used to hold off-site copies of our backup data. They said that during the process of migrating from their own data center to Google's cloud they lost all of the data. Irretrievable and unrecoverable. They apologized. No offer of compensation of any kind.
Fortunately we had other copies of the data so it wasn't a big deal but I told the company that if they didn't refund every dime we had paid them that I would organize a class action lawsuit (data from dozens of other customers was also lost).
As soon as I got the refund I canceled the service.
Last month the same company announced that they were getting out of the business of holding backup data and said all data would be deleted within a couple of months. Intentionally this time.
Do you know where your cloud based backup provider stores their data?
In the cloud??
/s
There is no cloud, it's just somebody else's computer.
UniSuper have made it clear, though, that their data was not stored on Google Cloud; the cloud was used only to provide the web interface and the interface for phone apps.
In one or more datacenters depending on how redundant you made your backups.
Which won’t matter if your entire account is deleted.
So after all the google layoffs, some new kid joins and earns the "In my first week at Google, I managed to delete Production and Backup and all I got was this lousy T-Shirt."
Team-member-1 strikes again!
What was that? "'rm -rf /" you say? Okie dok....
rmdir should work on directories containing content so people are less tempted to use rm -rf
Honestly, this pisses me off so much. What's the point of "rmdir" if I can only use it on empty folders? Who is creating all of these folders and then doing nothing with them!
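The same split exists in most standard libraries, for what it's worth. In Python, for instance, the polite call refuses a non-empty directory and you have to reach for the recursive one deliberately; a quick demo:

```python
import os
import shutil
import tempfile
from pathlib import Path

# Set up a directory that still contains a file.
d = Path(tempfile.mkdtemp())
(d / "important.txt").write_text("data")

# The safe call: refuses to remove a directory that still has contents.
try:
    os.rmdir(d)
except OSError as e:
    print(f"rmdir refused: {e}")

# The rm -rf equivalent: deletes the directory and everything inside it,
# no questions asked.
shutil.rmtree(d)
print(d.exists())  # False
```

Making the safe call strict and the destructive one a separate, explicit name is the whole point: you can't recurse by accident.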
I like to live life on the edge.
They probably laid off the one dude who could have avoided this and the dude who fired him is trying to avoid being noticed.
Remember: Not your server, not your data. The only thing that saved them here was an offline backup on machines they (presumably) controlled. Never rely on 100% cloud solutions unless you're okay with them disappearing.
More people need to remember that keeping important stuff in “the cloud” is just a shorthand way of saying “I keep all my most important things on someone else’s computer.”
But what other alternative do "most people" have? Like what, they're all gonna be able to afford to buy, maintain, and upkeep their own servers? In what world? The "cloud" is still way safer and a better alternative than lugging around a hard disk or USB stick all the time. How often do mistakes like this really happen vs. you losing your USB stick or whatever?
Cloud services are easily more reliable than owning your own servers and it’s not even remotely close.
The real take is that you should always have your data in multiple places whether it be multiple cloud services or multiple colo services.
I have been doing colo since the 90s and cloud since 2008. Ain’t no way it’s remotely possible to meet cloud levels of reliability anymore. I haven’t had a single data loss in the cloud. Colo I have to do manual recoveries at least once every 2 years, no matter how redundant the systems.
They had backups in place "with another provider."
It wasn't an actual backup. It was data they had with a 3rd party for evaluation purposes, and they were able to use that as a backup.
My fear is that one day my Gmail account will be deleted for some reason. Then I'm screwed.
Is there a way to back up my Gmail locally or to the cloud?
Yea Google Takeout.
In addition to Takeout, you can run Thunderbird (or some other email app) and retrieve your Gmail to your local PC and back up that data store. I actually do both.
And use your own domain, so in case Google decides you're done for, you can just use a different email server.
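What Thunderbird is doing under the hood is just an IMAP sync, and you can sketch the same idea in a few lines of stdlib Python with `imaplib` and `mailbox`. This is a hedged sketch, not a Takeout replacement: `imap.gmail.com` and the `[Gmail]/All Mail` folder are Gmail's standard IMAP endpoints, you need IMAP enabled plus an app password, and the credentials below are placeholders you must supply.

```python
import imaplib
import mailbox

def backup_gmail(user: str, app_password: str, out_path: str) -> int:
    """Fetch every message from Gmail's All Mail folder into a local mbox file."""
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, app_password)
    imap.select('"[Gmail]/All Mail"', readonly=True)  # read-only: never mutates the account

    _, data = imap.search(None, "ALL")
    box = mailbox.mbox(out_path)
    saved = 0
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        box.add(msg_data[0][1])  # raw RFC 822 bytes of one message
        saved += 1

    box.flush()
    imap.logout()
    return saved

if __name__ == "__main__":
    # Placeholders; supply your own address and app password.
    # backup_gmail("you@gmail.com", "app-password", "gmail-backup.mbox")
    pass
```

The resulting mbox file can be opened by Thunderbird or re-imported later, which is the point: a copy that survives even if the account itself goes away.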
As of now, Google Cloud knows what caused this problem and has taken steps to prevent it from happening again.
Someone got fired for sure.
Sundar be like: “Gemini, tell me about what caused major fuck up in Google cloud”
10 minutes later: “Gemini, tell me why the entire cloud team is missing”
Google engineer here: we have an open and blameless postmortem culture so that we all learn from mistakes so as to not repeat them.
Imagine how this would have played out if they didn't have that 2nd backup. They'd have had to reconstruct account balances from whatever data they could scrape together from printers, workstations, and emails.
At this point if I see the google brand on something, it makes me less likely to go for it compared to a no-name
It may get you to google the no-name brand first though.
Oh boy, just what I needed: 1003888492817 pages of AI-generated results interspersed between paid ad results.
Google search is so bad nowadays. Duck duck go is just as good; fewer ads.
I've been avoiding Google Search for a few years now, and the gap in usefulness between it and DDG has been getting narrower, but not because the latter has gotten any better...
Google in the 2000s: I want all things Google can offer
Google in the 2010s: All these integrations are great but kinda scary to have Google owning all my data
Google in the 2020s: I can't wait to degoogle everything
Step 1: Be new and innovate on stagnant industry
Step 2: Grow to be a giant corporation with global reach
Step 3: Enshittify due to contempt <----- Google is here
Step 4: Go bankrupt
Step 3.5: Poach Oracle employees and execs to be more enterprise-friendly (this fails)
Nah that's step 2.5 and Google already did that.
Tommy Kurian has entered the chat
Use the 3-2-1 backup method.
3 copies of all your data, on 2 different mediums, with 1 offsite.
This is absolutely wild. As an Australian, I'm shocked that I hadn't heard about this before now.
It has been in the news for at least a week. Both the ABC and the Guardian have had articles covering this.
I'm with unisuper, been getting daily emails with updates for a while now, but outside that, I've seen zero coverage.
Yep. They really only started the daily emails when people really kicked up
"We have instituted changes to ensure that this will never happen again."
Changes: an "Are you sure?" confirmation dialog on delete requests.
It's not the cloud, it's someone else's computer!
Can't Locate Our User's Data
Way to go chuckleheads
bro just click ctrl + Z
Google is the ultimate example of enshittification
Why was it possible to delete that much data without MANY checks and balances? When you have customers that big, why would you even allow auto-delete? It should be a careful, manual human process to approve deletion of data.
I’m pretty surprised the data was actually deleted and not just held in cold storage where it could be revived.
‘Hey Bob, did you purge the Google Sheets for this Pension fund? You did a backup before the purge, right?’
Your daily reminder that “The Cloud” is just someone else’s computer.
Offline backups come up clutch yet again. Always have an offline backup.
I’d laugh if it wasn’t my super company that holds a couple of million dollars of super for me.
Damn dude I only have like a thousand dollars of super
Well, UniSuper was originally the super fund for university staff, and the unis have always paid 17% of your salary into super; do that for like 35 years and it will add up :-)
Good for them on having 2 alternate backups.
I’m not surprised. My Google Drive spontaneously deleted roughly 1.3 TB and Google wouldn’t do sh!t about it. Forget the last twenty years, Google is a garbage company now.
It's almost like allowing a few select corporations to monopolize their sectors is a bad idea. Whod'a thunk.
Isn’t there a long ass window after an acct is closed before deletion? Like 90 days or something? At least that’s how it is at AWS.
Google got so tired of shutting down all their own products, they decided to start shutting down products from other companies.