If the company gave a noob unlimited access and can’t restore their data from backups, it’s really their fault, not the employee’s.
We had a management course in the university where this was one of the main things they highlighted:
A manager's faults are the manager's fault.
An employee's faults are the manager's fault. Without exception.

And if you think about it, that's completely true. If an employee does something stupid, it's usually because they (a) had the opportunity to do it and (b) weren't taught well enough. If the employee keeps making the same mistake, the manager is at fault for leaving them in a position where they can make it. The employee obviously isn't fit for that position.
And people wonder why the manager is paid more
Well yes, but they wonder that when the manager isn't taking responsibility and ensuring mistakes don't happen. A good manager is worth their weight in gold, but thanks to the Peter Principle most of them just end up there without being qualified or even wanting the job!
The problem is often checks and balances. A bad manager often (but not always) has a more secure job than a good employee.
I have this opinion as a manager. If I have to terminate an employee, it's my fault. It's not a hard and fast rule, and there are times when terminations happen for unpredictable reasons… but it's my job to find the candidate. It's my job to match their skills to the job. It's my job to give them a process wherein they can thrive. It's my job to remediate non-issues before they become issues. There are very few things that aren't my job that could lead to a person being fired.
I rate my team’s success higher than any other metric, even technical goals and milestones. I want to say it’s because I care about them (and I do!), but that’s not the reason. It’s my JOB to make them succeed. It’s my JOB for them to stay happy, for them to get recognition so they don’t feel marginalized. Bad managers aren’t bad because they put the company over the team. They’re bad because they put themselves over the team (and by extension, the company).
I wish my last manager realised that. As well as being a people manager they were also a team lead and a sort of project manager. Guess which of the three roles they cared least about?
I know a lot of people who are great strategists or great team leads but who cannot actually focus on the needs of the team. I’ve seen so many situations where a little intra-team conflict turned into six figures of lost revenue and jobs lost because the manager couldn’t bring himself to get involved before money was being lost. You can’t NOT fire someone who crosses too many lines, but you can absolutely be at fault for them crossing those lines when they gave months’ notice and you could’ve talked situations down or improved policies.
I was lucky. My first managing role was under someone whose philosophy was: "The manager's job is to focus on their team. If you can get 33% more productivity out of each team member, you do more good for the company than you could ever do by 'just being better' or 'just designing better' than them." And I thought 33% was crazy, until I actually learned that you can.
When’s the last time you tested backup restore and how long did it take?
“Eh, go away. I suppose it’ll work flawlessly. I’ll test it if I need it. I’ll have to look into the procedure anyways. Get off my back!”
0, thanks for asking.
Seriously though, how are you guys testing your home backups? I don't have a spare Synology NAS sitting around, or spare 16 TB drives.
The only way to test restoring a backup is to actually restore it. And for that, you do need spare hardware.
So, to answer your question, I don’t test my home backups either. I reckon pretty much no one is dedicated enough to do that.
I'm hoping that, if shit really hits the fan, I can still pick out my important files and just manually re-set up the rest of the system. So, a longer downtime in that sense.
That strategy is just absolutely not viable for companies, where downtimes are more expensive than spare hardware, and where you really can’t tell users you restored some files, they should do the rest.
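To be fair, a full bare-metal restore does need spare hardware, but a partial "can I actually get my files back" check doesn't. A minimal sketch of that idea: restore into a temp directory and diff against the live data. Toy paths here stand in for your real backup and home directory; this is an illustration, not anyone's actual setup.

```shell
#!/bin/sh
# Minimal "restore and verify" smoke test: restore the archive into a
# scratch directory and diff it against the source, no spare NAS needed.
set -eu
SRC=$(mktemp -d)                      # stands in for /home/you
echo "important stuff" > "$SRC/notes.txt"
BACKUP="$(mktemp -u).tar.gz"          # stands in for your real backup file
tar -czf "$BACKUP" -C "$SRC" .
SCRATCH=$(mktemp -d)                  # scratch space, not a spare drive
tar -xzf "$BACKUP" -C "$SCRATCH"
# Non-empty diff output would mean the backup is missing or corrupting files.
if diff -r "$SRC" "$SCRATCH" >/dev/null; then
  RESULT="restore OK"
else
  RESULT="restore FAILED"
fi
echo "$RESULT"
rm -rf "$SRC" "$SCRATCH" "$BACKUP"
```

This only proves the archive is readable and complete, not that a full system restore works, but it already catches the most common home-lab failure: a backup job that's been silently broken for months.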
Wasn’t there some saying about if you’re in a server room, the calmer the “Oops,” the worse the problem?
“Ooopppsss… 💤”, both containers of the UPS flow battery ruptured at the same time and flooded the whole server room… call me tomorrow for the planning meeting when things stop burning and firefighters have had a chance to enter the building.
If there isn’t then there should be.
Forget coffee, this will wake you up. There's nothing like dropping the wrong database schema on a lazy Monday morning.
If you can, always set the title of whatever window you’re working on to capital bold letters, preferably red, saying PRODUCTION SERVER - DON’T FUCK IT UP. This has saved my dumbass a few times when I looked up before hitting enter.
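For anyone who wants that in their shell rather than set by hand: a hypothetical sketch that picks a prompt based on hostname. The `prod-*` naming pattern is an assumption; swap in whatever your hosts are actually called.

```shell
#!/bin/sh
# Hypothetical sketch: scary prompt and window title on production hosts.
# Assumes hostnames like "prod-db01"; adjust the pattern to your naming.
prompt_for_host() {
  case "$1" in
    prod-*)
      # \033]0;...\007 sets the terminal window title.
      printf '\033]0;PRODUCTION SERVER - DO NOT FUCK IT UP\007'
      # \033[1;31m makes the prompt itself bold red.
      PS1='\[\033[1;31m\][PROD] \u@\h:\w\$\[\033[0m\] '
      ;;
    *)
      PS1='\u@\h:\w\$ '
      ;;
  esac
}
prompt_for_host "$(hostname)"
```

Dropped into `.bashrc` (or the equivalent), it gives the same "look up before hitting enter" safety net without having to remember to set the title yourself.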
I use IntelliJ for this and my prod connection is red, has warning symbols and it’s read only. I can switch on write mode if necessary, but it will prompt for it. Saves me a lot of stress.
What's the setting to make it read-only?
https://www.jetbrains.com/help/idea/configuring-database-connections.html#connection-modes
It’s pretty easy to set up but very helpful
This here is wisdom
💖
On SecureCRT I make the backgrounds of production devices a rosy tint so I have something to remind me as I’m working. If it’s a core switch, fire engine red background and neon green letters. An added benefit to this is that I want to get off the core devices as soon as possible.
Had a colleague do this to the local AD server years ago.
Thankfully they pulled the plug before the changes could propagate through the network completely but it still took 3 days to recover the data and restore the AD server.
That’s on the company for not having a proper disaster recovery plan in place.
Our DR test was literally the CIO wiping a critical server or DB, and we had to have it back up in under an hour.
To be fair to the company, it was a Friday afternoon when said person ran a script.
Yikes. At least it was only 3 days and not weeks or months of cleanup trying to rebuild shit!
You might like this little video then. Well, it's 10 minutes long, but still. It's a story detailing a dev who deleted their entire production database. Real story that actually happened. If you went through something similar, you'll definitely relate a little.
That’s not an oopsie daisy that’s the whole oopsie bouquet
F*cking Gitlab moment
You’re allowed to say “fucking” on the internet
Yeah, that was extremely funny, but I had nothing stored there at that moment. I guess some gitlab administrator lost twenty pounds in sweat that day.
This is funny, cute, and too relatable.
internally screaming