Honestly, their comment reads like copy pasta. That first paragraph is chef’s kiss.
I initially thought they weren’t being sincere, something something Poe’s law…
(' v ')/
The main difference is that 1Password requires two pieces of information for decrypting your passwords while Bitwarden requires only one.
Requiring an additional secret in the form of a decryption key has both upsides and downsides.
So whether you want both or only password protection is a trade-off between the additional protection the key offers and the increased complexity of adequately securing it.
Your proposed scenarios of the master password being brute-forced, or of the servers being hacked and your master password acquired, are misleading when it comes to Bitwarden.
Brute-forcing the master password is not feasible unless it is weak (too short, common, or part of a breach). By default, Bitwarden protects against brute-force attacks on the password itself by using PBKDF2 with 600k iterations. Brute-forcing AES-256 directly (to get into the vault without finding the master password) is not possible according to current knowledge.
Your master password cannot be “acquired” if the Bitwarden servers are hacked.
Their servers store the symmetric key used to decrypt your vault (in encrypted form) as well as the vault itself (where all your passwords are stored), AES-256-encrypted using that symmetric key.
The symmetric key is itself AES-256-encrypted using your master password (this is a simplification) before being sent to their servers.
Neither your master password nor the symmetric key used to decrypt your password vault is recoverable from Bitwarden’s servers by anyone who doesn’t know your master password, and by extension, neither are the passwords stored in your encrypted vault.
See https://bitwarden.com/help/bitwarden-security-white-paper/#overview-of-the-master-password-hashing-key-derivation-and-encryption-process for details.
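To make the scheme concrete, here’s a minimal sketch (in TypeScript, using Node’s built-in crypto module). It illustrates the general idea only and is not Bitwarden’s actual code: the salt, cipher mode, and key stretching in the real client differ (see the white paper).

```typescript
import { pbkdf2Sync, randomBytes, createCipheriv } from "crypto";

// Simplified sketch of the scheme described above; NOT Bitwarden's real code.
const masterPassword = "correct horse battery staple";
const salt = "user@example.com"; // illustrative; the real client uses account data

// 1. Derive a key from the master password (PBKDF2, 600k iterations).
const masterKey = pbkdf2Sync(masterPassword, salt, 600_000, 32, "sha256");

// 2. A random 256-bit symmetric key is what actually encrypts the vault.
const symmetricKey = randomBytes(32);

// 3. That symmetric key is itself encrypted with the password-derived key
//    before it is ever sent to the server.
const iv = randomBytes(16);
const cipher = createCipheriv("aes-256-cbc", masterKey, iv);
const protectedKey = Buffer.concat([cipher.update(symmetricKey), cipher.final()]);

// The server only ever stores protectedKey (plus the AES-encrypted vault),
// so without the master password, nothing can be decrypted.
```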
That number is like 20 years old.
Today it’s around 60 billion.
This works as a general guideline, but sometimes you aren’t able to write the code in a way that truly self-documents.
If you come back to a function after a month and need half an hour to understand it, you should probably add some comments explaining what was done and why it was done that way (in addition to considering if you should perhaps rewrite it entirely).
If your code is going to be used by third parties, you almost always need more documentation than the raw code.
Yes, documentation can become obsolete. So constrain its use to cases where it actually adds clarity, and commit to keeping it up to date as the code evolves.
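As a made-up illustration, a comment can preserve the “why” that the code alone can’t express:

```typescript
// Hypothetical example: the code states WHAT happens, the comment keeps the WHY.
function retryDelayMs(attempt: number): number {
  // Exponential backoff, capped at 30s: in this (made-up) system the upstream
  // gateway drops connections that idle longer than that, so waiting longer
  // would only guarantee an extra reconnect.
  return Math.min(100 * 2 ** attempt, 30_000);
}
```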
Extra steps that guarantee you don’t accidentally treat an integer as if it were a string or an array and get a runtime exception.
With generics, the compiler can prove that the thing you’re passing to that function is actually something the function can use.
Really, what you’re doing, if you’re honest, is the compiler’s work: hmm, inside this function I access this field on this parameter. Can I pass an argument of such-and-such type here? Lemme check if it has that field. Forgot to check, or were mistaken? Runtime error! If you’re lucky, you caught it before production.
Not to mention that types communicate intent. It’s no fun trying to figure out how to use a library that has bad/missing documentation. But it’s a hell of a lot easier if you don’t need to guess what type of arguments its functions can handle.
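A tiny TypeScript sketch of that last point (all names made up):

```typescript
interface Named { name: string; }

// The constraint on T is what lets the compiler prove `.name` exists.
function longestName<T extends Named>(items: T[]): T | undefined {
  let best: T | undefined;
  for (const item of items) {
    // Every `item` is already proven to have a string `name` field.
    if (best === undefined || item.name.length > best.name.length) best = item;
  }
  return best;
}

longestName([{ name: "Ada" }, { name: "Grace" }]); // fine
// longestName([1, 2, 3]); // rejected at compile time, not a crash in production
```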
Y
The point is that you’re not fixing the problem, you’re just masking it (and one could even argue enabling it).
In the same way that adding another four-lane highway doesn’t fix traffic long term (increased throughput attracts more people, more people bring more cars, and the congestion returns), simply adding more RAM is only a temporary solution.
Developers use people having access to more RAM as justification to produce more and more bloated software. In 5 years you’ll likely struggle even with 32GiB, because everything will use more.
That’s not sustainable, and it’s not necessary.
I can’t for the life of me figure out how your proposed method helps in the described scenario.
Maybe I misunderstood it; can you elaborate?
Yup.
Spaces? Tabs? Don’t care, works regardless.
Copied some code from somewhere else? No problem, 9/10 times it just works. Bonus: a smart IDE will let you quick-format the entire code to whatever style you configured at the click of a button even if it was a complete mess to begin with, as long as all the curly braces are correct.
Also, in any decent IDE you will very rarely need to actually count curly braces: it finds the matching pair for you and even lets you easily navigate between them.
In contrast, the inconsistent way whitespace is handled across applications makes interacting with code outside your own files incredibly finicky when your language cares so much about layout.
There’s an argument to be made for the simplicity of python-style indentation and for its aesthetic merits, but IMO that’s outweighed by the practical inconvenience it brings.
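To illustrate with a made-up snippet: the indentation below is a mess, yet it still compiles in a brace-delimited language, and an IDE can reformat it in one keystroke. The Python equivalent would need the layout fixed by hand before it even runs.

```typescript
// Mangled indentation, unambiguous structure thanks to the braces.
function sum(xs: number[]): number {
let total = 0;
      for (const x of xs) {
  total += x; }
    return total;
}
```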
As always, the dose makes the poison.
A common scenario is people picking the wrong species and then not just eating a small bite, but cooking an entire meal and eating that.
A small bite may not kill you, but just one mushroom (50g) can be enough to do it.
There are some toxic mfs out there and they can be mistaken for edible lookalikes by inexperienced foragers.
I’d say generally speaking it’s more likely that issues stem from extensions than from Firefox itself, so maybe try looking into that.
Btw, tab reordering is only missing for private tabs on the latest Firefox on Android.
Unfortunately there is still no acceleration when reordering, so the UX is not great when you have many tabs.
Many of the programming languages that are regularly the butt of everyone’s jokes don’t just allow you to use them badly, they make it easy to do so, sometimes easier than using them well.
This is not a good thing.
A good language should do the opposite: make the right way to do things also the easy way.
The reality is that the average software developer barely knows best practices, much less how to apply them effectively.
This fact, combined with languages that make it easy to shoot yourself in the foot, leads to lots of bad code in the wild.
We should attack this problem from both directions: improve developers but also improve languages.
Sometimes that means replacing them with new languages that are designed on top of years of knowledge that we didn’t have when these old languages were being designed.
There seems to be a certain cynicism (especially from some more senior developers) about new languages.
I’ve heard stuff like: every other day a new programming language is invented, it’s all just a fad, they add nothing new, all the existing languages could already do all the things the new ones can, etc.
To me this misses the point. New languages have the advantage of years of knowledge accrued in the industry along with general technological advancements, allowing them to be safer, more ergonomic, and more efficient.
Sure, we can also improve existing languages (and should, and do), but oftentimes, for one reason or another (backwards compatibility, implementation effort, the wider technological ecosystem, dogma, politics, etc.), old quirks and deficiencies stay.
Even for experienced developers who know how to use their language of choice well, there can be unnecessary cognitive burden caused by poor language design. The more your language helps you automatically avoid mistakes, the more you can focus on actually developing software.
We should embrace new languages when they lead to more good code and less bad code.
Oh neat, a real whoosh in the wild, on Lemmy!
On a more serious note, vim is one of the most initially unintuitive pieces of commonly used software I’ve encountered.
Sure, if you put in a little time and learn it, it’s not rocket science. But that seems like a weird standard for an essential tool used for one of the most common computing tasks of today.
In response to your initial question, obviously it’s a meme. But like most good memes, it’s born out of a common human experience. What do you think is the most common reaction when someone is thrown into vim for the first time? My guess is “what’s this?” or something similar, followed very soon by “how do I exit this?”. And the answer is, by modern computer users’ standards, quite arcane.
IF you are somewhat familiar with the Linux terminal, you’ll try CTRL+C and IF you’re paying close attention you will notice that vim is giving you a hint. But if it’s your first time interacting with vim, chances are at least one of those conditions is not met. So now you’re stuck. And after an optional small moment of panic/disorientation, you google “how to exit vim” (provided you were at least lucky enough to notice/remember what program you’re in) => a meme is born.
Exiting vim is almost like a rite of passage for fresh Linux enjoyers. It’s not a hard task, but it can seem daunting at first encounter, which is humorous given that quitting a program is normally such an easy thing to do.
One more note: there is a group of people who will encounter vim quite unexpectedly and unintentionally: Windows users performing their first commit using git bash. They won’t even know they’re in vim; they’re dropped directly into the editor with no instructions for confirming the commit message, much less for exiting or cancelling the operation.
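For the record, the standard ways out (press Esc first to make sure you’re in normal mode):

```
:q    quit (fails if there are unsaved changes)
:q!   quit and discard changes (in the git case, aborts the commit)
:wq   write the file and quit (in the git case, confirms the commit message)
```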
Petition to mark this as NSFW to give future travelers a fair chance to keep winning.
Even better, Obsidian notes are stored directly in folders on your device as plain text (markdown) files.
It’s all there, nothing missing, and no annoying proprietary format.
Not only can you keep using them without the Obsidian application, you can even do so using a “dumb” text editor - though something that can handle markdown will give you a better experience.
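For example, a note on disk is just an ordinary markdown file, something like this (contents made up):

```markdown
# Project ideas

- Self-hosted RSS reader
- See [[reading-list]] for the backlog (Obsidian’s wiki-link syntax, still perfectly readable as plain text)
```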