• 1 Post
  • 185 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • I don’t think it’s possible to make a blanket statement in this sense. For example, Lemmy doesn’t handle data as sensitive as 23andMe’s. In that case, it might be totally acceptable to offer the feature but not require it. Banks (at least in Europe) never let you log in with just a username and password. They definitely comply with different standards, and in general it is well understood that the sensitivity of the data (and actions) needs to be reflected in stronger controls against the attacks that are relevant.

    For a company holding data as sensitive as 23andMe’s, the security model should definitely have included credential stuffing attacks, and therefore they should have implemented the measures that are recommended against this attack. Quoting from OWASP:

    Multi-factor authentication (MFA) is by far the best defense against the majority of password-related attacks, including credential stuffing and password spraying, with analysis by Microsoft suggesting that it would have stopped 99.9% of account compromises. As such, it should be implemented wherever possible; however, depending on the audience of the application, it may not be practical or feasible to enforce the use of MFA.

    In other words, unless 23andMe had specific reasons not to implement such a control, they should have. If they simply chose not to (because security is an afterthought, because it would have meant losing a few customers, etc.), it’s their fault for not building a security posture appropriate for the risk they are exposed to, and therefore they are responsible for it.
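    To be concrete, enforcing MFA at login is not a big engineering effort. Here is a minimal sketch of TOTP verification, assuming the pyotp library (the service/user strings and function names are made up for illustration):

```python
import pyotp

# Hypothetical enrollment: generate a per-user secret once, store it
# server-side, and show the provisioning URI to the user as a QR code.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="user@example.com", issuer_name="ExampleService"
)
print("Scan with an authenticator app:", uri)

def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)
```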

    Obviously not every service needs to worry about credential stuffing, which is why OWASP can’t say “every account needs to have MFA”. It is the responsibility of each organization (and its security department) to identify the threats it is exposed to.


  • Yes, forced MFA (where forced means every user is required to configure it) is the most effective way. Other countermeasures can be effective, depending on how they are implemented and how the attackers carry out the attack. Rate limiting, for example, depends on arbitrary thresholds that attackers can bypass by slowing down and spreading the logins over multiple IPs (see the sketch below). Another thing you can do is prevent bots from accessing the system (CAPTCHAs and similar - usually a service offered by CDNs), which can also be bypassed by farms and, in some cases, clever scripting. Login location detection is only useful if you can ask for MFA afterwards and if it is combined with solid device fingerprinting.
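    To illustrate the rate-limiting point: a naive per-IP sliding-window limiter looks roughly like the sketch below (thresholds and names are invented, not any specific vendor’s implementation), and a botnet that rotates IPs simply never trips it.

```python
import time
from collections import defaultdict, deque

# Hypothetical policy: at most MAX_ATTEMPTS login attempts per IP
# within a WINDOW-second sliding window.
MAX_ATTEMPTS = 5
WINDOW = 60.0

attempts: dict[str, deque] = defaultdict(deque)  # ip -> recent timestamps

def allow_login_attempt(ip: str) -> bool:
    now = time.time()
    recent = attempts[ip]
    # Drop timestamps that fell out of the window.
    while recent and now - recent[0] > WINDOW:
        recent.popleft()
    if len(recent) >= MAX_ATTEMPTS:
        return False  # throttled
    recent.append(now)
    return True

# An attacker spreading attempts over thousands of IPs, with pauses,
# stays under MAX_ATTEMPTS per WINDOW on every single IP, so each
# attempt is allowed and the control never fires.
```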

    My guess about what went wrong in this case is that the attackers spread the attack very evenly (making rate limiting ineffective) and the mechanism to detect suspicious logins (country etc.) was too basic, taking into account too little and too generic data. Again, all these measures are only effective against unsophisticated attackers. MFA (ideally paired with strong device fingerprinting) is the only effective way there is; that’s why it’s on them to enforce, not merely offer, 2FA. They need to prevent the attack, not just let users take this decision.


  • If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.

    I mean, device fingerprinting is used for exactly this purpose. Then there are the geographic pattern, the IP reputation, etc. Any anomaly -> ask for MFA (see the sketch below).
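    Roughly, the step-up decision looks like this (a simplified sketch; the signals and field names are illustrative, not any real vendor’s logic):

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    # Signals collected at login time; names are made up.
    device_fingerprint: str
    country: str
    ip_reputation: float  # 0.0 (clean) .. 1.0 (known-bad)

@dataclass
class UserProfile:
    known_fingerprints: set[str]
    usual_countries: set[str]

def requires_mfa(ctx: LoginContext, profile: UserProfile) -> bool:
    # Any deviation from the user's established pattern triggers MFA.
    return (
        ctx.device_fingerprint not in profile.known_fingerprints
        or ctx.country not in profile.usual_countries
        or ctx.ip_reputation > 0.5
    )
```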

    It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them.

    Cloudflare, Imperva, and Akamai all offer these services, I believe. These are some of the players who can help against this type of attack, plus of course in-house tools. If you decide to collect sensitive data, you should also provide appropriate security. If you don’t want to pay for such services, force MFA at every login.


  • Of course this is not a brute force attack; credential stuffing is different from bruteforcing, and I am well aware of that. What I am saying is that the “lockout period” and rate limiting on logins (useful against brute force attacks) are both security measures that are sometimes demanded from companies. However, even in the case of bruteforcing, it’s the user who picks a “brute-forceable” password. A 100-character password with numbers, lowercase and capital letters, and symbols (roughly 94^100, or about 10^197, combinations) is essentially impossible to bruteforce. The industry recognized, however, that it’s the responsibility of organizations to implement protections against bruteforcing, even though users can already “protect themselves”.

    So why would it be different in the case of credential stuffing? Of course users can “protect themselves” by using unique passwords, but I still think it’s the responsibility of the company to implement appropriate controls against this attack, in the same exact way that it’s their responsibility to implement rate limiting on logins or a lockout after N failed attempts (sketched below).

    In the case of stuffing attacks, MFA is the main control, and it should simply be enforced, or at the very least required (e.g., via email - which is weak, but better than nothing) whenever a new pattern emerges in a login (a new device, for example). 23andMe failed to implement this, and blaming users is the same as blaming users for having their passwords bruteforced when no rate limiting, lockout period, complexity requirements, etc. are implemented.
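    For reference, the lockout control mentioned above is trivial to express (a minimal sketch; the thresholds are made up, and a real implementation would persist this state rather than keep it in memory):

```python
import time

# Hypothetical policy: lock an account for LOCKOUT_SECONDS after
# MAX_FAILURES consecutive failed logins.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 15 * 60

_failures: dict[str, int] = {}        # username -> consecutive failures
_locked_until: dict[str, float] = {}  # username -> unlock timestamp

def is_locked(username: str) -> bool:
    return _locked_until.get(username, 0.0) > time.time()

def record_failure(username: str) -> None:
    _failures[username] = _failures.get(username, 0) + 1
    if _failures[username] >= MAX_FAILURES:
        _locked_until[username] = time.time() + LOCKOUT_SECONDS
        _failures.pop(username, None)

def record_success(username: str) -> None:
    # A successful login resets the failure counter.
    _failures.pop(username, None)
```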


  • My view is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of its users, even if the threat is the users’ own behavior. The company is the one able to afford a security department that understands the attacks its users are exposed to and can mitigate them (to a certain extent), and that’s why you enforce things.

    Very often companies cite “ease of use” or “users don’t like it” to justify the absence of security measures such as enforced 2FA. However, this is their choice: they prioritize not annoying a (potentially) small percentage of users over more security for all users (especially the less proficient ones). It is a business choice that they need to be accountable for. I also want to stress that various compliance standards, despite being mostly useless, also require measures that protect users who use simple or reused passwords. That’s why complexity requirements are sometimes mandated (a minimal example is sketched below), as is the trivial bruteforce protection with a lockout period (most gambling licenses, for example, require both, and companies that don’t enforce them cannot operate in that market). Preventing credential stuffing is no different, and if we look at the OWASP recommendation, it’s clear that enforcing MFA is the way to go, even if perhaps in a way that does not trigger at every login, which would still have worked in this case.
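    For illustration, the kind of complexity check such standards mandate is a few lines of code (a minimal sketch; the exact rules vary per standard):

```python
import re

def meets_complexity_requirements(password: str) -> bool:
    # Illustrative policy: at least 12 characters, with lowercase,
    # uppercase, digits, and symbols all present. Real standards
    # differ on the details.
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^a-zA-Z0-9]", password) is not None
    )
```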

    It’s up to each user to determine how securely they want to protect their data.

    Hard disagree. The company, i.e. the data processor, is the only party with a full understanding of the data (sensitivity, amount, etc.) and a security department. That’s the entity that needs to understand which threat actors its users face and implement controls appropriately. Would you trust a bank that let you log in and make bank transfers using just a login/password, with no requirements whatsoever on the password and no brute force prevention?


  • The fact that they did not enforce 2FA for everyone (mandatory, not just available as a feature) is their responsibility. You are handling extremely sensitive data, and credential stuffing is an attack with very low complexity and high likelihood.

    Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.

    Regarding the last bit, it might not have helped against this specific breach, but we don’t know that. There are companies that offer threat intelligence services and buy breached data specifically to offer this service.

    Anyway, in general the point I want to make is simple: if the only defense you have against a known attack like this is users choosing strong and unique passwords, you don’t have sufficient controls.




  • I don’t think it’s you; it is generally bad practice to have multiple processes inside a container. It usually defeats most of the isolation, introduces problems with handling zombie processes (therefore you need an init - see the sketch below) and with restarting tools when they crash (then you need something like supervisord, which I guess this image might use - I didn’t check). Each piece of software adds dependencies, which can conflict (again defeating the idea of containers), and of course CVEs. Then you have problems with users, etc.
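    On the zombie-process point, this is roughly what an init (PID 1) has to do; a minimal, illustrative sketch, not taken from this image:

```python
import os

# Two fake "services" sharing one container. In a real multi-process
# image these would be the mail server, admin UI, etc.
def spawn(argv: list[str]) -> int:
    pid = os.fork()
    if pid == 0:              # child: become the "service"
        os.execvp(argv[0], argv)
    return pid                # parent: remember the child's PID

children = {spawn(["sleep", "2"]): "web", spawn(["sleep", "4"]): "worker"}

while children:
    # This waitpid() loop is the reaping an init (or supervisord) does.
    # Without it, exited children linger in the process table as zombies.
    pid, status = os.waitpid(-1, 0)
    name = children.pop(pid)
    print(f"{name} (pid {pid}) exited with status {status}; "
          "a real supervisor would restart it here")
```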

    So yeah, containers are generally not meant to be used this way. The project might be cool but I would be very uncomfortable running it like this, especially if that’s going to be my primary email, with all the password resetting capabilities etc.


  • This misses the point, in my opinion. The point of a protocol is to establish a set of rules that need to be followed; that’s it. Beyond that, it can be managed in many ways: it can be open or closed, etc. ActivityPub is “engineered from the ground up to support multiple apps with different functionality” because it is an open protocol; every protocol is designed to support whoever implements it. That doesn’t give it any inherent “changes to the protocol will suit everyone” or “everyone will be able to keep up with it” property, though. If changes to a protocol happen very fast, apps that are compatible today - and could be compatible tomorrow too - still need to implement those changes, or at some point they will no longer be compliant. This is not because the protocol loses the property of supporting multiple apps, but because a protocol still needs to be implemented, which is the responsibility of its consumers, and that takes time.

    So my point was to challenge the OC’s perspective that, since ActivityPub is designed to support multiple apps, there is no risk that it gets messed up and breaks compatibility with those apps (because it’s generic) due to - in this case - Threads’ influence. That is just nonsense, in my opinion.


  • Absolutely. Your email has an image? Maybe spam. Your email does not have an unsubscribe link, even if it has nothing to do with transactional emails? Spam. Your email is from an address or domain that hasn’t sent many emails before? Spam.

    It feels like the meme from Parks and Recreation.

    And you can’t even reliably know whether your message was received or not; the only way to find out is to ask directly through some other channel… so the fact that email is open is essentially just an empty quality.


  • I don’t know what is going to happen, and as I said, I don’t even care that much to be honest.

    Blast radius of what? How does that affect existing Mastodon instances?

    It does if this happens gradually, when instances bleed users to Threads because it has “more features”/works better/etc.

    I’m optimistic because I think open alternatives are generally better and will win long term.

    Good for you; I am not sure what this optimism is grounded in, but I lost it completely. I think the battle is already lost, and open solutions can - at best - represent a niche corner of the internet. People are used to things that are addictive and create expectations that are unrealistic for services run on budgets of four digits at most. There is no going back, in my opinion. Either way, this is very much beside the point of my argument, which was that email is exactly an example of how big companies can take over “open” protocols: they are left “open”, but 99% of users effectively sit on two or three providers, and the entry barrier is so high that the “open” nature of the protocol is just a formality.


  • which is engineered from the ground up to support multiple apps with different functionality (hence me writing this in Kbin and others reading it in Lemmy and being able to link it and follow it from Mastodon)

    I mean, that’s basically what every protocol is. ActivityPub abstracts concepts that apps implement in their own way (for example, the concept of a group). If you manage to deliver changes, even improvements, to the protocol, apps need to keep up and comply with it. This is what “drifting towards the corporate actor” means: I propose changes to the protocol at a rate that only I (the corporate actor) can keep up with. This way only my users will have certain features, and eventually some apps will become incompatible with the recent version(s) of the protocol.


  • Email an open standard? Sure, on the surface it is. Running your own mail server and getting your emails delivered to gmail/outlook users? Good luck.

    Who cares what the form is, if the substance is the problem?

    Same with the web. To this day, nobody besides Google can realistically compete in the browser space. So much shit was added to the web standards that you need an incredible amount of resources to produce a modern browser engine (I mean one that users can use for their daily stuff, not lynx). You have Chrome, you have all the Chromium clones, you have Firefox (which is in any case funded by Google), and you have Safari. Period.


  • Not really relevant to my point, but I assume that preventing them from being effectively part of the fediverse can reduce the blast radius of their changes, since they will be (more) isolated.

    If, on the other hand, they are fully part of the fediverse (i.e., nobody defederates them), many people may be incentivised to move to “that instance”, because it will realistically have better availability and in the future might have more “features” - which is exactly the kind of protocol extension that others won’t be able to keep up with.

    I personally used to care more in the past; I don’t that much now, but I can definitely see the potential danger.