For such a large org, they did not have 2-party review/approval for high-risk changes prior to this incident. You’d think there would be at least peer review and approval before a change is implemented.
From the blog post someone else linked, it sounds like they do have this process, but it’s just a process; it’s not enforced by their control software. So someone made a change without following the process, and they mention prioritising getting enforcement into their control panel or whatever.
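To make the "just a process" vs "enforced by the tooling" distinction concrete, here's a minimal sketch. The names (Change, approve, apply_change) are purely hypothetical and have nothing to do with their actual control panel; the point is only that the apply path itself refuses to run a high-risk change without two non-author approvals, so nobody can skip the process by accident.

```python
from dataclasses import dataclass, field

REQUIRED_APPROVALS = 2  # assumed policy: two-party approval for high-risk changes


@dataclass
class Change:
    author: str
    description: str
    high_risk: bool
    approvals: set[str] = field(default_factory=set)


def approve(change: Change, reviewer: str) -> None:
    # The author can't count as one of their own approvers.
    if reviewer == change.author:
        raise PermissionError("author cannot self-approve")
    change.approvals.add(reviewer)


def apply_change(change: Change) -> None:
    # Enforcement lives in the tool, not in a policy doc: the apply path
    # simply refuses to run without enough approvals.
    if change.high_risk and len(change.approvals) < REQUIRED_APPROVALS:
        raise PermissionError(
            f"high-risk change needs {REQUIRED_APPROVALS} approvals, "
            f"has {len(change.approvals)}"
        )
    print(f"Applying: {change.description}")


# Example: apply_change raises until two non-author reviewers have approved.
c = Change(author="alice", description="update routing filter", high_risk=True)
approve(c, "bob")
approve(c, "carol")
apply_change(c)
```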
I mean, it’s just inevitable that shit like this will happen at that scale. As long as people are involved you really can’t fully guard against it.
AI will save us.
Given the number of these their researchers receive, it would be impractical.