This is where the fediverse is both powerful and a bit of a challenge to moderate. The best way to deal with these things is to be vigilant in sharing information that counters the bad information, since there's no real way to filter it out unilaterally (and even if there were, I'd find that a dangerous precedent, as who decides what counts as "good" and "bad" across federated instances?).
While the post was quite toxic towards the admins, the user's opinion seemed to come from exasperation at the situation, without necessarily understanding the reasoning behind the choices made for the beehaw instance as a whole; so there's an opportunity to redirect them toward a different path or understanding. I'm aware that several others likely share this opinion and may learn from this (just taking a moment to review some of the kbin.social and lemmy.world threads on this subject shows it's a common concern). Moderation and intervention are more about systemic patterns in an individual's behavior that clash with a community's ethos. Following the ethos of our admins, we take a measured response based on history and engagement.
For now, things appear to have resolved through disengagement, so mission accomplished: we got the information out there and addressed their concern (and possibly informed other lurkers, and the various instances that federate with us, on this point).
Also, ML is just statistics and calculus.