It’s a trend lately that models might say or output something potentially sensitive, so you can see an increasingly strict set of guardrails getting put around the LLMs so that they don’t offend someone by mistake. I’ve seen their usefulness decrease significantly as a result; their coding assistance is still reasonably good, but their capabilities otherwise have dropped off noticeably.
Agreed, but in the context of this post, that Copilot key on the keyboard will take people to the most inoffensive, walled-garden variety of generative AI: something so one-size-fits-all that its usefulness will pale in comparison to locally run models or SaaS-style services that give you a hosted model to work with.