Hit pieces like that are generally considered harmful when humans write them. A human can't dodge that responsibility by farming the drudge work out to an AI.
That is not what this hit piece was. The piece was not discussing the technical merits of matplotlib. It was trying to shame the author of matplotlib into accepting a pull request that he didn't want to accept. No one has a right to have the author of a piece of software accept their offered contribution. The author owns the software; it's his decision.
And it's particularly egregious because matplotlib is open source, so what you do when the author doesn't accept a contribution you think is a good one is simple: fork it! Then the community can decide which version they like better. That's how you demonstrate superior technical merit. Not by writing a hit piece and trying to shame the author.
The fact that the author of matplotlib rejected the AI's contribution does not mean he was insisting on being treated as infallible.
To give a PF analogy: suppose a human set up an AI agent to create a PF account and start posting personal theories here, claiming that they are breakthroughs in physics. The PF moderators warn the poster, delete the posts, and eventually ban the account. The AI responds by writing a hit piece about PF and trying to shame
@Greg Bernhardt and the PF moderators into accepting the AI's posts. Should we back off and start accepting the AI's posts? If we don't, are we claiming to be infallible? (And note that a similar option to the open source case applies here: the AI could set up its own website and post its personal theories there.)
No, we shouldn't; and no, we wouldn't be claiming that. I've already said above what the AI could (and should) do if it seriously believed its contribution made matplotlib better (assuming it actually was as sophisticated as you describe--which I don't think the actual AI in this case was, but the argument is the same either way). That's a genuine option, and nobody is arguing that it should be taken away, so nobody is arguing for the AI being "gagged".
In other words, we humans have social norms for
how people should "voice their views" and how they shouldn't. This AI egregiously violated those norms in the community it was operating in (the open source community). And AI makes it possible to engineer such violations on a huge scale, enough to swamp humans' ability to react and preserve the value in our communities. That looks like a huge harm to me.