If Google can catch spam as nearly perfectly as they do, then catching misinformation/disinformation is no more difficult.
There are plagiarism checkers that use massive corpora of text to ferret out copied work, and handling misinformation/disinformation is directly analogous to that.
I'll believe that we're down to the "last 1%" that you claim when I see anything near to the effectiveness of spam filtering, and I don't. Not even close.
You think Google's spam filter is perfect?
The only response I have to that is to laugh, long... hard... and fully. It's good, just as M365's filters are good, but they are not perfect. AND they do fail ~1% of the time, presenting exactly the same order-of-magnitude problem.
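To make that order-of-magnitude point concrete, here's a trivial sketch. The ~1% miss rate is the figure from the discussion above; the daily message volume is purely an illustrative assumption, not a measured platform statistic:

```python
# Back-of-envelope: even a 99%-effective filter leaks enormous volume at scale.
# failure_rate comes from the ~1% figure above; daily_messages is an
# ASSUMED, illustrative number, not real platform data.
daily_messages = 300_000_000_000  # assumed: ~300 billion messages per day
failure_rate = 0.01               # filter misses ~1% of the time

leaked_per_day = int(daily_messages * failure_rate)
print(f"{leaked_per_day:,} items slip through per day")
# → 3,000,000,000 items slip through per day
```

A "99% effective" filter at global scale still leaks billions of items daily, which is why the remaining 1% is not a rounding error.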
Furthermore, spam filters on both platforms DO NOTHING to stop phishing attempts. Misinformation on a social media platform is, in effect, a phishing attempt.
This problem is beyond our means to solve. And again, the fact that you fail to recognize that is simply because you do not understand the limits of modern automation.
Phishing is dealt with via training programs for a reason: the only viable patch is to the human brain. The same applies to misinformation.
Now what we can possibly do is prevent people from profiteering on misinformation to some degree. But that requires some solid terms of service, a reporting system, and a transparent auditing system enforced with some sort of fine or platform access denial. There is actionable stuff in here, but we have no means of fully automating it, and the scale of human input required makes social media at large impossible to monetize.
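The monetization claim is easy to sanity-check with arithmetic. Every input below is an assumption I'm making for illustration (report volume, review time, reviewer cost), not data from any actual platform:

```python
# Back-of-envelope: human review cost for a report-and-audit system.
# ALL inputs are illustrative ASSUMPTIONS, not platform data.
reports_per_day = 10_000_000   # assumed daily report volume
seconds_per_review = 120       # assumed ~2 minutes of human review each
hourly_cost = 20.0             # assumed fully-loaded reviewer cost, USD/hour

review_hours = reports_per_day * seconds_per_review / 3600
daily_cost = review_hours * hourly_cost
reviewers_needed = review_hours / 8  # assuming 8-hour shifts

print(f"{review_hours:,.0f} review-hours/day")
print(f"~${daily_cost:,.0f}/day in labor")
print(f"~{reviewers_needed:,.0f} full-time reviewers")
```

Under these assumed numbers you land on hundreds of thousands of review-hours and tens of thousands of full-time reviewers per day, before appeals or auditing. Tweak the inputs however you like; the conclusion that this doesn't scale survives any plausible choice.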
We MIGHT be able to make things work if social media were limited to within a specific nation. But that concept also all but defeats the purpose of social media as we know it.