Where autocomplete predictions could previously be flagged only as helpful or unhelpful, users can now report them as "hateful; sexually explicit; violent or includes dangerous and harmful activity; or other," with the option to add a comment.
This doesn't mean Google will pull a flagged suggestion immediately, but the company has said that with enough reports, it may prioritize a review.
Most likely, Google will pull only the most egregious cases of "alternative facts." But a larger, theoretical question of censorship remains: if Google removes some information because it is deemed offensive, what principle prevents it from censoring other search results?
Feedback for “Featured Snippets” in Project Owl
Google was embroiled in another media scandal late last year, when a Featured Snippet read aloud by Google Assistant gave a deeply offensive answer to the question of whether women are evil. Project Owl also targets these Snippets.
As with autocomplete, instead of a feedback form that merely indicates helpfulness, the improved forms let users tell Google whether a Snippet is harmful, offensive, and so on.
Another way Google is combatting fake news through Project Owl is by promoting authoritative content. But Google hasn't explained how its algorithm determines that one site is more authoritative than another. Undoubtedly many factors go into the process, which is about all Google will say without divulging specifics.
Surely the New York Times is a more authoritative source than the blog of someone's weird uncle living in the Oregon woods. But what if I am looking for information on how to live day to day off the grid in the woods? And part of why fake news sites spread more readily than real news is that people search for things that validate their worldview.
It is true that most of the process remains shrouded in mystery, and we don't yet know how these changes will affect how we search online. But it is clear that this issue deserves our full attention if we don't want to be fooled again by what we choose to believe.