Human Rights Watch accused Facebook/Instagram of bias against support
for Palestinians in its moderation of postings.
It said the company
fails to follow its own rules, and that the result is systematic bias.
The company replied that it attempts to apply its rules
even-handedly, but that it is not straightforward to do so.
Since the combat in Gaza is asymmetrical, it is not surprising that
applying one set of rules to supporters of both sides, as
even-handedly as a human can be, could lead to a biased result.
For instance, Israel's massacres often result from systematic
carelessness. In one case, it led to Israeli troops killing three
escaped Israeli hostages. Surely the soldiers did not specifically
wish those hostages dead; but their "this has to be a trap" mindset
predisposed them to killing anyone who was in the wrong place at the
wrong time. The fog of war did the rest. All that would apply to
each of the 2 million Palestinian civilians, just as it would to the
occasional escaped hostage.
I can imagine Facebook/Instagram moderators looking at a video of
civilians killed by Israel's bombardment, described as a "massacre",
and rejecting that claim as false on the grounds that there was no
sign of a specific intention to kill Palestinian civilians at that
moment. At what point does adoption of a policy
that leads to indiscriminate killing of civilians become tantamount to
a decision to kill civilians?
It is true that 1,000 examples of postings allegedly handled wrongly is
a minuscule fraction of the postings that are made. On the other
hand, to check that many examples is a large task. Facebook/Instagram
has the money to check far more than that; Human Rights Watch does
not.
I have an idea for a way of evaluating whether the moderation system,
overall, is biased. First, randomly choose N instances of postings
accused of being unfair to Israel, and N instances of postings accused
of being unfair to Palestine. Then study what the moderation system
did with each one, and deduce from those examples what the real
moderation system does in actual practice. What are its real,
practical rules and real, practical criteria?
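The sampling step of that study could be sketched as follows, in Python. Everything here is hypothetical: the posting records, the field name "action", and the counts are invented for illustration, since the real data would have to come from the moderation system's own logs.

```python
import random
from collections import Counter

def sample_outcomes(postings, n, seed=0):
    """Randomly draw n flagged postings and tally the action the
    moderation system actually took on each one."""
    rng = random.Random(seed)  # fixed seed so the study is reproducible
    sample = rng.sample(postings, n)
    return Counter(p["action"] for p in sample)

# Hypothetical records of flagged postings; the counts are invented.
accused_unfair_to_israel = [{"action": a}
                            for a in ["removed"] * 30 + ["kept"] * 70]
accused_unfair_to_palestine = [{"action": a}
                               for a in ["removed"] * 60 + ["kept"] * 40]

N = 25
tally_i = sample_outcomes(accused_unfair_to_israel, N)
tally_p = sample_outcomes(accused_unfair_to_palestine, N)

# Comparing removal rates across the two samples is one crude
# indicator of how even-handed the system is in practice.
rate_i = tally_i["removed"] / N
rate_p = tally_p["removed"] / N
print(f"removed when accused of being unfair to Israel:    {rate_i:.0%}")
print(f"removed when accused of being unfair to Palestine: {rate_p:.0%}")
```

A tally like this is only the starting point: the study proposed above would require reading each sampled case to deduce the system's practical rules, not just counting removals.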
Those conclusions about how the moderation system actually works would
provide a basis to judge whether that system is fair in practice —
and provide specific recommendations to improve it, if not.