But the BIGGER lesson, as so many people far smarter than I am have pointed out, is that explicit prejudice isn’t necessary to create a discriminatory system. It takes just four steps:
1) An algorithm is created to flag offensive terms.
2) The algorithm is fed data from survey respondents who may have been too few and too demographically homogeneous. Subconscious (or worse) prejudice in a segment of respondents is amplified dramatically; bias against a term becomes a full ban of that term.
3) On a separate team, support agents resolve problems one by one. Their goal? Click “resolved”, get a gold star, move on to the next case. There’s little incentive to escalate cases that may be symptomatic of larger, more malignant problems in the system.
4) Engineers remain blissfully unaware of the problem as it persists and even intensifies. After all, no one has reported any “offensive” ads, so it must be working. And a group’s voice has now been muted without requiring the slightest malicious intent from anyone involved.
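The amplification in step 2 is easy to sketch. Suppose the flagging rule is a simple vote-share threshold; then a small prejudiced segment of an undersized survey pool is enough to ban a term for everyone. All names, numbers, and the threshold below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of step 2: a term-flagging rule driven by a small,
# unrepresentative survey. All names and numbers are invented.

def should_ban(term: str, responses: list[str], threshold: float = 0.2) -> bool:
    """Ban a term outright if the share of 'offensive' votes crosses a threshold."""
    offensive_votes = sum(1 for r in responses if r == "offensive")
    return offensive_votes / len(responses) >= threshold

# 25 respondents total; a prejudiced segment of just 6 is enough to push
# an innocuous term over the 20% threshold, banning it for everyone.
responses = ["offensive"] * 6 + ["fine"] * 19
print(should_ban("some-community-term", responses))  # → True
```

Nothing in this sketch is malicious: the rule is a plausible default, and each respondent cast one vote. The ban emerges from the interaction of a blunt threshold with a skewed sample.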