The video is just 68 seconds long. In it, a narrator (a grey-haired white man with a Floridian accent) talks about his views on homosexuality.
I don’t need or want to share what those views are; suffice it to say, the average viewer would likely find them pretty extreme and offensive.
Before the video started I got a thirty-second targeted ad from a trading platform (I’ve seen this ad a million times, since I work for one of their competitors).
We can’t know how sensitive this brand is to associations with extreme content, but we can guess it would rather not have been associated with this video.
Certainly, other brands would not have wanted this, and some are voting with their feet, pulling ads from YouTube until they can be confident of the content their advertising budgets are supporting.
Having come under significant fire, YouTube’s owner Google has promised action and come up with a plan.
Google will make it harder for videos using incendiary and derogatory language, especially when based on religion, gender or sexual orientation, to monetise through advertising.
Google is going to redraw the boundaries of propriety for content on YouTube. Good luck to them because that is a real can of worms.
The problem with setting these kinds of boundaries is that it involves subjective decision-making that requires human judgement.
This creates a massive, perhaps insurmountable, logistical challenge. Around 400 hours of video are uploaded to YouTube every minute.
Technology might be deployed to review all this content, but how does it make a judgement between a video including genuinely offensive content, a video telling a joke about offensive content, and a video with a news style report about offensive content?
That leaves humans reviewing the content.
Even if Google could employ the hundreds of thousands of people required to sift through over half a million hours of video uploaded to YouTube every day and make value judgements about the nature of each video, who is Google to dictate what’s proper and what’s not?
Facebook has tried controlling content with a human hand, through an editorial board and trending topics, which resulted in accusations of political bias and forced it to revert to algorithms.
Laying all this blame at Google’s door is unfair, and it has created the misperception that Google alone can solve the problem.
Media buying agencies have gotten off lightly, but have a big role to play. The dominance of Facebook and Google has begun to reduce media buying to mathematics. In a world where Google and Facebook have so much control over advertising space, it’s getting harder for these agencies to demonstrate value to clients.
But media buyers have failed to hold Google to account for an issue that has been building up for years. If Google isn’t providing the tools for agencies to sufficiently protect their clients’ brands, then the agencies need to be honest with clients about the risks of using Google; and they need to be lobbying Google for more granular spending management tools.
Google could use technology to categorise content by the words frequently used within an individual video. This might allow brands to black-list certain categories or words.
So, for example, videos in which offensive racial epithets appear (or appear more than X times per minute) could be excluded from a given brand’s advertising buy.
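A minimal sketch of how such a filter might work, assuming a video’s transcript is available as text (the blacklist, threshold, and function names here are hypothetical illustrations, not anything Google actually offers):

```python
import re
from collections import Counter

def should_exclude(transcript: str, duration_minutes: float,
                   blacklist: set, max_per_minute: float = 1.0) -> bool:
    """Return True if blacklisted terms appear too often per minute.

    A brand or agency would supply its own blacklist and threshold;
    both are placeholders here.
    """
    # Normalise the transcript into lowercase word tokens.
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    hits = sum(counts[w] for w in blacklist)
    return (hits / duration_minutes) > max_per_minute

# Usage: a two-minute video whose transcript uses a flagged term three times
# (1.5 occurrences per minute) would be excluded at the default threshold.
blacklist = {"slur"}  # stand-in for a brand's chosen terms
print(should_exclude("a slur here a slur there and a slur again", 2.0, blacklist))
```

Even a crude frequency threshold like this lets each brand set its own tolerance, rather than relying on a single judgement made by Google for everyone.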
This is not a perfect solution, far from it, but it gives some control back to brands and agencies, and distributes value judgements beyond ‘what Google thinks’.
It is ultimately better for brands (perhaps with agencies as their proxy) to make the call about what content their advertising budget should support.
Forcing Google to decide for everyone is bad marketing and dangerous ethics.