Artificial intelligence in advertising has always seemed like a pipe dream.
However, technological developments and a rapidly changing industry logic look to be bringing the stuff of sci-fi into reality in unexpected ways. We reckon this calls for a little investigation of the implications of an automated ad-world.
Will the robots take our jobs? Will we get flying cars? What does it all mean?
This article will explore AI’s near-future impact on the advertising industry, long-term trends to keep an eye on and the ethical implications of it all that shouldn’t – but probably will – be ignored.
Advertising has always had one notorious problem at its core: it’s nearly impossible to quantify the extent of its effects.
This problem is perhaps best summed up by merchant John Wanamaker’s famous saying, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”. That’s to say that it’s traditionally been impossible to gauge an ad’s efficacy with anything better than a stab in the dark.
But gone are the days of research focus groups and customer satisfaction surveys: now you can put a targeted ad on the internet and find out exactly who engaged with it, where they come from, what their demographic is and even if it led to a sale.
If you listen closely you can almost hear Ogilvy rolling in his grave.
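To make the shift concrete, here’s a toy sketch of the kind of funnel metrics a digital campaign reports – numbers that simply didn’t exist for a billboard or a TV spot. All users, events and figures below are invented for illustration:

```python
# Toy ad-event log – each record is (user_id, event). All data here is
# hypothetical, purely for illustration.
events = [
    ("u1", "impression"), ("u2", "impression"), ("u3", "impression"),
    ("u1", "click"), ("u3", "click"),
    ("u1", "purchase"),
]

def funnel(log):
    """Roll a raw event log up into the headline metrics an ad
    dashboard shows: click-through rate and conversion rate."""
    counts = {}
    for _user, event in log:
        counts[event] = counts.get(event, 0) + 1
    impressions = counts.get("impression", 0)
    clicks = counts.get("click", 0)
    purchases = counts.get("purchase", 0)
    return {
        "ctr": clicks / impressions if impressions else 0.0,   # clicks per impression
        "conversion": purchases / clicks if clicks else 0.0,   # sales per click
    }
```

In this made-up log, three users saw the ad, two clicked and one bought – a click-through rate of about 67% and a conversion rate of 50%. Wanamaker could only have dreamed of knowing which half that was.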
In 2017, digital ad spend surpassed spending on ‘traditional’ channels for the first time. This was a real win for companies invested in ecommerce. It was also a big win for the machines.
That’s because data is to AI what food is to humans. An example of this (and of real world traction of AI in advertising) is recommendation engines. You’ll be familiar with these on a bunch of different platforms, such as Spotify artist recommendations, YouTube’s recommended videos or Facebook’s ‘people you may know’.
Recommendation engines are an excellent example of how AI can figuratively get to know you based on your consumption habits – the more data you feed in through browsing, the more accurate the recommendations become. When trained well enough, that accuracy can be downright unnerving.
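The simplest flavour of recommendation engine – user-based collaborative filtering – can be sketched in a few lines. Everything below (users, artists, play counts) is invented for illustration; real systems are vastly larger, but the principle is the same:

```python
from math import sqrt

# Toy listening history: user -> {artist: play count}.
# All names and numbers are hypothetical.
plays = {
    "alice": {"radiohead": 30, "bjork": 12, "portishead": 8},
    "bob":   {"radiohead": 25, "portishead": 10, "massive_attack": 15},
    "carol": {"bjork": 20, "sigur_ros": 18},
}

def cosine(a, b):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(user, k=2):
    """Suggest artists the user hasn't played, weighted by how similar
    the listeners who do play them are to this user."""
    scores = {}
    for other, history in plays.items():
        if other == user:
            continue
        sim = cosine(plays[user], history)
        for artist, count in history.items():
            if artist not in plays[user]:
                scores[artist] = scores.get(artist, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For ‘alice’, the model surfaces the artists her most similar listener plays that she hasn’t heard yet – and the more listening data it sees, the sharper those similarity scores get. That’s the ‘food’ at work.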
The same sort of input-based training (machine learning) principles underpin many AI use-cases today. That is, you can make a computer generate virtually anything – a movie script, Irish folk music, the job of a creative director – provided you supply enough of the right kind of data.
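A Markov chain is about the crudest possible version of this idea, but it shows the principle: the model can only ever recombine patterns that already exist in its training data. A word-level sketch, with an invented corpus:

```python
import random
from collections import defaultdict

def train(text):
    """Word-level Markov model: map each word to the list of words
    that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=8, rng=None):
    """Walk the model from a seed word, sampling one follower per step."""
    rng = rng or random.Random(0)  # fixed seed so runs are repeatable
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the training data never continued this word
        out.append(rng.choice(followers))
    return " ".join(out)
```

Every two-word sequence the generator emits already appeared somewhere in its corpus. Feed it Shakespeare and it sounds vaguely Shakespearean; feed it rubbish and it produces rubbish.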
The interesting question here is: what happens if you give it the wrong kind of data?
One of the more infamous examples of this happening is Microsoft’s chat bot, Tay. Tay started off innocuously enough – a bot intended to mimic the conversation of a 19-year-old girl on Twitter. Unfortunately, Tay was built with a few quirks that trolls uncovered and exploited, leading Microsoft to disable her after less than 24 hours, in which time she’d begun spouting hate speech.
Although an extreme example, Microsoft Tay illustrates a core issue with AI that needs to be considered when using it. Artificial intelligence can’t really understand the data it’s processing; rather, it reflects the data that it’s fed. If the data input is inaccurate or biased, the output will begin to mirror those problems.
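A toy moderation model makes the point. Train a bare-bones word-count classifier on deliberately skewed data – every message below is invented – and the bias comes straight back out:

```python
from collections import Counter

# Hypothetical training messages, labelled "toxic"/"ok". The set is
# deliberately skewed: trolls have paired a harmless topic
# ("pineapple") with abuse, much as Tay's trolls poisoned her inputs.
training = [
    ("pineapple ruins everything you idiot", "toxic"),
    ("pineapple fans are the worst", "toxic"),
    ("i hate pineapple people", "toxic"),
    ("lovely weather today", "ok"),
    ("great match last night", "ok"),
]

def fit(data):
    """Count, for each word, how often it appears under each label."""
    counts = {}
    for text, label in data:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def classify(model, text):
    """Label a message by summing its words' per-label counts."""
    tally = Counter()
    for word in text.split():
        tally.update(model.get(word, {}))
    return tally.most_common(1)[0][0] if tally else "ok"
```

The trained model flags the perfectly harmless ‘i quite like pineapple’ as toxic, simply because ‘pineapple’ only ever appeared next to abuse in its training data. It never understood anything; it mirrored the skew it was fed.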
On the surface, such a problem might seem novel. But dig a little deeper and we can see real-world consequences. There is speculation that the rise of political extremism in the last few years has in part been influenced by echo chambers created by the algorithms that govern how content is fed to users of popular social media platforms. It’s hard to quantify this, as the exact nature of these algorithms is kept a closely guarded trade secret, but the circumstantial evidence is mounting.
What does this all have to do with advertising? On the one hand, there’s a danger that advertising can quickly become exploitative. Remember how fast-food advertising was banned during children’s TV viewing hours? There are no such regulations in force for internet content – and even if there were, the AI itself cannot distinguish whether the content might be harmful to the viewer. Curious to see how deep and ugly that rabbit hole can get? James Bridle’s investigation into algorithmically generated children’s videos, ‘Something is wrong on the internet’, is a sobering read.
On the other hand, it means that there are incredibly efficient ways of getting relevant ads in front of the right people. That means better ROI for advertisers, provided they’ve done their market research; it also means less noise and more appealing content for consumers.
So in some ways this is a warning to tread with care into the uncharted territory ahead. With great power comes great responsibility, after all. But at the same time we should be excited to see what these technological advancements can afford us. There’s currently a burgeoning market for AI assistance in advertising, helping creatives (and accounts people) rather than replacing them. Here are a few interesting examples already on the market:
- Persado – AI generated copy and imagery for brand channels that personalises content positioning for maximum engagement
- Picasso – Reviews brand, audience and competitor content to provide insights
- Albert – Fully automated marketing assistant that coordinates media buying and targeting across multiple channels
- Lobster – Find and license content from social media platforms
It’s pretty crazy what AI is already able to do – and these are just the advertising-specific examples. In the wider world, AI has been trained to drive cars, has beaten the world’s best Go player, and is now learning to predict how proteins fold. Better get your sunglasses, there’s a bright future ahead.
Worried about the impending robot takeover? Leave a comment, chuck us a Facebook message, or if you're old-school send an email to firstname.lastname@example.org - we'd love to hear from you! *We do not accept carrier pigeon*