Talking Heads: Five things to consider about the impact of AI
Paul Roberts from MyCustomerLens and Michael Evans from Byfield Consulting go head-to-head on how Large Language Models are changing our world as marketers.
The problem of “beige content”
Paul: AI is a very broad topic, but it’s the recent launch of large language models like ChatGPT and Bard that has got people talking. AI is a great way to automate manual processes, but ChatGPT-style AI is creating very beige content. For the most part it’s correct and accurate, but it’s missing the human voice that comes from an individual with an opinion and perspective on the subject. Articles written by ChatGPT all sound the same – they share the same point of view that ChatGPT is amazing, but would we expect it to say anything else? To get something more personalised and original, authors need to spend a lot of time crafting detailed prompts. I’m not sure how practical that is, or how high a priority it will be, for inexperienced and under-resourced marketing teams.
Michael: Let’s be honest, a lot of marketing materials are beige content. I remember a general counsel talking about how during a panel process the in-house team played a game by anonymising what law firms said about themselves and trying to guess which firm was which – apparently it was impossible because they all said the same things. And these are big firms with plenty of resources. So perhaps one role of AI is to make the creation of beige, undifferentiated content more efficient. Why spend your own time on a brochure no-one will read when you can have AI write it for you? Leave the high-end, valuable marketing content to human beings and the tech can do the rest!
Can it outperform us in analysing client feedback and creating PR messaging?
Paul: Given my role at MyCustomerLens, I’m often asked how ChatGPT will transform feedback analytics by creating effortless summaries of text comments. It’s an attractive idea, but again risks generic outputs. AI is already transforming the speed and consistency of text analysis through tailored techniques like natural language processing (NLP). In contrast, a generic LLM (large language model) like ChatGPT has been trained on billions of bits of data that don’t relate to a specific firm’s brand and priorities. So, while it can summarise text, it can’t tell you what YOUR firm should make of a piece of feedback, because it doesn’t know YOUR brand, in-house budgets, client base, strategic priorities and so on.
Michael: I tend to think the LLMs are telling us the world is actually more generic than we think it is. Our lived experience and what we feel is the stuff that is not generic, and that plays into the nuances and personal details you talk about. Ultimately, we are the meaning-making machines and LLMs are just a new tool to facilitate that. I can’t speak about client feedback, but in the PR world messaging doesn’t vary as much as you might think. Press releases about a hire or a deal, or even a merger, will look and feel similar, but there are messaging nuances – and the thing AI has not yet managed to do is build the relationships with the media that are needed to make a story land well. I’m very comfortable with the idea of using ChatGPT for first drafts of materials or to stress-test something, but it can’t make judgement calls about how transparent to be when handling a reputational issue, getting the timing of an announcement just right, or deciding who to give an exclusive to.
Where are the risks?
Paul: Large language models are a black box. They’re trained on such vast volumes of data that it’s impossible to identify where the results have come from or the logic behind them. These models are also prone to hallucinations, where they very convincingly make things up. While humans remain at the controls, the risk is limited. But everything needs to be checked and edited, which reduces my excitement about how much time LLMs will really save. In comparison, older forms of AI such as machine learning are less sexy, but more robust and easier to audit. There are going to be some very interesting debates on competence versus performance over the next 12 months.
Michael: I do wonder whether AI can be held legally liable for its actions when things go wrong – no doubt we will find out in a few months or years. At the PR level, the risks are of businesses deploying AI too early or without care and attention. Chatbots gone wild and abusing customers are the stuff of tabloid dreams, as are out-of-control driverless cars. At the more prosaic but important level, though, the real risk is not building tech and AI capabilities. Successful businesses must invest in the right skills and the right tech to ensure they reap real value from the advance of AI. Those failing to do so will be left behind.
Replacing marketing and BD roles?
Paul: Reports of the demise of marketing and client listening jobs at the hands of AI are very premature. I see AI enabling faster and more scalable processes that reduce the need to hire additional people, but it won't replace the people already there. Until the world achieves Artificial General Intelligence (AGI), businesses will require people to sense check and interpret outputs for the company's specific needs. People are also required to turn insight into action that's driven across the business. The future is people leveraging AI to make themselves and their firms more successful.
Michael: I think being able to use LLMs effectively will be key for those entering the workforce in three years’ time – just as important as being able to use a search engine and the internet was for those of us starting our careers at the turn of the century. Whether it replaces roles is less relevant than the fact that being unable to deploy this technology professionally will exclude you from the professional workforce. It’s as simple as that. Make it work for you or you won’t have any work at all.
Tangible use cases
Paul: I've been using Google's Bard in a couple of ways that could also be relevant to professional services firms. Firstly, to get our content marketing past the blank page. I ask Bard to outline a blog post based on a given topic. I then flesh out the story and add my experience and point of view to avoid the post becoming beige. SEO tools like UberSuggest have similar functionality that can produce a whole post but should really be used just to outline it. Secondly, I started experimenting with using Bard for market and brand research, for example asking it to compare the value propositions of different companies or asking its opinion on the most customer-centric firms. While these responses aren't "correct", they provide an interesting perspective.
Michael: I started by getting ChatGPT to write bedtime stories for my kids. It was pretty good, but a bit generic. Then the headmaster of their primary school died suddenly and it was a huge shock. I asked ChatGPT to write a story about him becoming an angel and watching over the school, and it did a fabulous job. I was welling up reading it. Professionally, I went around the junior members of the team a few months ago, asked each of them what they were working on and had ChatGPT write things to help them with those tasks. I then encouraged them all to get accounts and play around with ways to use it in their work. I also used it to help me fill out a speech I was giving about crypto disputes with some big picture, helicopter view stuff about the industry – it did a good job and made me feel more confident I knew what I was talking about.