Artificial Intelligence (AI) may be the most transformative technology to impact the marketing industry since the Internet – or for that matter, the printing press.

But the excitement around AI is tempered with trepidation, as the exponential speed of adoption threatens to streak past ethical considerations of privacy and human-inserted bias, turning aspects of the well-intentioned breakthrough into an episode of the hit technology-wary TV series “Black Mirror.”

The concerns around AI can’t be taken lightly. Up to 60% of European consumers already believed back in 2019 that AI would lead to more abuse of personal data. Fast forward to a marketing world where images and videos can be created with a few lines of text, and it’s easy to see how the ability to maintain trust and a human touch with customers has never been more tenuous or critical. 

Marketers looking to reap the many benefits of AI must start weighing those ethical concerns now. “Generation AI” is here, forcing us to navigate both the positive and negative implications of AI-powered marketing.

“It’s not about ‘Can we use this?’ Sometimes the question is ‘Should we use it?’” said Natalia Modjeska, research director for AI and intelligent automation at Omdia, during a recent webinar. “Is this the appropriate use case, appropriate application of technology, and do we have the right guard rails in place?”

The impact of AI in marketing today

The rise of AI – and especially generative AI – has already been a game changer. ChatGPT, a large language model developed by OpenAI, reached 1 million users in just five days. It took Netflix 3.5 years, Facebook 10 months, and Instagram 2.5 months to reach the same milestone. 

With 44% of private sector companies planning to invest in AI in 2023 according to Info-Tech Research Group – and a projected $15.7 trillion impact on the global economy by 2030 according to PwC – the technology will continue to revolutionize every industry. We’ve already seen how AI in marketing has increased customer engagement and satisfaction through:

  • More (and more targeted) recommendations: AI-powered algorithms can analyze huge amounts of consumer and business data, from browsing and purchase behavior to demographics and personal preferences, creating highly personalized and relevant recommendations and content.
  • Better customer experiences: AI tools such as chatbots provide instant and personalized customer support, as well as dynamic pricing and customized promotions. 
  • Efficient and optimized campaigns: AI can spot patterns and trends for better tailored campaigns and messaging, resulting in better conversion rates and overall ROI.

AI today functions as another member of your marketing team. But like any employee or partner, you need to establish rules and guardrails to avoid violating customer trust. 

Ethical concerns on the rise

AI’s great power demands greater responsibility from the marketers who wield it. Ethical considerations around transparency and data control have always been top of mind, but now – with humans in control of the training, but not the execution – protecting customer information and eliminating unconscious bias are more important than ever.

Businesses must ensure they have robust data protection policies in place and comply with all necessary regulations. They should be transparent with customers and demonstrate their commitment to privacy and trust. 

That’s because AI can inadvertently perpetuate biases and spread misinformation. Yes, each AI instance creates models and executes tasks based on inputs and rules, but it’s our job as humans to set those rules and train the AI – and mistakes there can lead to recommendations that are inadvertently discriminatory or even dangerous.

“AI models are really confident about what they say,” said Mark Beccue, principal analyst for AI and NLP at Omdia, during a recent webinar. “Whether it’s right or wrong, they’re just confident in their outputs.”

Case in point: The National Eating Disorders Association made news recently when it had to shut down its chatbot – which was designed to help users at risk for developing eating disorders – after it gave a few users weight-loss advice (a tactic that could be dangerous for someone who has anorexia). Similarly, tech news site CNET was heavily criticized earlier this year for publishing 77 stories that were written by AI without disclosing how they were created. CNET had to issue 41 corrections on those stories, an abnormally high number for a brand associated with quality content.

Marketers must monitor and address any instances of bias, misinformation or other sensitivities as warranted, and follow responsible AI practices including using diverse data sets, conducting regular audits, and completing ongoing evaluations. 

Because AI systems tend to prioritize content based on past preferences, they also tend to create filter bubbles – an ethical violation by omission that can limit exposure to diverse perspectives and reinforce existing beliefs. Marketers need to find a balance between personalization and opening customers to new ideas and perspectives.

As we advance, we must maintain our human touch

The marketing applications for AI appear limitless, from hyper-personalized campaigns to virtual shopping experiences incorporating augmented and virtual reality technology. But the more we move forward, the more we must remember that marketing is, at its core, a human-powered endeavor.

“Some feel that they can give over their creativity to these tools. In reality, marketers should see AI programs as a tool that can inspire creativity and give them options that they didn’t know they had prior to using it,” said Bradley Shimmin, chief analyst for AI and data analytics at Omdia, during a recent webinar. “It’s a resource to approach this world in a more impactful way.” 

Automation is efficient but lacks the empathy and holistic understanding of a human customer service representative. AI can lead to psychological manipulation and create ethical gray areas. For instance, tech company Raydiant is developing AI facial recognition technology to personalize menus based on customers’ faces. Imagine being told what you should eat based on your looks or mood. 

“One of the big minefields is the privacy issue, especially when it comes to things like end-user behavior analysis,” said Curtis Franklin, principal analyst for enterprise security management at Omdia, during a recent webinar. “How much information can you gather before you inappropriately intrude on the privacy of the user? That’s the sort of thing that we’re still working out in the industry.”

As AI tools become easier to deploy, marketers should be wary of abdicating their data-driven decision making completely to machines. Human intuition and creativity should still play a central role in designing marketing strategies that connect with customers on an emotional level. 

It all comes down to trust – and to us

Clear guidelines are necessary to prevent the misuse of AI. And governments are already starting to act.

In June, the European Parliament passed a draft law known as the AI Act, which categorizes AI actions into three levels of risk. If adopted, it would ban the highest risk (like government-run social scoring programs similar to what China operates), regulate the mid-tier of risks (which would include AI like ChatGPT) and leave any AI judged to be on the lowest risk level unregulated for now. The United Kingdom is taking a parallel approach, and plans to host an AI safety summit in London later this year.

The eventual establishment of these guardrails must go beyond good intentions. Marketers must take steps to reduce any risk of bias, from validating training data sets and outputs to scrutinizing the specific ways an AI system is used. Security risks should be assessed and plans put in place to lock down any highly sensitive data.

After all, while legislative progress on AI is widely seen as both necessary and good, the ethical decisions are completely in the hands of marketers for now.

“Self-governance and self-regulation within your particular organization and your particular industries is of paramount importance,” Modjeska said during the webinar. “We can’t afford to wait until the [government] powers above are ready.”

Informa Tech delivers high-performing digital marketing services across the customer journey. Learn how we can help you build your brand, earn trust, and drive demand.