The AI Safety Summit, held last week, was a significant event that brought together world leaders and tech executives to discuss the future of AI and its implications for humanity. The summit’s central theme revolved around ensuring that AI does not become a tool for unchecked wrongdoing and remains beneficial for humanity.
Rishi Sunak, quoting Stephen Hawking, concluded the summit by stating, “AI could be the best or the worst thing to happen to humanity”. This sentiment encapsulates the crossroads we find ourselves at with AI technology, which is poised to become more powerful and pervasive.
One of the most notable outcomes of the summit was the announcement of the AI Safety Institute. Based in the UK and backed by Singapore, Japan, Germany, the US, and tech giant Google DeepMind, the Institute is set to be the watchdog for AI advancements. Its primary goal? To shield the UK and the world from unforeseen AI disruptions through robust governance. This includes testing AI models before they are launched, researching how to make AI safe, and making information about AI easy for everyone to understand.
Meanwhile, the US has gone a step further: last week, President Biden signed an Executive Order requiring developers of the most powerful AI systems to share their safety test results with the US government. The aim is to ensure that AI systems are thoroughly vetted for safety, security, and trustworthiness before they go public.
With AI at the top of policymakers’ agendas, it’s time for the communications industry to take stock and consider how we should approach AI adoption in our line of work.
Balancing automation vs. human insight
The AI technology already available today is undoubtedly awe-inspiring. However, according to The Economist, the economic impact of AI will only reach its full potential if countless companies outside tech hubs such as Silicon Valley embrace it. That goes beyond integrating the odd chatbot; it requires a comprehensive transformation of business structures and internal data management.
As communications professionals, we build and manage reputations for individuals and organisations. With the rise of generative AI, communications departments can and should play an active role in a company’s AI adoption, because many of the routine tasks involved in managing reputation can be automated with the technology.
Take media monitoring, for example. Most monitoring tools have already integrated generative AI features that can produce accurate summaries of media coverage and social media conversations.
While these AI-generated outputs are likely still a work in progress, it’s easy to imagine a fully automated tool that delivers media reports with recommendations far faster than any human could.
Nonetheless, human intelligence remains invaluable for understanding the complex context of a business and making sound judgments that benefit the company. AI may be taking over routine tasks, but humans are still very much in a job. For communications specialists, incorporating AI into workflows could free up time for more strategic, creative, and relationship-focused work.
However, automating routine tasks also has real downsides. To paraphrase NYT columnist Ezra Klein, we lose important elements of the creative process when we hand routine tasks, such as summarising long articles, to AI. Even if AI summaries and drafts are pretty good, the efficiency gained comes at the cost of the new ideas and deeper insights you would gain from summarising an important document yourself.
Knowing when and how to deploy AI tools is therefore important.
Defining AI’s purpose in communications
The guiding question for communications professionals here should be: “AI for what purpose?” Answering this question helps us clarify our thinking, prioritise efforts, and focus on high-value possibilities, according to Avinash Kaushik, a leading expert in AI adoption. His three-tiered AI adoption framework identifies three distinct AI purposes:
- AI as a muse. Think of AI as your brainstorming buddy. It provides insights, thought starters, and first drafts that can spark ideas, but the human drives the strategy and execution. Imagine you’re working on a research report, and you use AI to summarise a long media article. It may help you identify emerging narratives quicker and give you a starting point, but crafting the report remains in your hands.
- AI as a co-pilot. Here, AI is more involved. It actively collaborates with the human in specific tasks, enhancing their capabilities and making processes more efficient. Consider media monitoring tools like Meltwater or Brandwatch. They can sift through vast amounts of data to provide you with relevant media mentions and sentiment analysis. However, it is up to the human analyst to interpret this data, understand its implications, and decide on the next steps.
- AI as a tool. In this scenario, AI operates with minimal human intervention. It is set up to handle specific tasks autonomously within parameters you define. For example, such a system could optimise your corporate website for search engines by applying best-practice SEO advice without human input.
When it comes to matters that can significantly affect your company’s reputation, such as a major announcement, you’re better off using AI as a co-pilot or even a muse rather than as a tool. That way, human judgment continues to drive your corporate reputation, and your message stays accurate and conveys the intended meaning and tone.
Overall, last week’s summit underscored that AI is here to stay. As the models become more powerful, they will inevitably transform our businesses and society. Policymakers are rightly focused on making sure AI models are safe. In the meantime, communications professionals should think carefully about balancing the use of AI for task automation vs. using it for enhancing human intelligence and creativity. Need help with AI adoption in your comms function? Contact Headland’s Digital Team at digital@headlandconsultancy.com.
*Image created with DALL-E 3 using the prompt “Generate an image that captures the following sentiment: As the models become more powerful, they will inevitably transform our businesses and society. Policymakers are rightly focused on making sure AI models are safe. In the meantime, communications professionals should think carefully about balancing the use of AI for task automation vs. using it for enhancing human intelligence and creativity.”