CommCore Blog and News

Don’t be caught in a deep-fake AI crisis

Most organizations keep a list of things that can go wrong, including natural disasters, cyber attacks, and, unfortunately, active shooters. We are adding AI to that list. Anyone with a smartphone or laptop knows that various versions of AI are everywhere.
 
AI is making our lives easier and more complicated at the same time. According to a Pew Research survey, 38% of Americans are more concerned than excited about the increased use of AI in everyday life. Another study found that employees given access to generative AI tools became 14% more productive than those who were not. By simplifying content creation and assisting with time-consuming activities like summarizing research, generative AI is being embraced by organizational leaders as a time-saver and productivity booster – and perhaps a way to keep budgets in line.
 
As crisis experts, we know that with benefits comes risk. One deepfake – an AI-generated image, video, or audio clip fabricated to look authentic – can begin trending and damage a reputation in minutes. For many people, seeing is believing, and for this reason, organizations must add AI to their crisis-preparation checklist.
 
Here are CommCore’s suggestions to manage benefit/risk – for now:
  • Get familiar with the technology. The AI world is growing with or without you. Whether you feed a prompt to ChatGPT or create art through Midjourney, using these platforms will deepen your understanding of how they can help you.
  • Use AI to generate templates for crisis response. It’s fast, but the output must be customized to your organization.
  • Complete a thorough risk assessment before crisis strikes. Identify organizational weaknesses. Crisis communications is all about preparedness, and with a proper plan in place to tackle AI-related crises, you won’t be caught off guard if or when a deepfake image that threatens your company’s reputation begins to trend.
  • Step up your media monitoring. A canary in the coal mine to catch trends early has always been an important tool in risk management. The prevalence of AI and its ability to create realistic fake content makes this monitoring effort even more important.