November 6, 2024
Most PR professionals and journalists now use generative AI to create content, but keep quiet about it
A new report claims that while the majority of content writers in the UK’s PR and communications industry are using generative AI tools, most are doing so without their managers’ knowledge. The study, titled CheatGPT? Generative text AI use in the UK’s PR and communications profession, is billed as the first to explore the integration of generative AI (Gen AI) in the sector, uncovering both its benefits and the ethical dilemmas it presents.
The report, conducted by Magenta Associates in partnership with the University of Sussex, surveyed 1,100 UK-based content writers and managers and included 22 in-depth interviews. Findings indicate that 80 percent of communications professionals are frequently using Gen AI tools, although only 20 percent have informed their supervisors. Moreover, a mere 15 percent have received any formal training on how to use these tools effectively. Most respondents (66 percent) believe that such training would be beneficial.
The research highlights how Gen AI has transformed content creation, with 68 percent of participants saying it boosts productivity, especially in the early drafting and ideation stages. However, many organisations have yet to establish formal guidelines for Gen AI use. In fact, 71 percent of writers reported no awareness of any guidelines within their companies, and among the 29 percent who are aware of employer guidance, the advice is often limited to suggestions such as “use it selectively.”
While the technology offers clear advantages, concerns about transparency and ethics linger. Although 68 percent of respondents feel Gen AI use is ethical, only 20 percent discuss their use of AI openly with clients. Legal and intellectual property issues also loom large; 95 percent of managers express some level of concern about the legality of using Gen AI tools like ChatGPT, and 45 percent of respondents worry about potential intellectual property implications.
The report’s authors stress the need for industry-specific guidance to ensure responsible AI use in content creation. Magenta’s managing director, Jo Sutherland, emphasised the importance of an informed approach, stating, “This isn’t just about understanding how AI works, but about navigating its complexities thoughtfully. AI has undeniable potential, but it’s crucial that we use it to support, rather than compromise, the quality and integrity that defines effective communication.”
Dr. Tanya Kant, a senior lecturer in digital media at the University of Sussex and lead researcher on the project, highlighted the need for what she terms “critical algorithmic literacy” – a foundational understanding of AI tools’ broader implications for ethics and industry dynamics. Dr. Kant pointed out that smaller PR firms must be able to contribute to shaping AI standards and ethics, an area currently influenced largely by tech giants.
The report calls for transparency, industry guidelines, and ethical standards to help UK PR and communications professionals use Gen AI responsibly, particularly within smaller businesses that may lack the resources to shape AI policies. Magenta and the University of Sussex intend to keep collaborating to foster a more ethical and inclusive AI landscape in the communications sector.