EthicalVoices

Promise and Pitfalls, the Ethical Use of AI for Public Relations Practitioners – A Conversation with Michelle Egan and Mark Dvorak

In November 2023, PRSA issued new ethics guidelines titled “Promise and Pitfalls, the Ethical Use of AI for Public Relations Practitioners”. I wanted to dig into the guidelines and in mid-December 2023, I interviewed two experts to help me do just that. Please welcome Michelle Egan, the 2023 National Chair of PRSA and Mark Dvorak, the 2023 Chair of PRSA’s Board of Ethics and Professional Standards.

What should PR pros know about the new PRSA guidance that was issued on November 20th?

Michelle Egan: We started the year knowing that there were some really important issues that we needed to address as an organization and AI was at the very top of the list. Mark chairs the Board of Ethics and Professional Standards, and that’s really the place where we look for guidance on emerging issues. This was a well thought out process on a topic that’s important to all of our members and well beyond our membership.

Mark Dvorak: Early in the year when you would bring up the subject – you would have a percentage of the population that was saying, this is the greatest thing since sliced bread. You had maybe a slightly smaller subset of the population that was scared to death that we were all going to lose our jobs and be replaced by technology. The truth is somewhere in between. This technology is going to help us practice our profession better. It’s not going to completely change how we operate. There’s still that human element with PR. It is very important for people to get into it a little bit more and actually have a chance to play with and touch and feel the technology to see what it can really do and what are its limitations.

Michelle Egan: I saw a real shift across the year. Early in the year when presenting to groups at conferences, I would ask, “How many of you have tried one of these tools like ChatGPT?” I would get just a couple of reluctant hands rising. By the end of the year, you had most of the room and a lot more enthusiasm about getting in the playground and trying things out. It’s really timely that we would issue this guidance.

What are some of the key takeaways you want PR pros to keep in mind from the Promise and Pitfalls documents?

Mark Dvorak: Most of all, this is a sea change in terms of how knowledge is shared, and things get done. It’s incumbent upon all of us to really dig in and figure out what we don’t know and what we need to know. Get our hands dirty and realize that none of us is going to have all of the answers or be completely comfortable right off the bat. Just like we weren’t when social media came out back in the early 2000s or a decade earlier when we first started having the internet.

Those innovations transformed how we live and how we practice. I think for anyone who’s gone through those eras, we can see it’s going to take some getting used to, I’m going to have to do my homework. This is not a one and done that tomorrow I’m going to be fully up to speed on it and I’m going to be good. We know that AI is already evolving every day and we’ve got to stay on top of it. I’ve been really excited to see how many people have been coming to PRSA and the guidance, because they want to get a handle on it and want to figure out where do I need to build my skill set. What are those gaps that I’ve got to try to fill?

Michelle Egan: I’ll add to that. Our members are used to our Code of Ethics. Every member agrees to the code of ethics and then re-ups their commitment to it every single year. When we take that framework that we’ve agreed to and we practice, and then apply it to a technology like this or any emerging issue, then there’s real value. We’re speaking the language that our members and non-members speak and we’re giving some guidance there.

It’s a 50-year-old code. It stood the test of time. But issues will change. Mark heard stories of people who picked up the guidance and used it in ways that we might not have foreseen, like using it in their classroom as a professor. Or just recently a non-member talking about having her own decision-making framework around AI, but then using this particular framework to validate that going forward.

Mark Dvorak: I would just want to add on to something Michelle mentioned. When you chair BEPS, you drink the Kool-Aid a little bit and you say things like, wow, the code is still as valid and right today as it was when it was first developed. But it is absolutely true. As we sat down and went through the provisions of the code as it relates to AI and what we were seeing and hearing, it’s nothing short of amazing that the folks who first developed the code way back when were so enlightened. They crafted it in such a way that as times changed and we had new technology and everything going on in our world, you could still apply the code to whatever’s happening and still find a lot of benefit in it in terms of helping you practice the profession. I think that’s just truly amazing.

That’s a hallmark of a great code. Great codes of ethics don’t change all the time or frequently. From Kant’s deontology to utilitarianism to Aristotle and virtue. There may be permutations and new things that surface, but the core principles haven’t changed in 2,000 plus years.

What are some of the points of the Promise and Pitfalls that you want to highlight?

Mark Dvorak: A lot of the things that we are dealing with now, were issues even before last November when ChatGPT made news, and this all became so much more real. Issues of mis- and disinformation, issues of diversity, inclusiveness. How are we developing our applicant pool for positions? How do we make sure that when we are casting the net to engage our publics, that we are doing it very thoughtfully and appropriately?

These challenges are even more so front and center today. It’s even more important that we’re on top of them because AI has the potential to exacerbate any problems and to magnify the situations that can develop. At the end of the day, it’s still technology and it’s incumbent upon us to bring that human element, that reason, that analysis to the table and say, yes, this is what it’s spit out, but have we been very thoughtful across the board in terms of what we’re going to do with this data and how we’re going to represent it?

There is other ethical guidance on AI for public relations. I helped develop PR Council’s Ethics Guidelines on Generative AI. CIPR has guidance as does CPRS. I’ve seen a bunch of others. What makes PRSA’s guidelines different?

Michelle Egan: I’ll start this one off and then I’ll let Mark add to it because he’s been really deep in this work. It is different because it’s based on PRSA’s code and that is very different. It also provides a lot of use cases, and it acknowledges that it’s not a one-size-fits-all answer. It’s not just don’t do this, please do that. You have to take the guidance and apply it in your own work. That makes it somewhat different.

Mark Dvorak: Throughout the year, it was no secret I was really pushing the work group along to get it done. But I think it happened in the time it needed to happen. There was guidance that came out from other PR organizations. We had a lot of great programming that PRSA in particular developed throughout the year because there was a need and an interest; it didn’t need BEPS to be pushing it.

All of that learning and interaction helped BEPS develop subject matter expertise and helped the work group in particular really have a chance to get deeper into it than probably we could have without it. Because none of us do this all day long, every day. It’s what we are passionate about as members of PRSA and as professionals, but having that time allowed us to take a more thoughtful approach and hopefully bring something different than what we saw from other groups.

Michelle Egan: The members of BEPS are PR professionals coming from a cross-section of the practice. But they really are deep in their understanding of ethics and its application, and then specifically our code of ethics. Bringing in the expertise around AI, which, again, is very dynamic and emerging, was part of the process that had to take place as well.

Thank both of you for providing that perspective. I know I was one of the louder voices saying, where’s PRSA? We need to get the code out, get the guidance. I love what you issued. Good things are worth the wait.

Michelle Egan: Mark and I were right there with you. We were both like, come on everybody. I definitely have a bias for action, but I’m so pleased with where we are, and it has been appreciated by our members and others.

There are not one-size-fits-all answers. When I helped develop the AI guidance for the PR Council, there were a number of hotly debated items where there was not always total agreement. What are you seeing as some of the areas that have the greatest debate?

Michelle Egan: For me, attribution is one. Where do you give the credit and what’s the line on where that credit belongs? That’s a little bit different for every organization or every person. For some people it might be, if I used it at all, I need to disclose that. Others might look at it and say, well, I use other tools all the time, do I need to disclose a little bit further down the line? That’s definitely one of the big decision points, and one that gets argued back and forth.

Mark Dvorak: That’s my sense as well of where most of the challenges are going to be. We’re a creative profession and we work with other creatives, whether they be illustrators or photographers or writers of some sort. Everybody has a slightly different take on where that line is, what’s appropriate and how you do it. In social media, we have to acknowledge and disclose who’s behind this and who’s paying for it, with influencers and so on. How do we do this in a way that’s going to be manageable and not take away from the impact of the message that we’re trying to deliver? That’s the big one I think on our plate.

That’s the one where I’ve seen the most discussion. There are people that are still very strongly in both camps on that one about how often you disclose and where. Two others I’m seeing a lot of debate on are using it in transcreation and engaging diverse communities, and then the use of AI meeting tools. When is it okay to record? How do you need to disclose it? Are they a good or a bad thing?

Michelle Egan: We had a diverse dialogue session that was specifically on bias in AI. There’s obviously a very robust discussion. The guidance is very clear: you can’t take the human element out. You have to know where the information is coming from and what the source is. What lens can you apply to this as a thinking, strategic human being? We can’t take that piece out of it.

How are each of you using AI personally and what personally concerns you the most?

Mark Dvorak: I wasn’t really concerned until the OpenAI events of a couple of weeks ago. Which philosophy are we, as a leading AI organization, going to take, as represented by who our CEO is? I have to have hope that we are going to figure this out as a society and as nations, and that this is not going to become the scary Big-Brother-takes-over-the-world scenario that some people have taken it to as an extreme. But I’m afraid that in the short term we are going to see worse mis- and disinformation.

We have finally gotten to the point where most everybody, but not everybody, sees that they have to be careful about what they read, and they have to check sources and they have to not just trust everything that comes across their plate. This adds a whole other layer that people may not be dug into enough to realize that, okay, here we go again. We’ve got to get out of this place that we’re in as a society. I’m hopeful that we’re going to value journalists again in this world because we’re going to need somebody to cut through the clutter and check sources and get multiple sides of an argument. Maybe that’s going to be something that comes out of this, but that’s probably what keeps me up at night the most.

How are you using it right now, Mark?

Mark Dvorak: I’m using it for just about everything. I was working on an updated plan for a client, and I wanted to synthesize some research that had been done by medical researchers and social science researchers. I used it to go back and dig through the knowledge base, pull it together, summarize it, and develop some insights for me. I even said, “Tell me what my client’s contribution was to solving Y, Z issue”, and it was phenomenal at what it was able to pull up. Then I said, “Okay, based on all this, what do you think are some appropriate next steps?” I was pleasantly surprised at how it was able to take the existing information and synthesize and analyze it as a starting point for me moving forward. It’s been just a different source of information for me and a different way to look at things.

Early in December, I was on a PRSA Corporate communications AI webinar with someone from Coke and they highlighted how they were using it in their crisis scenario planning…If this happens and we do this, what are the three likely outcomes from that? If we do that, what are the three likely outcomes from that? Really having it help with the overall simulation and thinking of the implications of their actions.

How are you using it, Michelle?

Michelle Egan: I’m using it, and my team is using it, pretty regularly to spark creativity. If we get stuck and we just need to move on a theme for something or do some research, it’s a really fun and easy way to do that. Just yesterday I was laughing with one of my staff members. We had fun with it because we helped to write a piece for our president on the end of the year and I said, “Well, let’s see if we can make this better and put it into ChatGPT in particular.”

Then we had a good laugh because it was like, well, what if we said this in the voice of Joe Biden or can you make it a Taylor Swift song? We use it to also increase the fun and creativity around our team. I did ask a little bit around the organization about how others are using it. Our HR team is using it to help them pre-draft job descriptions and do some research on things like how to salary band a job. They’re using it quite a bit.

They’re also using a tool where they could drop in a script and then get a voiceover for something that’s maybe an intro to a training course or something like that. We’ve got users across the organization. I’m really comfortable with the way that we’re using it as a team.

But there are concerns, and I can’t do some of the things Mark’s talking about. I can’t take a very technical document from my company, put it into ChatGPT and ask it to give me a summarized version because of the nature of the information that we have. Not only is it proprietary, there are also cybersecurity issues, and so here in my company, there’s just a tremendous amount of caution around that. We find our ways around it, but I’d really like to be more of a super user and have a fence around what we’re doing so that we can do those sorts of things. On a broader scale, like Mark, my biggest concern is mis- and disinformation and the potential for that to proliferate at scale because of AI. That’s a really big concern.

Is there anything I didn’t ask you that you wanted to highlight?

Mark Dvorak: We as PR professionals have to use this as an opportunity once again to truly demonstrate the value of what we do and what we bring to the table. Just like when we all started in this business, somebody would get up in the morning and take scissors and clip news stories out of the newspaper, paste it on paper, and that’s how we would get a clip book together. Then it became all digitized and computers.

That made the whole process of gathering reports easier so that we could use our skills and talents for more important pieces. It didn’t make that less important as a step.

That’s what this is going to do for us. It’s going to allow us to spend less time on some of the rote elements of our jobs, but it’s not going to replace the critical thinking that comes from expertise, from experience, from having done things, from having been able to talk with others, build consensus. All the things that PR folks do in a day. If we do this right and if we are cognizant of it as we move forward, we should be able to better explain and better demonstrate what PR is all about to people and have them better see really the true value of what we do. I hope we take advantage of that.

Listen to the full interview here.

*Note: This interview was edited slightly for length and clarity.

Mark McClennan, APR, Fellow PRSA
Mark W. McClennan, APR, Fellow PRSA, is the general manager of C+C's Boston office. C+C is a communications agency all about the good and purpose-driven brands. He has more than 20 years of tech and fintech agency experience, served as the 2016 National Chair of PRSA, drove the creation of the PRSA Ethics App and is the host of EthicalVoices.com
