EthicalVoices

Ethics and PR Measurement: A Conversation with Katie Paine

Joining me on this week’s episode is Katie Paine, the CEO of Paine Publishing. Katie has been a pioneer in the field of PR measurement for three decades. Her latest company is the first educational publishing firm entirely dedicated to making more PR measurement mavens.

Katie discusses a number of ethical issues around public relations measurement and analytics, including:

Please tell us more about yourself and your job and career.

I majored in Asian Studies and Asian History in college and found myself working in marketing in Silicon Valley surrounded by engineers and scientists and tech types. And so, whenever I wanted to argue for something, I lost because I was using words. Then I had this epiphany one day that if I threw data at them and put it into a chart or graph, they might actually listen to me.

They did.

And so, I got fascinated with measurement at Fujitsu, then went to Hewlett Packard. They trained me well. Finally, I was at Lotus. I was the ninth director of corporate communications in five years. Needless to say, we were all always on shaky ground. I analyzed my results and showed that the number of people who were likely to buy our products had increased and the negatives had gone down. Bob Straighten of Gray Straighten looked at my presentation at the time and said, "Anybody that isn't doing this by the year 2000 doesn't deserve to be in business."

I took advantage of a buyout, quit and started the Delahaye Group. That was about 30 years ago now. I have been on this mission to bring good measurement to public relations, marketing, social media and integrated marketing ever since. So, I started one company, the Delahaye Group, which is now part of Cision. Then I started Katie Paine & Partners, which got bought by Carma. Now I'm an independent consultant splitting my time between helping companies design integrated dashboards and writing stuff that'll help everybody learn a little bit more about measurement.

What is the most difficult ethical challenge you have ever confronted at work?

In research and measurement, it's a continuous ethical challenge. First of all, you've got data analysis. It's very easy to make data say what you want it to say; just look at all the government statistics. It's very easy to make numbers lie. There's a wonderful saying that I use in all my speeches: "Data is like political prisoners. Torture it long enough, and it'll say anything." The truth is you've got to be completely ethical and honest. The data is the data. You cannot manipulate it to say what you want it to say or what's convenient. And 50, 60, 70% of the time, it doesn't say anything very interesting. That's always going to be a challenge.

The other side is when somebody asks you to do bad research. In the bad old days of 2008, we'd lost about 60% of our business, and I was trying not to lay people off. Somebody asked me to do a research project, which on the surface sounded legit. A lawyer hired me to back up this guy's argument in an intellectual property lawsuit. Essentially, he asked me to produce data that I didn't believe in. There you are with the payroll staring you in the face and a check on the other side. What I basically did was tell him no, I wasn't going to give him that data; I was going to give him some other data. He lost the lawsuit. He was not a happy camper. But I wasn't going to deliver fake research under testimony, under oath. It would have haunted me forever.

What’s your recommendation on how to deal with pressure to fudge the data or modify sources to make the data more palatable to the managers or the end user?

It happens all the time. This is the difference between me as an outside consultant and somebody working in house. I remember somebody fired us because they didn't like our report; it didn't back up what they had to say. This happens. You just have to understand that the data is the data.

I basically go in and I say, "Look, your boss and your boss's boss and the board of directors don't want to make decisions based on bad data. If I do this to your data, it's going to make you look good, but it's going to lead to bad decisions."

The first thing I do is set expectations correctly and say, "Look, I'm going to give you the data. If it's bad news, it's bad news. I'm going to give you an explanation for it and try to put it in context." I always try to put bad news in some kind of context.

For example, I've done a lot of work for PBS over the years. They never get bad press, but every once in a while, they do. They got caught up in the Tavis Smiley and Charlie Rose #MeToo stuff. When the first one hit, it was bad. But now we have a benchmark. Now every time there's an unusual amount of bad news, we have something to compare it to. We say, "This is bad, but it's not as bad as Tavis Smiley." It's, "You handled this one better than you did the last one." Again, context is everything. You put that stuff in context.

The other thing is you have to get away from the win or lose mentality. Too many people look at measurement from an I won/I lost perspective. It’s not. It’s a gradual improvement process. And therefore, you’re not necessarily going to get fired for speaking the truth. Frankly, if you are going to get fired for speaking the truth, then you probably needed to find another job anyway. I mean, I seriously tell people to walk away when their bosses tell them to manipulate the data.

This goes for agencies as well. I had a client where we were analyzing one announcement relative to a competitor in England. They didn’t like the results. They said, “Well what if we just do these relevant publications?” I said, “Fine, the headline now says, ‘In these relevant publications, this is how you did.'” That still wasn’t good enough for them. Then they wanted me to take out a whole bunch of other stuff. I gave them the data and I said, “This data is based on four articles.”

I’m not sure whether that actually made it up to the client. Fortunately, I knew the client. I told them what was going on. I basically said, “Hey, this is the silliest exercise I’ve ever been through, because you’re now doing a headline based on what is essentially four to six articles.”

Somebody is always going to pressure you to make it look better and you just have to push back and say, “This isn’t about making you look good or winning. This should be about doing better, making it better, higher quality and all that stuff.”

I think that's the difference between a bad manager and a good leader: looking at those areas and seeing how you can use that data as a springboard to make the company more successful.

Yes. The other thing is to dig down into the data. I'm presenting data on an internal communications research project I'm working on. This is a pet project, and they've invested a lot of money in creating a new channel. They were expecting everybody to be using it. It turns out that nobody's using it. Then you look at it and say, "Okay, somebody must be using it." We found 33% of the people were using it.

Who are those 33%? You dig down into the data, right? And surprise, surprise, it's an app. Millennials and 35-year-olds don't like it. They hate it more than anybody else. But interestingly enough, the old timers who have been with the company a long time, whose knowledge they need to transfer to these younger people, are using it.

Then the question is how do you take that data point that says this app is most popular with people who have been at the company for 15 years or more, typically in their forties? Well, if you've been at the company for 15 years, you might have some leverage and some authority, and you might be a supervisor, and therefore you can make somebody use it or put it into their improvement plan or something. You dig into the data more until you can find an insight moment that tells you how to improve.
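As a rough illustration of that kind of drill-down, here is a minimal sketch in Python with pandas. The file name and column names (uses_new_channel, tenure_band, age_band) are hypothetical; the point is simply to segment adoption by employee groups instead of stopping at the top-line number.

```python
# A minimal sketch of segmenting channel adoption, assuming a hypothetical
# survey export with one row per employee and a 1/0 "uses_new_channel" flag.
import pandas as pd

responses = pd.read_csv("channel_survey.csv")  # hypothetical file name

# Overall adoption rate of the new channel (mean of a 1/0 flag = proportion)
adoption = responses["uses_new_channel"].mean()
print(f"Overall adoption: {adoption:.0%}")

# Break adoption down by tenure and age bands to see who is actually using it
segments = (
    responses
    .groupby(["tenure_band", "age_band"])["uses_new_channel"]
    .agg(adoption_rate="mean", respondents="count")
    .sort_values("adoption_rate", ascending=False)
)
print(segments)
```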

Speaking of insight moments, what do you see as some of the key ethical challenges facing the profession today and tomorrow?

There are many. Here's the problem: public relations, internal comms, traditional, social, influencer, non-influencer…communications in general is seen as the panacea for all problems. Therefore, just slap some communications on something, and it's going to make it better.

Nobody seems to understand that your stuff may not be believed. I mean, again, on this internal comms project, we asked them not just what channels do you use to get information about the company, but what channels do you trust? There's about a 40-point difference between the channels they use and the ones that they trust. Because it looks funny. It doesn't look real. It's not authentic. The actions don't back it up.

The biggest challenge is not to say stuff that isn't authentic and that you can't back up. The recent news was the Capital One hack. The CEO apologizes. Well, nice. Thank you very much. That doesn't help. I mean, the things that you and I learned in school in terms of Tim Coombs' crisis response don't hold anymore; even Timothy Coombs has changed his advice these days, because the old things just don't work. Abject apologies help, but you've got to back it up with something.

The hardest thing for PR people today, ethically, is the pushback. Fortunately, a lot of millennials are pushing back. There's a new generation of people who are much more willing to push back, certainly more than I ever was. I pushed back. The other thing is the advantage of having a talent shortage: in the past, I pushed back and I'd get fired. Today you don't get fired, because they can't replace you. My advice is push back. If it's not the truth, don't say it. If you can't back it up with data or facts or something else, either don't say it or be prepared to defend it.

So maybe you have translucency rather than pure transparency in a crisis. Translucency is recommended when you don't have all the facts, and you say, "Look, here's what I know today. As soon as I get something else, I will tell you something else. I will tell you as much as I know." You cannot always be totally transparent, but at the least you can be authentic in saying, "Hey, I just don't know anything more than this."

You've talked about data quite a bit. One of the themes I hear from people is that they're concerned about the future with the rise of big data, AI and machine learning. What are you seeing as some of the ethical challenges with regard to these issues?

Oh, I’ll tell you exactly what the ethical challenges are, which goes back to the winning-losing mentality. Every PR person wants to look good in their monthly or quarterly or annual report. Every agency wants big numbers. Every agency wants to be able to say, “I reached this many people with positive information,” or whatever it happens to be. The hardest thing right now is that it’s very easy to collect information. It’s very hard to clean it.

I do a lot of audits of measurement systems. I see people saying, "5 trillion people saw my messages in 2018." You have to explain to them that there aren't that many people on the planet, let alone that many who care. Relying blindly on big data, relying blindly on AI, relying blindly on any of that stuff is the biggest challenge, because it's hard work to go in there and random sample and check and curate and make sure that the numbers you're reporting reflect the publications that you care about and the subjects that you care about.
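One practical way to do the random-sample check described above is to pull a manageable slice of the automatically collected mentions for human review before trusting the totals. This is a minimal sketch, assuming a hypothetical CSV export from a monitoring tool; the file name and sample size are illustrative.

```python
# A minimal sketch of a random-sample audit of automatically collected mentions.
import pandas as pd

mentions = pd.read_csv("collected_mentions.csv")  # hypothetical export

# Draw a fixed-size random sample, seeded so the audit is repeatable
sample = mentions.sample(n=200, random_state=42)
sample.to_csv("mentions_audit_sample.csv", index=False)

# A human coder then marks each sampled item as relevant or not; the share of
# irrelevant items in the sample estimates how inflated the headline number is.
```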

AI-driven automated collection these days is the reason why I've been advocating against using impressions. It's not that I'm against AI-driven measurement, because I think it's a brilliant advantage if it's done right, but you have to screen it. You have to test it and check it and make sure that you're getting the right stuff.

For example, we were called in for an agency that was about to get fired because they reported 5 trillion impressions. It turns out that 40% of them were not about the client, which happened to be a restaurant chain; they were about underground urban transit systems. You can put two and two together and figure out what that one was about.

They're getting better all the time, and machine learning is helping. If you have a machine learning system that is learning from well-trained, accurate human coding, you get pretty accurate results. But if you just go in there and throw some search strings together, who the hell knows what you're going to get?

I'll tell you, here's a perfectly good example. I was working with AbbVie when they had just spun off from Abbott. AbbVie's stock ticker symbol is ABV, right? They were looking to analyze 10 different competitors. It's a huge system. They were looking at 20,000 items a month. Why? ABV also stands for already been vaped, so all the marijuana articles were in there. ABV also stands for alcohol by volume, so all the beer articles were in there. It took four months to clear out. Basically, we ended up not using ticker symbols at all, because it turns out that the ticker symbol for AstraZeneca was a derogatory word for Asians. There were abbreviation problems all over the place. Especially on the social side of things, the biggest challenge is putting the human capital on this stuff to make sure it's accurate. You can't just say it's an automated system, push a button and have it work, because otherwise you're lying. You're producing fake data.
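In the spirit of the ABV example, here is a minimal sketch of screening collected results for obvious false positives. The exclusion terms, file name and column name are hypothetical placeholders; a real exclusion list would run to hundreds or thousands of terms and still need human spot checks.

```python
# A minimal sketch of flagging likely off-topic mentions with exclusion terms.
import pandas as pd

mentions = pd.read_csv("abv_mentions.csv")  # hypothetical export
exclusion_terms = ["already been vaped", "alcohol by volume", "home brew"]

# Flag any mention whose text contains one of the exclusion phrases
pattern = "|".join(exclusion_terms)
flagged = mentions["text"].str.contains(pattern, case=False, na=False)

clean = mentions[~flagged]
print(f"Dropped {flagged.sum()} likely off-topic items out of {len(mentions)} collected")
```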

You also don’t know how it’s producing that data, so unless you understand it, can you replicate it?

Exactly. I mean, replicating research. What a concept. The way to do that accurately is to have an established set of outlets and influencers that matter. What you do is say, "Okay, there are a hundred media outlets, blogs, channels, whatever, that matter, and there are a hundred influencers, right?" So, Mark has a podcast and a blog and a website and also contributes to PR Week (Editor's Note: Not yet), and therefore you track him because he's influential in all of these different places. You start out with a very narrowly defined universe of things that influence your audiences. You only look in those things for whatever subjects you happen to be tracking. It's really the only way to do it. You cannot just cast a wide net on Google and say, "Give me mentions of something."
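Here is a minimal sketch of that "narrowly defined universe" idea: keep only items from an agreed list of outlets and influencers before doing any analysis. The file names and the "outlet" field are hypothetical.

```python
# A minimal sketch of restricting analysis to a defined universe of outlets.
import pandas as pd

mentions = pd.read_csv("all_mentions.csv")      # hypothetical raw collection
universe = pd.read_csv("tracked_outlets.csv")   # hypothetical list of ~100 outlets/influencers

tracked = mentions[mentions["outlet"].isin(universe["outlet"])]
print(f"Kept {len(tracked)} of {len(mentions)} items from the defined universe")
```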

The other area is tonality. A lot of people are raving about automated tonality. I’ve had fights with measurement vendors over it. What’s your take on the ethics of tonality measurement?

I don't think it's appropriate. I really don't. For 90% of the stuff out there, maybe not 90%, let's say 70%. It works really well for the 30% that is consumer stuff: movies, book reviews, things like that. It's gotten somewhat better in the basic consumer packaged goods world. It does not work at all for the 90% of PR that isn't done on behalf of consumer packaged goods, but is done on behalf of nonprofits and agencies and advocacy groups and everything else. My definition of positive versus negative has always been that a positive article leaves your audience more likely to behave in the way you want it to behave, so maybe it leaves you more likely to invest in, work for or support the organization. In the case of NATO, when I worked with them, the definition of positive was that it leaves you less likely to oppose.

Now tell me an automated system that's going to figure that out, right? I mean, NATO does PR to reduce conflict and make sure that pitchforks don't show up at the base gate. And so they have to have very closely curated, human coding, because it's very different. I think you have to look at it that way. A positive article should leave your reader more likely to act, buy, invest or work for you. You also have to take into account the negative article, which leaves you less likely to invest, work for, support, purchase, et cetera. Then you end up with a fair number of articles that are balanced, that do a little bit of both. And then the vast majority of stuff out there is neutral. It just says Revlon makes nail polish. That's not positive or negative. That's just a fact.

I mean, I think it has gotten better. If you have a system that can correct it, which I believe Talkwalker does, I don't know who else does, you can go in and say, "No, this is not positive. No, this is not negative." If you can curate it and change it, and those systems can learn from those changes, you can make them a whole lot better. Right? I'm not damning all of it. I don't think all automated analysis is bad. I just think that so much of it is too inaccurate to boast and brag about, "Hey, my positives are this." It's like basing a bridge on cotton candy. It's not solid data. It's not solid research.

All that matters is that you can make it accurate. For AbbVie we spent the better part of six months. In one case we used NetBase (Editor's Note: A former client) because in the initial tests they were truly at 85% accuracy against our humans, versus 30%, 40% and 50% for the other potential vendors. They were head and shoulders above everybody else. We hired them. That was great. Then we discovered all of these errors. Then we fixed them. By the end, I had a fairly high confidence level that they were getting positive and negative correct.
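A minimal sketch of the kind of accuracy check described above: compare the automated tone calls against a human-coded sample and report simple percent agreement. The column names are hypothetical.

```python
# A minimal sketch of checking automated tone against human coding.
import pandas as pd

coded = pd.read_csv("tone_comparison.csv")  # hypothetical: one row per article,
                                            # with 'human_tone' and 'auto_tone' columns

# Simple percent agreement between the automated system and human coders
agreement = (coded["human_tone"] == coded["auto_tone"]).mean()
print(f"Automated tone agrees with human coders on {agreement:.0%} of articles")

# Where they disagree, a confusion table shows which categories need retraining
confusion = pd.crosstab(coded["human_tone"], coded["auto_tone"])
print(confusion)
```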

However, a change in management comes along, and the new manager doesn't care about positive and negative. They want message penetration. Guess what NetBase does not do? It does not test for message penetration. It can test for phrases. You can look for certain things. But it does not do message concepts. Detecting message concepts automatically is very, very difficult. If you are very good, and you spend a lot of time, and you have trained humans and a lot of data that trained humans have looked at and said, "Yes, your ethics message is in this one and your innovation message is in this stack over here," and you feed all that into machine learning, can it get it right at some point? Yes. It can probably get up to about 85%. Without all that work, no.

What other questions should you ask vendors to make sure your data is accurate?

Here's another thing to check when the vendors say, "We can do this. We can get 85% accuracy," and so on. There are a couple of questions you should ask them. One is how long it takes to test the system to get it that way, because everybody thinks, "Oh, I'm signing this contract. It's going to be up and running tomorrow." Well, a good measurement system takes six weeks to clean up and test and make sure it's accurate. For one company we had 2,000 "not" terms. For a company like SAS, the software company in North Carolina, we had 2,500 to 3,000 "not" terms and kept coming up with more, because our search string was 5,000 lines long. Sometimes our search strings were 5,000 lines long to get it right.
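For illustration, here is a minimal sketch of assembling a search string with exclusion ("not") terms like the ones described above. The brand, the exclusion list and the Boolean syntax are hypothetical; exact operators vary by monitoring tool, and real lists can run to thousands of terms.

```python
# A minimal sketch of building a Boolean query with exclusion ("not") terms.
include_terms = ['"AbbVie"']
not_terms = ['"already been vaped"', '"alcohol by volume"', '"craft beer"']

query = " OR ".join(include_terms) + " NOT (" + " OR ".join(not_terms) + ")"
print(query)
# "AbbVie" NOT ("already been vaped" OR "alcohol by volume" OR "craft beer")
```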

What is the best piece of ethics advice you were ever given?

Very good question. The sad thing is, professionally, I don't think anybody's ever given me much ethics advice other than the data is the data, tell the truth about the data and don't change it.

I live in two different worlds, right? I mean, as I was growing up in the PR corporate communications world, ethics was not even talked about. I’m old. I realized it was a different time, but it really wasn’t discussed very much.

On the research side, John Gilfeather is still a member of the IPR Measurement Commission. He would come to our meetings and read to us from the regulations, from the ethical guidelines of CASRO, the research organization. He would say, "No, you can't do that. You can do that. You can't."

John Gilfeather gave advice to do research right: be ethical about your research, don't reveal the names of people who respond anonymously, don't fudge the numbers, do significance testing so that you're not exaggerating things. He kept me honest. He really did. The academic community, to a large extent, has kept me honest over the years, because a lot of them read my newsletter, a lot of them follow me and things like that. If I go too far off the rails, which I don't dare do anymore, but in the olden days when I really didn't know what I was doing, it was that community that helped. I strongly urge anybody listening to this podcast to reach out to your nearest PR or communications program at an academic institution. If you've got a question, ask them. Chances are, there's good advice there.

Listen to the full interview, with bonus content, here:

Mark McClennan, APR, Fellow PRSA
Mark W. McClennan, APR, Fellow PRSA, is the general manager of C+C's Boston office. C+C is a communications agency all about the good and purpose-driven brands. He has more than 20 years of tech and fintech agency experience, served as the 2016 National Chair of PRSA, drove the creation of the PRSA Ethics App and is the host of EthicalVoices.com
