Wednesday, November 02, 2005

The Baseline

It’s taken a few more weeks than I hoped, but I’m ready to get my first baseline survey in the field at my new job. And, as usual, I’m learning in the process.

Let me explain a bit about what my role is, and then talk about some different survey question formats that you might like to consider. I also want to make a point about the importance of controlling the discussion in any communication effort.

I work at a large manufacturer of consumer electronics. Although the company is an established and successful consumer brand in its own right, much of the product is sold through retailers and other outlets. These customers have enormous influence. There is a growing sense within the firm that the needs of the end-user are not being fully considered. My job is to bring that end-user “consumer voice” to life inside the firm through communications.

I’ve spent several weeks trying to understand many things. What sources of consumer input does the firm use, and how? What are current attitudes about consumer needs? How are decisions made, and by whom? Are employees even aware of the end-user as a separate group of stakeholders? The answers, predictably, are all over the map.

While I now feel I have a general sense of current attitudes and practices within the firm, I need to take the next step. I need to establish a baseline of attitudes and behaviors so I can measure and guide the impact of my communication efforts.

There is an extremely valuable, somewhat hidden opportunity here that I want to point out. Because there is no established way of thinking about this topic within my company, my survey represents a chance to really define the discussion going forward. How I frame the questions will influence how people think about the topic. Plus, I can prepare the soil for my communications campaign.

Now, I happen to think that a one-note symphony of a campaign would be a bad idea. I could go out and just pound the message – Consumers! Consumers! Consumers! – and get people’s attention. I don’t think it would help. I think we need to change the discussion so that people understand the complexity of our business. We need to balance many factors to be successful, like cost control, product quality and innovation. A great consumer experience needs to be one of those factors. I want a campaign that I can sustain and that contributes to the overall company effort.

With that in mind, here are a few of the survey questions I’m using. I think these will not only deliver valuable data, but also encourage the audience to think about the topic in a new way:

Please indicate your current level of awareness of these two distinct audiences – customers and consumers – in the course of your job:

  • Strongly aware – I constantly consider the differences between these groups
  • Aware – I consider the differences regularly
  • Slightly aware – I occasionally consider these as separate groups
  • Rarely aware – I almost never consider differences between the two groups
  • Not aware – this is really the first time I’ve considered these as different groups

Customer and consumer needs are often, but not always, aligned. Both are important to our success.

  • In your function, how do you currently feel the needs of customers and consumers are balanced? Please allocate a total of 10 points between the two groups, indicating how you think each group's interests are currently weighted. (For example, 7 points for customers, 3 for consumers; or 5 and 5.) The total must equal 10.
  • Now, indicate how you personally feel the needs of customers and consumers should be balanced.
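
If your survey tool can't enforce the constant-sum rule itself, it's easy to check responses after the fact. Here's a minimal sketch — the function name and fields are mine, not anything from SurveyMonkey:

```python
def validate_constant_sum(customer_pts, consumer_pts, total=10):
    """Return True if the allocation is valid: non-negative
    points that sum exactly to the required total."""
    pts = (customer_pts, consumer_pts)
    if any(p < 0 for p in pts):
        return False
    return sum(pts) == total

# A 7/3 split is valid; an 8/3 split busts the 10-point budget.
print(validate_constant_sum(7, 3))
print(validate_constant_sum(8, 3))
```

Flagging invalid allocations before analysis keeps the "should be" averages honest.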

Please review the factors listed below, each important in developing and launching a new product or service. First, rank these factors in order of importance as you see them in practice today. Then, rank those same factors in the order you personally believe will best contribute to our success. You can use each ranking number only once. The top ranking is “1.”

  • Engineering quality – the product works
  • Cost of materials and development
  • Competitor activities – to counter or anticipate moves by our competitors
  • Consumer experience (packaging, user manuals, setting up the product, using it, support and service, repair, our Web site, etc.)
  • Senior leadership decisions – reasons not clearly communicated to me
  • Promised product release dates
  • Customer requirements, including technical variations driven by regional standards
  • Product design – how it looks and feels

There are other questions as well, but these best illustrate my point. The questions gather needed baseline data. The forced ranking question above will allow me, for example, to go to leadership and say “Here’s how employees currently rank our priorities in practice and as an ideal. Which ones do you think I should try to move, and in which direction?” At the same time, the questions force the audience to consider the complexity of what we’re trying to do. Once they acknowledge that these are not black-and-white issues, they’ve started to take responsibility for successful outcomes.
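
Since I can't share real data, here's roughly how I'd tabulate that forced-ranking question once responses come back — invented numbers and abbreviated factor names, just to show the mechanics:

```python
from statistics import mean

# Each respondent gives every factor a unique rank; 1 = most important.
# "practice" = how things work today, "ideal" = how they should work.
responses = [
    {"practice": {"Engineering": 1, "Cost": 2, "Consumer experience": 5},
     "ideal":    {"Engineering": 2, "Cost": 3, "Consumer experience": 1}},
    {"practice": {"Engineering": 2, "Cost": 1, "Consumer experience": 4},
     "ideal":    {"Engineering": 1, "Cost": 3, "Consumer experience": 2}},
]

for factor in responses[0]["practice"]:
    practice = mean(r["practice"][factor] for r in responses)
    ideal = mean(r["ideal"][factor] for r in responses)
    # A positive gap means the factor ranks higher (closer to 1) as an ideal
    # than it does in practice -- a candidate for leadership to "move."
    print(f"{factor}: practice {practice:.1f}, ideal {ideal:.1f}, "
          f"gap {practice - ideal:+.1f}")
```

The "gap" column is exactly the conversation-starter for leadership: which factors do employees rank very differently in practice versus as an ideal?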

On a technical note, I tried to conduct this survey using existing survey tools within my company. However, internal resources could not accommodate forced ranking or constant sum questions like those I’ve shared here. At the risk of sounding like a shill, SurveyMonkey can provide these kinds of questions, and in my case for the same price as the internal tool. This is my first use of the paid, expanded version of SurveyMonkey’s tool, and so far I’m impressed.

Wednesday, September 28, 2005

The Power of the Deck

I have surrendered to the ubiquity of the deck. Mostly.

I don't know where you work, but at the last few places I've been, information is not taken seriously unless it is crammed onto a PowerPoint slide. Never mind that a Word document would be better. Never mind even if it's an Excel spreadsheet with more cells than a Bin Ladin family reunion. (Ba-dum-dum-bum.) Just stick it in PowerPoint and you're golden.

(In fact, I've become so corrupted that I worry that I won't be taken seriously if I actually use PowerPoint correctly -- that is, all fonts must be at least 20 pts, etc. If there aren't a few eye-chart pages with 5 point type, I can't be doing anything really complicated.)

So when I finished the survey for my father-in-law, I naturally created a PowerPoint deck.

That's because I do think a PowerPoint deck is an important measurement communications tool. You will need to communicate the results of all the work you've been doing, none of which is resulting in a newsletter article, poster or letter to employees. A deck helps you organize your thinking and sell your analysis and conclusions so you can move forward with needed changes and improvements.

Here's what goes into my decks:
  • An executive summary: I generally open with a page that explains the purpose of the survey or other measurement effort, plus some high-level details of how it was conducted. I include the statistical validity of the study, if possible, or at least the raw numbers of what was collected and how. Any team members who helped get a mention.

    The rest of the executive summary contains brief conclusions drawn from the data. "Employees overwhelmingly prefer chocolate desserts. (2 top box = 82%)" "E-mails from the North American leader are opened by 78% of addressees, nearly double the rate of 40% for e-mails from the General Communication mailbox."

  • I usually place conclusions, recommendations and next steps at the end of the executive summary. This may include plans for changing communication activities, validation of current practices and plans for future measurement efforts. Occasionally, if I'm presenting live and there is time and a good reason to walk people through the entire presentation, I may move them to the very end of the deck instead.

  • After the executive summary, I place a new section with a page devoted to each question, in order. I provide the actual question wording and results, generally with a graph that appropriately illustrates the results. If there's space, I may include selected write-in comments that further illuminate the issue.

    For write-in questions, I look for common themes and summarize the number of mentions of specific topics, or provide a general sense of positive vs. negative comments, for example. This is more of a straight communication task -- read carefully and summarize responsibly. (There are Six Sigma methods for measuring and analyzing write-in data and as soon as I've done it myself I'll pass it on. Don't hold your breath...)

  • If I've done any additional work to break down comments by region or business unit or other demographic, I include a page on each of those efforts.

  • After this section I add the appendices. One appendix includes all the raw numbers and the rationale for assigning statistical significance to the data -- what were my assumptions about the overall population being surveyed, such as size, location, make-up, etc.? How did I calculate the sample size, etc.?

    The other appendix includes all the write-in comments verbatim. If there's a large volume of text, I will use a small but readable font size of 10 to 12 points to avoid having 50 pages of comments. This is data that will obviously be read as a document rather than projected on a wall. I will also include the write-in comments in a separate Word document. What's important to me is that I don't stand in the way of decision-makers getting to see the comments, though I will remove identifying names if necessary to preserve anonymity.
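
For the sample-size rationale in that first appendix, I lean on the standard formula for estimating a proportion, with a finite population correction. A quick sketch, assuming 95 percent confidence (z = 1.96), a ±5 percent margin of error, and the conservative p = 0.5 — swap in your own assumptions:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Required completed responses for estimating a proportion,
    with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# For a 2,000-employee population, you'd need about 323 completed surveys.
print(sample_size(2000))
```

Note how little the required sample grows with the population — a much larger firm still needs fewer than 400 responses at these settings.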

I find this whole practice very valuable. Once I get my data I love to play with it on a spreadsheet. Pretty soon the spreadsheet has 14 tabs and I can no longer find anything I've uncovered. The deck helps organize my thinking and lets me find results easily and clearly.

So, I did a short deck for my father-in-law, like I said. He was mightily impressed -- he thought a 14 page deck was huge! I haven't yet heard a report of his board meeting at his social club. I'm sure he killed.

By the way, he does not have PowerPoint. He's retired and can't work it anyway. Over the phone, I walked him through the process of downloading, installing and using the free PowerPoint viewer, which you can get here. You can't edit slides from this utility, but you can view them and print them.

I'm getting ready to put a survey in the field here at my new job, so I can get a baseline of where we are currently before I start introducing new communications tactics. More on that next time.

Wednesday, September 14, 2005

Charles is OK

Since I think at least some visitors to this site attended the Ragan conference in Las Vegas in June, you may be interested in this. Charles Pizzo, who writes the IABC and Ragan blogs, and who interviewed the Wonkette in Vegas, lives in New Orleans.

I had never met him before the convention. The organizers of the Ragan conference barely spoke two words to me. I'm sure it didn't help that I came in on Thursday, missing the opening night dinner for presenters, but I felt like an outsider. However, Charles and I struck up a conversation and he wrote some kind words about my presentation. When I decided to start this blog I wrote him an e-mail for some advice. About five minutes later, my phone rang, Charles calling in response. I've talked to him a few more times and he's always been friendly, helpful and insightful.

When Katrina hit I sent him an e-mail -- he didn't respond. His phone is busied-out. I was worried.

Yesterday he began posting on the Ragan blog. He's in Texas. He has been writing about his hurricane experience from his perspective as a professional communicator -- you can read his posts here.

So we can quit worrying about him.

Monday, September 12, 2005

Black Magic

It's been awhile since I've posted as I've started a new job. I've been busy wrapping things up at my old firm and I'm just getting started at my new one. Exciting times.

Previous readers may note that I've deliberately avoided naming the company I work for on this blog, and I'm going to continue with that policy. I hope readers respect my desire to maintain that wall so I can write openly about measurement efforts without exposing company business.

One thing of interest -- this firm is committed to quality, and measurement is a big piece of that. That's the good news. What I find surprising is that even here, the need to measure the impact and results of communications has barely found a foothold. In a place brimming with Six Sigma and quality efforts, communications is largely a collection of tactics with little measurement of results. Green pastures for us metrics nuts.

Anyway, as I mentioned in an earlier post, my father-in-law belongs to a small club and I agreed to help him survey its members. It's not a country club but is a similar blend of sport and social activities. I used SurveyMonkey to create the survey, and we sent an introductory letter and URL link from his e-mail account rather than mine, so the members would not think it was spam from a stranger.

He refers to the entire endeavor as "black magic" and is in awe of my pretty pedestrian talents with the computer. As usual, I learned a few things that I'll pass on here.

He has a slight tendency to ask questions I consider rhetorical. For example, a question like:

  • When we schedule single night events for both Friday and Saturday evenings of the same weekend, both have often been undersubscribed. Members tend to subscribe to one or the other night, not both. We think there are advantages of having one fully subscribed event rather than two that are half-subscribed. Do you agree? Yes/No.

I'm not sure there's a lot of argument here, so my instinct is to eliminate the question. On a practical level, the question does make the survey longer, something I avoid like the plague. More to the point, SurveyMonkey allows only 10 questions in its free version, and in this case we were already over our limit.

The way I tackled it was to combine two questions. The next question asked if the combined event should be on Friday or Saturday. This is how the final question was posed:

  • When we schedule single night events for both Friday and Saturday evenings of the same weekend, both have often been undersubscribed. Members tend to subscribe to one or the other night, not both. We think there are advantages of having one fully subscribed event rather than two that are half subscribed. If we scheduled single night events on only one night per weekend, which night would you prefer?
    - I'd be more likely to attend on Friday nights.
    - I'd be more likely to attend on Saturday nights.
    - I'd be likely to attend on either night.
    - I'm not likely to attend on either night.
    - Other (please specify) text box provided

The respondent who doesn't agree that only one single night event should be scheduled per weekend can choose the "other" option to propose an alternative. It's still a bit wordy, but does the trick.

Similarly, several questions were variations on this:

  • If we organized X event, would you be interested?

By combining these into a single question, we were able to get under the 10 question limit:

  • We're considering several kinds of events for the upcoming season. Please check the ones that are of interest to you:
    - X event
    - Y event
    - Z event
    - Other (please specify) text box provided

In fact, we were down to nine questions, so I added a final question asking respondents to rate their overall satisfaction with their club membership. By always including that question in future surveys, the club can track whether it's moving in the right direction.

This is not just an exercise in being cheap. It's good self-discipline to make surveys short, both for the respondent and for you -- if you are inundated with data you will never finish playing with it and begin using it to make improvements.

I'll include directions for one simple thing that makes using these online surveys easier for the respondent: creating a hyperlink so respondents can access the survey in a simple and non-threatening way. Being able to say "Click here to take the survey" is much less intimidating to the non-technical among us than pasting the survey's long, raw URL into the message.

This generally works for all Microsoft applications, such as Word or Outlook. It works in Yahoo e-mail. I suspect it works in nearly all applications, with some slight variation.

  1. Get the URL, also known as the Web address, of the site you want someone to go to. On SurveyMonkey, follow the directions when your survey is ready to send and click the selection for sending a link in e-mail. For other purposes, such as sending someone a link to a website, just copy the address from the Address window at the top of your browser. It usually begins with http://www. To copy the URL, highlight the text in the Address window or from your other source and go to Edit>Copy, or press CTRL and the letter C at the same time.
  2. In your e-mail (or in Word if you're composing the e-mail there), highlight the word you want for a link. For example, if you say "Click here to access the survey" highlight the word "here."
  3. In Word or Outlook, go to the toolbar and select Insert>Hyperlink.
  4. Place your cursor in the little window and paste in the URL you copied. You can paste by pressing CTRL-V.
  5. In Yahoo mail and likely some other programs, look for the universal symbol of hyperlinks -- a little globe with some chain links over it. Click there for a hyperlink window, and paste in the URL.
  6. If it worked, the highlighted word will take on the look of a link. In most cases that means it will turn blue and be underlined, though the look changes depending on the program and settings.
  7. If you did all this in Word, you can copy and paste the text into your e-mail and the link will be pasted along with it.
  8. I always send the first draft of these e-mails to myself (and anyone else on the team) to be sure and test the link. You should too.

If this doesn't work, you may have an e-mail program that is set for "Plain Text." While some programs only allow plain text, many can be set to plain text, rich text, or HTML format. If you can't find a hyperlink option, look for a Format menu and see if you can change it from plain text to rich text or HTML. This works in Outlook, for example.

If you send a link to someone who has their e-mail set for plain text, the full URL will show up in their version in place of the linked word, and in most programs they will still be able to click it to reach your survey or webpage.

This may be child's play for some of you, but here at Stand On A Box we know it's often hardest to learn the things that others take for granted.

And don't forget to enter your e-mail address on this site so you're alerted to new postings. Just scroll down on your right for the "Subscribe" window.

Thursday, August 18, 2005

Celebrity Endorsements

Last time I wrote about plans to check read receipts from e-mails sent from leaders of the firm. I was expecting a boost in opened e-mails from those leaders, compared to those sent from our "generic" mailbox. I was surprised by the size of the leap. Here's the data from an e-mail from the North American leader to all North American employees. (By the way, does everyone know that you can click on the tables to open up a large version?)

It pays to have the right messenger -- these numbers are nearly twice as good as the average. Here's a comparison between this message and all the other e-mails for which I've gathered read receipt data.

(Not sure why this one won't open to a large version. Working on it.)

Now, there are a lot of differences among these messages aside from the messenger. They were sent to different audiences and the content was different. It's likely that the senior leader message was simply more important and interesting than the others.

However, I'd argue that the differences among the lower four messages reinforce the similarity of readership. The four lower messages all received essentially identical attention despite variations in audience and content. The North American leader message response is starkly different.

I'll do another check of messages from the leader mailbox when I have a chance. I hope to get a less compelling message to check -- something that is less interesting in itself, to see how people respond.

So, what good is this information? Here are a few thoughts:

  • It helps manage expectations on message penetration.
  • If you can isolate readership levels you can do a better job of judging what other factors are effective in your messages. Let's say I send two different messages, each asking employees to take a survey. I know from read receipts that 40 percent of employees read each of them, but one message drove 30 percent of employees to comply and the other just 20 percent. I can explore the message content, timing or other factors to see what drove the higher compliance.
  • I was asked how many dial-in ports we might need if we asked all North American employees to attend a conference call with the North American leader. If I'd had only the earlier read receipt data I may have guessed that enough ports for half the employees would be plenty, since fewer than half of most e-mails are even opened. Because I had seen that more than 75 percent of North American employees opened the leader's most recent message, I increased my estimate.
  • It sets a baseline and a goal for all messages. If I know I can reach 75 or 80 percent of employees given the right message and the right messenger, then I can be a more strategic messenger.
  • The knowledge also creates a greater responsibility. Maybe I can reach all those people, but do I want to? Are all messages equal? Would my organization be more or less effective if I learned to get everyone to open and read every e-mail message sent? My role is also to protect my audience, so I think this data would drive better targeting of messages.
  • At the very least, I now have a sense of what constitutes good and average message penetration -- and I didn't really know that before.
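
That second bullet is really just a normalization: divide the compliance rate by the open rate to see how persuasive a message was among the people who actually read it. A toy example using the numbers above:

```python
def compliance_among_readers(open_rate, compliance_rate):
    """What share of the people who actually opened the message complied?"""
    return compliance_rate / open_rate

# Both messages reached 40% of employees, but drove different action.
msg_a = compliance_among_readers(0.40, 0.30)  # 75% of readers complied
msg_b = compliance_among_readers(0.40, 0.20)  # 50% of readers complied
print(f"Message A: {msg_a:.0%} of readers complied")
print(f"Message B: {msg_b:.0%} of readers complied")
```

Holding readership constant is what lets you attribute the difference to content, timing or other factors rather than reach.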

I've recently had HR give me a list of all employees by e-mail address, level, location and other data. I can now map my read receipts to this data to discover different response rates based on those factors. Are they more or less likely to read it in Europe? Do managers read more messages than vice presidents? As I've said before, a little data goes a long, long way, and you never know how handy it will be until you have it.

My next post will provide the remaining data from my post-conference session survey. Then, I'll be helping my father-in-law conduct a survey for a small social-and-sports club he belongs to, using SurveyMonkey. That will probably be worth a few laughs here at standonabox.

After that, I'm not sure -- I'm moving to a new job at a different firm in September. I expect it to have a large measurement component, which you will be able to read about here. Wish me luck!

Thursday, August 11, 2005

The Devil's in the details

I'm jumping back to read receipts this time, to provide some new information... and a warning.

I was waiting eagerly to get some read receipt data from two messages we were planning. One from our North America leader and one from our global leader. These would provide contrast to our other e-mail messages, which have been sent from our generic corporate mailbox.

Unfortunately, I failed to anticipate a simple change in plans. Someone else did me the favor of sending out the global leader message and neglected to request read receipts. Since those kinds of chances don't come along every day, it's disappointing.

It also shows the value of not trying to do these kinds of studies on your own -- form a team of interested parties and include everyone who might have a role to play in the process.

Anyway, the North America leader message receipts were very interesting. Next time...

Wednesday, August 03, 2005

Aw, shucks.

In which we take a break from read receipts...

This blog grew out of a Ragan Conference session on metrics in communications. In that session I extolled the beauty of SurveyMonkey, a great site for creating and analyzing surveys. I asked attendees if they would take a post-session survey and gathered their e-mail addresses for that purpose.

(Here I must again cop to the fact that I managed to lose all those e-mail addresses, except the ones given to me on business cards. Let me publicly state for the record, again, that I'm a dope. If any of you are out there, my abject apologies.)

Anyway, I ended up with only nine respondents to my survey, which I conducted using SurveyMonkey. It is extremely easy to use -- I was able to put together a fairly sophisticated survey on my first visit. Basic features are free, and advanced features are an extremely reasonable $20/month. I've even used it for fun, silly surveys to amuse friends and family.

I promised to share the post-session results when they were in. I've hesitated because, frankly, they are embarrassingly positive and I am -- deep, deep, deep, deep, deep, deep, deep, deep down -- the modest type. I guess it's my small-town upbringing.

But you may be interested in the survey questions, and I'd be interested in your thoughts on them. So in this post, I'll cover the demographic questions, and what my nine new best friends answered. (The numbers after the questions are the percent and raw number responding.) My next post will cover the questions and responses about the session itself.

In the demographic section (enticingly titled "About You") I asked these questions:

1. How would you rate your experience with using metrics in your job before attending my session?
  • No experience - I'd never done any real measurement (33.3% - 3)
  • Beginner - I'd taken some small steps toward measuring my activities (55.6% - 5)
  • Intermediate - I conduct regular measurement (0% - 0)
  • Advanced - I've done considerable measurement and it's a regular and valuable part of my activities (11.1% - 1)

2. How long have you worked in communications?

  • One to three years (33.3% - 3)
  • Four to seven years (22.2% - 2)
  • Eight to 10 years (11.1% - 1)
  • More than 10 years (33.3% - 3)

3. What do you consider your core skills as a communicator? Please rate the following choices.

  • Writing
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 0% (0)
    4 Proficient: Colleagues come to me for this skill 78% (7)
    5 Expert: I'm among the best around 22% (2)
    Response Average 4.22
  • Communications strategy
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 33% (3)
    4 Proficient: Colleagues come to me for this skill 56% (5)
    5 Expert: I'm among the best around 11% (1)
    Response Average 3.78
  • Project management/organization
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 33% (3)
    4 Proficient: Colleagues come to me for this skill 56% (5)
    5 Expert: I'm among the best around 11% (1)
    Response Average 3.78
  • Relationship building
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 44% (4)
    4 Proficient: Colleagues come to me for this skill 56% (5)
    5 Expert: I'm among the best around 0% (0)
    Response Average 3.56
  • Analytics, including metrics
    1 No experience or proficiency 33% (3)
    2 Beginner: I don't do it well and largely rely on others for this skill 33% (3)
    3 Competent: I'm OK -- maybe not the best around 22% (2)
    4 Proficient: Colleagues come to me for this skill 11% (1)
    5 Expert: I'm among the best around 11% (1)
    Response Average 2.11
  • Business knowledge - general/finance
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 22% (2)
    3 Competent: I'm OK -- maybe not the best around 56% (5)
    4 Proficient: Colleagues come to me for this skill 11% (1)
    5 Expert: I'm among the best around 11% (1)
    Response Average 3.11
  • Employee/internal communications
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 22% (2)
    4 Proficient: Colleagues come to me for this skill 67% (6)
    5 Expert: I'm among the best around 11% (1)
    Response Average 3.89
  • Public relations
    1 No experience or proficiency 11% (1)
    2 Beginner: I don't do it well and largely rely on others for this skill 0% (0)
    3 Competent: I'm OK -- maybe not the best around 33% (3)
    4 Proficient: Colleagues come to me for this skill 44% (4)
    5 Expert: I'm among the best around 11% (1)
    Response Average 3.44
  • Marketing/sales communications
    1 No experience or proficiency 0% (0)
    2 Beginner: I don't do it well and largely rely on others for this skill 25% (2)
    3 Competent: I'm OK -- maybe not the best around 25% (2)
    4 Proficient: Colleagues come to me for this skill 50% (4)
    5 Expert: I'm among the best around 0% (0)
    Response Average 3.25
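
By the way, those "Response Average" figures are just the frequency-weighted mean of the 1-to-5 scale. Recomputing the Writing row as a sanity check:

```python
def response_average(counts):
    """Weighted average of a 1-5 rating scale.
    counts[i] = number of respondents choosing rating i+1."""
    total = sum(counts)
    weighted = sum(rating * n for rating, n in enumerate(counts, start=1))
    return weighted / total

# Writing: 7 respondents chose 4 (Proficient), 2 chose 5 (Expert).
writing = response_average([0, 0, 0, 7, 2])
print(round(writing, 2))  # matches the 4.22 reported above
```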

Sounds like a pretty proficient group to me, bless 'em.

This is my list of communications proficiencies -- is it the right list? What's missing?

Next post -- the session questions and responses.

Wednesday, July 27, 2005

Reminder -- send a reminder?

I've finished comparing the read receipts from two e-mail announcements. The first went out two weeks in advance, letting people know that a new online tool was going to launch on June 27. The second went out on June 27, telling people it was up and ready to use. Here's the time chart for the second message -- it looks very much like the first one.

Again, 60 percent of the e-mails were never opened; that's the bad news. The good news is that people who open the e-mails open them fast. Nearly 80 percent were opened on the first day; more than 90 percent by the second day; and more than 95 percent by the third day. Leaving aside the question of whether they read or absorbed the message, they did open it.

Was the reminder worth it? I think so. Here's a breakdown that shows how many people opened each message, with the percent of the total audience (1910 addressees):
  • Opened the first but not the second - 223 (12%)
  • Opened the second but not the first - 240 (13%)
  • Opened both - 551 (29%)
  • Total opened - 1,014 (53%)

The reminder e-mail accounted for 24 percent of the total opened e-mails.
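
If you're wondering where that breakdown comes from: once you have the two lists of addresses that returned receipts, it's simple set arithmetic. A sketch with stand-in addresses, since I obviously can't publish the real lists:

```python
# Addressees who opened each message (stand-ins for the real receipt lists).
opened_first = {"ann", "bob", "cal", "dee"}
opened_second = {"cal", "dee", "eve"}
audience_size = 8

first_only = opened_first - opened_second
second_only = opened_second - opened_first
both = opened_first & opened_second
total_opened = opened_first | opened_second

print(f"First only: {len(first_only)}")
print(f"Second only: {len(second_only)}")
print(f"Both: {len(both)}")
print(f"Total opened: {len(total_opened)} of {audience_size}")
# The reminder's contribution: opens it added, as a share of all opens.
print(f"Reminder share: {len(second_only) / len(total_opened):.0%}")
```

With the real lists, the same five lines produce the opened-first/opened-second/opened-both table above.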

The data shows 5 percent of the recipients deleted both messages without opening them. It may be interesting to know if they are doing that soon after the mailing -- that would indicate they just aren't taking our calls. But it's just 5 percent and there are bigger fish to fry.

I've still not absorbed the full impact of this data -- half of these e-mails are never opened. There are a couple of things that appear evident:

  • It looks like this particular corporate mailbox has a lousy brand. Readers don't expect value from its messages.
  • E-mail is fast, cheap and easy but it's not reaching a big segment of the population. We need to find another powerful vehicle that will appeal to the e-mail averse.

The basic data here -- e-mail never opened -- is ridiculously easy to gather. Anyone else going to try it?

I'm going to do similar experiments on mail from senior leaders to see how that changes the rate of mail opened.

Tuesday, July 19, 2005

Did I already say that I'm not an Excel expert?

I'm working through some of the read receipt stuff, which I'll post soon. However, I got some help from someone who really knows Excel (thanks, John!) and I wanted to pass on a couple of things now.

In an earlier post, I described grabbing the little black box on the lower right-hand corner of a cell and pulling it down to copy a formula. Well, it turns out you can just double-click that little black box and the formula automatically copies down, as long as there are filled cells in the column to the left.

Second, I'm using a function called VLOOKUP in Excel to compare two lists. Since I don't want to write an Excel users' manual here, I'm not sure how much of that process I'll be posting. When the time comes, let me know if you need the details and we'll figure something out.
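If you don't have Excel handy, the match/no-match comparison that a VLOOKUP column gives you can be sketched in a few lines of Python using sets (the addresses here are made up for illustration):

```python
# Compare two mailing lists the way a VLOOKUP match column would.
first_openers = {"alice@example.com", "bob@example.com", "carol@example.com"}
second_openers = {"bob@example.com", "dave@example.com"}

opened_both = first_openers & second_openers  # matched in both lists
first_only = first_openers - second_openers   # would show "#N/A" in a lookup against list 2
second_only = second_openers - first_openers

print(sorted(opened_both))  # ['bob@example.com']
```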

One other thing John mentioned -- Outlook allows the e-mail receiver to turn off the read receipt function from their end. How common is that, and will it mess up the data? We don't know, so that's something else we'll have to look into.

No one said this stuff was easy. It's just better than communicating in the dark.

Monday, July 11, 2005

No (big) surprises.

I already sprang the big surprise on the read receipt data -- most e-mails in this mailing didn't even get opened. Here's a chart over time of the e-mails opened, those deleted without being read, and those that have still not been opened or deleted.

If you can't read it, the blue are opened e-mails, the burgundy are e-mails deleted without being opened, and the yellow are still rotting in someone's e-mail box. You can see the launch day there on the far left. After two days, most of the e-mails that were ever going to be opened had already been opened.

The opened e-mails settled in at about 39 percent after eight or nine days and just stayed there. The deleted-without-opening share grew a bit, but there is certainly no mass exodus to clean out old e-mails. This particular e-mail was about a new online tool going live on June 27 -- just above the right-hand corner of the legend there at the bottom.

We did a reminder e-mail the day the tool went live. I'll post that next time. Then I'm going to compare the two lists to see if the same people were e-mail openers, deleters and ignorers. I want to see if the day-of-launch e-mail reached new people or just reminded the same group that opened the first announcement.

Tuesday, July 05, 2005

Well, I feel kind of stupid here, continuing to post when no one has replied. I guess I need to be more controversial.

Or, it could be that the comments of all my readers have been secretly deleted by activist judges! I wonder...

OK, back to the great read receipt research. I want to share a couple of steps I skipped -- believe it or not -- that will come in handy as you're trying to clean up your data.

First of all, the lists of e-mail receipts and e-mail recipients may have some duplicates in them. If, like me, you use group mailing lists, or do multiple mailings, some people are going to get the message more than once. Those should be cleaned up before doing the final analysis to remove some of the error that is going to creep in. Here's how I do it.
  1. I have all the recipients, in Column A in MS Excel. In the previous post I indicated that there were 1927 names there. I'm going to filter out all the duplicate names.
  2. I click on the column header ("A") to select the column, then go to Data>Filter>Advanced Filter.
  3. Click "Copy to another location" -- otherwise the duplicate results are just hidden and can mess you up later.
  4. Create a new list range where you want the filtered records. I generally use column B on the same page, making sure it's empty, of course.
  5. Click "Unique records only" and then "OK." Column B will now have the de-duped list. In my case, I now show 1904 unique recipients instead of 1927.

With this many records, a few duplicates are not likely to change the results dramatically; with smaller samples it becomes more important. However, it's a great sniff test no matter how large your sample. If half your data disappears, you know something went wrong somewhere.
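If you ever need to do the same de-duplication outside Excel, steps 1-5 boil down to one line of Python (the sample addresses are invented):

```python
# De-duplicate a recipient list while keeping the original order,
# like Excel's Advanced Filter with "Unique records only" checked.
recipients = [
    "alice@example.com",
    "bob@example.com",
    "alice@example.com",  # duplicate from a group mailing list
    "carol@example.com",
]

unique = list(dict.fromkeys(recipients))  # dicts preserve insertion order
print(len(recipients), "->", len(unique))  # 4 -> 3
```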

Sometimes your data will be hard to sort and analyze due to extra spaces in the records, especially at the front of a cell. Excel will not recognize that two entries are the same if one has a space or two before it and the other doesn't. For this problem, there's a great little function called TRIM.

  1. Start like we did above, with all your data in column A.
  2. Make sure column B is empty, then select the first cell in column B that is next to your first entry in column A. (If A4 has your first entry, select B4.)
  3. Go to Insert>Function.
  4. Type "trim" in the "Search for a function" field to find TRIM. (If it doesn't come up, make sure "Or select a category" is set to "All.") Click "OK."
  5. The new dialog box asks which cell you want to trim. Click cell A4, then click OK.
  6. Now, B4 should be the same as A4, with any extra spaces removed except for one space between words.
  7. There are a lot of ways to extend this same formula to all the cells in column B so the whole list is trimmed. Here's what I do: I hold my cursor over cell B4, over the lower righthand corner of the cell where there is a black square inset in the black border. When my cursor turns from a white cross to a black cross, I click my left mouse button to grab it and just pull it straight down. It will fill all the cells as I go, until I release it.
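For what it's worth, Excel's TRIM (strip the ends, collapse runs of spaces down to one) has a one-line equivalent in Python, handy if you're cleaning a list outside the spreadsheet:

```python
def excel_trim(text: str) -> str:
    """Roughly mimic Excel's TRIM: strip leading/trailing whitespace
    and collapse internal runs of whitespace to a single space.
    (Excel's TRIM only handles ordinary spaces; split() is broader.)"""
    return " ".join(text.split())

print(repr(excel_trim("  Smith,   John  ")))  # 'Smith, John'
```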

In my next post, I'm going to look at the timing of the read receipts. Here's why:

  • I want to know if the percentage of people who open the e-mails changes as time goes by. Do people keep old e-mail to read or just to eventually delete it? (At home, when my wife puts aging fruit in the refrigerator, I call it "the fruit hospice." Fruit goes into the refrigerator not to eventually be eaten, but to die out of sight so it can be decently thrown out. Is unopened e-mail more than a week old essentially in hospice?)
  • What's my window for readership? At what point can I expect to have reached, say, 80 percent of those who will ever read it? That will tell me something about how far in advance of a deadline I should be communicating.
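One way to answer the window question, once the receipts carry timestamps: sort the opens by days-after-send and find where the running count crosses 80 percent of everyone who ever opened. A quick sketch in Python -- the day counts here are invented, not my real data:

```python
# Days-after-send for each opened e-mail (invented sample: 100 opens total).
open_days = [0] * 76 + [1] * 12 + [2] * 5 + [5] * 4 + [9] * 3

open_days.sort()
target = 0.8 * len(open_days)  # 80% of eventual openers
count = 0
for day in open_days:
    count += 1
    if count >= target:
        print(f"80% of opens happened within {day} day(s)")
        break
```

With this sample the threshold is crossed on day 1, which is roughly the pattern in the real data above.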

I'm now collecting read receipts for more e-mail communications so I have a larger data set on which to base my conclusions. Try to contain your excitement.

Wednesday, June 29, 2005

I was writing about using read receipts to measure the effectiveness of e-mail messages. The first step was getting the information into Excel, a task that looked easy, and then got a lot tougher. Here's why.

These response receipts come as an e-mail in three format flavors:

  1. Employee name / Read: Message Title / Date-Time Received / Size
  2. Employee name / Not Read: Message Title / Date-Time Received / Size
  3. System Administrator / Message Delivered / Date-Time Received / Size

First, I just tried the obvious easy way -- I copied all the messages in the folder where I'd stored them and tried pasting them into an Excel spreadsheet. It worked! The e-mail headers pasted neatly into columns with the same labels as they have in Outlook: From / Subject / Received / Size.

I tried again later and it didn't work. All the message title information was in a single cell and there were no delimiters that could be used to separate the text to columns. I killed myself trying, finally figuring out how to select and export the data as an Excel file (In Outlook: File>Import and Export...>Export to file>Next>Microsoft Excel>Next>Select Folder to Export From>Save Exported File As (Browse)>Next>Finish).

I figured it out today. The key is to sort the e-mails by Subject in Outlook. If you do, you can simply cut and paste them into Excel. I then did a find-and-replace to cut out the title so I just had the names, whether it was read or deleted without being read, and when that happened.
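If you'd rather script that cleanup than do find-and-replace by hand, here's a rough Python sketch. The exact subject-line prefixes are an assumption based on the three flavors above, so adjust them to match your own receipts:

```python
# Tally read-receipt subject lines like "Read: <title>" / "Not Read: <title>".
# (Sender name and timestamp live in other columns of the export.)
subjects = [
    "Read: New Online Tool",
    "Not Read: New Online Tool",
    "Read: New Online Tool",
]

opened = sum(1 for s in subjects if s.startswith("Read:"))
deleted_unread = sum(1 for s in subjects if s.startswith("Not Read:"))
print(opened, deleted_unread)  # 2 1
```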

OK, what does the data reveal?

Out of 905 read/not read replies, 85 percent had been opened. Just 15 percent were deleted without being read. Sounds pretty good, though we know what we don't know -- did the people who opened the message actually read and absorb it?

What about those other responses -- the third "flavor" above? Turns out those are important. They show who actually received the message. (You may also have the same information from your distribution list.) Anyway, it turns out my message went out to 1928 addresses. Now the numbers are not so hot -- 40 percent opened it, 7 percent deleted it without opening it, and a whopping 53 percent have yet to touch it.
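The drop from 85 percent to 40 percent is just a change of denominator, which is easy to check:

```python
# Same opened count, two denominators: receipts returned vs. full audience.
opened = 769         # roughly 85 percent of the 905 receipts that came back
replies = 905        # read/not-read receipts received
distribution = 1928  # addresses the message actually went to

print(f"Share of replies opened: {opened / replies:.0%}")        # 85%
print(f"Share of audience opened: {opened / distribution:.0%}")  # 40%
```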

Next time, an even deeper dive.

Friday, June 24, 2005

Hello and welcome. This is a new blog where communicators can discuss methods for measuring the impact of their work. While my experience is in internal corporate communications, I hope some of the measuring methods can be used for all kinds of purposes.

I intend to get geeky here. What I hear from communicators is that they don't get the nuts-and-bolts of how to undertake measurement from most sources. They get theory. I'll do my best to give step-by-step descriptions of how this stuff works.

Anyway -- latest measurement thought.

I recently sent out a broadcast e-mail to a subset of our employee population. The e-mail mistakenly had both the read- and delivery-receipt options selected. Luckily, the message went out from a group mailbox, so I didn't get all the responses in my own Outlook e-mail. I saved them, though, and now I have 913 bits of data, with more coming in every day. I'm trying to see if they provide a bit of a view into what happens when we send this stuff out.

Here are a few of the things I might learn from this information:

  • How many people open these messages vs. just deleting them without reading them?
  • What is the timing of this activity? How long do they wait to take either action?

This is a start. Through a follow-up survey, I could find out if the people who opened the e-mail retained any of the message. I could find out why so many people never opened it, and if they received the same message through some other channel. Is there a better way to reach them? Are there regional differences in how they treat the data? Can I improve my "opened" numbers with clever subject lines or other tactics?

This all seems worth doing -- we rely on e-mail to an amazing degree. By looking at the first two bullets I can size the problem and decide if it's worth pursuing. I don't need any additional data to tackle those two, but I do have to figure out how to get these bounce-back responses into a spreadsheet so I can easily analyze them. I have to make the machine eat the work.

More on that process -- the nuts and bolts -- in my next post. I'd like to know what e-mail programs you use, so we can determine if this will work for systems that don't use Outlook.