
Improving Survey Analysis: 3 Steps to Get More from Customer Feedback

Even if you collect customer feedback, it won’t have much value if your survey analysis falls short.

From not preparing data correctly to jumping to conclusions based on statistically insignificant data, a lot can go wrong. Thankfully, there are some easy wins.

This post shows you how to improve survey analysis in three steps:

1. Choose the right questions to get useful data;
2. Prepare your data for analysis;
3. Conduct a driver analysis on your data.

Before any of that, however, you need to clarify your goals.

Know what you want to get out of your customer survey

What do you want to find out? Before considering a particular research method or survey question, you must define the research question. Which issues do you need to tackle to make more money or keep your customers happy? (Importantly, your research question is not the same question that you ask users.)

For customer surveys, the biggest divide is between quantitative and qualitative research. The common refrain is that quantitative data tells you the “what” while qualitative data tells you the “why.” But even that has its limitations:

The problem with the “data can tell you WHAT is happening but it can’t tell you WHY it’s happening” trope is that it implies that that’s a shortcoming somehow. There are NO analytical models that can truly explain WHY things happen. Even the user stories we gather.

— Samuel Hulick (@SamuelHulick) February 9, 2019

No single article can cover survey analysis for every type of research. The lessons in this post apply broadly to many types of surveys that seek to understand how people feel about your company, product, or service and why they feel that way.

Still, for simplicity, this article uses Net Promoter Score as an example. For one, it’s widely used. But it also showcases why a single-question, quantitative survey limits your analysis.

As such, it’s an opportunity to highlight how powerful your analysis can become if you add (and code and analyze) qualitative follow-up questions.

Once you’ve figured out the question that your company needs to answer, you can move on to finding the right ones to pose to your customers.

1. Choose the right questions

Good survey analysis starts with good survey design. Asking the right questions sets you up for success with your analysis. For customer feedback, everyone knows the classics on the menu: customer satisfaction, Net Promoter Score (NPS), and customer effort score.

But, for decades, market researchers have faced a quandary. They want to maximize the response rate while also learning as much as possible. And most customers and users (80%, to be specific) who are faced with a laundry list of survey questions will bail.

If you have to reduce your laundry list down to a single item or question, what do you ask? For better and worse, the answer for many has been NPS. The limitations of NPS are a useful way to understand how survey design impacts your analysis.

[Image: NPS survey example]

The limitations of Net Promoter Score

When introducing NPS to the world, Frederick Reichheld rightly said that existing customer satisfaction survey approaches weren’t effective. They were overly complex, outdated by the time they reached front-line managers, and frequently yielded flawed results.

Yet the idea that NPS is the “one number you need to grow” has set many on the wrong path. (Learn more about its limitations here.)

NPS’s simplicity is the primary reason why so many swear by it, but that simplicity is also its flaw. NPS should always be asked, and even more importantly analyzed, in tandem with other questions.

While NPS seems clear cut, it can be quite vague:

Why did your customer pick a rating of 6 rather than 7 (making them a detractor rather than a mere passive)? What is the statistical difference between a score of 9 and a score of 10? How should it influence decision-making?

[Image: customer satisfaction gauge]

Questions that involve likelihood to revisit, repeat purchases, or product reuse can be just as good (if not better) predictors of customer loyalty, depending on your product or industry.

So what’s the answer?

Ladder questions to gain better insights

Thus, we get to the crux of choosing the right questions: ask more than one. You should ladder your questions to find out more information. One-question surveys are tempting. They deliver numbers promptly without lengthy analysis. But they’re insufficient.

(The good news: analyzing more complex data doesn’t require as much effort as you think. More on that later.)

Laddering questions lets survey respondents elaborate on the answers they gave. This can be as simple and effective as an open-ended question after a closed question. The purpose is to determine the “why,” which will allow you to perform the driver analysis in Step 3.

If a customer gives an NPS of 6, you may ask as your follow-up, “What inspired you to give us a 6?” or “What is the most important reason for your score?”

Another option for customer satisfaction surveys is to ladder questions with a second closed question. For instance, consider the following questions for ACME, a fictional bank:

How would you rate your satisfaction with ACME bank? How would you rate the following from ACME bank:

Fees;
Interest rates;
Phone service;
Branch service;
Online service;
ATM availability.

This would give you the data to help determine which aspects of the bank’s service influence overall satisfaction.

Once you’ve supplemented your initial score with follow-up answers, you can clean and code the data to make your survey analysis far more powerful.

2. Prepare your data for analysis

One of the most important components of data processing is quality assurance. Only clean data will give you valid results. If your survey employs logic, a first step is to ensure that no respondent answered questions they shouldn’t have answered.

Beyond that simple check, the process has two main components: data cleaning and data coding.

Cleaning data

Cleaning messy data involves:

Identifying outliers;
Deleting duplicate records;
Identifying contradictory, invalid, or dodgy responses.

Two types of respondents often muck up your data: speedsters and flatliners. They can be especially problematic when rewards are associated with the completion of your survey.

Speedsters. Speedsters are respondents who complete the survey in a fraction of the time it should have taken them. Therefore, they could not have read and answered all the questions properly.

You can identify speedsters by setting an expected length of time to complete the whole survey or a section of it. Then, remove any respondents who fall significantly short of that time.

An industry standard is to remove respondents who complete the survey in less than one-third of the median time.
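If you export completion times alongside responses, applying that cutoff takes only a few lines. Here’s a minimal sketch in Python with pandas; the column names and timings are made up:

```python
import pandas as pd

# Hypothetical response data; 'seconds' is each respondent's completion time.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "seconds": [310, 290, 85, 330, 300],
})

# Industry rule of thumb from above: drop anyone who finished in
# less than one-third of the median completion time.
threshold = df["seconds"].median() / 3
clean = df[df["seconds"] >= threshold]

print(f"Median: {df['seconds'].median():.0f}s, cutoff: {threshold:.0f}s")
print(clean)  # respondent 3 (85s) is flagged as a speedster and removed
```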

Flatliners. Flatlining, sometimes called straight-lining, happens when a respondent picks the exact same answer for all the items in a series of ratings or grid questions.

For example, they may pick the answer 1 for a series of rating questions like “On a scale of 1-5, where 1 means ‘not satisfied’ and 5 means ‘extremely satisfied,’ how would you rate each of the following alternatives?”

[Image: flatliner on a survey] Including contradictory questions can help “catch out” flatliners in your surveys.

Flatliners pollute survey data by marking the same response for every answer. Occasionally, you may want to design your survey to try to catch flatliners. You can do so by asking contradictory questions or including a form of front/back validation, which asks the same question twice.

For example, you may ask respondents to select their age bracket twice but place the age brackets in a different order for consecutive questions. Doing so, however, will make your survey longer.

Questions like the one above can filter out inattentive survey takers, but they may lower finish rates.

You must balance maximizing completion rates with clean data. It’s a judgement call. How important was the series of questions they flatlined on? Did it determine what other questions they got asked? If they flatline for three or four consecutive responses, are their other responses still valid?

You can use practical considerations to help determine the balance. For example, if you want to survey 1,000 people and end up 50 over your quota, it may be easier to take a more aggressive approach with potentially dodgy respondents.
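However you set that balance, you first need to find the flatliners. A minimal Python sketch, assuming a hypothetical ratings grid with one column per item:

```python
import pandas as pd

# Hypothetical ratings grid: one column per item, values 1-5, one row per respondent.
grid = pd.DataFrame({
    "q1": [1, 4, 3],
    "q2": [1, 4, 2],
    "q3": [1, 2, 5],
    "q4": [1, 5, 4],
})

# A flatliner gives the exact same answer to every item in the grid.
is_flatliner = grid.nunique(axis=1) == 1
print(grid[is_flatliner])  # respondent 0 flatlined with all 1s
```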

There are other quality-control measures you can implement in your survey design or check during data cleaning.

Make open-ended questions mandatory. If a respondent offers gibberish answers (random letters or numbers, etc.), review their other answers to decide whether to remove them.
Put in a red-herring question to catch speedsters, flatliners, and other inattentive respondents. Include fake brands or patently fake products in an answer list. (One note of caution: Make sure that these fake items don’t have similar spellings to existing brands.) Flag respondents who select two or more fake items.

Once you have clean data, you can start coding: manually for smaller datasets, programmatically for large ones.

Coding open-ended text data

Manual coding for small datasets

The traditional method of dealing with open-ended feedback is to code it manually. This involves reading through some of the responses (e.g. 200 randomly selected responses) and using your judgment to identify categories.

For example, if a question asks about attitudes toward a brand’s image, some of the categories may be:

Fun;
Worth what you pay;
Innovative, etc.

This list of categories and the numerical codes assigned to them is known as a code frame. After you have a code frame, you read all the data entries and manually match each response to an assigned value.

For example, if someone said, “I think the brand is really fun,” that response would be assigned a code of 1 (“Fun”). The responses can be assigned one value (single response) or multiple values (multiple response).

You end up with a dataset that looks something like the following:
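As an illustration, here’s what a code frame and a few manually coded, multiple-response entries might look like in Python; the categories come from the example above, and the responses are hypothetical:

```python
# A tiny code frame mapping numeric codes to category labels.
code_frame = {1: "Fun", 2: "Worth what you pay", 3: "Innovative"}

# Each open-ended response is matched to one or more codes by a human coder.
coded_responses = [
    {"respondent_id": 1, "text": "I think the brand is really fun", "codes": [1]},
    {"respondent_id": 2, "text": "Fun, and worth every cent",       "codes": [1, 2]},
]

for row in coded_responses:
    labels = [code_frame[c] for c in row["codes"]]
    print(row["respondent_id"], labels)
```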

If your dataset is too big to code manually, there’s another option: sentiment analysis.

Sentiment analysis to code big datasets

There is another method of text analytics called sentiment analysis or, sometimes, sentiment mining. Text analytics uses an algorithm to convert text to numbers to perform a quantitative analysis.

In the case of sentiment analysis, an algorithm automatically counts the number of positive and negative terms that appear in a response and then subtracts the negative count from the positive to generate a sentiment rating:

In the above examples of sentiment analysis, the top response would generate +2, while the bottom would generate -2.
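Under the hood, the simplest versions of this are just word counting. A toy sketch in Python, with made-up lexicons (real tools use far larger, weighted word lists):

```python
import re

# Toy lexicons; real sentiment tools ship far larger, weighted word lists.
POSITIVE = {"love", "great", "fun", "fast", "reliable"}
NEGATIVE = {"hate", "slow", "broken", "expensive", "awful"}

def sentiment_score(text: str) -> int:
    """Positive-term count minus negative-term count, as described above."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love how fun it is"))    # +2
print(sentiment_score("Slow, expensive, awful"))  # -3
```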

While sentiment analysis seems simple and has some advantages, it also has limitations. There will always be a degree of noise and error. Sentiment analysis algorithms struggle with sarcasm and often are poor interpreters of meaning (for now, at least).

For example, it’s difficult to train a computer to correctly interpret a response like “I love George Clooney. NOT!” as negative. But if the alternative is trawling through thousands of responses, the trade-off is obvious.

Not only is sentiment analysis much faster than manual coding, it’s cheaper, too. It also means that you can quickly identify and filter for responses with extreme sentiment ratings (e.g. -5) to better understand why someone had a strong reaction.

There are a few keys to making sentiment analysis coding work:

Ask shorter, direct questions to solicit an emotion or opinion without leading respondents. For example, replace “What do you think about George Clooney?” with “How do you feel about George Clooney?” or “When you think about George Clooney, what words come up?” The latter questions are less likely to generate ambivalent, tough-to-interpret responses.
Avoid employing sentiment analysis on follow-up questions. For example, responses to “Why did you give us a low score?” are not suitable for sentiment analysis because the question is asked of people already leaning toward a negative sentiment.
Be wary of multiple opinions in a single question. Answering multiple questions will skew your results toward the middle ground, which is why shorter and more direct questions about a single aspect are better.

Before proceeding to statistical analysis, you need to summarize your cleaned and coded data.

Summarize your survey data for analysis

NPS. Calculating your NPS is simple. Divide your respondents into three groups based on their score: 9-10 are promoters, 7-8 are passives, 0-6 are detractors. Then, use the following formula:

% Promoters - % Detractors = NPS

One additional note: NPS reduces an 11-point scale to a 3-point scale: Detractors, Passives, and Promoters. This can limit your ability to run statistical tests in most software.

The solution is to recode your raw data. Turn your detractor values (0-6) into -100, your passives (7-8) into 0, and your promoters (9-10) into 100.

You can manually recode and calculate your NPS in Excel using an IF formula.
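A nested formula along the lines of =IF(A2>=9,100,IF(A2>=7,0,-100)) handles the recode, assuming scores sit in column A. Here’s the same logic as a short Python sketch, with made-up scores:

```python
import pandas as pd

scores = pd.Series([10, 9, 8, 6, 3, 9, 7, 10])  # hypothetical 0-10 ratings

def recode(score: int) -> int:
    # Detractors (0-6) -> -100, passives (7-8) -> 0, promoters (9-10) -> 100.
    if score <= 6:
        return -100
    return 100 if score >= 9 else 0

recoded = scores.map(recode)
# The mean of the recoded values is % promoters minus % detractors, i.e. your NPS.
print(recoded.mean())  # 25.0: 50% promoters - 25% detractors
```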

Customer satisfaction score. Generally, for a customer satisfaction score, you ask customers a question like: “How would you rate your overall satisfaction with the service you received?” Respondents rate their satisfaction on a scale from 1 to 5, with 5 being “very satisfied.”

To calculate the percentage of satisfied customers, use this formula:

(customers who rated 4-5 / total responses) x 100 = % satisfied customers
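In code, that’s nearly a one-liner. A quick sketch with hypothetical ratings:

```python
ratings = [5, 4, 3, 5, 2, 4, 4]  # hypothetical 1-5 satisfaction ratings

# Satisfied = rated 4 or 5, per the formula above.
csat = sum(r >= 4 for r in ratings) / len(ratings) * 100
print(f"{csat:.0f}% satisfied")  # 5 of 7 respondents -> 71%
```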

Now you can move on to driver analysis of your survey data.

3. Conduct a driver analysis

Also known as key driver analysis, importance analysis, and relative importance analysis, driver analysis quantifies the importance of predictor variables in predicting an outcome variable.

Driver analysis is helpful for answering questions like:

“Should we focus on reducing costs or improving the quality of our products?”
“Should we focus our positioning as being innovative or dependable?”

This component of survey analysis consists of five steps.

Step 1: Stack your data

To conduct a driver analysis, stack your data. A stacked data format looks like the table below:

[Image: how to stack data for survey analysis]

In the above example, the first column is your quantitative metric (e.g. NPS), while the second, third, and fourth columns are coded responses to open-ended follow-up questions.
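As a minimal sketch of that layout, with hypothetical column names and values:

```python
import pandas as pd

# Hypothetical stacked layout: outcome metric first, then one column per
# coded follow-up category (1 = mentioned in the open-ended answer, 0 = not).
stacked = pd.DataFrame({
    "nps":        [9, 3, 7, 10],
    "fun":        [1, 0, 0, 1],
    "worth_it":   [1, 0, 1, 1],
    "innovative": [0, 1, 0, 1],
})
print(stacked)
```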

Step 2: Choose your regression model

There are several different types of regression models. Software can, and should, do the heavy lifting. The key decision for a researcher is which model to use. The most common models are linear regression and logistic regression. Most studies use a linear regression model, but the decision depends on your data.

It’s beyond the scope of this article to explain which model to use, as the answer is specific to your data. Briefly, linear regression is appropriate when your outcome variable is continuous or numeric (e.g. NPS).

Logistic regression should be used when your outcome variable is binary (e.g. has two categories, like “Do you prefer coffee or tea?”). You can find a more detailed explanation in this ebook.

In the steps below, I run through an example to show how easy it is to do a regression analysis once you’ve chosen a model. If you prefer to see the process as part of an interactive tutorial, you can do so here.

Step 3: Run your data through a statistics tool

We completed a survey that asked respondents the following questions:

1. How likely are you to recommend Apple? (standard NPS question)
2. What is the most important reason for your rating? (open-ended response)

We wanted to see the key brand attributes that led respondents to recommend Apple.

We coded the free-text answer to question two, which gave us a coded dataset (sketched below).

We loaded this into our analysis software (Displayr), and we ran a logistic regression to see which of the brand attributes were most important in determining the NPS score.

(Most standard software packages like Stata, SAS, or SPSS offer logistic regression. In Excel, you can use an add-in like XLSTAT. There are also some handy YouTube videos, like this one by Data Analysis Videos.)
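For a sense of what this looks like outside point-and-click tools, here’s a minimal sketch using Python’s statsmodels. The coded data are made up, and the 0-10 rating is collapsed to a binary promoter/non-promoter outcome, since logistic regression needs a binary outcome:

```python
import pandas as pd
import statsmodels.api as sm

# Made-up coded data: 1/0 attribute mentions plus the 0-10 NPS rating.
df = pd.DataFrame({
    "nps":        [10, 9, 10, 6, 3, 9, 9, 7, 4, 10, 5, 2],
    "fun":        [1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "worth_it":   [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0],
    "innovative": [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})

# Binary outcome for logistic regression: promoter (9-10) vs. not.
y = (df["nps"] >= 9).astype(int)
X = sm.add_constant(df[["fun", "worth_it", "innovative"]])

model = sm.Logit(y, X).fit(disp=0)
# statsmodels reports z statistics here; they play the same role as the
# t values discussed below: the further from 0, the stronger the predictor.
print(model.tvalues)
```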

The further the t value is from 0, the stronger the predictor variable (e.g. “Fun”) is for the outcome variable (NPS). In this instance, “Fun” and “Worth what you pay for” have the greatest influence on NPS.

Since our estimates for “Fun” and “Worth what you pay for” are somewhat close together, we ran a relative importance analysis to be sure of our results:

[Image: relative importance analysis]

This second analysis is a hedge against a potential shortcoming of logistic regression, collinearity. Collinearity “tends to inflate the variance of at least one estimated regression coefficient. This can cause at least some regression coefficients to have the wrong sign.”
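One common way to check for collinearity before trusting your coefficients is the variance inflation factor (VIF). A minimal sketch, reusing the made-up predictors from above:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Same made-up 1/0 attribute mentions as in the regression sketch above.
X = add_constant(pd.DataFrame({
    "fun":        [1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "worth_it":   [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0],
    "innovative": [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1],
}))

# Rule of thumb: a VIF much above 5-10 signals problematic collinearity.
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))
```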

Step 4: Test for significance

How do you know when( quantitative) changes in your customer satisfaction feedback are significant? You need to separate the signal from the noise.

Statistical significance testing is automated and built into most statistical packages. If you need to work out statistical significance manually, read this guide.
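For instance, one common approach to comparing NPS across two survey waves is a t-test on the recoded -100/0/100 values. A minimal sketch with made-up data:

```python
from scipy import stats

# Hypothetical recoded NPS values (-100/0/100) from two survey waves.
wave_1 = [100, 0, -100, 100, 0, 0, -100, 100, 100, -100]
wave_2 = [100, 100, 0, 100, 0, 100, -100, 100, 0, 100]

# Welch's t-test on the recoded values: is the NPS change signal or noise?
t_stat, p_value = stats.ttest_ind(wave_1, wave_2, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Conventionally, p < 0.05 suggests the difference is unlikely to be noise.
```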

(You should also be aware of the limitations and common pitfalls of statistical significance, which you can read about here, here, or here.)

Step 5: Visualize your data

Finally, visualize the results to share with colleagues. The objective is to enable them to understand the results without the distraction of number-heavy tables and statistics:

[Image: brand attribute visualization]

One way to visualize customer feedback data, especially NPS, is to plot the frequency or percentage of the ratings. For example, if you want to show the distribution of ratings within each promoter group, you can use a bar pictograph visualization.

The pictograph bar chart below has been color-coded to make it easier to distinguish between groups. It clues us in to an important observation: among detractors, there is a far bigger concentration of scores of 5-6 than 0-4.

[Image: net promoter score visualization]

These 5-6 detractors are much more likely to be swayed than those who gave a score closer to 0. With that in mind, you could concentrate your analysis of qualitative responses on that group.
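If you want to build this kind of chart yourself, here’s a minimal matplotlib sketch with made-up counts:

```python
import matplotlib.pyplot as plt

# Hypothetical counts of each 0-10 rating.
counts = [2, 1, 3, 4, 6, 9, 12, 10, 8, 7, 6]

# Color-code by promoter group: detractors (0-6), passives (7-8), promoters (9-10).
colors = ["#d62728"] * 7 + ["#ff7f0e"] * 2 + ["#2ca02c"] * 2

plt.bar(range(11), counts, color=colors)
plt.xticks(range(11))
plt.xlabel("NPS rating")
plt.ylabel("Respondents")
plt.title("Distribution of ratings by promoter group")
plt.show()
```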

Comparison visualizations. You can also compare the NPS of different brands to benchmark against competitors. The stacked bar chart below shows responses by promoter group for different tech companies.

While we lose some detail by aggregating ratings by promoter group, we gain the ability to compare different brands efficiently:

At a glance, it’s easy to spot that Google has the most promoters and fewest detractors. IBM, Intel, HP, Dell, and Yahoo all struggle with a high number of detractors and few promoters.

This visualization also shows the size of the passive group relative to the other two. (A common mistake is to concentrate exclusively on promoters or detractors.)

If the size of your passive group is big relative to your promoters and detractors, you could dramatically improve your NPS by nudging them over the line. (There’s also the risk, of course, that they could swing the other way.)

Visualizations over time. You can track your NPS over time with a column or line chart (if you have enough responses over a long period of time). These charts can help indicate the impact of marketing campaigns or product changes.

Be careful, though. If your NPS fluctuates (perhaps due to seasonal sales or campaigns), it can be difficult to spot a trend. A trendline can indicate the overall trajectory.

For example, in the chart below, it would be difficult to spot which way the NPS is trending without the help of a trendline:

[Image: customer satisfaction over time visualization]

Conclusion

When it comes to improving survey analysis, a lot comes down to how well you design your survey and prepare your data.

No single metric, NPS included, is perfect. Open-ended follow-up questions after closed questions can help provide context.

There are a few other keys to successful survey analysis:

Ladder your questions to get context for scores and ratings.
Design your survey well to avoid headaches when prepping and analyzing data.
Clean your data before analysis.
Use driver analysis to get that “Aha!” moment.
Manually code open-ended questions for small surveys; use sentiment analysis if you have thousands of responses.
Test for significance.
Visualize your data to help it resonate within your organization.

