Surveys and Their Dirty Little Secret; Hidden Distortion, Bias, Illusion of Scientific Validity: Business Beware

Surveys are one of the primary vehicles for collecting information that companies use to make important business decisions… According to Nick Wreden: when surveys are done right, they can increase a company's knowledge and understanding of the key issues that affect its business… but when done poorly (and many are), they can derail a business strategy and generate misguided business initiatives…

More businesses rely on surveys to learn what customers want, need, and value… how employees feel about their jobs, the company, the management… where and how markets and their industry are positioned for growth… The list of survey topics is endless, and surveys can be effective market research tools for many areas of an organization… But there are serious issues with many surveys, e.g.: What is the sampling methodology? Is the sample size large enough? Does it accurately reflect the population of interest? Are there biases?
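At least one of those questions, "Is the sample size large enough?", has a concrete arithmetic answer. The minimal sketch below is not from any of the articles cited here; the 95% confidence level and the example margins of error are illustrative choices. Note that the formula says nothing about bias or representativeness, which is exactly the gap the rest of this piece is about:

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum simple-random-sample size for estimating a proportion.

    p = 0.5 is the worst case (largest variance); z = 1.96 corresponds
    to a 95% confidence level. Finite-population correction is ignored.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

for moe in (0.05, 0.03, 0.01):
    print(f"+/-{moe:.0%} margin of error needs n >= {sample_size(moe)}")
# +/-5% -> 385, +/-3% -> 1068, +/-1% -> 9604 respondents
```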


Answers to these and other questions are critical if a survey is to provide sound, useful observations and actionable results. Used properly, surveys can be a valuable tool in helping companies understand their strengths and weaknesses, and in identifying areas of emphasis and focus for improving the customer experience… However, when surveys are designed and used improperly… they become useless, a waste of time, counter-productive…

According to Swanni: often the numbers obtained from surveys are based on what people 'say' they've done, not necessarily what they have 'actually' done, and these are two very different things. People often tell surveyors what they think they want to hear, and oftentimes these responses are false… While survey numbers may be interesting to observe, most of them should not be taken too seriously, since the 'real-world' numbers don't really back them up…

In the article Dirty Little Secret of Employee Surveys, Robert Gerst writes: Each year, hundreds of thousands of employees from the executive suite to the front lines are asked to complete an employee survey… Almost a billion dollars' worth of engagement surveys are sold in North America each year, and the case for improving employee engagement is impressive. For example, The Hay Group, a major survey provider, says that high levels of employee engagement can boost revenue growth by up to two and a half times…

Aon Hewitt notes that 20% of an organization's (most engaged) employees create 80% of the value… and the Gallup polling company claims that actively disengaged employees cost the U.S. economy up to $350 billion per year in lost productivity… Numbers like these get attention, and it may be worth spending a billion dollars to get $350 billion back. But do surveys really work? Is business getting a 'real' return on the survey investment? For many companies, increasingly, the answer is: No.

Worse, these surveys are likely to do more harm than good. For example, the dirty little secret of employee engagement surveys is that they are largely junk science, placing the marketing objective of telling and selling a good story above the practical and ethical objective of telling the truth. Often statistical methods are misused, corrupting survey results while providing an air of scientific legitimacy…

Some organizations base their management bonuses on engagement-score improvement, and managers are well aware that the fastest way to improve engagement scores is to fire anyone suspected of providing negative feedback. Many surveys are junk science, yet they continue to be used in ranking engagement results and in cherry-picking the factors that comprise engagement models… OK, then should business get rid of surveys? No, but they must be truthful: stop promoting survey results as scientific validation…


In the article Web Surveys' Hidden Hazards, Palmer Morrel-Samuels writes: Web-based surveys are increasingly used to measure employee attitudes and motivation, program effectiveness, staff performance, customer experience… However, few companies that embrace online surveys are aware of their fundamental problems…

Done correctly, they can produce useful insights and observations that otherwise would not be available… But done poorly, they can dramatically distort survey results, which can lead a business into making bad decisions… It's interesting to note that Web-based surveys typically yield higher scores than print surveys, while also producing lower response rates, a more restricted range of responses (fewer very high or very low scores), and a host of other distortions…

An important issue with online surveys is the 'skewing' of scores, which can undermine the validity and reliability of the survey's results… A few examples of 'skewing' are:

  • Opting Out: Response rates for Web surveys can be as much as 80% lower than those for their print counterparts… respondents often resist for a number of reasons, e.g., difficulty accessing the survey, problems navigating the questionnaire, concerns about confidentiality…
  • Sugar-coating: Poorly designed or biased Web surveys usually produce implausibly favorable responses… the survey's results are predetermined by the selection, design, and phrasing of the questions…
  • Skimming: In workplaces, printed surveys and Web surveys usually attract distinctly different respondents. The typical Web survey user has private access to a computer, holds greater responsibility, and is better paid. When a company offers both print and Web surveys, this self-selection bias means that the Web survey tends to skim higher-level respondents off the top, while lower-level employees stay with paper…
  • Clipping: Web surveys tend to elicit responses that are 'clipped'; that is, they are artificially compressed into a narrower range between the high and low scores. Clipped responses can seriously impede analysis by excluding important information…
  • Re-shuffling: Web surveys almost always reshuffle the rankings of scores. That is, when you compute the average response for each question in the survey and rank those averages from highest to lowest, the rankings from the two formats will most likely differ. This is serious because when the ranking of averages is disrupted, the correlations between questions are also disrupted, and it is these correlations that determine the outcome of any analysis… (the sketch after this list illustrates clipping and re-shuffling)
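To make 'clipping' and 're-shuffling' concrete, here is a minimal simulation sketch. It is not from Morrel-Samuels' article; the response distributions and the clipping rule (a 5 recorded as a 4, a 1 as a 2) are invented for illustration. It shows how compressing extreme scores can flip the rank order of question averages and attenuate the correlations between questions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical respondents

# Latent opinions for three questions; q3 is deliberately correlated with q1.
raw1 = rng.normal(4.2, 1.0, n)
raw2 = rng.normal(3.9, 0.3, n)
raw3 = 0.7 * raw1 + rng.normal(1.0, 0.8, n)

def to_scale(x):
    """Record an opinion as a 1-5 rating (the 'print' format)."""
    return np.clip(np.round(x), 1, 5)

def clip_extremes(x):
    """The hypothetical 'web' format: extremes get pulled toward the middle."""
    return np.where(x == 5, 4, np.where(x == 1, 2, x))

p1, p2, p3 = to_scale(raw1), to_scale(raw2), to_scale(raw3)
w1, w2, w3 = clip_extremes(p1), clip_extremes(p2), clip_extremes(p3)

# Re-shuffling: q1 outranks q2 on paper, but clipping hits q1's many 5s
# hardest, so the rank order of the averages can flip in the web format.
print("print means:", p1.mean(), p2.mean(), p3.mean())
print("web means:  ", w1.mean(), w2.mean(), w3.mean())

# Clipping also attenuates the correlation between questions, which is
# what most downstream analysis rests on.
print("corr(q1,q3) print:", np.corrcoef(p1, p3)[0, 1])
print("corr(q1,q3) web:  ", np.corrcoef(w1, w3)[0, 1])
```

With these invented numbers, the question whose responses contain the most 5s loses the most from clipping, so its average sinks in the web format even though the underlying opinions are identical in both formats.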

In the article Trusting Polls: Confidence Intervals and Survey Bias, Oleh Iwanyshyn writes: To survive, a pollster (i.e., a person who prepares, conducts, and/or analyzes surveys) learns very quickly that the most important factor in polling (i.e., the process of doing surveys) is the relationship between the pollster and his or her client. The pollster realizes that the client must be satisfied with the results of the poll (i.e., the survey), otherwise there is no more business. For most surveys, human nature being what it is, the pollster understands that the client is not looking for 'negativity' in the survey results…

So the most critical part of the survey (polling) process is when the pollster tries to figure out, in advance of the poll, which results will make the client happy and satisfied… Once the pollster has learned this important truth, he or she can proceed to do the survey, i.e., provide polling (survey) data that will make the client happy, i.e., tell the client what they want to hear… As you can appreciate, this 'understanding of happiness' flips everything upside down. Instead of surveys revealing 'truths', they become marketing tools that provide an unearned 'scientific' validation for the speculative ideas of the client…

While this may be a slight exaggeration, more often than not the dynamic between pollster and client is such that there is no doubt this is the operative equation in organizing and conducting surveys… This is why the most important question in assessing the merits of a survey is: Who is funding the survey?

There is a mountain of research evidence that survey results tend to be biased in favor of the interests of the clients who fund them. The second key question in assessing the merits of a survey concerns its design… Since pollsters typically want to maximize profits, they can do so by minimizing the costs of the survey's design and implementation, which usually means the 'sampling' is of lower quality and more biased, which in turn increases the likelihood of flawed results that are distorted and less reliable…


In the article Hidden Danger of Survey Bias, Fred Van Bennekom writes: Perhaps the most frequently raised topic in the field of surveys is response rates. What's a good response rate? What statistical accuracy comes with a certain response rate? But what is often overlooked is that the response rate is not the only concern; 'bias' in the response group is of equal concern. That is: it's often suggested that the greater the number of responses, the better the statistical accuracy, but this is a false sense of security when there is bias in the survey sampling. It's not possible to have a high degree of statistical accuracy when there is bias in the sampling, or other misleading or bad data in the results… which means the survey is not a true or real representation of the actual population of interest.

Hence, most surveys are flawed; that is, the findings from the sample used in the survey will not match (or, in many surveys, even come close to) the results you would get if you successfully got everyone in the population to complete the survey. This difference is known as sampling error, and the statistical accuracy tells you how much sampling error there is in the data…
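Van Bennekom's distinction between sampling error and bias is easy to see in a simulation. The sketch below is not from his article; the hypothetical population (40% satisfied) and the non-response rule (dissatisfied customers respond half as often) are invented for illustration. The margin of error shrinks as the sample grows, but the biased respondent pool keeps missing the true value no matter how large the sample gets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of 100,000 customers; 40% are truly "satisfied".
population = rng.random(100_000) < 0.40

for n in (100, 1_000, 10_000):
    invited = rng.choice(population, size=n, replace=False)

    # Unbiased case: every invited customer responds.
    unbiased_est = invited.mean()

    # Biased case: dissatisfied customers respond only half as often, so
    # the respondent pool over-represents the satisfied (non-response bias).
    responds = rng.random(n) < np.where(invited, 1.0, 0.5)
    biased_est = invited[responds].mean()

    # Conservative 95% margin of error for a proportion.
    moe = 1.96 * np.sqrt(0.5 * 0.5 / n)
    print(f"n={n:>6}  unbiased={unbiased_est:.3f}  "
          f"biased={biased_est:.3f}  true=0.400  moe=+/-{moe:.3f}")
```

With these invented numbers, the biased estimate settles near 0.57 regardless of n: more responses simply tighten the confidence interval around the wrong answer, which is exactly the false sense of security described above.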

However, there is also survey bias, which is different from, but just as important as, statistical accuracy… According to Roberta L. Sangster, surveys have hidden dangers that can turn them into swamps of complexity, inaccuracy, bias, and useless, unactionable results that fail to provide the basis for effective decision-making… Most important, and very troubling, are the many major management decisions that are made based on faulty data gathered from surveys…