City's labor survey too flawed to be effective


Workers in the restaurant industry often face unfair scheduling practices.

It can be hard to distinguish between those who choose to work part time and those who are underemployed. This can make it difficult to create policy that governs the relationships between workers and firms.

As city officials consider new rules to crack down on unreasonable scheduling practices, they seem to acknowledge this uncertainty by commissioning a survey to “identify the potential impacts of policy changes.” Unfortunately, the city’s efforts are stymied by the study’s methods, which fall short of what is needed to accurately represent the Seattle worker population.

Foremost, the survey commissioned by the City of Seattle relies on a “snowball” sampling method, whereby respondents are first recruited by “media outlets, community-based organizations, and the business community,” according to the press release announcing the study. Those respondents who agree to participate are then asked to share the survey with colleagues.

The problems associated with “respondent driven” sampling methods are well documented — biased results are virtually inevitable.

In the case of sampling workers for a study about scheduling practices, having workers who are highly motivated to respond to the survey pass the opportunity along to colleagues all but guarantees that the findings will show an outsized proportion of unhappy workers and overstate the extent of their unhappiness. After all, workers satisfied with the status quo have little to no incentive to participate in a study that inherently seeks change. Accurate estimates of the prevalence of workplace abuse are simply unattainable from the sample the city’s survey draws on.

And since the survey is political in nature, the incentives to “ballot stuff” are reasonably high. This possibility is made more likely by the fact that the survey was conducted only online. Because of this, potential respondents who find themselves on the wrong side of the digital divide — which disproportionately falls along race, age, class, and gender lines — are excluded altogether.

Apart from sampling issues, other methodological choices made by the city-commissioned researchers undermine the validity of the results. For example, the study offers a reward to the “first 500” respondents. Differential rewarding of this kind is likely to exacerbate the unwanted effects of snowball sampling, as respondents of lower socioeconomic status may be motivated by rewards and overrepresent themselves in the sample. Worsening matters, people’s motivation to answer accurately is weakened when they are paid for participation.

Telescoping — the phenomenon whereby individuals misremember when events occurred and report events from outside the period being asked about — is an issue with any task that asks people to recall the past. The City of Seattle survey increases the likelihood of telescoping effects by changing the reference period from question to question (e.g., “in the past two weeks did your employer…”, “the last time you…”). The likely result is an indistinguishable mix of respondent reports based on recent events and those of a distant past.

Another survey phenomenon that biases results is known as acquiescence response bias: the over-reporting of ‘yes’ responses because people tend to be polite and agreeable toward researchers, especially “Important Researchers Commissioned by the Government.” The survey is loaded with yes-or-no questions, and the survey software doesn’t appear to randomize the order of response options across respondents — all but ensuring an over-reporting of respondents who agree with the scheduling-practice abuses described in the questions.

Taken together, these flaws suggest that the total survey error — the combined error produced from each component of survey design and analysis — is undoubtedly high.

What is needed, then, is a large-scale, highly valid probability study that measures not just the scheduling practices of employers, but also the scheduling policies desired by workers and their reasons for choosing part-time work. Only then can policy makers accomplish their stated goal of arming themselves with the pertinent and consequential information needed to make policy choices that yield the most upside for the greatest number of people.


About the Authors & Contributors

Philip Garland

Philip Garland, Ph.D., is a Visiting Scholar in the Center for Communication, Difference, and Equity in the Department of Communication at the University of Washington and former Vice President of Methodology at SurveyMonkey.