CSPS Virtual Café Series: Everything You Wanted to Know About Polling, with Anil Arora, Claire Durand and Nik Nanos (TRN5-V17)

Description

This event recording features a conversation with Anil Arora, Claire Durand and Nik Nanos on how the media affects polling consumption and quality, whether polls are suffering from fewer and less-diverse participants, and what intelligent consumers should look for when trying to figure out the quality of a poll.

Duration: 01:00:27
Published: January 13, 2022
Type: Video

Event: CSPS Virtual Café Series: Everything You Wanted to Know About Polling with Anil Arora, Claire Durand and Nik Nanos



Transcript

Transcript: CSPS Virtual Café Series: Everything You Wanted to Know About Polling, with Anil Arora, Claire Durand and Nik Nanos

[The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, opening it like a book. A maple leaf appears in the middle of the book, which also resembles a flag with curvy lines beneath. Text beside it reads: "Webcast | Webdiffusion." It fades away to a video chat panel. It's a close-up of a bald man with a trimmed beard and square glasses, Taki Sarantakis. He sits in a home library.]

Taki Sarantakis: Good morning and welcome to the CSPS virtual café, where we bring contemporary ideas to public servants to help them better think about and better execute all the tasks that public servants have to do.

[A purple title card fades in for a moment, identifying him as being from the Canada School of Public Service.]

Today, we are talking about a very interesting subject, polling, and we often think of polling in terms of elections. Let's call that the sexy part of polling. Polling is actually a huge part of the government's decision‑making process, including the Government of Canada's decision‑making process. Polling is, in some ways, a fancy way of saying: how do we know the sentiment of people? How do we measure or indicate or profess what we think they want or what they know or any other type of question. Today, we are joined by three wonderful experts in this field.

[Three more panels join Taki, putting his in the bottom left corner. On the top left, a woman with short, wavy silver hair, headphones and cat-eye glasses, Claire Durand, sits in front of long, thin panels of wall-mounted art. On the top right, a grey-haired man in a suit, Nik Nanos, sits in an office with a large wood desk. On the bottom right, a man in a pinstriped shirt, Anil Arora, sits in front of a pink and white background with a bar graph and a maple leaf. The Canada wordmark "Canada" with a little flag waving over the "a" sits in the bottom right corner of his panel.]

The first is Professor Claire Durand from the University of Montreal, and she is a professor in the Department of Sociology. More important, for our purposes, she is one of Canada's, if not the world's, top experts on polling methodology. She is also a past president of the World Association for Public Opinion Research. Welcome, Professor Durand. The other two that are here are actually long-standing friends of the Canada School and also of Canada's public service. The first is Mr. Nik Nanos, who is the Chief Brainiac of Nanos Research Group. The third, and final, is Anil Arora, who is the Chief Brainiac, also known as the Chief Statistician, at Stats Canada. We're going to spend the next hour talking about ideas. Grab a coffee and join us. We're going to start with a very big question. That big first question is as follows: in the aftermath of the US elections in both 2016 and 2020, we had a lot of people talk about how we're not quite sure who the winner is, but we know who the loser is, and the loser is polling.

[Taki's panel fills the screen for a moment. All panels quickly return. A Mac system notification pops up. A cursor appears and dismisses it.]

The polling industry took one on the chin in 2016 and 2020. That'll be a good way for us to start talking about some of the issues that are intrinsic to polling. Who wants to start us off? I'll ask Nik and then Claire and then Anil. Nik.

[Taki's panel fills the screen for a moment. The others return.]

Nik Nanos: Thanks, Taki. I'm just looking at my watch because it took about 30 seconds for you to throw me under the bus as the pollster on the call, but thank you. I do feel welcome. I do look forward to crawling out from under the bus.

[Taki chuckles.]

The narrative that the polls failed in the 2020 election is a false narrative from my perspective.

[Nik's panel fills the screen.]

The fact of the matter is that on election night most of the ballots were still not counted because of the mail ballots.

[A purple title card fades in for a moment, identifying Nik Nanos as being from Nanos Research.]

We know that empirically now, 40‑50 states were predicted correctly. The popular support for Biden was basically bang on at around 51 percent. Trump was underestimated by about four percent. When you look at the last 50 years of polling, I think the average error is about four to five percent. This was basically a standard election. The only difference was, and maybe this is one of the things we've learned, that not all the ballots were counted and there was a significant difference based on the mode of voting. If you happened to be voting in person, you were more likely to vote for Donald Trump. If you happened to vote by mail, you were more likely to vote for Biden. That difference accounted for the initial narrative. That's a bit of a false narrative, so to speak. I'd like to declare that I am not under the bus, at least anymore. I'd like to say that we didn't do any polling in the election.

[All four panels return to the screen.]

Taki: You've possibly escaped the bus. Claire, what are your thoughts on 2016 and 2020?

[Claire's panel fills the screen. A purple title card fades in for a moment, identifying Claire Durand as being from the University of Montreal.]

Claire Durand: First, we should not base our judgements about polls on what's happening in the US, because what's happening in the US is not what's happening in the UK, Canada, etc. Second, in 2016, 8 of the 13 polls that were published during the last week were within the margin of error when all the ballots had been counted. In 2020, I'm surprised that Nik did not mention that, but the telephone polls, whether with live interviewer or IVR, performed very well. Most of them are within the margin of error, no problem, and the IVR polls are almost on target. The polls that did not perform well were mostly web polls. This was not the case in Canada. In Canada in 2019, the three modes had about the same forecast at the end. This is not the case in the US. But, think about it. They have 53 different pollsters who conducted polls from September 1st. Twenty of them conducted only one poll. Contrary to Canada, where we have a group of 10‑12 pollsters, at most, who conduct electronic polls. They know how to do it. They know what to look for and what to check. You have 53 pollsters, and then the aggregators. They aggregate everything together as if there is no difference in modes, no difference between pollsters. We think that we have that many different estimations of the vote, but in fact many of these people use the same providers for the sample. If there is a problem with the provider, it's shared by a number of pollsters. The situation in the US is different. There is a difference by mode of administration. So, if you look- you know, I have a colleague in Uruguay who says "probabilidad o muerte: probability or death." The polls that used, or mostly used, probability samples did fare better in that election than the web polls, which used... Then you have to speak about web polls. Web polls use lots of different types of sources for their sampling.

[All four panels return to the screen.]

Taki: Claire raised three interesting points that we'll come back to throughout the discussion—at least three interesting points. One is time. One is the US versus others. The other is the method of getting the sentiment of the population. Anil, you're not a pollster, but you play with numbers a lot as the Chief Statistician. In 2016 and 2020, did that change in any way, good or bad, what you thought about polling as an indicator of the body politic?

[Anil's panel fills the screen. As he speaks, a purple title card fades in for a moment, identifying Anil Arora as being from Statistics Canada.]

Anil Arora: Firstly, I think you've got it right. Statistics Canada does not do polling per se. We do, from time to time, some sentiment indicator work, but polling is something that we don't usually get into. We're more into the how many, the what, and so on. First of all, I just want to make three points. Firstly, it's a very complex task. Claire and Nik have just talked a little bit about some of the things at play. The first is: in polling, you're trying to measure the intention. That's what people are usually measuring. And then you get judged by the actual vote. If you were just trying to track the intention over time, I think you would see even greater counters. People could say one thing, but they may not necessarily act in that same way.

The second thing: I think Claire is absolutely right. When you start to look at different modes, people actually have different habits and behave differently. Those modes, in fact, do target certain types of populations. The statistical rigour by which you integrate these things really does matter. Sometimes the sophistication may not be there and that's why you see differences as Claire had talked about.

The third point is: what a great opportunity to inform and educate Canadians and to give them a really good sense of when you see a poll or when you see a survey, what are the kinds of questions that you should be asking yourself when you start to look at those results and actually base decisions on them? I think it's a real opportunity to emphasise methodology, quality, timeliness, and transparency because, at the end of the day, I believe we're all in the same business. We're there to inform Canadians to make good choices based on what's going on. That means equally, we have an opportunity, in that ecosystem, to make sure that people aren't walking away with one thing when the facts are actually saying something very different.

[All four panels return to the screen.]

Taki: That's going to be a big part of our discussion. What's a good poll? What makes a good poll? We have a plethora of polls that are hitting us from everywhere, whether it's in the print media, whether it's in electronic media, whether it's in social media. We are saturated with polls. I want to go back to: when I had a little bit of hair and the hair wasn't grey. I would watch TV at night before I went to sleep. There was a gentleman named Knowlton Nash, and he would pronounce on the day's events, and he would tell me and other Canadians what happened today and what mattered today and why it mattered today. One of the marquee phrases that would come across his screen to mine would be "a new Gallup poll today was released indicating that Canadians..." We wanted more health care or we wanted less health care. We wanted child care or we didn't want child care. We were confident about the future or we were not confident about the future. Nik, was it easier back then when it was Mr. Gallup, or his organisation, that would tell us what we thought? Was it accurate? Was it more accurate? Was it less accurate? At least we know one thing for sure: there were fewer of you in the past.

[Nik's panel fills the screen.]

Nik: Some quick comments. First of all, empirically, the first 30 years in the polling industry were not the good old days. There were some big misses, historic misses, like the Dewey election. Because it was a new industry, people were still learning about methodology. What we did see was a period where the methodologies were consistent, where it was possible to reach the whole universe when everyone had a landline, when people wilfully participated in surveys, and there was direct comparability.

To Claire's point earlier about the US election, there was direct comparability. There were also obstacles to entry. It was hard to get into the polling business. You needed to have infrastructure. You needed to have your own call centre. Those barriers actually increased the quality. It increased the price, but it increased the quality. There was a period when most of the pollsters were doing it the same way. We had fairly consistent and good results. You could have a probability sample and you could reach everyone. Fast forward, now, to Claire's point, there's a diversity of methodologies. They're not transparent. Coverage isn't there in terms of the whole population. That's why pollsters are calling cell numbers and looking at other alternatives. As a result, we can't mix all these polls together when you're an aggregator. It's terrible for me to say, but if you have the money to buy a sample, you could be a pollster. Right? I think that's... You know, there was a period where there was much more certainty and greater quality and greater results, but now we're in a transition phase. The good news is that there are still a lot of pollsters that do very good work. There are a lot of researchers that do excellent work. We have to now start looking at the fine print. Was it a random sample? How was the survey administered? All those other things. We have to pay a lot more attention to those things. In the past, everybody was using basically the same methodologies.

[All four panels return.]

Taki: Claire?

Claire: Yes, lots to say. I've been a pollster. We would promise that the response rate would be 60 percent. Those were the good old days. Nobody promises that anymore. The specificity of the current situation is that the previous transition, like the transition from face‑to‑face to telephone polls, arrived all at once. In one election, everybody was face‑to‑face; the next election, everybody was on the telephone. This one, we see it as a transition, but we have three modes that stay. Even then, we say three modes, but what I've seen in the US is that about 17 percent of the polls were mixed modes. Of course, the IVR polls have to be mixed mode because they can't call cellphones. They have to complete their sample. I've seen web plus telephone, telephone plus IVR, etc. We have a number of methods. This is very good because we can estimate, analyse the difference between modes, why they are there, and perhaps improve.

In the good old days, as you say, I remember that we did a random sample and then we looked at the demographics, and it matched StatsCan. We didn't have to weight that much. It was about the same as StatsCan. We're not there anymore. We have to look at other things. I look at a number of data files. This is very interesting. For web polls, for example, I see that if you take it globally, like Quebec or Canada, you will probably have a rather good estimate of the composition of the electorate. But when you go subnational, when you go regional, then... You know, if you have Anglophones in Quebec and 50 percent of them live outside Montreal, there is a problem. You have things like that: you have a good number of Anglophones, but they are not in the places they should be. There needs to be improvement in the samples for web polls.

[All four panels return to the screen.]

Taki: Let's get into this a little bit, because we've been dancing around the notion of what is a good poll. Nik talked about how you have to look at the footnotes. Most Canadians have lives and we can't look at the footnotes. Let's talk a little bit about three of the footnotes.

[Taki's panel fills the screen. He counts off on his fingers.]

One, I want to talk about random sample. Number two, I want to talk about sample size. Number three, I want to talk about mode of asking the question. You've both hinted that telephones are this, and cell phones are that, and in‑person is that and web is a third thing. Let's start off with random sample. Nik, why is it important to have a random sample?

[All four panels return to the screen.]

Nik: I always say if someone volunteers to do a survey, they're not normal. You want people to be randomly selected, except with the StatsCan survey. If they do a StatsCan survey, they're patriots, I'd like to say. I have a preference for random probability surveys.

[Nik's panel fills the screen.]

What we want is people to be randomly selected so that we can try to have a representative group. That said, what we find is that we have to have much shorter surveys because people's ability to share time is limited, especially now that we're calling people's cell lines. We can't be doing a 15‑20 minute survey where someone might be chewing up part of their cellphone plan. That's the way we've been adjusting in order to accommodate people. The randomness, from my perspective, is critical. The fact of the matter is that you don't always need a random sample for every single research project. We should not be holding up random samples as the panacea for everything. There are situations where you don't necessarily need a probability sample because it's a defined population and it might be from a list of certain types of individuals. You still want to probably have randomness within that, but not the pure random sample that we're talking about.

[All four panels return.]

Taki: Anil, in government, random is a bit of a bad word, isn't it? We don't do things in random ways. We do things in targeted ways, and we do things with some sophistication. You don't go around and randomly ask people questions, right?

Anil: We absolutely do.

[All four chuckle. Claire nods. As Anil speaks, his panel fills the screen.]

Random is really a math issue. What you want to do, notwithstanding whether you're going for a targeted aspect, is this: you want to be able to calculate the probability of somebody being selected. Based on that, you want to make sure that everybody has an equal chance, so that when you weight up the responses to the total population, you haven't skewed anything. There are stratified random samples as well. If you're going to sample an apartment building that has one-bedroom, two-bedroom and three-bedroom apartments, you're not going to just do a random sample, because you know the characteristics of those in the three-bedrooms are not going to be the same as those in the one-bedrooms. Even the amount of rent that people pay in the apartment, depending on which floor they are on, is going to be different. Sometimes you have to stratify, but then within there, you want to make sure that you can calculate the probability of somebody being selected. All our surveys have these elements and, as I said, it's a mathematical issue for us to be able to calculate. The footnote that you don't read is really important to read because of the math that goes into how that individual was selected. Now, not from Statistics Canada's perspective, but having worked in a couple of policy departments, to your earlier point, the government does. Absolutely. Ministers, the cabinet, all of us. When we start to put together options and recommendations, we do take into account studies and polls and so on.
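To illustrate the arithmetic Anil describes, here is a minimal sketch (with invented numbers, not a Statistics Canada method): in a stratified random sample, each respondent's probability of selection is known by design, and the design weight is simply the inverse of that probability, so each stratum weights back up to its true size in the population.

```python
import random

# Hypothetical apartment building, stratified as in Anil's example (all numbers invented).
strata = {
    "one-bedroom":   {"population": 600, "sample_size": 30},
    "two-bedroom":   {"population": 300, "sample_size": 30},
    "three-bedroom": {"population": 100, "sample_size": 30},
}

for name, s in strata.items():
    frame = list(range(s["population"]))              # the list of units in this stratum
    sample = random.sample(frame, s["sample_size"])   # simple random sample within the stratum
    prob = s["sample_size"] / s["population"]         # known probability of selection
    design_weight = 1 / prob                          # how many units each respondent "stands for"
    print(f"{name}: P(selected) = {prob:.3f}, design weight = {design_weight:.1f}")

# Summing the design weights of the respondents in a stratum recovers that stratum's
# population count, which is why nothing gets skewed when the responses are weighted up.
```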

To folks that are listening today, yes, you might be busy, I get it, but please take the time to differentiate before you give every single source and every single data point the same weight because they're not the same weight. If you're looking at one option and one recommendation from one source with a different set of quality indicators, and you're trying to say "they're all the same," they're not. I would say it is important for all of us as policy analysts and trying to provide the best advice as policymakers to actually get informed and understand the methodologies and understand the limitations. If something excludes a certain population, like our Indigenous population or youth or those below 18, it's important for me to know that, if the recommendation that you're making assumes the opposite. Get informed and get involved. Whether you're taking information from a poll or you're taking information from Statistics Canada, ask those questions and pay attention to those footnotes.

[All four panels return.]

Taki: That's a really important point. Because, what Anil is saying is that we, as policy people, as program people, as people who advise, and as people who put options before people that decide, we have to be critical consumers of information, including polling information. We can't just take something that's presented to us as the automatic truth because part of our job is to do quality control before it goes forward. Claire, tell me a little bit about random sample. How can it be random if I'm on a website and I self‑select to fill out a poll or if I'm on some web page and then all of a sudden it says, take this poll? Isn't the fact that I'm on this web page, doesn't that mean something about me? Doesn't that mean that the sample, almost by definition, isn't random?

[Claire's panel fills the screen.]

Claire: Look, there are lots of ways to have non‑random and quasi‑random polls. The first thing I would like to say is that I think researchers like me went from "these web polls are not random, and so they should not be giving good, accurate information" to "why do they give good information most of the time?" Now we have to try to understand that. You're on a website and a poll pops up. This is a new way. It's been around for perhaps 10 years. The first web pollsters tried to have an opt‑in panel, and this panel is fixed. Then they try to have more people on it. At a certain point, they stop recruiting because they have to get in touch with these people, and this is a lot of work and money. They try river sampling. They put surveys on a number of websites and it pops up. They try to vary these websites so that, in the end, it will represent all the population. Then you have another method also, where, if you are a client of Costco, for example, you will receive a message from Costco and at the end it will say, would you take this survey? Those are all ways web pollsters use to try to randomise, because they know that their opt-in panels are a bit biased. They are biased towards people who have time, people who do not work, people who are interested in politics. We would think young people, but in fact, old people. They have much difficulty finding young people. We have all this, and some web pollsters go get three different sources of samples, plus a telephone sample, and put all this together.

[All four panels return to the screen.]

In a way, they try to balance the inconvenience and advantage of everything. This is what's happening right now. I have to raise two points.

Taki: For sure.

Claire: Nik spoke about short surveys, and what I see out there is people with surveys of one hour. This is not respecting the respondent. It's not possible. You have to ask yourself: what can I ask of somebody? It should not be much more than 20 minutes, 30 minutes for StatsCan, I'd say.

[Nik laughs. Taki and Anil nod. Claire's panel fills the screen.]

Some of these surveys are way too long. Finally, I do not understand. I have to raise this question absolutely. Why do StatsCan and other agencies put surveys on the web and tell people that they should go on the website and answer them? This is not a web survey. This is the worst kind of convenience sampling. They did that during the pandemic at one point. I received the link: go on this website and answer our survey. Okay, I don't know why me, and I saw it on Twitter.

[All four panels return to the screen.]

Taki: I want to-

Claire: These should not be considered web surveys. Surveys that are conducted on Facebook: this is not a web survey. It's something like "I'd like to know your opinion on..."

Taki: So Anil, do you want to say something or do you want us to go back?

Anil: I'm happy to respond. First of all, context matters. Right? We first did our crowdsourcing survey back before cannabis was legalised.

[Anil's panel fills the screen]

Clearly, there are going to be times when you don't just knock on the door or phone somebody up and say "how much illegal cannabis did you purchase today? What was the quality? Who did you buy it from? What was the price?" and so on. In the absence of anything else, sometimes you have to be able to get a qualitative indication of what's going on. But you don't leave it at that. If you present it as a robust result, then that's duping the user. Sometimes leaving the gap, and allowing anybody to have a debate on whatever it is they think is happening, is not the best thing.

Two: numbers do matter. For our first crowdsourcing during COVID, we got nearly a quarter million Canadians who participated in this. We were able to triangulate the characteristics of those individuals to the population and give some indication of quality. We were very up front that it was not a regular survey with the quality that we would- I think transparency is really, really important. When we're saying that we're getting an idea of what's going on here, I think that's important.

The third thing is that we were able to go back and have a look at some of those with some of our other surveys, which were based on the robust methodologies, and look at the consistency or inconsistency of responses. The last thing is that many of the people who participated in our crowdsourcing then went further and signed up to be on our web panel. It was also a way to move forward. Now we are doing some really interesting statistical work to go back and see if we can retrofit some of those people who voluntarily took part in the crowdsourcing, and see if there is a weight that can be assigned to them based on the responses that they gave us. It's research work. We have no idea how it's going to turn out yet; it's a kind of reverse engineering that might get us some idea of quality. So sometimes, for us, to take a pressing issue and say "we'll be back to you in six months" doesn't really work. As long as you are up front with the users about quality. We're not pretending it's a survey. We're not pretending that it has the same quality as some of our robust surveys. It's by far the exception when you do that, not the rule. I agree with Claire. We shouldn't be doing this all the time, but there is a time and a place. As long as we're transparent about that, I think that helps.

[All four panels return to the screen.]

Nik: Another thing that we should add as researchers is that one of the key principles of quality research is repeatability, that someone else can take your methodology and repeat it, or if you use that, get the same results. I know that our firm has had problems crowdsourcing in terms of repeatability.

[Nik's panel fills the screen.]

The repeatability was only 12 times out of 20 as opposed to 19 times out of 20, which is why we've opted for randomly recruiting people through land and cell lines to deploy to an online survey, as our approach, at least. In terms of what StatsCan is doing, the proof of the pudding is in the ability to repeat the study and to repeat the findings and to triangulate that. I would say, Anil, it's probably a work in progress. Right? It's not your only data point, but you're kind of "let's try to do some experiments." As long as we don't hurt the ecosystem doing those experiments, we're good with that. I think that repeatability is one of the things we need to focus on.

[All four panels return to the screen.]

Taki: Terrific. So-

Anil: If I could just add, Taki, we do about 400 program surveys, etc. We're talking about four or five. By no means is Statistics Canada getting into this business in a steady state.

[Anil's panel fills the screen.]

The second thing is, to Nik's point, there are some really neat research papers internationally now that are being prepared, where statistical agencies that are trying some of these things to see if the methodologies can, in fact, learn from some of those experiences. It is at best experimental. I think this is what we do all the time to push the science. You've got to try some things. Sometimes they work, sometimes they don't work.

[All four panels return.]

Taki: Exactly. So, back to what makes a good poll. We talked a little bit about random. Takeaway for the young policy analysts out there: if it's not random, it's almost garbage or sometimes it's worse than garbage.

[Claire and Nik shake their heads and mouth "no".]

They'll put in a couple of qualifiers in terms of stratified randomness and targeting particular populations.

[Taki's panel fills the screen.]

The other thing is: respect people's time. Because if you're out there, if you want to find out what a grain farmer wants to know about WTO subsidies, it's probably not a good idea to ask him or her to give you three hours of their time. Figure out what you really want to know and ask that. Second, don't bother them during harvest. They're not gonna really... Respecting their time is a good rubric for that. Let's go to sample size now. This is one that's always intrigued me. What's our population, Anil? 37‑38 million?

[All four panels return to the screen. Anil nods.]

Anil: Just over 38.

Taki: 38 million. How many people do we need to get a good "sample size," or an accurate sample size of what "Canadians" think on any given subject?

Claire: It depends. Do you want to know what people in the Maritimes, the Atlantic Provinces, think, and what people in Quebec think, and people in Ontario think, and people in B.C. think? Do you want to have that kind of detail? Different regions of the country will not necessarily see things in the same way. The more detail you want, every time you say "I need 1000 in Quebec, 1000 in Ontario, and 1000 in BC," you end up with a lot of people. As a matter of fact, I was extremely surprised to see, in the US, polls at the national level with 700 or 800 people. You say, "this is not possible." And not random. Really, the more diverse a population, the larger the sample size should be.

Taki: Is there a rule of thumb that says if you really want to know what 38 million Canadians think about a given subject, and you're asking fewer than 1,000 of them or fewer than 10,000 of them, it's worthless? Nik. Anil.

Nik: No. If you're doing a random sample, the results are the same, whether it's 500, 1000, 2000, or 10,000 in terms of the top number.

[Nik's panel fills the screen.]

But, to Claire's point, the power to do analysis is diminished with the smaller sample size. You will not be able to look at subpopulations. Usually whenever we're discussing the size of the sample, I focus more on what type of analysis you want to do. To Claire's point, if we want to know what Francophones in New Brunswick think, that means something else in terms of the size of the sample. We should not be thinking- you know, and also, if you look at Canadian elections, there's no correlation between the accuracy of the survey and the number of interviews. Zero.

[All four panels return to the screen. Anil puts up a hand for a second.]

Taki: You're saying something that's counterintuitive to me as a layperson. What you're saying, if I understand you right, is that with as few as 500 randomly selected people, I can tell you what "38 million Canadians" think on a given subject?

Claire: The margin of error will be larger for that number.

Nik: Yep, but the number, hypothetically, would be the same.

Taki: And my margin of error becomes statistically respectable at 38 million if I ask how many people? 1000? 2000? 20,000?

Claire: The population does not enter into the equation. The only thing that enters into the equation of the margin of error is the size of the sample and the proportion. If you've got something like...

[ Claire's panel fills the screen, she gestures, waving her hands to the side.]

...You know, 1995 in Quebec, for the referendum, it was very close. Some people had 10,000 people as a sample because it was very close. And then, the margin of error, if it's too large, you can just say that it's close and that's it. If you have a situation where it's 60/40, then your sample can be smaller.

[All four panels return to the screen.]

The 38 million or 3 million or 380 million does not enter into the calculation.
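As a worked illustration of Claire's point (a sketch added for readers, not part of the conversation): the usual margin-of-error formula for a proportion estimated from a simple random sample uses only the sample size and the proportion, so the 38 million never appears.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from a simple
    random sample of size n. The finite-population correction is ignored because a
    poll of a few thousand people is a negligible fraction of 38 million."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 race, the worst case for the margin of error:
print(f"{margin_of_error(0.5, 500):.3f}")     # ~0.044, about +/- 4.4 points
print(f"{margin_of_error(0.5, 1000):.3f}")    # ~0.031, about +/- 3.1 points
print(f"{margin_of_error(0.5, 10000):.3f}")   # ~0.010, about +/- 1.0 point
# A 60/40 situation, as Claire notes, needs a smaller sample for a usable answer:
print(f"{margin_of_error(0.6, 1000):.3f}")    # ~0.030
```

The same formula also explains Nik's earlier caution about subgroups: a few dozen respondents from one region inside a 1000-person national sample carry a margin of error several times larger than the topline number.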

Taki: That's fascinating.

Anil: I think Claire put it perfectly when she said it depends.

[Taki laughs. Anil's panel fills the screen.]

That's really the answer. If somebody only has ten dollars to find out something, you go randomly pick somebody on the street and you ask them the question and you say "that's pretty well what Canadians think." How right are you going to be? How repeatable is that going to be? How well does it represent every single person in this country? Probably not that well. Right? It depends on what you're going to use the information for, how much time you have, what kind of error you can stomach, who are the populations you're going to exclude, what kind of sophistication you have in terms of the ability to use those results. There are a number of factors that go into this. Let me give you two- now, we don't do polls. I don't think we could ever do a survey with 800 or 1000 responses. We just couldn't do it because I wouldn't be able to give it to a policymaker to say "do something with this." I just couldn't do it. So the labour force survey that we do every single-

[All four panels return.]

Taki: Anil, can I pause you just for a sec? I want you to go back to it, but just to educate us as a parentheses, what's the difference between a poll and a survey?

Anil: A poll is usually when you're trying to get a sentiment: what does somebody think about something? And a survey is the fact: what is the underlying reality? For example, we're not asking people: what do you think the unemployment rate is? We're trying to calculate it. We're not asking you: what is the price differential for a commodity from one month to the next? We're trying to control for size, volume, quality, etc., and give you the standard, give you a comparable check.

[Anil's panel fills the screen.]

Usually what we are doing is setting the baseline: the population demographics, breakdowns by age group, etc., so that when Nik does a poll, he can go and say "I picked randomly. Here's the math. 1000 people. I got from them what they think about something." Now he can say "I had so many seniors, so many people in this age group, so many in this geography, or whatever," and then ask "what does StatsCan say is the baseline?" That one response then has a weight of 10,000, or whatever it is. And then you can calculate: if you were to pick that sample multiple times, what would be the variance in the responses you would get in each of those samples? That's usually what the quality statement is.

And so, when you're talking about a sample size, as I said, for the Labour Force Survey, for example, 56,000 households or 110,000 Canadians give us responses on their labour force situation. Yet it still struggles to get us high‑quality, stable rates in very small areas. For the census that we do, every single household has a one-in-five chance of getting the long form. We know that in certain areas we still have to suppress responses because we're not going to get the kind- It depends on the phenomenon or the population that you are most interested in knowing. For example, if we're talking about an LGBTQ+ population, you know by its very nature, it's so small. If you pick the average person on the street or you pick 1000 people and you are now trying to infer that those responses can be broken down for the LGBTQ+ population, you know the margin of error will be so high that it would probably be meaningless. It depends on what you're trying to study, what the nature of that phenomenon is, how much money, how much time, and what you can stand behind in terms of the quality that is there. It's a simple question: how big? The answer is exactly what Claire said. It depends.
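The weighting step Anil sketches can be shown with a small, purely hypothetical example (the shares, counts and results below are invented, not StatsCan or pollster figures): a raw sample is adjusted so that each age group counts in proportion to the published population baseline.

```python
# Invented baseline shares of the adult population by age group (the "StatsCan baseline").
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Invented raw poll: older respondents were easier to reach, so they are over-represented.
sample_counts = {"18-34": 180, "35-54": 320, "55+": 500}
n = sum(sample_counts.values())

# Post-stratification weight for each group: population share divided by sample share.
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}
print(weights)  # 18-34 respondents are weighted up, 55+ respondents are weighted down

# Invented group-level results for some question asked in the poll.
support = {"18-34": 0.55, "35-54": 0.48, "55+": 0.40}

unweighted = sum(sample_counts[g] * support[g] for g in support) / n
weighted = sum(population_share[g] * support[g] for g in support)
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")  # weighting shifts the estimate
```

Repeating the sample draw many times and seeing how much the weighted estimate moves around is, roughly, the variance calculation Anil refers to as the quality statement.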

[All four panels return for a moment before Taki's panel fills the screen.]

Taki: I love that. Thank you. Let's go to the third one: the mode of surveying people. So we've talked: there's face‑to‑face. There is telephone. Within telephone, it's cell phone and landline. And then there's also I guess, internet. Nik, which one do you use? Why?

[All four panels return to the screen.]

Nik: We use them all. How's that? We're ecumenical. Maybe this is another thing. I always worry about any researcher that just says, "I have one way to do the survey and it is a solution for everything." We do telephone surveys. We do one‑on‑one personal interviews. We do online surveys, both probability and non‑probability.

[Nik's panel fills the screen.]

One thing that I want to put on the table for discussion, and this is especially critical for anyone who's doing research in the government sphere, is using the right mode for the right audience. Let's talk about marginalised populations in Canada. What we found, in our research at least, is that to reach marginalised populations, you have to remember that they don't all have access to the internet. You have people who are underemployed, unemployed, or might be homeless. I think we have to be very careful because sometimes you're in a department and you have a budget to do a survey and you start going down this path and you get to the very end and someone says "Hey! Are we going to be able to include marginalised Canadians in this? Because it's important for a public policy issue." Then you find out you're doing an opt‑in online panel, which probably skews towards more affluent people who have a higher educational attainment. Then you realise that maybe that's the wrong tool for this and that you need a telephone survey, in the same way that if you're doing a population that has a high educational attainment, you can probably go online and still have a probability survey. I think one of the challenges is that we gained these populations- part of being the Government of Canada is to help everybody. We have to make sure that we're administering the right mode for the right objective. I find sometimes that's not discussed until the very end of the methodology discussions, as opposed to the beginning of the methodology discussion. I'm not here to cast aspersions on any mode because I believe, as researchers, we have to use everything. You have to ask: what's important to this project and is this the right mode for the target audience that we want to reach?

[All four panels return to the screen.]

Taki: Claire, what do you think of that?

Claire: I think I would open this up: perhaps we are heading towards a situation where there is not one right mode.

[Nik nods.]

Perhaps, what we're seeing in the US right now is that there are people who will never answer telephone calls, there are people who will never answer a web poll, and there are people who are never going to answer an IVR poll.

[Claire's panel fills the screen. Her title card fades in a moment, reading "Claire Durand, University of Montreal."]

Combining gives you the possibility to reach a better variety of people. After that, the problem, and Anil will surely see it, is how you weight this together. This is a problem. Concerning landlines and cellphones, all the people who do telephone polls, whether IVR or with interviewers, put the two in their sample right now. The fun thing about it is that in 2016 in the US, I was on the committee that looked at the polls, and we asked telephone pollsters what the proportion of cellphones was in their samples. It varied from 25 percent to 75 percent. It's not a bell curve. It's uniform. There is absolutely no difference in estimate according to the proportion of cellphones in the sample. That was in 2016. It may have changed because there are more and more people who are cell only, and all that. I still see this disparity. The question is: what is the proportion of cellphone-only people, people you can reach only on a cell? I get answers like "this percent of the population." Then I ask them, "what is the percentage of all phones?" "Oh." We have to calculate the proportion of people reached by cellphone and also by the other kind. I don't want to be the statistician who has to weight these polls afterwards.

[All four panels return.]

Taki: In some ways, I think what you've all said is that the mode comes back to randomness, as the population becomes less homogeneous in terms of how you reach them. In the past, you could reach everybody with a landline, or not everybody, but a good chunk of the population. Now, you reach more people with cellphones, fewer with landlines, etc. It comes back to randomness. I'm going to go in a direction that's a little philosophical now in terms of polling.

[Taki's panel fills the screen.]

I want to go into the notion of: do polls reflect what people feel or do polls help tell people what they're feeling? There's this notion that sometimes polls can be dangerous. I know in the past we would ban the publication of polls close to an election. In France, for a while, for a decade or two, they even banned polling entirely. Let's start a little bit of that conversation. Do polls reflect or do polls also do more than reflect? A little bit like Heisenberg: does the act of observation impact what you're observing? Claire, you're very eager, which scares me, but start us off.

[All four panels return to the screen. Claire shifts in her seat as Nik and Anil smile.]

Claire: Yes. This is something I researched. In France, it was one week before the election, I think two weeks perhaps. But you still have countries- there is a report on the WAPOR website. I think we have 120 countries in the world where there are bans and no bans and all that. You still have, I think, Italy, where it's one month. Things like that. What happened in France in the last presidential election? Now they only have a one-day ban. On that day, there was a poll published in Switzerland and another poll published in Belgium saying "Fillon is ahead," and it was not true. The first thing is you cannot ban polls anymore because anybody can- If there is an election in Quebec, you can publish a poll in Ontario, in the US, anywhere. You cannot ban polls.

The second point is it would be extremely dangerous and undemocratic because polls would still be conducted. Who would have access to polls? Those who can pay for polls. The population would not have access to its own opinion. Some people who campaigned for it would have it. The other very important point is that when polls are public, you can check. If they are not public, you replace polls with rumours. That is extremely dangerous.

[Claire's panel fills the screen a moment before all four panels return.]

Taki: Nik, do you guys tell us what to think or do you tell us what we're thinking?

Nik: We tell you what you're thinking. What worries me is not a ban. I'll tell you, if there was a ban, I'd be rich. 'Cause everyone, to Claire's point...

Taki: Right.

Nik: What is now in the public domain for the public good, to put scrutiny on politicians, would not exist. People, corporations, and hedge funds would be paying money to get information that would not be in the public domain. I'm not worried about banning because it's not feasible. I'll tell you what I am worried about. I'm worried about the weaponization of polls. I think that's what we saw in the last US presidential election, especially at the state level. I've read a number of the state polls, and I will tell you that I was mortified at the lack of detail and transparency in a lot of these state polls. They didn't say who paid for the poll. They didn't say all the questions that were asked on the poll. They didn't explain the modelling or weighting that they were doing. To Claire's point, these things were just mashed together with other polls.

I worry, and this is exacerbated in a social media environment where we're all in an echo chamber. If someone likes a poll, they cite it. The next day, if they see another poll that they don't like, that contradicts what they personally believe, they believe that it's not credible. I'd say for the pollsters: less rock star, more researcher. Just say the numbers and let people do their own interpretation and be transparent. That's one thing I think Canada does a much better job of in the industry than the United States. For those major firms, they're much more transparent and they're much more consistent in their reporting compared to those 80 large firms in the United States that are doing stuff.

[All four panels return to the screen.]

Taki: If there's a poll that says X percentage of Canadians are worried about losing their job or Y percentage of Canadians are fearful of the future or hopeful about the future, doesn't that impact me? Doesn't that say to me "you should be fearful because 80 percent of your fellow Canadians are fearful"? Nik?

Nik: I don't think as many people are watching polls as you might think, Taki. More people are watching the news. When we look at what's in the news, we find that the news actually drives behaviour more than polls.

Claire: I think what is more dangerous is when governments use polls in support for their policies.

[Claire's panel fills the screen.]

There are some types of policies that are not a matter of public opinion. For example, when the government decided to stop the death penalty. It was in 1970. The first time we've seen a poll with a majority of the Canadian population against the death penalty was 2002. If at that time the government had said "people are for going on with the penalty, so we don't abolish the penalty," it would still be there. There are some policies that governments have to sell to the population, and it's not a matter of public opinion. This is a dangerous use of polls, I would say.

Taki: What a fascinating- yup.

Anil: If I could just add, if you're a policymaker and you're looking for information, one option is to look at what exists in the public domain on a particular topic.

[Anil's panel fills the screen. His purple title card fades in for a moment reading "Anil Arora, Statistics Canada."]

I would say, look at the entire poll. What was the main thrust of that poll? What were the other questions around it that were asked? So that you don't take something out of context. Look at the timing of it, what was popular, what was in the news, and how people's thoughts were being shaped by what was going on outside. Do a little bit more digging than just taking the answer to one question out of context. If you're going to actually conduct a poll and pay for something, look at it and ask those kinds of questions. What is the transparency? What is the pollster giving you? What sample size? How are they picking it? What's the non‑response rate? Many, many pollsters don't put that out there. You should be careful about that. You should be differentiating between when a poll makes sense and when a survey makes sense. When you do give advice, to Claire's point, caveat it. Just because a poll says something, that's not necessarily where the country may want to go. You have example after example where the poll may say something because everyone wants to belong to the prevailing thought in many cases. That's not necessarily always in the best interest of a minority group or a different conversation. Don't put too much emphasis on it. Know enough. Inform yourself. Ask the right questions. It's about fit for purpose and using the right tool for the right job.

[All four panels return.]

Taki: What a fascinating hour. Anil has done what I was going to ask each of you to do, so Anil, you've already gone. Close us off with a word of wisdom to the public servants out there who are grappling with this. I love, Anil, that you introduced the notion of a survey versus a poll. Use the right tool. Be critical. Be understanding. Claire, what's your closing wisdom to the public servants out there vis‑à‑vis this subject?

[Taki's panel fills the screen for a moment. All four panels return.]

Claire: Do not do cherry picking of polls.

Taki: Do not do cherry picking. So don't go out-

Claire: Which means don't just go take the poll that fits what you think. You need to have a number of polls. This is the theory of polls, anyway. You need to have a number of polls. You need to look at what are the questions. Do not cherry pick. "This one is okay for me, so I just keep it."

Taki: Nik, what's your closing wisdom for us?

Nik: My closing wisdom is to think of the population and what you're trying to achieve, and make sure that you're using the right mode, the right way to do a survey for the right population, and the right content.

Taki: Claire, Nik, Anil, thank you so much for being friends of the Public Service of Canada. Thank you for taking an hour to talk to us about a very, very important subject, and thank you for being illuminating and insightful and funny as we went along. All my very best and thank you for this hour. Take care.

Claire: Thank you.

Nik: See ya!

[The panelists smile and wave. The chat fades to the animated white Canada School of Public Service logo on a purple background. Its pages turn, closing it like a book. A maple leaf appears in the middle of the book, which also resembles a flag with curvy lines beneath. The Government of Canada wordmark appears: the word "Canada" with a small Canadian flag waving over the final "a." The screen fades to black.]
