
101 on Political Polling - The View from Quinnipiac University

Dr. Doug Schwartz, Director of the Quinnipiac University Poll
New York, New York
August 29, 2018

FOREIGN PRESS CENTER BRIEFING WITH DR. DOUG SCHWARTZ, DIRECTOR OF THE QUINNIPIAC UNIVERSITY POLL

TOPIC: 101 on Political Polling – The View from Quinnipiac University

WEDNESDAY, AUGUST 29, 2018, 1:00 P.M. EDT

NEW YORK FOREIGN PRESS CENTER, 799 UNITED NATIONS PLAZA, 10TH FLOOR

MODERATOR: Good afternoon, ladies and gentlemen. Thank you for coming today. We are very pleased to have Dr. Douglas Schwartz here to talk about polling and the science behind it. Dr. Schwartz is the director of the Quinnipiac University Poll, which many of you probably know, and he is responsible for the poll’s methodology and all aspects of the survey process. So hopefully you have lots of questions about that.

As you may know, he is a nongovernment expert, so as a nongovernment expert, he speaks in his personal capacity and does not represent the official policy views of the U.S. Government. This is an on-the-record briefing. And welcome to the Foreign Press Center, Dr. Schwartz. Thank you.

MR SCHWARTZ: Thank you, Kathy. Before we start, I just want to formally introduce myself. My name is Doug Schwartz, and I’m the director of the Quinnipiac University Poll. Thanks for meeting today. I hope that by the end of our session you’ll have a better understanding of how our public opinion polls work, and that you’ll have some of the information you need when you are reporting on polls.

Joining us here today is Mary Snow, who is the latest addition to the Quinnipiac University Poll and one of our spokespeople. She is a former CNN reporter and is based here in New York City. She can be a resource for you in the future.

I will start by telling you about us, and then open the floor for questions.

In my experience as a pollster for nearly 25 years, there’s one question that comes up very frequently, and that is: How can a sample of a thousand represent some 200 million voters? To put it simply, think about a pot of soup. You only need a spoonful of soup to know how the whole pot tastes. Polling is similar. You only need to interview a thousand people to represent the views of the entire country.
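The soup analogy can be made concrete with a short simulation. The sketch below (illustrative Python, not anything Quinnipiac actually runs) polls a simulated electorate over and over with samples of 1,000; notice that the size of the population never enters the math, only the size of the sample.

```python
import random

# Illustrative simulation only -- not Quinnipiac's software. A "population"
# in which exactly 40% hold some view is polled repeatedly with samples of
# 1,000; the population size (200 thousand or 200 million) never appears.
TRUE_SHARE = 0.40
SAMPLE_SIZE = 1_000
TRIALS = 1_000

random.seed(42)
estimates = []
for _ in range(TRIALS):
    sample = [random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE)]
    estimates.append(sum(sample) / SAMPLE_SIZE)

within_3_points = sum(abs(e - TRUE_SHARE) <= 0.03 for e in estimates)
print(f"Simulated polls within 3 points of the truth: "
      f"{100 * within_3_points / TRIALS:.0f}%")   # roughly 95%
```

Roughly 95 percent of the simulated polls land within 3 points of the true value, which is exactly the "margin of error" pollsters quote.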

I want to walk you through the process so that you have a clear idea of what we do and how we do it. The process starts at Quinnipiac University in Hamden, Connecticut, which is roughly two hours north of New York City. On a typical week, somewhere between 100 and 200 employees dial voters either in a particular state or across the country. They are part-time employees, and they are hired to call randomly selected adults, ask a scripted set of questions over a several-day period, and record the results. Those results go into a data set that’s analyzed, published, and presented to the public.

The key to the accuracy of that sample is how the people are selected. You have to have what is known as a random sample. A random sample is one in which everyone in the population has an equal chance of being selected. If a random sample is used, the poll will accurately represent the population. That sounds nice in theory, but how do we actually make it work in practice? I’m going to try to pull back the curtain on some of the mystery behind polling.

So how do we get a random sample of phone numbers? We use random digit dialing, or RDD, which is the gold-standard method in the polling industry. This method uses phone numbers that have been randomly generated by a computer. It was originally developed back in the 1970s as a way to ensure that people who had unlisted phone numbers would be included in the sample. These days, some of the most respected polls in the country continue to use this method, including the Gallup and Pew polls.
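As a rough sketch of the idea behind RDD: a generated number keeps a plausible area code and exchange and randomizes the remaining digits, so unlisted numbers are just as reachable as listed ones. The prefixes below are hypothetical placeholders; real sampling frames are constructed far more carefully.

```python
import random

# Hypothetical sketch of random digit dialing: keep known in-service
# area-code + exchange prefixes and randomize the last four digits, so
# unlisted numbers have the same chance of selection as listed ones.
KNOWN_PREFIXES = ["203555", "212555", "860555"]  # placeholder prefixes

def rdd_number() -> str:
    prefix = random.choice(KNOWN_PREFIXES)
    suffix = f"{random.randrange(10_000):04d}"   # 0000-9999, uniform
    return prefix + suffix

sample_frame = [rdd_number() for _ in range(1_000)]
print(sample_frame[:3])
```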

Once we reach a household, we don’t automatically speak with whoever picks up the phone. We randomly select the person to speak with by using what’s called the next-birthday technique. Basically, we ask to speak with the person in that household who has the next birthday. Why do we do this? We have found it yields the right balance of men and women relative to the population. If we don’t get an answer, we try several times. If someone can’t talk right then, we schedule a more convenient time. We spread our calls over five or six days, which is our typical polling period. During the week, we call in the evenings. On the weekends, we switch it up. We work hard to reach people, to give them a voice, and not to skip over them if they aren’t able to talk at the exact time we call.
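A tiny sketch of the next-birthday selection he describes, with hypothetical household data (in practice the interviewer simply asks the question rather than computing anything):

```python
from datetime import date

# Hypothetical illustration of the next-birthday technique: choose the
# household member whose birthday comes soonest after the call date.
def days_until_birthday(month: int, day: int, today: date) -> int:
    nxt = date(today.year, month, day)
    if nxt < today:                      # birthday already passed this year
        nxt = date(today.year + 1, month, day)
    return (nxt - today).days

household = {"Ana": (3, 14), "Ben": (11, 2), "Cy": (7, 30)}  # (month, day)
today = date(2018, 8, 29)
selected = min(household,
               key=lambda name: days_until_birthday(*household[name], today))
print(selected)  # Ben -- his November 2nd birthday comes next
```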

In order to reach everyone, we call both cell phones and land lines. About half the country uses only a cell phone, and those people are disproportionately young. Robocalls, or automated polls, are barred by law from dialing cell phones. We use live interviewers who manually dial every phone number, so we are allowed to call people on their cell phones. Another thing we do to ensure we have the best random sample is conduct interviews in Spanish as well as English.

When you are looking at a poll, one thing you want to look at is the sample size. Pollsters generally like to stay above about a 500-person sample. At Quinnipiac University we typically interview more than a thousand registered voters in our national polls. That sample size allows us to look at subgroups such as men, women, old, young, white, people of color, college-educated, non-college-educated. That takes us to another part of a poll, which is the margin of error.

Every poll using a random sample has a margin of error, because it is based on a sample. If the entire population were interviewed, there wouldn’t be a margin of error. Let’s consider an example. Let’s say our poll finds that 40 percent of the nation approves of the way President Trump is handling his job, and the margin of error is plus or minus 3 percentage points. What that means is that the actual percentage of the population that approves of the job he is doing is very likely between 37 and 43 percent.
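For reference, that margin comes from the standard formula for a proportion at the conventional 95 percent confidence level (z ≈ 1.96); plugging in the example’s numbers:

```latex
\mathrm{MOE} \;=\; z\,\sqrt{\frac{p(1-p)}{n}}
            \;=\; 1.96\,\sqrt{\frac{0.40 \times 0.60}{1000}}
            \;\approx\; 0.030 \quad (\pm 3\ \text{percentage points})
```

The same formula explains why subgroup results are noisier: a subgroup of 250 respondents has a margin of roughly plus or minus 6 points, about double the full sample’s. And when two candidates sit within that band of each other, as in the Texas example below, the honest call is "too close to call."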

So what does that mean in the context of election polling? We poll in Texas, where, as you may know, there is a closely watched Senate race between Republican incumbent Senator Ted Cruz and Democratic challenger Beto O’Rourke. Suppose a poll found that Cruz led O’Rourke 52 to 48 percent with a three-point margin of error. That means Cruz could be at 49 percent and O’Rourke at 51 percent. Here is where the margin of error really comes into play. With that result, we would call that race too close to call.

If I’m not polling on a race but I’d still like to know where it stands, I have a few rules of thumb for which polls I pay the most attention to. The most important thing I look for is a gold-standard methodology. That means I look for live interviewer, RDD polls that call cell phones and are transparent about their methods. I also prefer polls that have a proven track record, particularly in similar elections, but I will still pay attention to a new poll that has these same gold-standard methods. Finally, I look for a poll that has independent funding. Studies have shown that, in general, partisan polls tend to overstate their candidate’s position during an election.

Speaking of election polls, you will see that some polls are based on registered voters and others on likely voters. Typically, pollsters will switch over to likely voters soon after Labor Day, as that is the traditional kickoff for campaigns, when voters begin paying closer attention to the elections. There is no consensus on how to define likely voters. We look at a variety of factors to determine who is likely to vote, asking questions about past voting behavior, interest in the campaign, general interest in politics, and intention to vote. As you look at polls, transparency is key. You want to know the sample size, whether the poll was conducted among likely voters or registered voters, the margin of error, and the exact questions asked.
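Since there is no consensus definition, the screen below is purely hypothetical: an index built from the kinds of questions he lists (past voting, campaign interest, intention to vote) with an arbitrary cutoff, just to show the shape of the technique. It is not Quinnipiac’s actual model.

```python
# Purely hypothetical likely-voter screen -- not Quinnipiac's actual model.
# It scores the kinds of questions mentioned above and applies a cutoff.
def is_likely_voter(voted_last_election: bool,
                    campaign_interest: int,   # 1 (none) .. 4 (very high)
                    intends_to_vote: bool) -> bool:
    score = (2 if voted_last_election else 0) \
          + campaign_interest \
          + (2 if intends_to_vote else 0)
    return score >= 6                         # cutoff is an assumption

print(is_likely_voter(True, 3, True))    # True  (score 7)
print(is_likely_voter(False, 2, False))  # False (score 2)
```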

I’m going to walk you through how to find that information on our polls. If you go to our website – as you can see on the screen, poll.qu.edu – you’ll see the releases listed by state on the left-hand side. If you’re interested in, let’s say, the most recent national polls, you click on “National,” and you can see all of our press releases listed. I’m going to choose the one from August 14th and click on that.

And here you have our press release, where we talk about the main findings. It’s two pages, and at the bottom of the second page you’ll see some important methodological detail. It tells you the poll dates – August 9th through the 13th. It tells you that we interviewed 1,175 voters nationwide, that the margin of error is plus or minus 3.4 percentage points, and that we called land lines and cell phones.

If you scroll down further, we have a graphic showing the trend on President Trump’s job approval rating, which we’ve been asking about since the start of his administration. The first question is President Trump’s job approval, and you’ll see that not only do we report the overall results – 54 percent disapprove of the job he’s doing and 41 percent approve – we also break them down among several different subgroups. Reading across, you have your total, and then Republican, Democrat, independent. You can look at the numbers among men and women; white college-educated people and white non-college-educated people. Below that we break down the results by age; white men and white women; whites, blacks, Hispanics.

So for all of the questions you can look at what we call the crosstabs, and a lot of times the crosstabs are really interesting and important for the stories that you’re writing.

Underneath that, we also list the most recent polls in which we asked the job approval question, so you can see all the times we’ve asked it. And then we list all the other questions that we asked for this press release: In general, do you like President Trump’s policies or not? In general, do you like President Trump as a person or not? I’m not going to go through all the questions, but you get the idea.

If you go to the top, there are also some links that I think are useful. You can get the press release in PDF format, as you can see at the top. We also have all the trend information. And I’ll click on the sample and methodology detail. This is something that’s important: it goes into some of the key details of how the poll was conducted. Again, it was an RDD survey; it shows when we took the survey and the margin of error. We called in English as well as Spanish. We collected and tabulated all of the poll information. We also report the party identification of the sample. That’s a question political reporters often have: they want to know what percentage of your sample consider themselves Republicans, Democrats, or independents. And then below that we give some more details about how we conducted the poll.

One final thing about the Quinnipiac University poll. We are completely independent. We don’t work for any parties, candidates, or advocacy groups. Our funding comes from the university. And besides going to our website, you can also follow us on Twitter @QuinnipiacPoll. There we also provide information about the timing of a poll’s release.

And with that, I’ll take some questions.

MODERATOR: Great. Thank you so much. If you have a question, raise your hand and we’ll get to you. And of course those in Washington, please stand at the podium and we’ll get to you as well. Just give your name and media affiliation.

QUESTION: Hasan Ferdous from the Bangladesh daily Prothom Alo. Two questions. Professor, you mentioned partisan polling, and that it tends not to be particularly accurate.

MR SCHWARTZ: Right.

QUESTION: What about the separation between conservative and liberal polling?

MR SCHWARTZ: Yeah.

QUESTION: I understand Quinnipiac is among those who are considered to be a little conservative. And if that is true --

MR SCHWARTZ: Conservative?

QUESTION: That’s how I read about you guys. And if that is true, just one question: How do you account for the so-called Bradley effect, the fact that people may vote one way but say something different to pollsters? In the case of New York, for example, the David Dinkins effect – you remember that. What margin of error would you set aside for this effect? And finally, given what happened in the last election and how you modeled the projections, why should we have any confidence in what you’re projecting? Thank you.

MR SCHWARTZ: Sure. A lot of good questions. Let me see if I can remember them all. The first one was whether we’re conservative. We don’t consider ourselves conservative or liberal; we consider ourselves right down the middle. As I stated earlier, we don’t take any funding from parties, campaigns, or advocacy groups. Our sole mission is to produce an accurate poll that people can trust.

Second question, on the Bradley effect. Yeah, I know what you’re referring to, about people telling the pollsters one thing and then voting another way. In recent times this hasn’t really happened. But you’re right, it did happen back in Los Angeles, and it happened in the New York City election with David Dinkins a while ago. But pollsters really have not seen it in the last, I don’t know, couple of decades. There can always be an exception, but in general you can pretty much trust people to tell you who they’re going to vote for.

Last question, about the 2016 elections. I think there is a misconception about the accuracy of polls: in general, the polls were on the mark in 2016. If you look at the national polls – and don’t forget Hillary Clinton did win the popular vote – most of the high-quality national polls had her ahead by three or four points, and she ended up winning by two points. As far as Quinnipiac goes, we were also fairly accurate in our statewide polling.

You also used the words “projections” and “modeling” – it’s important to emphasize that a poll is a snapshot in time and that things can change. There was movement in the final days of the presidential election. Exit polls found that the late deciders tended to vote for Donald Trump. So always keep in mind that we’re asking people at a specific point in time how they’re going to vote, and it’s possible that between that point and the election, events could intervene and affect that.

More questions?

QUESTION: Manik Mehta. I’m syndicated. My question relates to the growing influx of immigrants in this country who eventually acquire U.S. citizenship. The question is: How do you go about selecting your candidates – or rather, the people whom you wish to poll? Do you also take into account minorities, for example?

MR SCHWARTZ: Absolutely. We get a representative cross-section of the population, and we know this because we can compare the numbers we get in our samples to what the census tells us on various demographics – education, age, race. As far as how we select people: we talk to all adults 18 years of age and older, and then typically we screen for registered voters, because our polls are so often about elections, so we report on registered voters. And as we get closer to an election – as I said, in the fall – we’ll start reporting our results among likely voters.

QUESTION: Johannes Kotkavirta from the newspaper Ilta-Sanomat, Finland. I have heard many times that nowadays polls are not as accurate as they used to be, and that there are no longer the resources to conduct them that there used to be – that interviewers used to go door to door and actually talk with people, and now it’s phone or online polls and things like that. So I just want to ask what you think about the accuracy.

And then a second question, about the coming midterm elections. How well do you think you will be able to project the outcome, and is there any factor similar to what we saw before the 2016 elections, when it was difficult to predict what was going to happen?

MR SCHWARTZ: Yeah. A lot of good questions there. Regarding the accuracy of polling, my answer is that you can trust the gold-standard polls that use live interviewers and random digit dialing and that call cell phones – the Pew polls, the Gallup polls. Those are polls that you can continue to trust. They have a proven track record going back many decades, so the methodology works. I’d also add the transparency part: the polls I trust are the ones where I can see how they did the poll and what questions they asked. And independence – the polls that are not aligned with any particular party or group are the ones I trust the most. So I think you can still believe in polls as long as they’ve got that gold-standard methodology.

The 2018 midterms – I’m confident in our polling on them because of the methods that we use. Our sole mission is to get it right, and not just in terms of the horse race but also to explain why voters are voting the way they are – what are the important factors, what are the important issues. I think that’s a really important function of polls. In terms of 2016, I’m not that concerned. We polled in the 2017 Virginia governor’s race and the 2017 New Jersey governor’s race, and our polling was spot on in both elections. So I feel very confident about how we’re going to perform in 2018.

QUESTION: My name is Mayako Ishikawa from the Asahi Shimbun, a Japanese newspaper. The first question is: How do you decide the content of the questions, and who writes them? And the second question is: Do you modify the content of the questions according to changes in society? I see you have data for every month, with different numbers for the questions. So my question is: If you change a question during that period, how do you explain it to the people?

MR SCHWARTZ: So we decide what questions to ask based on what’s in the news. That’s what drives us. And that gets to your second question: if there are new things, new issues – and there always are – we will come up with new questions. At the same time, the trends are really important, so we like to ask many of the same questions over time simply to see if there’s been a change in opinion, whether it’s on President Trump’s job approval or on different issues. We ask people’s views, for example, on gun policy and healthcare policy and many, many other issues. So whenever possible we like to keep the same question wording so that we can trend it over time and see if there have been changes, but at the same time we’re always adding fresh new questions, based on what’s in the news, to every poll that we do.

It’s a team process in terms of coming up with the questions. I introduced Mary earlier; she’s part of our brain trust, part of the team. We talk about the big issues happening, whether nationally or in one of our states – whether it’s Texas or New York or New Jersey – and then we draft a survey. After we draft a survey, we vet it. We all talk through the questions. We want to be a hundred percent sure that we all agree this is a fair questionnaire – that the wording is fair and the choice of topics is fair. So it’s a team effort. We all collaborate, and that’s how we come up with a questionnaire.

I will tell you one other anecdote: it’s been most challenging trying to write questionnaires for the national polls that we’ve been doing. Things have really changed in the last year and a half since the election of President Trump. There is so much breaking news every day that when we try to come up with questions on a Monday for a poll starting on a Friday, there are many, many rewrites. It’s an evolving process. Each day we’re adding questions, dropping questions, modifying questions, right up until the day we go into the field. If we’re going to start a poll on a Friday, we’re checking the news on Friday morning and early Friday afternoon: What is the latest news? What do we need to include in the poll?

QUESTION: Thank you. My name is Manoj Rijal. I am from Ghatana Ra Bichar, Nepal. I have one question. I have done some economic research in the past, and I’m curious: What are the differences between academic research and your poll research? When I was doing my research, I used a Likert scale, which offers strongly agree, agree, can’t say, disagree, and strongly disagree. So is your approach similar or different? Thank you.

MR SCHWARTZ: It is similar. We also do that on some of our questions. For example, on President Trump’s job approval, if somebody tells us they approve of the job he’s doing, we follow up: Is that strongly or somewhat? So we can do a four-point scale, and it has made for some of the most interesting findings, to be honest with you. What we have found is that people feel very strongly one way or the other about President Trump. For the most part, they either strongly disapprove or strongly approve. And what we’ve also found over the past year and a half is that the strong disapprovers far outnumber the strong approvers. So there is sort of an enthusiasm or intensity gap.

QUESTION: Hi, my name is Shoko Yanagawa from the Japanese TV station Fuji Television. I actually have a follow-up question about who decides the questions.

MR SCHWARTZ: Sure.

QUESTION: What kind of background do those people have? And how many people are involved in this process of screening and deciding the questions?

MR SCHWARTZ: So, we’re very fortunate to have several former journalists. Mary Snow worked at CNN for many years, as well as at other media outlets. Tim Molloy is another one of our spokespeople, also a former journalist. Peter Brown, another former journalist. So I work with several former journalists in terms of trying to decide what are the big issues that we should be asking about.

QUESTION: About how many people?

MR SCHWARTZ: In total? In terms of this particular process of coming up with questions, I’d say there’s about a half a dozen people involved. What I often like to do is ask several people in the office – we have ten full-time people – to read the questionnaire. I might think something that I wrote is perfectly clear to me but might not be clear to somebody else. So it’s always good to have several pairs of eyes looking at the script before we actually start. Or I might think a question is perfectly fair, and somebody else says, “You know that one word that you used, I think that could have an impact. I think we should come up with another word.” So that’s what we do. It’s a back and forth process with several people on the team before we actually go into the field.

QUESTION: Hi, my name is Kawachi with Nikkei, a Japanese newspaper. I’m amazed that you are still using this method of live calling. Maybe ten years ago you could still reach people over the phone, but these days it’s almost impossible, and I’m curious why you still stick to live calls. That’s the first question.

MR SCHWARTZ: Sure.

QUESTION: And the second question is: Are you planning, or exploring the possibility of, using social network services in your methodology?

MR SCHWARTZ: Okay. So as far as the phone goes, yes, I’m still a big believer in this method as the best one. As you said, a lot of people don’t pick up the phone, so it’s not a perfect method, but it’s better than all the rest. It still gives us a very accurate sample of the population. And you’d be surprised at how many people agree to do the survey on the phone: typically about half of the people we contact agree to do it. So we’re still getting reasonable cooperation rates. A lot of people have never been asked their opinion about what’s going on in the news, what’s going on in politics, and they want to take part. They’ve heard about the Quinnipiac University Poll, and they want to see what it’s all about, what the questions are. So we’re still very pleased with our cooperation rate.

As far as social media goes, I think it’s interesting, but it’s not representative of the population. It’s representative of the people who are on social media, whether that’s Twitter or Facebook, and it’s not the same thing as a poll. It’s really the people who want to participate, right? It’s a self-selected sample participating on social media, versus a poll, which is supposed to scientifically select the respondents and try to reach everybody, whether they’re super-interested in politics or just a little interested. We try to represent everybody.

QUESTION: Hi, sir. My name is Danielle. I’m from Brazil. My question is about how you make the decision to work with a reduced margin of error – for example, if you have a race or a subject that is very disputed or very competitive. How do you decide, in a case like that, that you’ll have to work with a smaller margin of error?

MR SCHWARTZ: Yeah. It does come into play when you’re talking about a very close election: you’d really like as big a sample as possible to get your margin of error as low as possible. But as a general rule, we stick with our 1,000, which is kind of the industry standard for a good sample size. That gets you a margin of error of about plus or minus three percentage points. And yes, we could interview more people to get that margin of error down, but pollsters have basically decided that the costs are prohibitive – you’d have to interview thousands more people just to get the margin of error down a bit – so we’ve decided it simply isn’t worth it.
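The cost he describes grows quadratically: by the standard formula n = z²p(1−p)/MOE², at the worst case p = 0.5, halving the margin of error roughly quadruples the required interviews. A quick check:

```python
# Interviews needed for a target margin of error at 95% confidence,
# worst case p = 0.5. Halving the MOE quadruples the required sample.
Z, P = 1.96, 0.5
for moe in (0.03, 0.02, 0.01):
    n = (Z ** 2) * P * (1 - P) / moe ** 2
    print(f"MOE of +/-{moe:.0%} needs about {n:,.0f} interviews")
# +/-3% -> ~1,067   +/-2% -> ~2,401   +/-1% -> ~9,604
```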

QUESTION: Yes, sir. This is Mohammed Zainal Abedin from Bangladesh. I work for an online daily newspaper in Dhaka. This question is out of my own interest and curiosity: Would you ever conduct an instant survey? By an instant survey I mean, for example, asking bus or train passengers, or passers-by, on the spot whom they will vote for. Right now you contact people ahead of time and tell them you are going to call and talk to them. But if you ask a person instantly – the way you might ask my opinion just now – that person gets no chance to prepare or make up his mind about what to tell you. I think that would be more representative.

MR SCHWARTZ: I’m not sure --

MODERATOR: So I think part of that question is timing of polls and how you can manipulate the timing to have that poll come out in one way or another.

MR SCHWARTZ: Oh, do we – yeah, that is something that we don’t even think about. Again, we’re just trying to get good, solid public opinion information at the time. We’ll say, okay, we’re doing a national poll, here’s what’s going on in the news, this is when we’re doing the poll, and this is when we’re going to release it. The sole consideration is how we get a good measure of what the public thinks at that time.

MODERATOR: I think another part of that question was how you prevent your own poll from being positively or negatively manipulated by people – for example, by not telling people when you’re polling, et cetera.

MR SCHWARTZ: Right. One of our practices is that we never share our polling schedule, because we know that if a campaign knew when we were going to be in the field, it could target announcements or ads to try to manipulate the poll. That’s why we never share our schedule.

QUESTION: One more question.

MR SCHWARTZ: Sure.

QUESTION: In my country, the popularity of the parties is quite stable – it doesn’t change much – but we still do lots of polls and get lots of news about the polls. Usually the change is less than the margin of error. So do you think polls are being over-reported in the news – that we are making news about changes that are within the margin of error?

MR SCHWARTZ: Oh, I see. Yeah, that is something to keep an eye on. As a pollster, I love polls being reported in the news, so I’m not going to say it’s too much. But on your specific point about making too much of a change of a point or two that we know is within the margin of error, I agree with you that that is something the media needs to be much more cautious about. At the same time, as a pollster, it’s a fascinating time. There are so many new issues at the national level that even if the numbers aren’t changing, you’re always getting public opinion on fresh new issues. So I think it’s a great time to be reporting poll results.

QUESTION: One more question, sir.

MR SCHWARTZ: Sure.

QUESTION: Could you tell us a little bit about the difference between a pre-election poll and an exit poll and --

MR SCHWARTZ: Sure.

QUESTION: -- how one differs from the other and if the results correspond with each other?

MR SCHWARTZ: Sure. A pre-election poll is what we do at Quinnipiac: we ask people how they’re going to vote. An exit poll interviews people after they have actually voted. They’ll fill out a paper-and-pencil questionnaire and be asked a number of questions – so they’ll say, “I voted for this candidate; this was the most important issue to me” – and they’ll list their demographic characteristics, that sort of thing. Exit polls really should be used for that kind of valuable information in understanding voters: what drove them to vote for the candidates they did, what issues were important. That’s what exit polls are for.

Pre-election polls, which are what we do, happen before the election. As I mentioned, we do phone surveys; it’s not paper and pencil. In a similar way, we also want to know what’s driving voters, but we don’t know for sure that we have an actual voter – that’s why we call them likely voters. Exit polls interview actual voters. With pre-election polls, the tricky part is trying to determine who’s actually going to vote. So those are the main differences and similarities.

QUESTION: (Off-mike.)

MR SCHWARTZ: We do look at it, but you don’t want to compare them too closely, because they are different animals: you’ve got exit polls of actual voters versus pre-election polls of likely voters.

QUESTION: Hi, my name is Islam Dogru, working for Anadolu News Agency. I came a little bit late, so you may have already mentioned this, but I would like to ask again: Would you tell us how often you conduct polls per year, per month? Also, following up on the midterm elections: Are you currently conducting or working on any polls regarding the midterms? Should we expect any headlines from you? And a last question: So-called progressive candidates have been upsetting establishment politicians lately – one was Ocasio-Cortez; another one yesterday was from Florida. Do you see any sort of change in political opinion in your polling regarding the midterm elections? Thank you.

MR SCHWARTZ: Sure. So we poll almost every week of the year, so you can pretty much count on a Quinnipiac University poll every week. We do step it up in the fall campaign; in October you’ll probably see multiple polls a week from us. We are trying to focus in on some of the hottest races. I mentioned the Texas Senate race, which is competitive. Texas is a red state, but Beto O’Rourke is making it a real race with Ted Cruz. You mentioned Florida. That’s also a key state for us. It’s a state we’ve polled for a long time, and we’ve known for a while that there’s going to be a very close Senate race between Rick Scott and the incumbent, Bill Nelson. But now we also have a really exciting governor’s race, as you mentioned – a real progressive versus a strong supporter of Donald Trump, kind of a microcosm, if you will, of a lot of races.

So it should be very interesting, very exciting, to see which way Florida is going to go. Interestingly, I think Scott is moving a little bit away from President Trump, maybe a little more toward the middle, and Nelson is known as more of a middle-of-the-roader. So that’s your Senate race; in the governor’s race you’ve got the two wings – DeSantis on the right and Gillum on the left. Very interesting dynamics there. We’ve got other races that we’re following. In New Jersey, Bob Menendez – New Jersey is supposed to be a safe blue state for Democrats, but because Bob Menendez has had some ethical issues arise, we’ve found that that race is also competitive. One of the interesting dynamics is that the Democrats need two seats to win back the majority in the Senate. It’s possible that they win two seats but then lose one of their own – perhaps Menendez loses – and they wouldn’t win the majority. Obviously he’s still the favorite, he is ahead, but that is a competitive race for us.

So those are some of the races that we’re going to be focusing on this fall.

Yes.

QUESTION: Thank you. I have a somewhat academic question, but it’s of interest because of the changes I see happening in the polling system and the methodology. One thing that really caught my attention is that foreign polling institutes predicted the result of the 2016 election almost accurately, whereas institutions here in the U.S. seem to have gone off the mark.

I’ll give you an example. A German polling institute called Emnid had predicted a tie between the two candidates, whereas here there was a remarkable difference at the end of the day.

Secondly, another question I have in mind: Do you also monitor and do polling about events happening overseas, abroad – for instance, elections in other countries?

MR SCHWARTZ: So yes, we do. I’m going to answer your second question first. We do questions on what’s going on in foreign policy. We’ve done questions about NAFTA, about tariffs, about Syria, about Russia, and China. We’ve done a lot of questions, actually, about Americans’ views on various foreign policy issues.

As far as the accuracy goes, I’d come back to what I said earlier about 2016: the major national gold-standard polls were very close to getting it right. You can’t expect more from a poll than to get within a point or two, and that’s what the national polls did. Of course, there were some state polls that were off, but by and large the state polls got it right. One of the things you want to watch for – someone brought up margin of error earlier – is what a poll actually says. I’ll give you an example from Quinnipiac: we found in Florida in 2016 that Hillary Clinton was ahead by one point, and she lost by one point. The poll was on target – it was a too-close-to-call race. Being off by two points is really good for a poll.

But if the media says, “Well, you said Hillary Clinton had a one-point lead – that means you got it wrong,” no, that’s not the right way to look at a poll. Every poll has a margin of error, and coming within a couple of points is considered being on target.

MODERATOR: But you do not conduct polls outside of the United States with foreigners?

MR SCHWARTZ: That’s right. At least so far we haven’t. You never know in the future.

QUESTION: Arul Louis from Indo-Asia News Service. I have a question first on the representative sample that you have. When you do the RDD, you may get a sample that’s skewed in terms of race, political opinions or income. I don’t really see much in terms of the income differentials in the polls. So how do you deal with that? Do you sort of reject certain people you surveyed and rearrange?

And the second question is somewhat more hypothetical and theoretical. Since you deal with opinions, I’m asking this: We’ve had reports of manipulation of opinions from abroad through social media and the like. Are there any metrics that you, or somebody else, may be developing to look into the impact those factors might have had? Is there any research into quantifying that sort of thing – getting some idea of the impact, and some kind of metrics for it? Thanks.

MR SCHWARTZ: Your second one is interesting. It’s not something that we’ve done. I will tell you, though, about the kind of question that I like asking and that might get at this to some extent in the future: belief questions. Do people believe certain things that they’re hearing in the news? We’ve done that on several occasions. So that might be the kind of approach we take – asking whether people believe things they might be hearing on social media that might be controversial.

On your first question, about the representativeness of the RDD polls – we do ask the income question. It’s actually the last question we ask on every poll, and we can break down our results by income. The results really aren’t skewed in one direction on income or on any of the other demographics I mentioned – age, education, race, region, gender. We do compare our results to the census to make sure we’re on target.

QUESTION: Yeah, so what I meant was: if, say, you have a thousand people in your RDD sample, and you find that initially the sample skews toward a certain demographic, would you rearrange it and go back?

MR SCHWARTZ: Yes, so in every poll we do, we use a statistical technique called weighting the data – W-E-I-G-H-T-I-N-G – which is a statistical adjustment. We know, for example, that the population is 52 percent women. Suppose in our sample we ended up with 53 or 54 percent women; we would weight down the responses of women to make sure it’s exactly 52 percent. It very, very rarely has an impact. The way I like to view it is as a minor statistical adjustment just to make our data perfectly clean, matching the census demographics. This kind of weighting really doesn’t affect the poll results on the substantive questions we’re asking about issues and how people feel about elected officials. Does that answer your question? Yeah? It’s a routine thing we do on age, education, race – on several different demographics. We weight our data to make sure it matches the census.
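Using his own gender example, here is a minimal one-variable sketch of that adjustment: each respondent’s weight is their group’s population share divided by its sample share. Real weighting typically balances several demographics at once, often by iterative raking; this is only the basic idea.

```python
# Minimal one-variable weighting sketch based on the gender example above.
# An overrepresented group is weighted down; an underrepresented group, up.
population_share = {"women": 0.52, "men": 0.48}   # e.g., census benchmark
sample_share     = {"women": 0.54, "men": 0.46}   # what the poll obtained

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # women ~0.963, men ~1.043

# A weighted estimate then uses each respondent's weight w_i:
#   approval = sum(w_i * approves_i) / sum(w_i)
```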

MODERATOR: Okay. I think I’m going to take the moderator’s privilege and ask the last question, which is: Can you tell us a little bit about the people who conduct the polls, and approximately how long the interviews are?

MR SCHWARTZ: So we like to keep the surveys to about 15 minutes, not much longer, because we find that if you go much longer, people grow fatigued and don’t want to continue with the survey. So: about 15 minutes in length. As far as who is doing the calling – when the students return in the fall, about half of our interviewing staff will be student interviewers: a lot of political science majors, psychology majors, sociology majors. We’ll have a roughly equal number of students and non-students – very roughly, about 200 students and 200 non-students – doing the calling for us.

MODERATOR: Well, thank you all very much for coming today. Thank you, Dr. Schwartz, for what I think was a lively and interesting session, and hopefully we’ll see you at the next briefing.

MR SCHWARTZ: Thank you.

# # #