Philometrics | Surveys Reinvented

Read our White Paper

Interested in learning the validation details behind Philometrics? Check out our white paper.

What is SurveyExtender?

The basic idea behind surveys is 120 years old: to get a response, a person has to answer your questions. This is both expensive and time consuming. Not surprisingly, most surveys don't have many respondents, and those that do are astronomically expensive.

Yet we know that surveys with few respondents usually do not accurately reflect the population, so survey researchers are typically left with poor quality data and poor quality answers. To date, this has been acceptable because no better alternative existed.

SurveyExtender changes this. It is built on the idea that we can forecast how hundreds of thousands of people would respond to a survey without actually needing them to take it. This idea is provocative and marks a major shift in how we think about doing survey research. We have spent the last few years figuring out how to do it, and building a rigorous science to validate the method and give us confidence that this is the way to do survey research.

The result: we take your survey of 1,000 respondents and give you back responses for over 100,000 anonymous respondents across the United States. SurveyExtender works by using state-of-the-art machine learning technology that automatically builds models predicting responses to your survey questions from people's answers to a special survey of demographic and psychological questions.

The rest of this page shares in detail the scientific validation of SurveyExtender and the basic principles of how it functions. We are ultimately skeptical scientists and have worked hard to convince ourselves that survey forecasting works. We hope you find our evidence convincing. But the best way to see if it works is to test it yourself: run a survey, have it extended, and play with the results.

Basics Behind Survey Forecasting

SurveyExtender works by (a) taking a sample of participants (at least 1,000) who have answered both your survey and a special "Rosetta Stone" survey of ours, (b) using machine learning to build models of how the "Rosetta Stone" responses map onto your survey responses, and then (c) feeding the "Rosetta Stone" responses of 100,000+ anonymous people through these models to generate forecasts of how they might answer your survey. Below we discuss each step in detail.

Step 1: Your Survey + Our "Rosetta Stone" Survey from 1,000 Respondents

For SurveyExtender to work, it needs data to build its machine learning models. This data has two parts: (a) responses to your survey, which will be used as the output, and (b) a special survey all of our participants take, which we call the "Rosetta Stone" survey and which will be used as the input. Philometrics makes it easy to get both parts. Simply build your survey using our survey engine and turn SurveyExtender on in the study settings (or when creating the study). We will automatically add the necessary questions to the beginning of your study to collect the "Rosetta Stone" survey. Once your survey is ready, simply recruit at least 1,000 people to take it. We need at least 1,000 people to answer each question to ensure the models have high enough accuracy rates. For best results, we recommend recruiting even more: 1,500 to 2,000 respondents.
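To make the data requirement concrete, here is a minimal sketch of what the combined dataset might look like, assuming Python and pandas. The column names and items are hypothetical illustrations, not Philometrics' actual "Rosetta Stone" items.

```python
# Hypothetical layout of the training data: each row is one respondent
# with both "Rosetta Stone" answers (inputs) and your survey's answers
# (outputs). All column names here are illustrative.
import pandas as pd

respondents = pd.DataFrame({
    # "Rosetta Stone" survey: demographic and psychological items (inputs)
    "rs_age": [34, 52, 27],
    "rs_extraversion": [3.2, 4.1, 2.8],
    "rs_state": ["VT", "TX", "CA"],
    # Your survey: the questions the models will learn to forecast (outputs)
    "q1_likert": [5, 2, 4],        # continuous-style item
    "q2_choice": ["A", "C", "A"],  # categorical item
})

# SurveyExtender needs at least 1,000 answers per question for training.
MIN_N = 1000
for question in ["q1_likert", "q2_choice"]:
    n = respondents[question].notna().sum()
    print(f"{question}: {n} responses ({'ok' if n >= MIN_N else 'need more'})")
```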
Step 2: Automatically Building Models

Once you have collected at least 1,000 participants who have both (a) responses to your survey and (b) the "Rosetta Stone" survey, SurveyExtender applies machine learning algorithms to build a model for every question that is either continuous (Likert, slider) or categorical (multiple choice, drop down, checkbox) in nature. The models take the "Rosetta Stone" survey responses as inputs (the Xs in a regression) and each survey question as the output (the Y in a regression).

Step 3: Forecast

Once the models are ready, we feed the "Rosetta Stone" survey data of over 100,000 anonymous people through the models. This generates a forecast, for each person, of how they might respond to each survey question. The forecast is based on each person's psychology and demographics.

To learn more about the logic behind why this works, and why it provides better results than surveys alone, keep reading!
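Here is a minimal sketch of the per-question logic in Steps 2 and 3, using scikit-learn on the hypothetical `respondents` layout sketched above. Philometrics has not published its actual algorithms, so the ridge and logistic regression estimators below are illustrative stand-ins, not the production models.

```python
# Sketch of Steps 2-3 (illustrative, not Philometrics' actual models):
# fit one model per survey question, then forecast for a large panel.
# Assumes the `respondents` DataFrame from the previous sketch.
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

ROSETTA = ["rs_age", "rs_extraversion", "rs_state"]  # hypothetical inputs

def build_model(df, question, categorical):
    """Fit a model mapping "Rosetta Stone" answers to one survey question."""
    features = make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"), ["rs_state"]),
        (StandardScaler(), ["rs_age", "rs_extraversion"]),
    )
    estimator = LogisticRegression(max_iter=1000) if categorical else Ridge()
    return make_pipeline(features, estimator).fit(df[ROSETTA], df[question])

# Step 2: one model per question, chosen by question type.
likert_model = build_model(respondents, "q1_likert", categorical=False)
choice_model = build_model(respondents, "q2_choice", categorical=True)

# Step 3: feed the anonymous panel's "Rosetta Stone" answers through the
# models. `panel` would hold the 100,000+ anonymous respondents' data.
# likert_forecasts = likert_model.predict(panel[ROSETTA])
# choice_forecasts = choice_model.predict(panel[ROSETTA])
```

One model per question keeps the design simple: any estimator that maps the shared inputs to a single output can slot in without changing the surrounding pipeline.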
The Need for Large Samples

Research is about generalizing: we want to take something we've learned about our participants and generalize the insight to the broader population. Historically, this has been difficult to do without a very large number of participants.

Let's take an example. Imagine you want to conduct a political poll. You recruit 1,000 Americans and ask them their political preferences. In the best case scenario, this sample is nationally representative. What you can confidently learn from these 1,000 people is how Americans, on average, are leaning in their political preferences. However, this is a national-level snapshot, and it misses much of the rich variation that exists between states.

So now let's imagine you want to go deeper and ask what people's political preferences are in each state. The 1,000 participants provide poor answers to this question because you have very few people from each state; smaller states, such as Vermont, may not have a single person represented among your 1,000 participants. You quickly run into the small-N problem: too few participant data points on the population (each state's voters) you are trying to generalize about. Spread evenly, 1,000 respondents works out to about 20 per state, and the 95% margin of error for a simple proportion at n = 20 is roughly ±22 percentage points, compared with about ±3 points at n = 1,000.

To get a good estimate by state, you would need to go out and recruit 1,000 participants in each state, quickly ballooning the sample size to 50,000. The problem only escalates as you zoom in further (say, to the district level) or start comparing groups within states (say, men versus women, or younger versus older voters).

We've never had a good solution to this problem other than recruiting massive samples, something that has been prohibitively expensive and thus rarely done. For the most part, we've simply been unable to explore the rich variability we know is there but cannot access. This problem stems from a simple rule that binds us: the only way to learn how people respond to surveys is to ask them, so our sample size equals the number of people we recruit. Philometrics aims to change this through SurveyExtender.

How Survey Forecasting Works

Let's start with a relatively uncontroversial (we hope!) claim: all human behavior is driven by what's going on in our heads and the social environments we live in. If we could understand perfectly how a person thinks and the environment they are in, we should be able to predict how they would answer a survey (which is just a form of behavior).

Survey forecasting is based on this principle, but it is far more approximate than the ideal case. First, we ignore the social environment and just try to figure out how a person thinks. This gets us quite far, as it removes some of the situational biases that often occur in surveys (though it potentially limits our ability to understand the role of situational forces in shaping behavior). Then, we approximate what's going on in a person's head using a questionnaire designed to estimate their personal psychology.

But people are extremely complex, and our "Rosetta Stone" survey is unlikely to capture this richness fully. So the information we extract is imperfect: it is only a rough approximation of a person. At the same time, it does carry some real insight about the person. What we should expect, then, is for the "Rosetta Stone" survey data to provide rough approximations of how people think, and in turn of how they answer surveys: approximations with lots of noise.

Accuracy of Forecasted Data

How do we know our models and forecasts are any good? To answer this, we test every model on people who were not part of the training process (so their data in no way shaped the model) but who have both actual survey responses and "Rosetta Stone" survey responses. We take these people's "Rosetta Stone" responses, feed them into our models, and generate forecasted survey responses. We then compare these forecasts to their actual responses.

There are two typical accuracy metrics. For continuous variables (e.g., Likert responses, slider responses, age), we simply correlate the forecasted response with the actual response. For categorical variables, we use Area Under the Curve (AUC), which measures how often you would correctly classify a person as belonging to a category, assuming equal numbers of people in and out of that category (so pure guessing yields a 50% success rate). AUC ranges from 0 to 1, with .5 being the chance guessing level and 1 meaning every person is classified correctly.

In our system, we do the training and testing using the participants who actually complete your survey. For all of these participants, we collect the "Rosetta Stone" survey responses behind the scenes. Since the sample sizes are relatively small, we use resampling techniques to generate the test group. The ultimate result is a model for every question you asked that can be used to forecast, and for each model we report its accuracy rate as either R (the correlation, for continuous variables) or AUC (for categorical variables).

Building the models is only step one. The really exciting bit happens next: we take 100,000+ real people and feed their "Rosetta Stone" survey responses through the models to forecast our best guess of how each of them would have answered your survey. These scores are noisy; they are in no way spot-on forecasts of how a particular individual would actually respond. But they are better than chance, and far more importantly, they have several properties that make using them to generalize to the original population extremely powerful, in fact far more powerful than using the self-reports alone! Keep reading to find out how this happens and how we validate this claim.
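Both metrics are standard. The sketch below shows how R and AUC could be computed on held-out data using scipy and scikit-learn; the responses here are simulated purely for illustration.

```python
# Illustrative accuracy check on simulated held-out respondents:
# Pearson correlation for a continuous question, AUC for a categorical one.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Continuous question: actual Likert scores vs. noisy forecasts.
actual = rng.integers(1, 8, size=500).astype(float)
forecast = actual + rng.normal(0, 2.0, size=500)   # noisy but related
r, _ = pearsonr(actual, forecast)
print(f"R for continuous question: {r:.2f}")

# Categorical (binary) question: true membership vs. forecasted probability.
truth = rng.integers(0, 2, size=500)
prob = np.clip(truth * 0.3 + rng.uniform(0, 0.7, size=500), 0, 1)
print(f"AUC for categorical question: {roc_auc_score(truth, prob):.2f}")
```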
How Do Biased Samples Affect Forecasted Scores?

In working with survey forecasting, we discovered two very important properties of forecasted scores.

First, we found that even though the scores carry a lot of noise, they are unbiased: our models are just as likely to over-predict as to under-predict a person's score. The mean of the errors is nearly 0, and the errors are roughly normally distributed around that 0 point. This is extremely important, as it tells us our models are not making systematically wrong forecasts.

This is where the power of a large sample comes in. If these were actual survey responses and we were attempting to estimate the average of the population they came from, we would average the responses and get a pretty good estimate of the true population average: with a large sample, and assuming no systematic bias in people's scores, the average across all people is close to the true average. The same logic applies to forecasted scores: if we take a large group of people with forecasted scores and average them together, the average error becomes nearly 0 and we get a population average that should be very close to the truth.

What is amazing here is that since we can forecast for many more people than were in our original training sample, we can get far better population averages from the forecasted data than from the actual survey responses we collected! Think back to our political poll example. If we had surveyed 1,000 people, there would be too few data points per state to say anything meaningful about the average in the smaller states. But through forecasting, we now have estimates for 100,000+ anonymous people: plenty of data points per state. And since the forecasts are unbiased, averaging them should get us quite close to the true state average (i.e., what we would have found if we had surveyed everyone in each state).

The second important property of forecasted responses is that they tend to correlate with each other sensibly. If you take a correlation matrix of actual survey responses and compare it to a correlation matrix of the same variables forecasted, the two matrices are often quite similar. The directions of the correlations are almost always the same; the only difference is magnitude. We are continuing to work on making the models even better and making the correlation matrices as similar to the self-report ones as possible, but they are already quite good.

Why does this matter? Ultimately, we often care about two things in our surveys: population averages and how variables covary with one another. We've already covered the accuracy of population averages from forecasted responses; the correlation matrix similarity shows that forecasted data is also highly useful for understanding relationships between variables.
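Both properties can be reproduced in a toy simulation. The sketch below is not Philometrics' validation code; it simply generates "true" responses to two correlated questions, adds zero-mean noise to stand in for forecasts, and checks that group means survive averaging and that correlations keep their direction.

```python
# Toy simulation (not Philometrics' validation code): unbiased noisy
# forecasts recover group means and the direction of correlations.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# "True" responses to two correlated survey questions.
q1 = rng.normal(4.0, 1.0, n)
q2 = 0.6 * q1 + rng.normal(0, 1.0, n)          # q2 covaries with q1

# Forecasts = truth + zero-mean noise (the unbiasedness property).
f1 = q1 + rng.normal(0, 1.5, n)
f2 = q2 + rng.normal(0, 1.5, n)

# Property 1: averaging cancels the noise, so group means match closely.
print(f"true mean of q1: {q1.mean():.3f}, forecast mean: {f1.mean():.3f}")

# Property 2: correlations keep their direction, with shrunken magnitude.
print(f"true r(q1, q2): {np.corrcoef(q1, q2)[0, 1]:.2f}")
print(f"forecast r(f1, f2): {np.corrcoef(f1, f2)[0, 1]:.2f}")
```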






