Frequently Asked Questions

What is this project called?

The name of this project is the Electoral Violence Prediction Market (EVPM).

What organization is the sponsor of this project?

The sponsor of the project is the Bureau of Conflict Stabilization Operations (CSO) of the United States Department of State.

What is the purpose of the project?

The purpose of the project is twofold. First, it will develop an integrated forecasting tool for electoral violence, combining quantitative analysis of electoral violence data with a prediction market of experts that adds a qualitative dimension to the analysis. Second, CSO will use the findings from this tool to formulate its policy and programmatic priorities for preventing, managing, or mediating the electoral violence that has been forecast.

Who are the implementing partners on the project?

Working with CSO on the project implementation are Creative Associates International and King's College London.

What is the prediction market?

A prediction market is simply a way to aggregate people's forecasts into an ongoing likelihood that an event will happen. In this case, the market produces the probability and profile of electoral violence in given elections. Those invited to participate have been drawn from a global pool of experts in electoral violence, security and conflict studies, and elections in general. Participants come from the United States government, academia, nongovernmental organizations, and think tanks, as well as from the ranks of independent consultants.

What is my role?

As a participant in the prediction market, you will be asked to forecast on two sets of elections. First, the project is conducting a global scan of elections to be held in 2018, 2019, and 2020; the first set of forecasts concerns the probability that violence will occur in those elections. Second, with these findings in hand, CSO will select seven elections to be held in 2018 and ask for forecasts, for each election, on the likelihood of violence during different phases of the electoral cycle.

What will be the use of the findings from the project?

CSO will employ the findings from the forecasts to inform its policy and programming decisions for interventions to prevent, manage, or mediate the violence.

How does the prediction market mechanism work?

As an illustration, suppose we want to ask about the likelihood that a candidate will win an election. In the prediction market, we might ask: "Will Candidate A win?"

In a survey or poll, you might simply say "yes" or "no," but in a prediction market, you are using an allotment of points to make a weighted forecast.

In this prediction market, you have two ways of making those weighted forecasts. In "Easy" mode, you can simply provide a probability for the answer happening. Based on this probability, the system will use some of your points and make the weighted forecast for you. In "Advanced" mode, you can see and change the number of points being used for your weighted forecast after you assign a probability.

The more points you use, the more influence you are exerting on the "crowd" forecast. This crowd forecast is a "live" probability, meaning after you make a forecast, someone else could make a different forecast and move the probability to a different value.
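The platform's exact aggregation formula is not described in this FAQ, but as a rough illustration of how staking more points exerts more influence, the sketch below assumes the simplest possible scheme: a points-weighted average, where each forecast pulls the crowd probability toward itself in proportion to the points behind it.

```python
# Minimal sketch of a points-weighted crowd forecast. This is an
# illustration only: the actual aggregation used by the platform is
# not documented here, and the weighting scheme below (a simple
# points-weighted average) is an assumption.

def crowd_forecast(forecasts):
    """Aggregate (probability, points) pairs into one crowd probability.

    Each forecast is a tuple (probability, points_staked); staking more
    points gives that forecast more influence on the crowd value.
    """
    total_points = sum(points for _, points in forecasts)
    if total_points == 0:
        return None  # no forecasts yet
    return sum(prob * points for prob, points in forecasts) / total_points

# Two forecasters stake different amounts on "Will Candidate A win?"
forecasts = [(0.70, 100), (0.40, 25)]
print(f"Crowd forecast: {crowd_forecast(forecasts):.0%}")  # 64%

# A later, heavily weighted forecast moves the "live" probability.
forecasts.append((0.20, 200))
print(f"Crowd forecast: {crowd_forecast(forecasts):.0%}")  # 37%
```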

Can I change my forecast?

Yes – you can change your forecast at any time. We recommend reviewing and updating your forecasts weekly, based on new information that may become available about the questions we're asking, but you're free to pick your own interval, even multiple times in a single day.

What happens when the answer to a forecasting problem is known?

When the answer to a question is known, the forecast problem is "resolved." Each answer is judged as either having happened or not happened, and the probability for that answer moves to either 100% or 0% accordingly. Based on your previous forecasts, you will earn or lose points depending on whether you forecast in the right direction.
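The exact payout formula is not specified in this FAQ, so the sketch below assumes a simple linear settlement rule, purely for illustration: you break even at 50%, gain points in proportion to how far you forecast toward the resolved outcome, and lose them in proportion to how far you forecast away from it.

```python
# Minimal sketch of settling a forecast at resolution. The real
# platform's payout formula is not given in this FAQ; the linear
# rule below is an illustrative assumption.

def settle(points_staked, prob_assigned, occurred):
    """Net point change for one forecast when the question resolves.

    prob_assigned: probability (0..1) the forecaster put on this answer.
    occurred: True if the answer is judged to have happened.
    """
    p_true = prob_assigned if occurred else 1.0 - prob_assigned
    # Break even at 50%; gain for forecasting toward the outcome,
    # lose for forecasting away from it.
    return points_staked * (2 * p_true - 1)

print(settle(100, 0.70, occurred=True))   #  40.0 (right direction)
print(settle(100, 0.70, occurred=False))  # -40.0 (wrong direction)
```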

How were the starting probabilities next to the responses determined?

The starting probability percentages were derived from a quantitative review of elections in the Countries at Risk for Electoral Violence (CREV) database of King's College London, using a Random Forest analysis. For countries conducting elections that do not appear in the dataset, the probabilities were imputed from each country's history of electoral violence.
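As a rough illustration of this approach, the sketch below trains a Random Forest classifier on hypothetical election data and reads off a class probability. The feature names and values are invented for the example; the actual CREV variables and model configuration are not specified in this FAQ.

```python
# Illustrative sketch of deriving a starting probability with a Random
# Forest, in the spirit of the CREV analysis described above. All
# features and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical elections: each row is a past election with
# country-level risk indicators, labeled 1 if electoral violence occurred.
X_train = np.array([
    [0.8, 1, 3],  # [prior-violence index, close race?, press-freedom tier]
    [0.1, 0, 1],
    [0.6, 1, 2],
    [0.2, 0, 1],
    [0.9, 1, 3],
    [0.3, 0, 2],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The predicted probability of the "violence" class would play the role
# of the starting probability shown next to a response.
upcoming_election = np.array([[0.7, 1, 2]])
p_violence = model.predict_proba(upcoming_election)[0, 1]
print(f"Starting probability: {p_violence:.0%}")
```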

Am I anonymous in this system?

You are anonymous to your peers using the system, unless you choose a username that reveals your identity. We do not share your email address with other users, and any additional information requested in your profile is optional to complete.

The stakeholders of this project do have access to your email address and all associated actions you take in the system. These stakeholders include members of the core project team at CSO, the project team at Creative Associates, the project team at King's College London, and the project team at Cultivate Labs, the makers of the prediction market platform we're using on this project. Any personal information we have about you and any individual activity on this system will never be shared outside these groups.

How easy is it to manipulate the results?

There are ways to manipulate prediction markets, such as multiple accounts, collusion, and pump-and-dump schemes. However, because this is an invitation-only prediction market, manipulation is unlikely. The point system also minimizes the opportunities for manipulation: to earn points, one must regularly forecast in the same direction as the eventual answer or the crowd; forecasters who do not will steadily lose points and, with them, their influence on the crowd forecast.

How do you judge how accurate we are?

We judge both the accuracy of the site's forecasts and your own forecasts using a Brier Score.

The Brier score was originally proposed to quantify the accuracy of weather forecasts, but can be used to describe the accuracy of any probabilistic forecast. Roughly, the Brier score indicates how far away from the truth your forecast was.

The Brier score is the squared error of a probabilistic forecast. To calculate it, we divide your forecast by 100 so that your probabilities range between 0 (0%) and 1 (100%). Then, we code reality as either 0 (if the event did not happen) or 1 (if the event did happen). For each answer option, we take the difference between your forecast and the correct answer, square the differences, and add them all together. For a yes/no question where you forecast 70% and the event happened, your score would be (1 − 0.7)² + (0 − 0.3)² = 0.18. For a question with three possible outcomes (A, B, C) where you forecast A = 60%, B = 10%, C = 30% and A occurred, your score would be (1 − 0.6)² + (0 − 0.1)² + (0 − 0.3)² = 0.26. The best (lowest) possible Brier score is 0, and the worst (highest) possible Brier score is 2.
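The two worked examples above translate directly into a small calculation. Here is a minimal sketch of that formula:

```python
# Brier score as described above: squared error between the forecast
# probabilities and the resolved outcome (0 or 1 per answer option),
# summed across options. Lower is better; 0 is perfect, 2 is worst.

def brier_score(forecast, outcome):
    """forecast and outcome are parallel lists, one entry per answer option.

    forecast: probabilities between 0 and 1 (percentages divided by 100).
    outcome: 1 for the option that occurred, 0 for the others.
    """
    return sum((f - o) ** 2 for f, o in zip(forecast, outcome))

# Yes/no question: forecast 70% yes, and the event happened.
print(round(brier_score([0.70, 0.30], [1, 0]), 2))           # 0.18

# Three outcomes A/B/C: forecast 60/10/30, and A occurred.
print(round(brier_score([0.60, 0.10, 0.30], [1, 0, 0]), 2))  # 0.26
```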

Can I suggest new forecast problems to be posted?

We are following a specific framework for the forecast problems we post, but we are also open to suggestions for ancillary questions. You can suggest new forecast problems by emailing support@cultivatelabs.com; your suggestions will be forwarded to the project team. Please mention in your email that you are participating in the CSO project.

I know someone else who would be interested in participating. Can I invite them?

Yes, just send them to the URL cso.cultivateforecasts.com to sign up.