Why predict
Research Questions
Descriptive research questions are based on observations made in previous research or in passing.
Prediction
On the other hand, a prediction is the outcome you would observe if your hypothesis were correct.
Example
Let us take a look at another example:
Causal Question: Why are there fewer asparagus beetles when asparagus is grown next to marigolds?
Hypothesis: Marigolds deter asparagus beetles.
Prediction: Asparagus grown next to marigolds will have fewer asparagus beetles than asparagus grown on its own.
A final note
It is exciting when the outcome of your study or experiment supports your hypothesis. It can be just as useful when it does not: maybe you had a potential problem with your methods, but on the flip side, maybe you have just discovered a new line of evidence that can be used to develop another experiment or study.
Training in probability can guard against bias.
Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different from the past. The forecasters who received this training performed better than those who did not. Interestingly, a smaller group was trained in scenario planning, but this turned out not to be as useful as the training in probabilistic reasoning.
Rushing produces bad predictions.
The longer participants deliberated before making a forecast, the better they did. This was particularly true for those who were working in groups.
Revision leads to better results.
Forecasters had the option to go back later and revise their predictions in response to new information. Participants who revised their predictions frequently outperformed those who did so less often. Together, these findings represent a major step forward in understanding forecasting. Ultimately, a mix of data and human intelligence is likely to outperform either on its own. The next challenge is finding the right algorithm to put them together.
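The piece does not say what that combining algorithm would look like. As a rough sketch only, one approach discussed in the forecasting literature is to average a model's probability and the human forecast in log-odds space and then "extremize" the result; the weight, the exponent, and the helper `pool_forecasts` below are illustrative assumptions, not anything taken from the studies described here.

```python
import math

def pool_forecasts(p_model: float, p_crowd: float,
                   w_model: float = 0.5, extremize: float = 1.5) -> float:
    """Blend a statistical model's probability with a human forecast.

    Illustrative sketch: the weight and extremizing exponent are
    assumptions, not values from any study cited in the text.
    """
    def logit(p: float) -> float:
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp to avoid infinite log-odds
        return math.log(p / (1.0 - p))

    # Weighted average in log-odds space, then push the pooled forecast
    # away from 0.5 ("extremizing") to counteract the dilution that
    # simple averaging tends to produce.
    pooled = w_model * logit(p_model) + (1.0 - w_model) * logit(p_crowd)
    pooled *= extremize
    return 1.0 / (1.0 + math.exp(-pooled))

# Hypothetical usage: a model says 60 percent, the human crowd says 80 percent.
print(round(pool_forecasts(0.60, 0.80), 3))  # ~0.79
```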
The biologist Paul Ehrlich and the economist Julian Simon had famously bet on whether the prices of five metals would rise or fall over the course of a decade; prices fell, and Ehrlich lost. The kind of excessive regulation Ehrlich advocated, the Simon camp argued, would quell the very innovation that had delivered humanity from catastrophe. Both men became luminaries in their respective domains. Both were mistaken. When economists later examined metal prices for every 10-year window from 1900 to 2008, during which time the world population quadrupled, they saw that Ehrlich would have won the bet 62 percent of the time.
The catch: Commodity prices are a poor gauge of population effects, particularly over a single decade. The variable that both men were certain would vindicate their worldviews actually had little to do with those views. Prices waxed and waned with macroeconomic cycles. Yet both men dug in. Each declared his faith in science and the undisputed primacy of facts. Ehrlich was wrong about the apocalypse, but right on aspects of environmental degradation.
Simon was right about the influence of human ingenuity on food and energy supplies, but wrong in claiming that improvements in air and water quality validated his theories. Ironically, those improvements were bolstered through regulations pressed by Ehrlich and others. The pattern is by now familiar.
In the 30 years since Ehrlich sent Simon a check, the track record of expert forecasters in science, economics, and politics is as dismal as ever. In business, esteemed and lavishly compensated forecasters are routinely wildly wrong in their predictions of everything from the next stock-market correction to the next housing boom. Reliable insight into the future is possible, however. The idea for the most important study ever conducted of expert predictions was sparked in 1984, at a meeting of a National Research Council committee on American-Soviet relations.
The psychologist and political scientist Philip E. Tetlock was 30 years old, by far the most junior committee member. He listened intently as other members discussed Soviet intentions and American policies. Renowned experts delivered authoritative predictions, and Tetlock was struck by how many perfectly contradicted one another and were impervious to counterarguments.
Tetlock decided to put expert political and economic predictions to the test. With the Cold War in full swing, he collected forecasts from highly educated experts who averaged more than 12 years of experience in their specialties.
To ensure that the predictions were concrete, experts had to give specific probabilities of future events. Tetlock also had to collect enough predictions that he could separate lucky and unlucky streaks from true skill. The project lasted 20 years and comprised 82,361 probability estimates about the future. The result: the experts were, by and large, horrific forecasters.
Their areas of specialty, years of experience, and, for some, access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 percent of them occurred nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire.
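The article does not spell out how these forecasts were graded, but a standard way to score stated probabilities against yes/no outcomes is the Brier score, together with a calibration table that compares what forecasters said with how often events actually happened. The sketch below uses invented numbers chosen to mirror the miscalibration described above.

```python
from collections import defaultdict

def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always guessing 50 percent scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def calibration_table(probs, outcomes, bin_width=0.1):
    """Bucket forecasts by stated probability and report the observed
    frequency of the event in each bucket."""
    bins = defaultdict(list)
    for p, o in zip(probs, outcomes):
        bins[round((p // bin_width) * bin_width, 2)].append(o)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Invented data: 20 events called nearly impossible (5%) of which 3 occur,
# and 20 "sure things" (95%) of which 5 fail -- the kind of gap between
# stated probability and observed frequency described in the text.
probs    = [0.05] * 20 + [0.95] * 20
outcomes = [1] * 3 + [0] * 17 + [1] * 15 + [0] * 5
print(brier_score(probs, outcomes))        # ~0.18
print(calibration_table(probs, outcomes))  # {0.0: 0.15, 0.9: 0.75}
```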
Even faced with their results, many experts never admitted systematic flaws in their judgment. When they missed wildly, it was a near miss; if just one little thing had gone differently, they would have nailed it. Some experts, usually liberals, saw Mikhail Gorbachev as an earnest reformer who would be able to change the Soviet Union and keep it intact for a while, and other experts, usually conservatives, felt that the Soviet Union was immune to reform and losing legitimacy.
Both sides were partly right and partly wrong.