More data isn't always better, says Nate Silver

Top statistician warns that an abundance of data lets statisticians 'cherry pick' data points to get the results they want

Big data may seem to promise big insights to users, but more isn't always better, cautions statistician Nate Silver, who became one of America's best-known faces of data analysis after his FiveThirtyEight blog accurately predicted the 2012 presidential election results in all 50 states.

The more data there is, "the more people can cherry pick" data points that confirm what they want the data to show, he said.

Abundant data is a notable problem in politics, where many have an interest in the outcome. But it's also an issue in fields ranging from medicine -- where many researchers and journals would rather publish studies showing an interesting result than a null finding -- to earthquake prediction.

It turns out that along with real insight, big data can bring "a lot of spurious correlations" -- apparent relationships between things that are really just random noise, Silver said at the RMS Exceedance conference in Boston today, where RMS announced its new cloud-based RMS(one) risk-management platform.
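
To see how easily noise can masquerade as signal, consider a small simulation (an illustration, not Silver's own example): generate dozens of unrelated random series and count how many pairs happen to correlate strongly.

    import numpy as np

    # Illustration only: 100 series of pure noise -- no real relationships exist.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(100, 50))        # 100 series, 50 observations each

    corr = np.corrcoef(data)                 # all pairwise correlations
    upper = corr[np.triu_indices(100, k=1)]  # keep each pair of series once
    strong = int(np.sum(np.abs(upper) > 0.4))

    print(f"{strong} of {upper.size} noise-vs-noise pairs have |r| > 0.4")

Typically a couple of dozen of the 4,950 pairs clear that bar by chance alone; search through enough variables and "significant" relationships are guaranteed to turn up.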

In addition to writing the FiveThirtyEight blog, now hosted by The New York Times, Silver is the author of the book The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't.

In his presentation, Silver offered four tips for more effectively gaining -- and sharing -- insight from data:

  1. "Think probabilistically," he urged. "Think in terms of probabilities and not in terms of absolutes."

    Don't be afraid of communicating the level of uncertainty that comes with your predictions -- just as most public opinion polls include margins of error -- even if not all of your audience will understand. Some criticised FiveThirtyEight for stating the confidence levels attached to its election predictions, but conveying uncertainty is "important and good science".

    Not doing so can have serious consequences, he noted, such as in 1997 when the National Weather Service predicted a 49-foot flood level for the Red River in Grand Forks, ND. Many in the town were reassured by that, since the city's levees were designed to withstand a 51-foot flood.

    Unfortunately, what was not communicated to Grand Forks residents was the likely margin of error based on past forecasts: plus or minus 9 feet (a short sketch after this list shows what that margin implies). In fact, the river crested at 54 feet and much of the community was flooded.

    Today, the National Weather Service is much better about noting the uncertainty level of its forecasts, Silver said, citing the "cone of uncertainty" that comes along with projected hurricane paths. Showing uncertainty "in a visual way is important" in helping people evaluate forecasts.

    Probability forecasts are a "waypoint between ignorance and knowledge," but they are not certainties.

  2. "Know where you're coming from" -- that is, know your weak points, the incentives to reach certain conclusions and the biases against others. "You are defined by your weakest link," he said.

    He noted an experiment on gender bias where people were shown two nearly identical technical resumes -- one with a female name and one with a male name. People who claimed to have no gender bias were in fact more likely to discriminate against the resume with the female name. Why? Those who were aware of their tendencies toward bias were more likely to take action to counteract them, Silver said.

  3. Survey the data landscape, and make sure you have some variance in your data before having confidence in a forecast. (In other words, accurately forecasting the weather in San Diego is not as impressive a feat as doing so in Buffalo; the second sketch after this list illustrates why.)

    Likewise, forecasting a stable economy is easier than forecasting one with a lot of booms and busts, which helps explain why so many forecasters were unprepared for the most recent recession: they were building models on data from 1986-2006, when the economy was unusually stable. A detailed and sophisticated model based on silly assumptions won't do you much good, he noted.

  4. Finally, trial and error is helpful.

    Models tend to work well when they are developed slowly with a lot of feedback. As with many things in life: "You should be suspicious of miraculous results."
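
Returning to the Grand Forks example from point 1, here is a rough sketch of what communicating that margin of error could look like. This is not the National Weather Service's method; it assumes, purely for illustration, that the plus-or-minus 9-foot historical error behaves like one standard deviation of a normal distribution around the point forecast.

    # Hedged sketch: treat the 49-foot point forecast as the centre of a
    # normal distribution whose spread reflects the historical forecast error.
    from scipy.stats import norm

    forecast_ft = 49.0   # NWS point forecast for the Red River crest
    levee_ft = 51.0      # design height of the Grand Forks levees
    error_sd_ft = 9.0    # assumption: +/- 9 ft error ~ one standard deviation

    p_overtop = norm.sf(levee_ft, loc=forecast_ft, scale=error_sd_ft)
    print(f"P(crest exceeds the {levee_ft:.0f}-ft levees) = {p_overtop:.0%}")

Under that assumption, the levees had roughly a 40 per cent chance of being overtopped -- a very different message from a bare 49-foot forecast that sits comfortably below 51 feet.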
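
And for point 3, a toy comparison (synthetic numbers, not Silver's) shows why a forecast only deserves credit relative to the variance of what it predicts: a no-skill "tomorrow equals today" forecast already looks excellent on a stable San Diego-like series, and far worse on a volatile Buffalo-like one.

    import numpy as np

    # Synthetic daily temperatures in degrees Celsius (illustrative values only)
    rng = np.random.default_rng(1)
    days = np.arange(365)
    san_diego = 20 + rng.normal(0, 1.5, 365)                   # stable climate
    buffalo = 9 + 12 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 5, 365)

    for name, temps in (("San Diego", san_diego), ("Buffalo", buffalo)):
        mae = np.abs(temps[1:] - temps[:-1]).mean()  # persistence-forecast error
        print(f"{name}: no-skill forecast is off by {mae:.1f} deg C on average")

A real model should be judged by how much it beats that no-skill baseline, not by its raw accuracy on data that barely moves.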
