Friday 10 October 2014

Keep It Simple, Silly


Random, caught-my-eye post of the day from http://science.slashdot.org/story/14/09/22/1312251/how-our-botched-understanding-of-science-ruins-everything?sdsrc=popbyskid

An anonymous reader writes

"Pascal-Emmanuel Gobry writes at The Week, "If you ask most people what science is, they will give you an answer that looks a lot like Aristotelian 'science' — i.e., the exact opposite of what modern science actually is.

Capital-S Science is the pursuit of capital-T Truth. And science is something that cannot possibly be understood by mere mortals. It delivers wonders. It has high priests. It has an ideology that must be obeyed. This leads us astray. Countless academic disciplines have been wrecked by professors' urges to look 'more scientific' by, like a cargo cult, adopting the externals of Baconian science (math, impenetrable jargon, peer-reviewed journals) without the substance and hoping it will produce better knowledge.

This is how you get people asserting that 'science' commands this or that public policy decision, even though with very few exceptions, almost none of the policy options we as a polity have have been tested through experiment (or can be). People think that a study that uses statistical wizardry to show correlations between two things is 'scientific' because it uses high school math and was done by someone in a university building, except that, correctly speaking, it is not.

This is how you get the phenomenon ... thinking science has made God irrelevant, even though, by definition, religion concerns the ultimate causes of things and, again, by definition, science cannot tell you about them. ... It also means that for all our bleating about 'science' we live in an astonishingly unscientific and anti-scientific society. We have plenty of anti-science people, but most of our 'pro-science' people are really pro-magic (and therefore anti-science). "

END of Article

At least for now, this holds true: the deluge of jargon in a publication is quite unwarranted, in my humble opinion. However, we are in a vicious cycle that starts with papers being rejected because they are too "easy to understand" or describe "simple experiments" that a 12-year-old could do. The reviewers think that if a result is too simple, it doesn't behoove them to publish it in their journal. "What is the big deal?" they ask.

The question of "what is the big deal?" gets answered by academics in the form of massive amounts of data generated from high-throughput experiments on expensive, cutting-edge equipment, followed by the application of every statistical testing procedure known to man. If there isn't a hypothesis, then we'll let machine learning algorithms loose on the data. Something will have to bunch up, cluster, or come out significant somewhere. We can make up the story after the results are in front of us.
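To see why "something will have to cluster somewhere" is a hollow guarantee, here is a toy sketch of my own (not any published pipeline): a minimal k-means run on pure uniform noise. The algorithm dutifully reports k "clusters" whether or not any structure exists, because that is all it is built to do.

```python
# Toy illustration: k-means happily partitions structureless noise into k groups.
import random

random.seed(7)

def kmeans(points, k, n_iter=50):
    """Minimal 2-D k-means; returns the final cluster assignment per point."""
    centers = random.sample(points, k)
    assignment = [0] * len(points)
    for _ in range(n_iter):
        # Assign each point to its nearest centre.
        for i, (x, y) in enumerate(points):
            assignment[i] = min(
                range(k),
                key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2,
            )
        # Move each centre to the mean of the points assigned to it.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assignment[i] == c]
            if members:
                centers[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return assignment

# Structureless data: 300 points scattered uniformly over the unit square.
noise = [(random.random(), random.random()) for _ in range(300)]
labels = kmeans(noise, k=3)
sizes = [labels.count(c) for c in range(3)]
print("cluster sizes found in pure noise:", sizes)
```

The three cluster sizes come out non-trivial every time, and nothing in the output warns you that the "clusters" are an artifact of the method rather than a property of the data.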

Why do we do it? Because everyone else is doing it, and we won't get publications, and the grants that follow them, if we don't do what "everyone else is doing".

What does this lead to eventually? A flood: thousands of papers buried under thousands of others, filled with jargon, complex mathematical notation, and statistical tests that were probably cherry-picked because they gave the investigators the answers they wanted.

Finally, you, the science newbie, have to enter this world. You pick a topic and are rewarded with a deluge of papers. How does one get around to reading all of this? A feed reader helps, but only in aggregating papers on the topics you would like to read. They still have to be read and understood by you, and the methods have to be tried out to see if they work.

Then you stumble upon all the limitations of the published method when you start to work on it. It will be quite some time before you realize that the best stories that could be told using the method have already been published and that you are looking at the dregs. By then it's too late to back out: you've spent a considerable amount of time understanding it and getting it working, and you can't abandon all that work just like that. You try to mix and match different things to see if something comes of it, anything at all. Your desperation colours your perception, and you attempt to justify any result by bending your "unbending scientific principles". It becomes an exercise in torturing the data with statistical tests until you get the answer you want, or even a hint of it, supported by a P-value cut-off that you chose only after the test had been applied and the P-values were in. Then you try to wrap it all up into a story and get it published.
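How reliably does torture produce a confession? Here is a toy sketch of my own (assumed numbers, not anyone's real study): run a couple of hundred hypothesis tests on pure noise, where by construction there is nothing to find, and count how many come out "significant" at the usual 0.05 cut-off.

```python
# Toy illustration: enough tests on pure noise will always yield "significant" hits.
import random
import statistics

random.seed(42)

def permutation_p_value(a, b, n_perm=200):
    """Two-sided permutation test for a difference in means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / n_perm

n_tests = 200
false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution: there is nothing to find.
    group_a = [random.gauss(0, 1) for _ in range(20)]
    group_b = [random.gauss(0, 1) for _ in range(20)]
    if permutation_p_value(group_a, group_b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests came out 'significant' on pure noise")
```

Roughly one test in twenty will clear the bar by chance alone, so a determined investigator who keeps testing, and only then picks the cut-off, is guaranteed a publishable-looking "result" eventually.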

Why bother? I don't know... maybe because we have nothing better to do.