investigating the nature of fact in the digital age


“He-said, she-said” journalism vs “Why-he-said, why-she-said” journalism

In Uncategorized on March 9, 2014 at 9:14 am


In an increasingly crowded world of instant information, the need for journalism that explains – not just reports – what is going on is crucial. It’s crucial to an informed population, to democratic process, to holding those in power to account for their actions. Being first with the news is important, but depth and accuracy must always trump speed. Mathew Ingram at GigaOm recently took a look at some US media start-ups that focus on explanatory journalism. Check out his piece here.

Malcolm Turnbull on the news media

In Uncategorized on March 8, 2014 at 6:25 pm


The federal Minister for Communications reckons there’s still value in printed news. This is an extract from the speech he gave at the launch of The Saturday Paper.

When 1+1=1: Journalism and the trouble with “facts”

In Uncategorized on September 6, 2013 at 9:28 pm

By Gordon Farrer, RMIT University

A posse of fact-checkers has been riding the boundary of the federal election. Unhappy with the standard of honesty in political discourse, the ABC, this website and PolitiFact.com.au (a localised version of a US format, staffed mostly by ex-Fairfax journalists) set up operations to check facts in statements made by politicians and others during the campaign.

Isn’t fact-checking what journalists are meant to do already?

Of course it is. And they do. Facts are the building blocks of good reportage, the substance upon which a true and full record of history is built. They are gathered, checked and double-checked before being published in print, on television and radio, and online. At least, that’s the theory.

Journalism has changed. The conversation with the media audience has changed. The competition to be first with news means there is less time to check and confirm every line of a public figure’s statements. The multitude of new avenues that let politicians bypass the traditional gatekeepers of the media and deliver their unfiltered message straight to their audience has changed the nature of information and of the political conversation.

Politicians know this and take advantage of these changes. Facts are spun, taken out of context, cherry-picked or cunningly applied to create a false impression. The fact-checkers’ challenge is to strip away the noise, lay bare how facts are distorted and expose the deceit built into the rhetoric of politics.

People think they know a fact when they see one. People should think again.

The truth is that “facts” can be tricky, elusive things.

The theory is facts are gathered, checked and double-checked before being published, but that’s not always put into practice. Image from shutterstock.com

Here are three facts most would accept at face value:

1+1=2

Clive Palmer is overweight

The unemployment rate is 5.7%.

The equation 1+1=2 is self-evident; simple observation and experience tell us it is true. But some cheeky mathematicians take delight in proving that 1+1=1 is also true.

They conjure this surprise result by using a numerical sleight-of-hand known as a “mathematical fallacy”. This is, essentially, a well-camouflaged false step, and if you don’t spot the false step or know how to go through the mathematical working to pinpoint where it was introduced, you might be tempted or feel compelled to accept that 1+1=1.
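One classic version of the trick (the standard textbook example of a mathematical fallacy, not one specific to this article) runs like this, with the hidden false step flagged at the end:

```latex
% Let a = b = 1.
\begin{align*}
a &= b \\
a^2 &= ab             && \text{multiply both sides by } a \\
a^2 - b^2 &= ab - b^2 && \text{subtract } b^2 \\
(a+b)(a-b) &= b(a-b)  && \text{factorise both sides} \\
a + b &= b            && \text{divide both sides by } (a-b) \\
1 + 1 &= 1            && \text{substitute } a = b = 1
\end{align*}
% The camouflaged false step: since a = b, (a - b) = 0,
% so the fifth line divides both sides by zero.
```

The division by (a − b) looks harmless until you remember that a = b makes it a division by zero, which is exactly the kind of well-camouflaged false step described above.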

A mathematical fallacy can be created on purpose, as a party trick to impress or challenge fellow numbers geeks at mathematics soirees. A mathematical fallacy can also be accidental, a simple, subtle miscalculation buried in the working that leads to an incorrect result. If not discovered, such mistakes could have potentially fatal consequences – for example, if the error is made by a designer of nuts and bolts used in bridges or space shuttles.

So, it is possible to believe that 1+1=1 is a fact if you don’t think to look for an error, don’t know how to look for one, or don’t know there’s an error to be found. Everyone will know it is wrong, but only a few have the skills to prove it is wrong.

Hold that thought.

Mining magnate Clive Palmer is a larger-than-life character whose physique matches his personality. Even a casual observer can see that he is overweight. But we don’t have to trust the observation of casual observers to know that the statement “Clive Palmer is overweight” is a fact because medical science gives us a definitional tool for the classification of body weight: the body mass index.

The BMI correlates height and weight to arrive at a number. A person is considered to be underweight, overweight or to have a healthy body weight depending on where that number sits on a spectrum.

Technically, St Kilda captain Nick Riewoldt is overweight. AAP Image/Dave Hunt

It’s safe to say that Clive Palmer’s BMI would categorise his weight as above the ideal for his height. He would sit in the overweight or (according to my dietitian) the obese section of the spectrum.

But consider this: at 193 centimetres and 96 kilograms, Nick Riewoldt – captain of St Kilda AFL club, superb athlete and fine specimen of a human being – has a BMI of 25, categorising him as overweight. “Nick Riewoldt is overweight” is as much a fact as “Clive Palmer is overweight” is a fact. Crazy, I know.
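The arithmetic behind that claim is straightforward. A minimal sketch of the standard BMI formula (weight in kilograms divided by the square of height in metres), using the figures quoted above:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Nick Riewoldt's figures as quoted above: 193 cm, 96 kg.
print(round(bmi(96, 1.93), 1))  # 25.8
```

The result is about 25.8, just over the conventional cutoff of 25 that separates “healthy weight” from “overweight”, which is the whole point: a superb athlete lands on the wrong side of the definitional line.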

Hold that thought, too.

According to the government department responsible for measuring unemployment, the current jobless rate is 5.7%. The statisticians in the Australian Bureau of Statistics are experts, independent of political influence, so we have good reason to trust that they know how to measure unemployment in Australia. The 5.7% figure should be one we can accept as fact.

But what is being measured? There is considerable debate about the value of unemployment figures. According to a recent column in the Fairfax press, the rate does not include roughly 100,000 people who have been moved from the unemployment queues into training schemes. It also does not include those who have given up looking for work, those who work for a family business, or those who do just one hour of paid work each week. Include these categories and you get an unemployment rate of 6.2%.
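To see how much the counting rules matter, here is a toy calculation. The labour-force size and excluded-group count below are purely illustrative assumptions, not ABS data; the point is the mechanism, not the numbers:

```python
# Hypothetical, illustrative figures only -- not ABS data.
labour_force = 12_000_000   # people counted as in the labour force
unemployed = 684_000        # people counted as unemployed

headline_rate = unemployed / labour_force
print(f"{headline_rate:.1%}")   # 5.7%

# Widen the definition: add back a group the headline measure excludes,
# e.g. 100,000 people moved from the unemployment queues into training schemes.
excluded = 100_000
broader_rate = (unemployed + excluded) / (labour_force + excluded)
print(f"{broader_rate:.1%}")    # 6.5%
```

Change who counts as “unemployed” (and as “in the labour force”) and the headline figure moves, even though nothing in the real world has changed.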

These three examples help us understand that no fact is an island. Facts are constructed and constrained by social, historical, cultural, scientific and economic factors and cannot exist or be understood outside the context and connections created by those factors. Change the context or the connections and you change the fact.

Fact-checking operations know this and so parse context and connecting factors to arrive at their shades-of-truth rulings, with the tested fact sitting on a spectrum from True through to False via a range of incremental stages (for example, PolitiFact’s ratings of Mostly True, Half True and Mostly False).

Epistemicism is a position in the branch of philosophy that deals with vagueness and inexactness, that border area in which something goes from being one thing to being another. The field considers such questions as: at what point does a thin thing become a not-thin thing? Is there a tangible, identifiable definitional line that separates these states?


If there is, we might ask, is there also a line between non-physical states such as “fact” and “not-a-fact” (or between “fact” and “not the fact supposedly being presented”)? That is, is there a “truth mass index” we can turn to for help, a version of the BMI that can be applied to fact?

And if there isn’t an easily defined line between “fact” and “non-fact”, on what basis do the fact-checkers think they can make judgements about factual accuracy?

The fact-checkers operate in this zone of vagueness and, in practice, they do an effective job. As experienced journalists they know how to examine and expose the rhetorical equivalents of mathematical fallacies. They can identify how definitions and assumptions around, say, unemployment figures have been warped or constructed to achieve a desired result.

Of course, there is argument about the nuances of fact-checkers’ rulings; in the real world that is where subjectivity enters proceedings, and there is no hard and fast way to calculate the impact of personal preference or opinion.

But even without a truth mass index, the checkers could rule that Nick Riewoldt is as healthy a specimen of a human being as you will find. They could also rule that Clive Palmer should stop eating hamburgers.

Because sometimes facts speak for themselves.

Gordon Farrer was a Fairfax journalist for 13 years.
Fairfax also holds a stake in his current employer, Metro Media Publishing.


This article was originally published at The Conversation.
Read the original article.

A piece I wrote for The Conversation

My life as a journalist: a confession

In Academic reflection on April 4, 2013 at 8:54 pm

A Public Self-Criticism

Early in my professional development I stood on the True Path that led to the Higher Purpose of Academia. I was young and my instinct was to seek to serve that Greater Good and its Pursuit of Truth. But I was weak.

One day, I came to a fork in the road: on one side the path continued towards Glorious Truth; on the other lay the cracked and crooked road to Journalism. I was tempted, and I succumbed.

When I stepped off the true path I left behind the History and Philosophy of Science; the great, unmined riches of the Annales School of History; and the unfathomable depths of a thousand schools of Chinese thought. Pursuing these interests might have filled ten lifetimes with intellectually rigorous investigation. Instead, I was drawn into a false and superficial world populated by shifting shadows, moveable ethics and impure motives.

I convinced myself that Journalism had value and that by practising it I could serve a Public Good. For more than 20 years I believed I was serving the ideal of Quality Journalism by applying the noble art of Research to inform a knowledge-thirsty readership. But my eyes were recently opened to the Glorious Truth by the Honourable institution of RMIT and the School of Media and Communication.

Thanks to the re-education I received while absorbing the clarifying philosophy of Professional Research Methods and Evaluation I now understand the enormity of my crimes. As a journalist I perpetrated a fraud on the people and institutions that trusted me: my family, my colleagues, the Honourable Public and even Democracy itself. What I did was not Research. At its least damaging it was a shallow pretence; at its worst an evil approximation that distorted Truth and misled weak and gullible readers unwittingly indoctrinated by the tricks and deceptions of the mainstream media.

The “research” perpetrated by journalists is an abasement. Mainstream journalism uses a catch-all approach to gather “facts”, crudely meshing them with the words of “experts” and/or “witnesses” (usually obtained at short notice, via the telephone, with no opportunity afforded for reflection or consideration!) and publishing them under the constraints of time, space and relative importance in a news agenda defined by commercial parameters. Mostly it is primary material, occasionally “informed” by secondary sources. But it is almost always dashed off, stitched together to last just long enough for the next news cycle to sweep it away. Thus is worship of Objectivity and Eternal Truths slyly and carelessly supplanted by the creeping menace of Subjectivity and Ephemeral Interest, a process rarely admitted to by the perpetrators.

Driven by commercial imperative, most mainstream media rely on and also take advantage of weaknesses in human psychology. Like bowerbirds, we are compulsively attracted to shiny things; like kittens we cannot look away when something intriguing catches our eye. Thus coverage of even the most serious issues – those “boring” but worthy subject areas that deserve deep and considered analysis – must compete with the populist appeal of sport, manufactured political soap operas or the birth of an achingly cute baby elephant at the local zoo.

True Knowledge does not lie on the ground in convenient nuggets to be picked up and thrown in a sack, then taken to the market place and turned into bankable cash by the seller of metals. Yet this is what the media do. It is what I once did. The glittering clumps the media gather are Fool’s Gold – Fool’s Knowledge, you could say – useful to catch the light for a moment to attract the easily amused, or be used to decorate the fringes of some item of passing popularity, fadsome concerns alien to Deeper Understanding.

True Research takes time. It requires structure, careful thought, rules and parameters of practice to ensure consistency of application so that standards are maintained across and within all fields of Academic Inquiry. Standards are correctly kept high and maintained by the selfless gatekeeping process of Peer Review. The sacred mantra of “Problem, Context, Method and Outcome” is a beacon to Illuminate Truth. Journalism’s guiding trope of “Who, What, When, Where, How and Why” suggests a fine ambition, but it is one that always falls short because it is inconsistently applied and too easily bent out of shape to be useful.

I confess to such practices and confess to being a ringleader. As an editor at The Age I recruited and groomed others to similarly debase their intellectual energies.

For this, and for all my crimes against True Academic Research while a member of the Gang of Mainstream Media, I am ashamed and I apologise.

Signed …

On Research

In Academic reflection on April 4, 2013 at 9:12 am

WHILE doing the Professional Research Methods and Evaluation course at RMIT I gave a lot of thought to the nature of academic research. I’ll concentrate on that rather than on professional research because the “pure” nature of academic study appeals to me more – with some caveats.

Academic research is clearly an inexact science but its aim to reveal the faces of some kind of sacred truth or ultimate understanding of the world really is a noble one. Whether the subject is “hard” (engineering, chemistry) or “soft” (the humanities), academic research has gleaned insights into the world and ourselves that have driven us forward, for good or ill.

That said, I’d like to offer some critical thoughts.

“Did we really need to spend three years and $100,000 to learn THAT?”

The results of some research seem obvious, while some research topics are plain crazy. Three examples of eyebrow-raising studies (all winners of the Ig Nobel Prize):

Suicide rates are linked to the amount of country music played on the radio (“The Effect of Country Music on Suicide”, Social Forces, 1992)

Dog fleas can jump higher than cat fleas (“A Comparison of Jump Performances of the Dog Flea, Ctenocephalides canis (Curtis, 1826) and the Cat Flea, Ctenocephalides felis felis (Bouché, 1835)”, Veterinary Parasitology, 2000)

Rats can’t always tell the difference between Japanese spoken backwards and Dutch spoken backwards (“Effects of Backward Speech and Speaker Variability in Language Discrimination by Rats”, Journal of Experimental Psychology: Animal Behavior Processes, vol. 31, no. 1, January 2005).

The media love to criticise what they see as irrelevant, useless research, and often present academic research in a poor light, oversimplifying complicated research to the point that any value it has becomes invisible. Perhaps there is great value in knowing that rats can’t always (can’t always!?) tell the difference between backwards spoken Japanese and backwards spoken Dutch. We can’t know until we know, if you know what I mean.

On the other hand, it seems obvious that suicide rates can be linked to country music (especially Billy Ray Cyrus) without research telling us so.

Is there value in confirming apparently common sense observations? Should something be considered not true until observed/proven/validated by research? Should grants be given out for any old research project just to be able to tick off that, yes, we now know that less smoking and more exercise leads to a longer life (I’ve seen that study)?

Good research is vital but bad research – that is, research based on flawed assumptions, imprecise questions, questionable motives – can do great damage. When public confidence in academic research is eroded to the point that people claim that scientists warning about climate change are only in it for the research funding, we’re in for a world of pain.

Was Darwin a plagiarist?

At university it is drummed into us that in all assignments we submit for assessment we must acknowledge the work of others, including “thoughts, ideas, definitions or theories”. Neglecting to do so is considered a breach of academic integrity, a serious matter.

If you’ve been reading and thinking about an issue for a long time you will have absorbed many concepts and themes and can easily believe those ideas are your own. It is also possible that you will reach conclusions that others in the same field of study are coming to, will come to, or have already come to.

Several years ago I wrote a newspaper column about Twitter. At the time there were two common analyses of the microblogging platform. The first focused on the potential of Twitter to become a substantial source of crowd-sourced citizen journalism that could supplant traditional, professional journalism. The other concentrated on how many followers users had; whether you were a “thought leader” on Twitter, how much influence you had among the cognoscenti in your field, how you used the platform to extend and strengthen your “personal brand”.

In the column I presented my theory that Twitter is just another broadcast medium, like radio: a minority of voices (equivalent to radio broadcasters) are “listened to” by a majority (the audience). To get value from Twitter you don’t need to be one of the big voices, you just need to be a wise listener and, if inclined, you can chip in with the occasional contribution (like listeners do on radio talkback).

Since that column was published, this has become a common third analysis of Twitter. Had I read an early version of this analysis before writing the column and forgotten about it, or did I come up with it independently? It’s unlikely I was the first person to think of it but I feel sure I’d never encountered it before. Had I unwittingly plagiarised someone? If, in a research paper, you present a similarly “original” idea that someone else has already come up with, are you guilty of plagiarism?

Before starting a research project a thorough literature review should uncover whether certain theories or findings have previously been unearthed. But sometimes a new idea can coalesce in the ether and drop to earth, in different places, simultaneously.

English scientist Isaac Newton and German mathematician and philosopher Gottfried Leibniz developed calculus around the same time. Charles Darwin and Alfred Russel Wallace both “discovered” the theory of evolution. History is full of such examples of simultaneous invention. When a set of facts is in wide circulation it’s likely that more than one mind will put two and two together and come up with similar conclusions around the same time. In a super-fast, digitally-connected world it’s likely new ideas and inventions will arise simultaneously more often, in more places, created by more people. Original ideas will be harder to come by and the competition will be to be first to get them in front of the public or your academic peers.

Alfred Russel Wallace arrived at the theory of natural selection independently of Darwin, but whose name is associated with it?

Should we tolerate this?

When setting out the intellectual/philosophical framework of their research context and method, the authors of many papers blithely accept the conclusions of previous research without critiquing them. “Following the analysis of Bifur, Bofur and Bombur, we base our research framework on their finding that …”

This practice is a useful convenience, but very few academic papers offer simple black-and-white conclusions. There’s usually a lot of “on the one hand” followed by “but on the other”. Yet in paper after paper such qualifications are glossed over and a simple conclusion is plucked out to support a position, and used to build new conclusions which, in turn, are simplified and used by subsequent researchers in their research.

Nuance and inconveniently conflicting views are thereby erased in the hope of building a singular truth. In mechanical engineering such discrepancies are known as “tolerance”: the amount of permissible “wiggle room”, or the tiny amounts of error within which a designed object will still function. (For example, beyond a certain degree of tolerance a bolt will not screw into a nut; metal needs to be able to withstand certain amounts of weight and pressure.)

It’s an inexact but necessary process: we shouldn’t – and can’t – reinvent or test every element of every piece of research we rely on. But when doing our own research we need to be aware of the wiggle room we build into our set-ups and arguments (context and method) and acknowledge it. As researchers we should also be aware that allowing too much tolerance weakens the whole edifice. As precision declines, so do solidity and reliability. This applies as much to academic research and argument as it does to bridges and skyscrapers.
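The compounding effect is easy to see with a toy calculation (the 2% figure below is an arbitrary assumption chosen for illustration): if every conclusion in a chain of stacked research findings is allowed a little slack, the slack multiplies rather than adds.

```python
# Toy illustration: each of 10 stacked conclusions is allowed 2% "wiggle room".
per_step_tolerance = 1.02
steps = 10

accumulated = per_step_tolerance ** steps
print(round(accumulated, 2))  # 1.22
```

Ten steps of 2% tolerance leave roughly 22% of accumulated slack, not 20%, and the gap widens the taller the edifice gets, which is why precision matters as much in argument as in engineering.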

Some thoughts about abstruse language in academic papers (I’m looking at you, Cultural Studies)

Journalism students are taught to avoid using jargon because it is the enemy of clear expression. But much academic writing seems to revel in being obscure. If you’ve put years of study and effort into your research, why make the results of your work difficult to understand for a broad audience by using words in combinations not found in any dictionary or guide to grammar?

Academics argue – as do all industry specialists who develop a secret, specific language of their own – that existing language and vocabulary cannot capture the depth or complexity of the concepts they need to express. In that case, I’d suggest, such research approaches the realm of the mystical, and its “results” are unavailable to anyone whose world-view is rooted in the practical and the solid. What tangible benefit can come out of this kind of research? How can it be used? Is it just talk and wordplay and esoteric insight for its own sake? Is its only value that it can be cited by the next study in a continuing procession of abstruse studies written by other members of the same club? I wonder.

I blame the French. Foucault, Latour, Baudrillard and Bourdieu would be easier to understand if they’d spent more time in primary school learning Euclidean geometry rather than Existentialism. Once you get the French writing and thinking as those blokes do, everyone in the Cultural Studies business starts doing it because they don’t want to appear dumber than the French.

There’s another possibility: perhaps I’m dumber than the French and those who follow in their intellectual footsteps. Still, even if these studies do mean something and have great (though hidden) practical value, my bottom line is this: I’m not willing to spend the time to work it out. The studies’ conclusions aren’t worth the effort of unpacking the language. As far as I can tell.

When he addressed us in Week 9, Tony Jaques suggested that research should build on previous findings, teach us something new and lead to action. He was referring mainly to professional research rather than theoretical cultural studies-style research, but the point was a good one.

Tony Jaques also recalled an occasion when a paper he wrote was criticised as “too journalistic”. He took this to mean that the paper was readable and comprehensible – both good things – rather than high falutin’ and obscure.

Despite my rant at the top of this paper about the failings of journalism, I’ve come to realise that my preference in research methods sits somewhere in the middle of the continuum that stretches from quality journalism to research that is rooted in obscure theories only available to certain members of an academic club. Real numbers, trends and behaviour you can identify, results you can use to improve understanding and apply to improve policy (government, corporate, social): these are the kinds of research practice that interest me.

Lights! Camera! Action Research!

I was always marked highly for the Systems Theory papers I wrote when I dabbled in the subject at Monash a decade ago – even though as far as I could tell my essays meant nothing, I was making it all up as I went along, couching wads of nonsense in language that made sense only if you were the kind of person who habitually wears a colander on your head with an array of wires connected to a tin foil body suit beneath your clothes.

Did the lecturer (the late, great Frank Fisher) dare not mark me low because he wasn’t sure whether my papers contained great insight or were utter drivel? I don’t know. The thing is, I now realise that what I was doing was Action Research, which turns out to be a legitimate kind of research, not a subsection of the airy-fairy French wankery referred to above. Cool.

I quite liked doing the Systems Theory version of action research, but I do wonder: how can we be sure everyone is on the same page if experience-based research is acceptable? Won’t subjectivity dominate? What are the common parameters that enable comparison of studies? Are such questions relevant?

I’d like to end with a passage from Jean McNiff’s Action Research for Professional Development:

In traditional forms of research – empirical research – researchers do research on other people. In action research, researchers do research on themselves. Empirical researchers enquire into other people’s lives. Action researchers enquire into their own. Action research is an enquiry conducted by the self into the self. You, a practitioner, think about your own life and work, and this involves you asking yourself why you do the things that you do, and why you are the way that you are.

Permission to rabbit on after a bit of self-reflection? Granted. Plus we’ll validate your parking as thanks for coming to the conference.