I’m taking a little holiday from watching the news. I do this sometimes. I turn off the volume to watch all those mouths move, then let all of the frustrated and angry people float away, sealed in their lovely bubbles. Escapism? Yes – and no. Sometimes it’s the only way to imagine myself outside my own bubble of news and views, to try to see how people get sealed off from each other in their internally coherent mini-worlds. If I quiet my own rage at the world and stop myself from yelling about “truth”, I think I can see that the people inside all the bubbles are a lot alike, and are using similar ways to create their different versions of the world. It’s those ways that grab my attention for Theory of Knowledge. The following story is likely to grab your attention as well.
Three reporters ran what they called an “experiment” in news consumption during the recent American election. Although in TOK we might reserve the term “experiment” for a more rigorously constructed foray into knowledge, they did gather some gripping stories of the power of filters in social media to reinforce what people already believe. As we know, under the intuitive cognitive bias known as confirmation bias, we are inclined to accept what is in harmony with our thinking and reject what jars with it, without even being conscious that we’re doing so. OK. Got it. This isn’t new. But would we expect our own biases to affect the technology we use to access information? Isn’t the internet neutral, just the product of an objective machine?
Contradictory communities: versions of the world
I recommend this account of their “experiment” in Facebook bubbles, simply because one of the most effective ways of opening an issue for students is to use stories – to catch interest with human responses and then move to the larger knowledge questions. I like this particular article because it sympathetically humanizes both the people who hold left-wing perspectives and the people who hold right-wing perspectives. It suspends questions of who is right and who is wrong in order to look at how they receive their information – and how their initial political perspectives channel them into self-reinforcing flows of information: “Bursting the Facebook bubble: we asked voters on the left and right to swap feeds”.
Recognition of the bias built into algorithms on the web is not, of course, Breaking News. Indeed, the term “filter bubble” seems to have worked its way into our language. Yet our students, digital natives though they be, may never have thought critically about what the web offers them as they take their information and perspectives from it. Claire Cain Miller, writing in the New York Times in July 2015, argues that “algorithms discriminate”:
There is a widespread belief that software and algorithms that rely on data are objective. But software is not free of human influence. Algorithms are written and maintained by people, and machine learning algorithms adjust what they do based on people’s behavior. As a result, say researchers in computer science, ethics and law, algorithms can reinforce human prejudices.
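The mechanism Miller describes is easy to demonstrate. Below is a deliberately simplified sketch in Python – a toy model, not the algorithm of any real platform, with every name and number invented for illustration – of how a feed that merely weights stories by past clicks can seal a reader into a bubble:

```python
import random

# A toy pool of stories; "topic" stands in for a political leaning.
ARTICLES = [
    {"topic": "left", "title": "Left-leaning story"},
    {"topic": "right", "title": "Right-leaning story"},
]

def build_feed(click_history, pool, size=5):
    """Weight each story by how often the reader has clicked its topic."""
    weights = [1 + click_history.get(article["topic"], 0) for article in pool]
    return random.choices(pool, weights=weights, k=size)

# Simulate a reader whose single early click starts the feedback loop,
# and whose confirmation bias leads them to click only matching stories.
clicks = {"left": 1}
for day in range(30):
    for article in build_feed(clicks, ARTICLES):
        if article["topic"] == "left":
            clicks["left"] += 1

print(clicks)  # the "left" count snowballs; "right" stories fade from the feed
```

Nothing in the sketch is malicious: the ranking simply amplifies the reader’s own behavior, which is exactly how, in Miller’s terms, an algorithm can end up reinforcing human prejudices.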
It is certainly useful to have Google do the sifting for us, prioritizing the sorts of articles we like to read, as our past preferences shape our future options. Yet the bubbles we are building around ourselves, as contrary perspectives and unsought topics increasingly fall away, carry serious implications. For instance, right after the Brexit referendum in the UK, British internet activist Tom Steinberg posted the following urgent message on Facebook as he tried to burst out of his own bubble of news and views:
I am actively searching through Facebook for people celebrating the Brexit leave victory, but the filter bubble is SO strong, and extends SO far into things like Facebook’s custom search that I can’t find anyone who is happy *despite the fact that over half the country is clearly jubilant today* and despite the fact that I’m *actively* looking to hear what they are saying.
This echo-chamber problem is now SO severe and SO chronic that I can only beg any friends I have who actually work for Facebook and other major social media and technology to urgently tell their leaders that to not act on this problem now is tantamount to actively supporting and funding the tearing apart of the fabric of our societies. Just because they aren’t like anarchists or terrorists – they’re not doing the tearing apart on purpose – is no excuse – the effect is the same, we’re getting countries where one half just doesn’t know anything at all about the other.
It’s in the power of people like Mark Zuckerberg to do something about this, if they’re strong enough and wise enough to swap a little shareholder value for the welfare of whole nations, and the world as a whole.
As this Facebook example suggests, awareness of the problem is necessary for viewers to want to burst out of their own bubbles, but it is not in itself sufficient to counter the problem. (Steinberg identifies the bubble, but may still be trapped inside it, calling, “Let me out!”)
Should TOK bother with bubbles?
So what can we do about it? Is this a problem relevant to the Theory of Knowledge course? I’d say, most definitely, YES.
First, understanding the role of perspectives in knowledge is central to three of the five TOK course aims, as given in the IB Theory of Knowledge Guide. In our course, we aim that students will:
- develop an awareness of how individuals and communities construct knowledge and how this is critically examined
- develop an interest in the diversity and richness of cultural perspectives and an awareness of personal and ideological assumptions
- critically reflect on their own beliefs and assumptions, leading to more thoughtful, responsible and purposeful lives
Second, the concept of truth still has some meaning in Theory of Knowledge as an ideal and a goal, even if the word and the concept are a bit shop-worn in an era that has been repeatedly called “post-truth” in the media. Students benefit from looking closely at how the word is flung about, and seeing what’s discarded from knowledge if “true” is accepted to mean “true-for-me”. As Katharine Viner comments in “How technology disrupted the truth”,
Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.
Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.
What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there are no truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.
Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago).
Students benefit, I’d say, from recognizing that there are different approaches to the search for truth:
- an approach based on pragmatism that accepts what “works” (works for whom?),
- an approach based on coherence that depends on the internal harmony of information and interpretations (bubbles of agreement, no contradiction allowed!), and
- an approach based on correspondence between knowledge claims and the world.
The approach through correspondence may be the only way to test knowledge claims against evidence and to get outside the bubble-effect of an approach based purely on internal coherence: people within bubbles agree with each other and circulate only information that accords with their views.
Third, TOK has to tackle the bubble-effect – of both technology and our minds – if it is to teach critical evaluation of knowledge claims at all. HOW do we take the approach of finding the best justifications – the best evidence, the best reasoning – for factual claims? HOW do we find the best justifications – the best arguments, the most thoughtful understanding – for claims of values? It’s not easy if the information most readily accessible is inaccurate, pre-selected, or heavily interpreted through emotions and values. Students need both awareness and some practical skills.
We can do it
Nevertheless, in Theory of Knowledge we are right on track to give students the awareness and the critical skills they need.
As we teach how the ways of knowing work, we can engage students in reflecting on and critiquing how they are used and how they interact, stressing how to use them critically ourselves and how to recognize their critical use by others in society and in areas of knowledge. As we deal with the concept of “shared knowledge”, we can encourage students to recognize how knowledge is shared – and to see the pitfalls in the flow of knowledge. I consider it essential that students gain an understanding of how we use intuition and reasoning, for instance, and how emotion affects sense perception, language and memory. It is very effective to tie the ways of knowing to recognition of common errors – common fallacies and cognitive biases.
As we teach how areas of knowledge work, we are reinforcing every basic critical skill. We are introducing students to ideas of methodology – or in other words, of the expectations within each area of knowledge for the best practices and procedures of thinking critically. We are introducing ideas of conscientiously making knowledge claims that are true (to the best of our knowledge!), being self-critical, sharing knowledge in a way that is open to group scrutiny, and being willing at least to try to break out of interpretive bubbles to understand an alternative way of thinking.
In our efforts, we have the cooperation of our colleagues, who, in their own classrooms, share our goal of helping students understand and appreciate how knowledge is constructed. While we deal with the large overview concepts of how knowledge is constructed in their areas, they tackle the close-up, applied topics of research and findings, methods and results. They deal, in different ways according to their fields, with topics of methodological care and responsibility. Can we encourage our students to apply some of this to their everyday Google searches and their consumption of social media and news?
As teachers, we are all in a position to help students deal with the information bubbles that surround them in social media, and to engage with two topics whose knowledge questions are urgent to explore:
- perspectives: How do we recognize, understand, and evaluate different perspectives as we consider their influence on knowledge?
- sources: How do we know which sources of information to accept as most reliable on particular topics? On what basis are organizations, publications, or individuals treated as expert sources?
These are central knowledge questions for both areas of knowledge and everyday knowledge of the wider world.
Without importing into our classrooms the strident antagonism of conflicting perspectives, can we make our students aware of the extent to which people who share a community can be living, in their minds, in mutually contradictory versions of it? Can we alert them to the need to check the biases and bounds of their own sources of knowledge, and encourage them to be sympathetic to others who, like themselves, believe what they are told by sources they have accepted as reliable? I believe that, to a significant extent, we can.
References
Claire Cain Miller, “When Algorithms Discriminate”, New York Times. July 10, 2015. http://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html
Katharine Viner, “How Technology Disrupted the Truth”, The Guardian. July 12, 2016. https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth
Julia Carrie Wong, Sam Levin and Olivia Solon, “Bursting the Facebook bubble: we asked voters on the left and right to swap feeds”, The Guardian. November 16, 2016. https://www.theguardian.com/us-news/2016/nov/16/facebook-bias-bubble-us-election-conservative-liberal-news-feed