Tagged: fake news

  • Mark 10:47 am on April 19, 2019 Permalink | Reply
    Tags: fact checking, factfulness, fake news

    You can’t have your ‘facts’ back 

    My notes on Marres, N. (2018). Why We Can’t Have Our Facts Back. Engaging Science, Technology, and Society, 4, 423-443.

    “We want our facts back” is a semi-joking remark Noortje Marres overheard an academic make, one which captures a wider response to what has been called ‘post-truth’. Many feel increasingly inclined to take a normative stance in support of ‘facts’ and feel nostalgic for “a time when experts still seemed to have unquestionable authority and felt secure in this authority, when government ministers didn’t say things like ‘people have had enough of experts,’ and the notion that evidence should play a central role in public debate and politics had widespread, even taken-for-granted, institutional support” (423-424). Appealing though it might be, Marres points out that this position ignores the fact that not only were partisans of evidence a minority in public life in the 90s and 00s, it was also widely recognised that evidence-based debate was not in itself a solution to political problems and could even put politics at risk through an over-reliance on experts. While recognising the growing indifference of public speech to factfulness and the lack of consequences attached to outright lies, Marres argues we need to look more deeply at the “changing architectures of the public sphere” (424). The many initiatives which seek to restore the place of factfulness within public life (disinformation awareness campaigns, knowledge literacy programmes, fact-checking services) risk reinstating an outdated strategy for securing facts in public debate, one based on authority. It entails a divide between knowing and unknowing subjects, those with facts and those without, which runs contrary to any aspiration towards a knowledge democracy. Achieving the latter will require institutional, media and technological arrangements very different from those of the much-claimed golden age of factfulness.

    Social media has become a battleground for these debates, with fact-checking initiatives using techniques ranging from ‘human moderation’ through to automated fact verification in order to apply journalistic procedures to online content. The platforms themselves have invested increasingly in moderation teams, as well as using automated tools to demarcate problematic from unproblematic material. This has led, inter alia, to the ‘disputed content’ banners which can now be attached to certain pieces of content on Facebook, highlighting that a third-party fact-checking operation has cast doubt upon it. Questions have been raised about the working conditions of those undertaking this epistemic labour in click farms, but there has been less scrutiny of the epistemology and methodologies underpinning these services. They rely for their legitimacy on ideals of public knowledge and scientific citizenship but operate on a basis which is in tension with these, assuming that “quality is an attribute of information itself” (426). This runs contrary to what had become an increasingly dominant sense of information as *social*, defined by its circulation and connections. Now, in contrast, what is at stake is seen to be the properties of content itself: “What is said to be in need of attention and intervention is the “veracity” of online statements and the potential duplicity of online sources” (427). For instance, Factmata seeks to “cross-reference any claim circulating online onto a database of so-called verified statements, in order to validate or invalidate it” (427). A claim about immigration, say, would immediately be linked to public data about the issue, allowing users to ‘become their own fact checkers’. In this it embodies logical positivism, seeking to decompose statements into units which can be matched against experience or other verifiable statements. Marres makes a particularly interesting point here about how logical positivism and computer science shared a common inspiration in Frege’s logic and similar work, which goes some way towards explaining the tendency for positivism to be reinstated by the turn to AI in systems like Factmata.
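    To make this concrete: what Marres describes is, computationally speaking, a matching problem. Here is a minimal sketch in Python of a demarcationist claim checker of the kind she attributes to services like Factmata. To be clear, this is my own toy reconstruction under stated assumptions (a tiny hand-made database, token-overlap similarity, an arbitrary threshold), not Factmata’s actual system:

    ```python
    # Toy demarcationist fact checker in the spirit Marres describes.
    # NOT Factmata's actual pipeline: the "database", tokeniser, and
    # threshold below are all illustrative assumptions.

    VERIFIED_STATEMENTS = {
        "net migration to the uk was 273000 in 2016": True,
        "the uk sends 350 million pounds a week to the eu": False,
    }

    def tokenize(statement: str) -> set[str]:
        """Reduce a statement to a bag of lowercase word tokens."""
        return {word.strip(".,!?") for word in statement.lower().split()}

    def similarity(a: set[str], b: set[str]) -> float:
        """Jaccard overlap: a crude proxy for 'saying the same thing'."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def check_claim(claim: str, threshold: float = 0.6) -> str:
        """Match a claim against the database and return a binary verdict.

        This is the positivist move: validity is treated as an attribute
        of the atomistic statement itself, independent of circulation.
        """
        tokens = tokenize(claim)
        best_score, best_truth = 0.0, None
        for statement, is_true in VERIFIED_STATEMENTS.items():
            score = similarity(tokens, tokenize(statement))
            if score > best_score:
                best_score, best_truth = score, is_true
        if best_score < threshold:
            return "unverifiable"  # no referent found in the corpus
        return "valid" if best_truth else "invalid"

    print(check_claim("The UK sends 350 million pounds a week to the EU"))
    # -> "invalid"
    ```

    The point of the sketch is what it leaves out: the verdict is computed from the statement alone, with no reference to how the claim circulates or connects, which is precisely the positivist assumption Marres questions.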

    Fact-checking systems implement a methodology and perform a service, but they also carry a distinction: “that between legitimate and illegitimate claims to knowledge” (428). These putatively technical procedures in fact draw normative boundaries, ones which it’s important we understand. She references Rorty’s account of demarcationism: defining validity or its absence as a binary attribute of atomistic statements, i.e. can they be traced back to observational statements or not? The normative dimension comes from the question of how to police this boundary between different types of statements. It also entails a sense of actors as being responsible for the epistemic quality of debate, by drawing attention to the character of their statements. In this worldview, ‘good’ sources reliably produce valid statements, with ‘good’ users capable of discerning their presence. This is what Marres calls the politics of demarcation. It sees ‘fake news’ as something which emerges from outside the technology: “it is the type of information sources that the Internet makes available, on one hand, and the users’ lack of skills capable of discerning the difference between valid and invalid statements, on the other, that are said to be responsible for the prevalence of dodgy content in this media environment” (428). Fact vs fiction pages were part of internet culture in the 1990s and demarcationist technologies predate the rise of ‘fake news’. But whereas the blame was once attributed to deviant online subcultures such as anti-vaxxers or flat-earthers, it is now increasingly marked in social terms such as education levels. This dichotomy of responsible and irresponsible users roughly maps onto a broader “opposition between educated progressives and, on balance, less educated supporters of populist and nationalist causes” which is at the heart of contemporary debates about ‘fake news’. In practice, it has the potential to position nascent ‘populists’ as the source of the epistemic crisis, to be beaten back and suppressed through technological means in order to ensure the health of the public sphere. This might even reinforce the distinction in a way that furthers the populists’ own political project, as can be seen in the far-right backlash against social media firms ‘deplatforming’ leading figures.

    Demarcationism can’t account for the role that digital media has played in undermining respect for knowledge in the first place, instead externalising it into the figures of deviant users and deviant content producers. The mechanism doing the undermining is simple: algorithms for content selection are designed to ensure maximum circulation in order to build the widest possible audience. The account of this on 431 is excellent:

    “Online platforms, then, reward messages that spread instantly and widely with even more visibility, and, as tabloid newspapers invested in maximizing advertising revenue also found out in previous decades, sensational rather than factual content turns out to satisfy this criterion of maximal “share-ability” best. A commercial logic here gives rise to a circular content economy, one without referent: content that gets shared a lot is rewarded with more visibility, thereby increasing its share-ability.”
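    Read as an algorithm, the loop in this passage is easy to state: visibility is allocated in proportion to past shares, and shares scale with visibility. A minimal simulation, under assumptions entirely my own (linear multipliers, two pieces of content, no external referent), shows how even a small edge in share-ability compounds into dominance:

    ```python
    # Toy simulation of the "circular content economy" quoted above.
    # All parameters are illustrative assumptions, not a model of any
    # real platform's ranking system.

    def simulate(shareability: dict[str, float], rounds: int = 10) -> dict[str, float]:
        """Visibility -> shares -> visibility, with no external referent."""
        visibility = {item: 1.0 for item in shareability}  # equal start
        for _ in range(rounds):
            # Users share in proportion to visibility times share-ability.
            shares = {item: visibility[item] * s for item, s in shareability.items()}
            total = sum(shares.values())
            # The platform's ranking step: more shares -> more visibility.
            visibility = {item: count / total for item, count in shares.items()}
        return visibility

    print(simulate({"sensational": 1.2, "factual": 1.0}))
    # After 10 rounds the sensational item holds roughly 86% of visibility,
    # despite starting equal and being only 20% more share-able.
    ```

    Nothing in the loop checks content against the world; the only signal is the loop’s own prior output, which is what Marres means by an economy “without referent”.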

    Fact-checking services address the bias of sources while veiling the role of this content economy in conditioning the behaviour of those sources. They render opaque the role played by “technologies of source selection that regulate content circulation online” (431). The network structure of online communities is another source of limitation, as groups spreading ‘fake news’ barely overlap with groups interested in debunking it. How do we make sense of these differences between knowledge communities without invoking the facile distinction of literate and illiterate? Fact checking and demarcation do not help us understand the problem with knowledge we face in digitalised societies; they actively keep us from it. This concern doesn’t mean denying there is a “crisis of public evidence in today’s digital societies” but rather recognising that it “goes well beyond a disregard for facts in digital media environments” (433). It’s crucial that we recognise how “the insertion of computational technologies into public infrastructures have resulted in deception and manipulation of the empirical record” (434) by undermining institutional architectures which ensured accountability across social life. The correspondence model of truth embedded in fact checking is inadequate to address the broader social challenges which these developments pose for us. Its reliance on looking back, checking claims against a corpus of established facts, fails to grasp today’s “dynamic information environments, in which readings and behaviors are constantly adjusted as conditions change” (434). Marres argues for a dynamic conception of truth in debate to replace this retrospective one.

    The behaviourism around which platforms have been designed uses a concept of users as “influenceable subjects, not knowledge agents”. It has facilitated a social science which does without interpretation, but this does not mean it is a knowledge-free environment. It is, as Marres puts it, “a research-centric apparatus, in that their design directly reflects the epistemic needs of the data scientists whose analytic operations are key to their commercial model: to target information to groups of friends, to track shares and likes in the aggregate” (435). It is built around the influenceability of users, with an empirical register predicated upon this. This is the final problem which Marres raises with demarcationist fact checking: “the normative opposition between knowledge (good) and non-knowledge (bad) that it imposes makes it difficult to see that epistemic ideals––like behaviorism––themselves have played a role in encouraging a disregard for knowledge on the Internet” (437). Not least of all in the fundamental asymmetry at its heart. From 437:

    “social media present an environment in two halves, where, on the one side, we find users with “influence-able” and “target-able” opinions, tastes, and preferences, while, on the other side, we have authoritative data analysts who “know” the population’s fine-grained and ever-changeable preferences and tastes. Scientists––the proponents of knowledge––haven’t been bystanders but active participants in the crafting of a media architecture designed to enable the influencing of users’ actions.”

    Demarcationism reflects this bifurcation, with the knowing subjects seeking to redesign the information environment to correct the unknowing subjects. The “veritable army of social data scientists who monitor, measure, and seek to intervene in this behavioral theatre” do so on the basis of facts, but outside of the public sphere and in a way which precludes engagement between experts and citizens.

    Fake news might be problematic in itself, but it attaches itself to issues which matter to people, tracking the controversies which define political life. Fact checking fails to address this connection for the reasons cited above, but Marres argues that ‘experimental facts’ might be better suited to this purpose. This doesn’t entail a rejection of stable facts, the well-established and stable claims which play an important role in public debate. If I understand correctly, these “statements whose veracity is unstable and whose epistemic status changes over time” (438), because they reference a changing reality, can be interrogated in real time in order to facilitate debate about their character and implications, as opposed to being demarcated in relation to an established body of fact. But I found the example of the £350 million for the NHS claim slightly confusing. There’s so much in this paper to think about that I’m going to come back to it a lot. I think the point is that ‘experimental facts’ in this sense are increasingly common, given the epistemic dynamism which characterises digitalised society. So in essence the argument is to find ways to stay with the difficulties these cause, rather than trying to shut them down in ways likely to be epistemically short-sighted and politically counter-productive. This is a move from a politics of demarcation to a politics of selection: “while demarcation concentrates on the retrospective establishment of correspondence of public statements with presumably stable, pre-given atomistic statements, a politics of selection progressively establishes a referent for claims through an iterative process of locating and evaluating statement-networks in formation” (441).

     
  • Mark 7:54 am on March 18, 2019 Permalink | Reply
    Tags: fake news, social media firms, trust

    Can social media firms remain popular while being perceived as untrustworthy? 

    In the last few months, I’ve been thinking a lot about the continued popularity of social media firms amidst mounting scandal. It has often seemed that a new common sense is opening up in which these firms are seen as fundamentally untrustworthy, built around a business model which means the scandals they generate are a feature rather than a bug. But how widespread is this point of view? Trust in social media is low, with the Edelman Trust Barometer finding 34% trust in Europe and the US/Canada, markedly lower than trust in traditional media. In Zucked, Roger McNamee reports on a corpus study looking at word associations with tech firms. From loc 3298-3313:

    To get a sense of the impact, I asked Erin McKean, founder of Wordnik and former editor of the Oxford Dictionary of American English, to study changes in the nouns and adjectives most frequently associated with each of the largest tech companies: Apple, Google, Amazon, Facebook, and Microsoft, plus Twitter. Prior to the 2016 election, the tech leaders enjoyed pristine reputations, with no pejorative word associations. For Google, Amazon, Apple, and Microsoft, that is still true. For Facebook, things have changed dramatically. The word “scandal” now ranks in the top 50 nouns associated with Facebook. “Breach” and “investigation” are in the top 250 nouns. With adjectives the situation is even worse. Alone among the five tech leaders, Facebook had one pejorative adjective in its top 100 in 2015–2016: “controversial.” In 2017 and 2018, the adjective “fake” ranked in the top 10 for Facebook, followed by “Russian,” “alleged,” “critical,” “Russian-linked,” “false,” “leaked,” and “racist,” all of which ranked in the top 100 adjectives. Apple, Google, Amazon, and Microsoft do not have a single pejorative noun or adjective on their lists. Twitter has two nouns on its list that may or may not imply brand issues: “Trump” and “bots.” The study was conducted using the News on the Web (NOW) corpus at Brigham Young University. The top 10 US sources in the corpus, ranked by number of words, are Huffington Post, NPR, CNN, The Atlantic, TIME, Los Angeles Times, Wall Street Journal, Slate, USA Today, and ABC News. Despite all the political fallout, Facebook continues to go about […]
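    As an aside, the kind of collocation analysis McKean describes can be sketched in a few lines. What follows is my own illustrative reconstruction, not her method or the NOW corpus tooling; the window size, tokenisation and word list are all assumptions:

    ```python
    # Illustrative collocation count: which flagged words appear near a
    # brand name? My reconstruction only; window size, tokenisation and
    # the word list are assumptions, not McKean's actual method.

    from collections import Counter

    def collocates(corpus: list[str], target: str, vocab: set[str],
                   window: int = 5) -> Counter:
        """Count vocab words occurring within `window` tokens of `target`."""
        counts: Counter = Counter()
        for doc in corpus:
            tokens = [t.strip(".,\"'").lower() for t in doc.split()]
            for i, token in enumerate(tokens):
                if token == target:
                    lo, hi = max(0, i - window), i + window + 1
                    counts.update(t for t in tokens[lo:hi] if t in vocab)
        return counts

    PEJORATIVES = {"fake", "scandal", "breach", "alleged", "false", "leaked"}
    docs = ["Facebook faces a fake news scandal after an alleged data breach."]
    print(collocates(docs, "facebook", PEJORATIVES))
    # -> Counter({'fake': 1, 'scandal': 1})
    ```

    Counting at corpus scale, normalising by frequency and ranking the top collocates would reproduce the gist of the comparison McNamee reports.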

    But are they still popular? In an important sense the evidence suggests yes, insofar as neither user growth nor engagement rates seem to be in decline. Does this behavioural popularity co-exist with affection for the brands themselves? I’d love to know of any research on this if readers have encountered it. But what seems clear is that continued use can co-exist with a pervasive sense that what is being used is not trustworthy. This seems like a potent psychic mix to throw into the already strange relationship which many of us have with these platforms.

     
  • Mark 8:58 am on August 19, 2018 Permalink | Reply
    Tags: fake news

    Social ontology amidst the wreckage of techno-progressive orthodoxy 

    I found this review of Trump and the Media by Nicholas Carr in the LA Review of Books immensely thought-provoking. His focus is on the book’s historical contribution, contextualising the enthusiasm with which social media was greeted in terms of long-term concerns about the centralisation of mass media. We can’t understand the ideal of a radically decentralised media without understanding the anxieties provoked by its initial centralisation:

    Trump’s twitter stream may be without precedent, but the controversy surrounding social media’s political impact has a history stretching back nearly a century. During the 1930s, the spread of mass media was accompanied by the rise of fascism. To many observers at the time, the former helped explain the latter. By consolidating control over news and other information, radio networks, movie studios, and publishing houses enabled a single voice to address and even command the multitudes. The very structure of mass media seemed to reflect and reinforce the political structure of the authoritarian state.

    It is against this backdrop that social scientists began to “imagine a decentralized, multimedia communication network that would encourage the development of a ‘democratic personality,’ providing a bulwark against fascist movements and their charismatic leaders”. Fred Turner traces these initial speculations from their originators, through the 1960s counterculture and the incipient computer industry, to the point where this ideal became an article of faith within present-day Silicon Valley:

    In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

    Carr frames Trump and the Media as “orbiting” around “the wreckage of techno-progressive orthodoxy”. These are the terms in which I’ve recently tried to analyse ‘fake news’ and ‘post-truth’: as solutionist framings by technological, media and political elites which circumscribe a much broader set of transformations and shape likely responses to them. It has often struck me that these represent a peculiarly populist form of reasoning in their own right, isolating an incoming element which is seen to undermine a previously stable system, whether this is ‘populism’ or ‘social media’ itself. In the process, the claims of populists and social media firms are taken at face value, vastly inflating the power they have:

    One contentious question is whether social media in general and Twitter in particular actually changed the outcome of the vote. Keith N. Hampton, of Michigan State University, finds “no evidence” that any of the widely acknowledged malignancies of social media, from fake news to filter bubbles, “worked in favor of a particular presidential candidate.” Drawing on exit polls, he shows that most demographic groups voted pretty much the same in 2016 as they had in the Obama-Romney race of 2012. The one group that exhibited a large and possibly decisive shift from the Democratic to the Republican candidate were white voters without college degrees. Yet these voters, surveys reveal, are also the least likely to spend a lot of time online or to be active on social media. It’s unfair to blame Twitter or Facebook for Trump’s victory, Hampton suggests, if the swing voters weren’t on Twitter or Facebook.

    This is not to say that social media doesn’t exercise influence, only to dispute the assumption that it works through one-to-many communication. The media elites bemoaning the rise of fake news and filter bubbles in the dawning post-truth age are themselves complicit in the dynamic they see as being ‘out there’:

    What Hampton overlooks are the indirect effects of social media, particularly its influence on press coverage and public attention. As the University of Oxford’s Josh Cowls and Ralph Schroeder write, Trump’s Twitter account may have been monitored by only a small portion of the public, but it was followed, religiously, by journalists, pundits, and policymakers. The novelty and frequent abrasiveness of the tweets — they broke all the rules of decorum for presidential campaigns — mesmerized the chattering class throughout the primaries and the general election campaign, fueling a frenzy of retweets, replies, and hashtags. Social media’s biggest echo chamber turned out to be the traditional media elite.

    What this short review suggested to me is the necessity of revisiting basic concepts (such as centralisation, gatekeepers, publics and influence) in response to the wreckage of techno-progressive orthodoxy. We need a bleak social theory for bleak times, and if it doesn’t begin by examining the assumptions inherited in core concepts, as well as their implications for making sense of the present conjuncture, it is unlikely to get very far.

     
  • Mark 7:32 am on May 8, 2018 Permalink | Reply
    Tags: fake news

    Call for abstracts: DQComm2018 The Deliberative Quality of Communication Conference 

    #DQComm2018 The Deliberative Quality of Communication Conference 2018
    Citizens, Media and Politics in Challenging Times: Perspectives on the
    Deliberative Quality of Communication

    November 8 – 9, 2018
    Mannheim Centre for European Social Research (MZES), Mannheim, Germany

    Keynote Speaker: Kaisa Herne (University of Tampere)

    Roundtable on the Future of Deliberation Research with:
    André Bächtiger (University of Stuttgart)
    Céline Colombo (University of Zürich)
    Christiane Eilders (University of Düsseldorf)
    Hartmut Wessler (University of Mannheim)

    Call for abstracts

    Western democracies nowadays face a number of challenges induced by political developments. These challenges have been affecting the way in which citizens, the media and political elites communicate about politics. Critical observers witness a deteriorating quality of political conversation between ordinary citizens: it appears no longer possible to discuss politics normally. A high-choice media environment, facilitated by online and in particular social media, enables citizens to refrain from exposing themselves to counter-attitudinal information and from engaging in cross-cutting political talk. The polarization of opinions within society is promoted by increasingly fragmented media systems and a reporting style that favors sensational and scandalous over balanced and multifaceted reporting. Rapid media cycles shorten the time for balanced and thorough argumentation, and media outlets are constantly confronted with the accusation of producing fake news. Political actors adapt to this media logic by employing ever more simplified and emotionally arousing communication. Instead of deliberating publicly on complex problems and finding compromises or solutions, political elites prefer to communicate through short soundbites and populist messages to promote their positions and eventually attract voters at election time. Overall, these dynamics indicate a deteriorating deliberative quality of political communication among and between citizens, the media and political elites. While this phenomenon has caused concern among scholars from both political and communication science, it still needs further empirical substantiation and demands reflection on extant theories.

    This conference aims to address the deliberative quality of communication among and between citizens, media and political elites. Within this research context, we welcome theoretical, empirical and methodological contributions focusing on the deliberative quality of communication. Proposals can address – but are not limited to – the following questions:

    • To what extent does ordinary citizens’ talk about politics come close to the genuine type of deliberation? Who participates in political talk, who does not, and why? Do citizens talk to those with viewpoints that conflict with their own? What are the underlying motives and conditions that give rise to homogeneous or heterogeneous talk about politics? Which variables affect the quality of informal civic discussions? Do citizens’ daily exchanges resemble reasoned and well-argued debates or harsh fights at the expense of proper justification?

    • To what extent does the online sphere of political communication promote or impede deliberation? Are platform interventions (e.g., Facebook’s proposed policy of removing hate speech and fake news) a panacea to improve the quality of online deliberation and to save deliberative democracy?

    • To what extent do different features of media systems influence mediated deliberation? How do the increased polarization and fragmentation of media environments translate into the deliberative quality of the media? How deliberative is the media system as a whole? How deliberative are individual media types, formats, or programs?

    • How do political, national and cultural climates shape deliberation? To what extent do different types of political system affect the deliberative quality within the public sphere? How does the increased polarization of political environments affect formal deliberation? How do political elites engage with populist actors who decline to engage in reasoned and constructive dialogue?

    • Which opportunities and challenges does big data offer for the analysis of deliberation? What are the methodological challenges and pitfalls when measuring deliberation? To what extent, and if so how, may computational methods help in identifying the criteria for deliberation?

    Submissions are due by June 15, 2018 (23:59 CET) and must be submitted via
    this Google Form.

    https://goo.gl/forms/xazX7B2E9C64drhB3

    Abstracts must not be longer than 500 words (excluding title and
    references). A committee composed of communication and political science
    experts in deliberation will review each abstract. Only one proposal per
    first author can be accepted. Notifications of acceptance will be issued in
    July 2018. Limited funds are available to cover accommodation and travel
    expenses of conference presenters. In order to host a family-friendly
    conference, the parent and child room of the University of Mannheim can be
    used for self-provided childcare.

    For further questions, please visit the website
    http://mzes.uni-mannheim.de/DQComm2018/

    or contact the organizers directly: dqcomm2018@mzes.uni-mannheim.de

    Christiane Grill, Anne Schäfer, Charlotte Löb and Chung-hong Chan
    Organizing Committee of The Deliberative Quality of Communication
    Conference 2018

     
  • Mark 6:05 pm on March 27, 2017 Permalink | Reply
    Tags: Chantal Mouffe, fake news

    The Ontology of Fake News 

    What we are seeing with the growth of ‘fake news’ is perhaps the weaponisation of epistemology. In other words, ‘fake news’ as a construct is becoming a discursive component of our repertoire of contention. Far from entering a post-truth era, we are seeing truth becoming a mobilising device in a new way, encouraging ‘us’ to defend ourselves from ‘them’ predicated on the absolute falsity of their worldview. It’s the playing out in an epistemic register of what Chantal Mouffe, drawing on Carl Schmitt, describes as a friend/enemy distinction. Rather than the political other being an adversary to be struggled against, nonetheless regarded as legitimate, they are cast as an enemy to be destroyed. Rush Limbaugh offered a pure expression of the epistemological logic of the friend/enemy distinction in this 2009 rant:

    “What this fraud, what the uncovering of this hoax, exposes,” he said, “is the corruption that exists between government and academia and science and the media. Science has been corrupted. We know the media has been corrupted for a long time. Academia has been corrupted. None of what they do is real. It’s all lies!

    We live in two universes. One universe is a lie. One universe is an entire lie. Everything run, dominated, and controlled by the left here and around the world is a lie. The other universe is where we are, and that’s where reality reigns supreme and we deal with it. And seldom do these two universes ever overlap.

    http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

    The origins of this can be understood agnotologically: neo-sophists, with corporate funding, seeking to manufacture doubt where none previously existed. What is being described as post-truth emerges at the intersection of corporate agnotology, political polarisation and post-democracy. The possibility of weaponising epistemology emerges coterminously with the breakdown of social solidarity. Agnotology contributes to the erosion of shared certainties in cumulative ways, creating the conditions for what David Roberts calls tribal epistemology:

    Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

    Now tribal epistemology has found its way to the White House.

    http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

    What I’m suggesting is that at this point we see epistemology move from being an elite weapon of war to part of the repertoire of contention. Once Trump begins to seriously struggle, how easy is it to imagine White House statements being dismissed as ‘fake news’ by the grassroots he used this notion to mobilise? How effectively could a nascent leader use this epistemic playbook against those who have brought it into the mainstream? As Roberts points out, this is a cultural tendency which has been present in American politics for quite some time:

    That is the classic, some might say naive, view. But there has always been a powerful strain in conservatism (think the John Birch Society) that resists seeing itself as a participant in the game at all. It sees the game itself, its rules and referees, as captured by the other side, operating for the other side’s benefit. Any claim of transpartisan authority is viewed with skepticism, as a kind of ruse or tool through which one tribe seeks to dominate another.

    That’s the view Limbaugh and others in right-wing media have consistently articulated. And it has found an increasingly receptive audience. Over time, the right’s base — unlike the left’s fractious and heterogeneous coalition of interest groups — has become increasingly homogeneous (mostly white, non-urban, and Christian) and like-minded (traditionalist, zero-sum values).

    http://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology

    The friend/enemy distinction is, for lack of a better term, viral, at least under current conditions. Once people begin to think in these terms, it’s hard to counter, not least because reluctantly accepting the ‘rules of the game’ inevitably comes to be coded as either giving up or buying in. The reason for this is in part epistemological, because tribal epistemology destroys the possibility of syncretism: people can no longer see A and B as elements that can be combined, even if in unstable and contested ways. Instead, A and B become an absolute disjunction. One comes to see the social world in terms that allow no option other than choosing between positions. The playing out of this, in the digital capitalism of 2017, rather terrifies me.

     
    • Dave Ashelman 3:44 pm on March 28, 2017 Permalink


      The issue scares me as well – to my very core, but perhaps through a different means. The dynamic that you describe was a process, and not an event; and a very long, 40-year process at that. We (in the academy) have to shoulder a large proportion of the blame. This is the basis of my insistence that Sociology is in need of an identity crisis.

      For 40 years, post-modernism (mainly à la Nietzsche and Foucault) has told us, and we have thus told the world, that reality is relative; that what is morally good doesn’t matter; that man has no essence or substance; that the only knowledge there is, is the knowledge of power; that truth is unknowable.

      In essence, the death of metaphysics at the hands of Nietzsche and Foucault told people that their lives didn’t matter. It literally took away the essence and substance of what it means to be human. We no longer needed to inquire; we only needed the axiom of the invisible “apparatus of power” which is all-hegemonic and will always prevail. We assumed that truth lay in truthlessness; that morality lay in anomie; that meaningless social structures and stratifications created chaos.

      It wasn’t enough just to consider this aspect of philosophy; we had to put it into practice in the Social Sciences as a whole. We started telling people what their social, economic, and psychological conditions were instead of inquiring. Suffering from exploited labour? Here’s a pill for anxiety. Jobless? Well, that’s the fault of [fill in the blank]. Reality is relative in post-modernism. Truth doesn’t exist as an object of inquiry anymore.

      And now we complain that the world has heeded our teachings. In addition, those “elites” the populace is complaining about are us – us academics. While we sit scratching our heads over the ontology of relative truth, it turns out that reality really is real. People really do suffer and die en masse. It turns out that suddenly truth matters. It turns out there is such a thing as knowledge beyond power. It turns out that the lived reality of everyday people in everyday life really does have essence and substance.

      This British philosopher seems to agree with me, though I don’t think he goes far enough.

      http://www.bbc.com/news/education-38557838

      For 40 years we have largely supported the idea that there was no philosophy before Bentham. Aristotle may have come up for discussion during a graduate seminar on Marx, but no one is really required to dig that deep. I am moving into my 50s, and I may have been the last generation required to take well-grounded philosophy courses as a young undergraduate. Many of my younger Sociology Ph.D. cohorts have never read Kant, Locke, Aquinas, Descartes, or even know that Adam Smith wrote that “other book.” Today, people can hold a “Doctor of Philosophy” degree without ever having picked up a philosophy book.

      When I was working on my first undergraduate degree, all majors were required to take not only an Intro to Philosophy course, but also an entire year of Philosophy of Ethics, Philosophy of Logic, and something along the lines of a Philosophy of Religion (I took comparative religions).

      Yet we wonder why post-truth has become a culture, an essence and substance unto itself, in a world that no longer believes in essence or substance because we told people they don’t exist.

      My apologies for the long response, but the ontology is us (the academy). We are guilty. What scares me deeply is wondering whether we will ever have the humility to look in the mirror – before the masses with torches and pitchforks rush the Ivory Tower.

    • Mark 10:23 am on March 30, 2017 Permalink

      I think the backlash will be organised rather than aggregative. Do most people really care that much? But a crisis in another institution, particularly one filled with liberal elitists, could be very useful.
