Tagged: data

  • Mark 11:40 am on May 8, 2019 Permalink | Reply
    Tags: data, data events, deleuze, rupture, testing

    The ontology of (digital) testing 

    My notes on Thompson, G., & Sellar, S. (2018). Datafication, testing events and the outside of thought. Learning, Media and Technology, 43(2), 139-151.

    In this paper Thompson and Sellar cast a Deleuzian lens upon the data-hungry character of contemporary educational institutions. As they put it on 139, “Education institutions, and the people who work and learn in them, are constantly generating and using more and more data”, much of which is digital. This proliferation of digital data both reflects and encourages increasingly complex forms of software, in turn driving the development of interoperability systems to ensure these can work together. This interoperability makes ‘big data’ techniques possible in a way they otherwise wouldn’t be. This is an important reminder that data analytic possibilities don’t unfold naturally from new technology but rather require institutional work with influential sponsorship in order to make them happen. They caution that “the conceptual tools that we have for thinking through these new technologies in assessment and learning, particularly in relation to implications for politics, policy and practice, remain caught between dystopian visions of technology undermining human values, culture and subjectivity, or utopian fantasies about continual improvements to the efficiency and effectiveness of learning” (139-140).

    Deleuze and Guattari help them argue that revolutions in institutions such as education proceed through challenges to orthodox images of thought (“the set of implicit presuppositions about what it means to think”) which “create new possibilities for thought that can bring about new activities, behaviours, organisations and connections” (140). However “thought that is formed in digital learning environments such as CATs ultimately conforms to a model: an information ontology” and cannot be revolutionary in this sense (141). The production of ever more data produces events, in so far as the data leads things to happen which otherwise wouldn’t, but it is constitutively unable to produce a new image of thought. This runs contrary to the disruptive rhetoric of educational technology, which promises innovation and change.

    Data can be produced in educational settings through a variety of means: deliberate production in assessment and management procedures, volunteered through the use of digital technology, or produced through covert monitoring. It assumes a form as information through the models encoded within the hardware and software architectures at work in its production, as well as through the processes at both levels involved in its generation. Informational ontologies are crucial to the operation of computing processes and to facilitating interoperability between them.

    But where does the desire for data come from? Their Deleuzian approach sees desire as primary, productive of subjectivity. But this is where their philosophical approach seems weak to me, in comparison with the digital sociology approach they distinguish it from. It makes it difficult to think systematically about who desires data, how they desire it and how the context frustrates or facilitates that desire. The turn towards the para-subjective makes it hard to pin down subjects in any concrete way. They reflect on how “Groups and individuals come to believe that they need something to solve their problems and this lack is felt as a lack of data” (143). But their concern for the “abstract essence of this desire” immediately leads them away from this empirical specificity towards a ‘bargain with modernity’ in which data helps fortify people against the uncertainty which is endemic in modernity. Well, sure, but how does this specifically play out in educational systems? Or is this dynamic manifest uniformly across all spheres of society? The philosophical heterogeneity transmutes into an analytical monism and this is a problem for me. I’m not sure how it helps us make sense of education at all, as opposed to providing a vast panorama through which to sketch the questions we have about education in much grander terms.

    I find the analysis much more useful when it comes to standardised tests, consisting of sample tests (a representative sample of pupils) and census tests (all within a population). These are administered uniformly and scored in a predetermined way. They predate neoliberalism and the authors observe that many critics of these ‘neoliberal instruments’ fail to pay attention to the process by which they are constructed:

    • The domain is “the specific area of interest that is being measured, whether it is a body of knowledge, skills, abilities or attributes” (144). Sub domains have a relationship with each other. What are tested are constructs which are designed to track unobservable characteristics through evaluating observable behaviours.
    • This involves sampling from within the domains, drawing on a selection of potential questions within a domain in order to construct a test which can be done in a finite amount of time while still facilitating inference in the way described above. This is guided by a test specification plan.
    • Inference involves using test scores to assess achievement within a domain. Standardisation means this can be undertaken at the level of the individual, group, school or nation. Not all inferences are valid.
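The sampling step described above can be sketched in code. This is purely an illustrative toy, not anything from the paper: the item bank, sub-domain names and item counts are all invented, and a real test specification plan would also balance difficulty, format and construct coverage.

```python
import random

# Hypothetical item bank: each item belongs to a sub-domain of the
# domain being measured (all names and numbers are invented).
item_bank = {
    "number": [f"num_q{i}" for i in range(40)],
    "algebra": [f"alg_q{i}" for i in range(30)],
    "geometry": [f"geo_q{i}" for i in range(30)],
}

# A toy test specification plan: how many items to draw from each
# sub-domain so the test fits a finite sitting while still
# supporting inference about the whole domain.
spec_plan = {"number": 10, "algebra": 8, "geometry": 7}

def build_test(bank, plan, seed=0):
    """Sample items from each sub-domain according to the specification plan."""
    rng = random.Random(seed)
    test = []
    for domain, n in plan.items():
        test.extend(rng.sample(bank[domain], n))
    return test

test = build_test(item_bank, spec_plan)
print(len(test))  # 25 items drawn across the three sub-domains
```

The point the sketch makes concrete is that the test is a constructed sample, not the domain itself: which items the plan draws, and in what proportions, is a design decision that shapes every inference later made from the scores.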

    The computerisation of these tests promises to capture transactional data, facilitating big data analytics. Selection can be built into the test itself, allowing branching or pathways depending on the student’s response and aptitude. But what sort of event is this? They suggest three concepts to classify these:

    1. Break-events involve the movement from one pre-existing category to another. As they put it, “Labelling a student as above-average, a school as failing, or a system as excellent based on aggregated data, are all examples of break-events that express potentiality as information” (146). These categories matter and they bring some possibilities to the fore while suppressing others.
    2. Crack-events are forms of change that lack the perceptibility of moving between pre-constituted categories. They occur all the time (“such as feelings of bewilderment, anxiety or elation when sitting a test”) but often don’t manifest in a recognisable way.
    3. Rupture-events occur when the coordinates of thought are lost, as cracks aggregate into a potentially transformative moment of change. Revolutionary technologies could only be such if they produce rupture-events.

    Standardised testing can generate breaks and cracks but not ruptures because they are “created according to an image of thought that limits (a) what tests should be and (b) what the purposes of schooling are” (148). They cannot break with the past because they are premised on a sense of correct answers and simple errors, moving people between discrete categories on that basis.

  • Mark 10:20 am on March 19, 2019 Permalink | Reply
    Tags: data, deborah lupton, small data

    How do data come to matter? 

    My notes on Lupton, D. (2018). How do data come to matter? Living and becoming with personal data. Big Data & Society, 5(2), 2053951718786314.

    In this paper, Deborah Lupton extends her work on the quantified self into a broader theorisation of how people come to live with data. It foregrounds the voluntary dimension of this process, in which “many novel digital technologies offer any interested person the opportunity to document, monitor and measure details of their bodies”, equipping them with capacities which were previously confined to specialised instruments and trained experts (1). These techniques render visible what was previously unseen. From 2:

    Elements of their bodies that people may not otherwise have considered to any great extent – the number of steps they take per day, their sleep patterns, the kilometres and geographical locations they move through, their brain waves, moods and so on – are brought into sharp relief.

    This renders the body as a series of interlocking digitised elements which “demand new ways of interpreting these signs and signals of bodily function and movement” (2). Her focus is on the existential and epistemic predicament this confronts people with: how do we make sense of this information, determine its value and put it into practice? This takes place in a context where there is pervasive cultural pressure to know our bodies better and live in a way deemed more efficient. Her focus is on how this data is experienced and understood in everyday life.

    This has been studied through the frame of data sense-making i.e. how people engage with and learn from information. It has been tied to data literacy, a concern for capacities to select, analyse, visualise and learn from data. Lupton highlights how these approaches tend to focus on cognitive and technical forms of interpretation, ignoring the role of the situated body as a means through which we learn. To this end she uses the concept of data sense, which incorporates the “entanglements of the digital sensors with the human senses in the process of sense-making” (3), with the body as the site of sensation and response.

    This project draws on a range of approaches from agential realism, new materialism and the anthropology of material culture. They share a more-than-human approach which “demands that the human subject is always considered permeable and open to the material world rather than closed-off and contained” (4). They share the following characteristics, described on 5:

    • an approach that recognises that humans and non- humans are entangled in hybrid, unstable and generative ways;
    • the importance of considering the distributed agency and vital capacities (‘thing-power’) of human non-human assemblages;
    • an emphasis on the embodied, sensory and otherwise material nature of meaning, knowing, growing, perceiving and making as part of human embodiment;
    • the changing meanings of artefacts as they move into different assemblages and the work required to articulate these assemblages; and
    • the importance of identifying and tracing the ways in which humans and nonhumans are intermeshed, the enactments and practices that are involved, and the effects of these on human lives.

    These inform Lupton’s conception of a human-data assemblage within which data learns about humans but humans in turn “may find themselves asking to what extent their data speak for them, and to what extent their data are different from other elements of embodiment and selfhood” (5). Digital devices and software for personal data necessarily seek to make data intelligible to users. Through such intelligibility personal data has agency in relation to users, exercising an influence over their behaviour and leading them to do things on the basis of these new understandings. But data can also ossify if it’s not found useful or actionable, freezing into a latent state which could be rendered lively again at a later date. Her work on self-tracking provides an illustrative example of why these distinctions matter. Described on 6-7:

    My research participants often described collecting and reviewing data about their bodies as generating agential capacities that are suffused with affect. These data can motivate them, encourage them to move their bodies more, persist with weight-loss efforts or self-management of chronic conditions. The ‘numbers’ can make them feel good if they demonstrate that people are achieving goals set for themselves, or if the data demonstrate good health or higher levels of fitness. Positive feelings can be generated by the buzzes, flashing lights, badges and other notifications that communicate a goal has been achieved. Alternatively, however, biometric data can have demoralising effects, generating disappointment, frustration, guilt and anger. Notifications can be experienced as annoying or pestering, making unreasonable demands.

    To make sense of data involves “connecting the metrics with the lived sensory experiences of one’s body and the other elements that are important in data sense-making” (7). This is a contextualised process of building a human-data assemblage, shaped by the environment but also contributing to it. This stress on the meaning of the context is crucial if we want to understand, for instance, how particular professional groups might engage in specific ways with personal data. As Deborah puts it on 7:

    When people review their data, they actively relate them to the contexts in which they were generated. People consider such aspects as the time of day, the weather, how their bodies felt, whether they were lacking sleep, were hungry, feeling stressed, drank too much the night before, what place and space they were inhabiting or moving through when the information were generated.

    Oddly though I think an individual is reproduced here, in spite of the theoretical sources. My point of departure is the claim that personal “data are meaningful because they are about and for them” (8) and I think this is insufficient to account for mattering in Andrew Sayer’s sense. Mattering always points beyond the relationship between the agent and what matters, something which I’m not sure the concept of the assemblage can account for because it squeezes human and data together in a dance of co-constitution. This is something I’ve tried to analyse in Katherine Hayles’ work and I’m thinking I need to seriously explore the issue I persistently see in co-constitution theorising, which I take to be a novel form of central conflationism in Margaret Archer’s sense.

  • Mark 12:31 pm on June 14, 2016 Permalink | Reply
    Tags: data

    A wonderful analogy by @Elinor_Carmi 

    I love the analogy offered by Elinor Carmi at the start of this excellent Open Democracy piece:

    Yesterday I walked to the supermarket, like I do every Tuesday morning. All of a sudden I started noticing a few people starting to follow me. I try to convince myself that it is probably just my imagination, and carry on walking. After a few minutes, I cross the road and make another turn, but then I look behind me and see that now there are dozens of people starting to follow me, taking pictures of me and writing rapidly, documenting my every move. After a couple more steps, they became hundreds. My heart was racing, I could hardly breathe, and I started to panic. Freaking out, I shouted at them, “Who are you? What do you want from me?” I tried to get a clearer view of this huge group – some looked a bit familiar but I didn’t remember where I’d seen them before. They shouted back at me, “Don’t worry, we don’t really know who you are, we just need some information on you, so we can show you different ads on billboards”. Puzzled by their response I scream, “What do you mean you don’t know who I am!? You know my gender, skin/eyes/hair color, height, weight, where I live, the clothes and glasses I wear, that I have 10 piercing in one ear and that I shop at Sainsbury on Tuesday mornings!” They smile and try to reassure me, “But we don’t know your NAME, silly! So stop being so paranoid, we do this to everyone walking on the street, it’s public space you know…”.

    This scenario might seem science fiction to some people, a dystopian reality, horror film or a South Park episode. But for the others that recognise this situation, this is actually what happens every day when you browse the internet.


  • Mark 5:04 pm on April 29, 2016 Permalink | Reply
    Tags: data, data ownership, john locke

    A Lockean case for the ownership of personal data  

    An interesting idea from Craig Lambert’s Shadow Work loc 3116 which deserves to be explored in greater depth:

    As noted earlier, philosopher John Locke argued that labor creates property; taking his view, if your shadow work made some information, it is your possession. In fact, who owns your data—your informational body—may some day be as contentious an issue as the ownership of petroleum, water, or any other natural resource.

  • Mark 9:31 am on December 10, 2015 Permalink
    Tags: data

    The Lives and Deaths of Data 

    This looks superb:

    Open Track: The Lives and Deaths of Data

    Convenors: Sabina Leonelli and Brian Rappert, Department of Sociology, Philosophy and Anthropology & Exeter Centre for the Study of the Life Sciences (Egenis), University of Exeter, UK (see also the Exeter Data Studies group: http://www.datastudies.eu)

    Abstract: This track investigates the relational constitution of data: how stages in the life of data articulate to one another and the challenges involved in storing, moving, classifying, manipulating and interpreting them. The session is intended to explore the collectivities emerging through data collection, dissemination, assemblage and analysis. Analysing the ways in which information becomes taken as given things, the manner in which data and their varying contexts of use are co-constituted, and the means by which utility is invested and divested in them provides a platform to explore and challenge the powers attributed to “Big” and “Open” data by governments, lobby groups and institutions around the world. With its long standing attention to the conditions of knowledge production, STS scholarship is well positioned to reflect on the value(s) attributed to data under a variety of different circumstances, how such attribution changes in time, and what this indicates about the properties of the objects being identified and used as ‘data’ and of the communities involved in such identification and use. Questions to be addressed include: What would it mean to speak of the birth of data? How do they develop, especially when they are used for a variety of purposes by different stakeholders? Do they ever cease to be data, and how can we conceptualize situations in which data are dismissed, forgotten, erased, lost or regarded as obsolete? This session will be organised as a set of individual presentations encompassing several different aspects and areas of data use.

    Process: The deadline for submitting an abstract is 21 February 2016. If you want to participate in this open track then you will need to select it when you submit your abstract to the 4S/EASST Conference. Instructions for submission of your abstract are available at http://www.sts2016bcn.org/. If you would like to discuss the relevance of your paper to the open track, then please contact either or both of us: s.leonelli@exeter.ac.uk and b.rappert@exeter.ac.uk

  • Mark 9:02 pm on November 26, 2015 Permalink
    Tags: data, data security, information security, social psychology

    social psychological approaches to understanding why people are blasé about data security 

    I’m reading Untangling the Web, by Aleks Krotoski, as an accessible precursor to beginning to engage with the social psychological literature on online behaviour. It’s proving to be an enjoyable read so far, though maybe not quite as much of a pop social psychology book as I had hoped it would be. It’s more of a collection of thoughtful tech journalism than anything else. But I just came across a good example of what I was initially looking for: different (mutually compatible) social psychological explanations for why people are so blasé about their data. From page 133-134:

    There are indeed a few things that are psychologically unique about interacting via machines. First, we don’t expect consequences. The web feels ephemeral, separate from so-called real life. What happens online stays online. That’s totally untrue, of course. As we continue to intertwine our lives with technology, our virtual and physical selves evolve into the same beast, and therefore it’s impossible to separate the consequences that affect one from the other. Something said or done in one place can easily be taken out of context and dropped into another. Ask the many people who’ve been fired from their jobs for posting party pictures on their Facebook timelines.

    Second, according to the Ohio study, online we experience an extreme version of the so-called “third person effect”: we rationalise, through our infernal, eternal human nature, that if something’s going to go wrong, it’ll happen to the other guy. So we won’t change our privacy settings on a social network or turn off cookies on our browsers to keep the details of our surfing away from advertisers: only when we experience a personal violation will we be more careful to protect ourselves and our information.

    Third, we’re unable to imagine the vastness of the potential audience we communicate with when we’re online, so we treat the computer like a confidant, a confessor. We have an intimate relationship with our computer terminals; our laptops, mobile phones, desktops and tablets feel private, and the networks we hang out in feel closed. In order to make a connection with others, we feel it’s OK to share private information. “We think the web is a kind of conversation,” explains Dr Kieran O’Hara, a philosopher and web scientist at the University of Southampton. “It feels a bit like writing letters, a bit like a telephone conversation. But it’s all that and much more.”

  • Mark 3:32 pm on August 21, 2015 Permalink
    Tags: data

    spotify’s ultra-creepy new privacy policy 

    As the article suggests, this initiative may be the result of the threat posed by Apple music. What interests me is how totally open-ended this is: how do we perceive and evaluate risks when policies take such a form?

    Sections 3.3 and 3.4 of Spotify’s privacy policy say that the app will now collect much more data about its users.

    The section ‘Information Stored on Your Mobile Device’ says that: “With your permission, we may collect information stored on your mobile device, such as contacts, photos or media files.”

    In the next section, it says: “We may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g Bluetooth).”


  • Mark 8:51 am on August 21, 2015 Permalink
    Tags: data

    data fetishism and the elites of digital capitalism  

    From Plutocrats: The Rise of the New Global Super-Rich pg 46:

    Carlos Slim, who studied engineering in college and taught algebra and linear programming as an undergraduate, attributes his fortune to his facility with numbers. So does Steve Schwarzman, who told me he owed his success to his “ability to see patterns that other people don’t see” in large collections of numbers. People inside the super-elite think the rise of the data geeks is just beginning. Elliot Schrage is a member of the tech aristocracy– he was the communications director for Google when it was the hottest company in the Valley and jumped to the same role at Facebook just as it was becoming a behemoth. At a 2009 talk he gave to an internal company meeting of education and publishing executives, Schrage was asked what field we should encourage our children to study. His instant answer was statistics, because the ability to understand data would be the most powerful skill in the twenty-first century.

    How does this intersect with the (purported) rise of the data scientist as the ‘sexiest job of the 21st century’?

  • Mark 6:49 am on April 20, 2015 Permalink | Reply
    Tags: data, data-driven policing

    Sociological questions about the coming era of data-driven privatised policing 

    This insightful article paints a worrying picture of the growth of data-driven policing. The technical challenge of “building nuance” into data systems “is far harder than it seems” and has important practical implications for how interventions operate on the basis of digital data. What I hadn’t previously realised was how readily investigators are using social media on their own initiative above and beyond the systems that are being put into place with the help of outside consultancies: only 9% of police using social media in investigations had received training from their agency. Furthermore the discussion of the life span of data raised some really interesting (and worrying) questions about the organisational sociology of data-driven policing given what seems likely to be increasing involvement of the private sector in policing in the UK:

    For the kid listed in a gang database, it can be unclear how to get out of it. In the world of human interaction, we accept change through behavior: the addict can redeem himself by getting clean, or the habitual interrupter can redeem himself by not interrupting. We accept behavior change. But in the database world, unless someone has permission to delete or amend a database record, no such change is possible. Credit agencies are required to forgive financial sins after 7 years. Police are not—at least, not consistently. The National Gang Center, in its list of gang-related legislation, shows only 12 states with policies that specifically address gang databases. Most deny the public access to the information in these databases. Only a few of these twelve mention regular purging of information, and some specifically say that a person cannot even find out if they have a record in the database.

    This permanence does not necessarily match real-world conditions. Kids cycle in and out of street gangs the way they cycle in and out of any other social group, and many young men age out of violent behavior. Regularly purging the gang database, perhaps on a one-year or two-year cycle, would allow some measure of computational forgiveness. However, few institutions are good at keeping the data in their databases up-to-date. (If you’ve ever been served an ad for a product you just bought, you’re familiar with this problem of information persistence and the clumsiness of predictive algorithms.) The police are no worse and no better than the rest of us. Criminologist Charles Katz found that despite a written department policy in one large Midwestern police gang unit, data was not regularly audited or purged. “The last time that the gang unit purged its files, however, was in 1993—approximately 4 years before this study was conducted,” he wrote. “One clerk who is responsible for data entry and dissemination estimated, ‘At a minimum, 400 to 500 gang members would be deleted off the gang list today if we went through the files.’ Accordingly, Junction City’s gang list of 2,086 gang members was inflated by approximately 20% to 25%.”
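The ‘computational forgiveness’ the quoted passage asks for could, in principle, be as simple as a scheduled purge of records not confirmed within a retention window. A minimal sketch of that idea follows; the record structure, names and dates are all invented for illustration, and the organisational point of the article is precisely that such policies often go unimplemented even when written down.

```python
from datetime import date, timedelta

# Hypothetical database records: (person, date last confirmed).
records = [
    ("A", date(2012, 5, 1)),
    ("B", date(2014, 11, 3)),
    ("C", date(2015, 2, 20)),
]

# A two-year purge cycle, as suggested in the quoted passage.
RETENTION = timedelta(days=2 * 365)

def purge(records, today):
    """Keep only records confirmed within the retention window."""
    cutoff = today - RETENTION
    return [(person, seen) for person, seen in records if seen >= cutoff]

kept = purge(records, today=date(2015, 6, 1))
# "A" (last confirmed in 2012) falls outside the window and is dropped.
```

The triviality of the sketch underlines the article's argument: the barrier to forgiveness is not technical but organisational, i.e. whether anyone is mandated and resourced to run the purge and audit that it happened.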


    This suggests to me that any adequate evaluation of data-driven policing needs to take questions of organisational sociology and information technology extremely seriously. What matters is not just the formulation of data management policies but what we know about how such policies tend to be implemented under the specific conditions likely to obtain in policing. Given the broader trend towards the privatisation of policing, it is increasingly important that we understand how sharing of data operates across organisational boundaries, how it is prepared and how it is perceived by end-users.

    My fear is that a form of inter-organisational ‘black-boxing’ could kick in where those utilising the data for interventions trust that others have elsewhere taken responsibility for ensuring its reliability. What scrutiny would the operations of outside suppliers be subject to? Could privatisation intensify the rush towards data-driven policing in the name of efficiency savings? Would a corresponding centralisation of back-office functions compound the aforementioned epistemological risks entailed by outsourcing? These are all urgent questions which could easily be marginalised as budgetary constraint drives ‘innovation’ in policing: data-driven policing and privatised policing will likely go hand-in-hand and we need to analyse them as such.

  • Mark 5:16 pm on April 1, 2015 Permalink | Reply
    Tags: data

    CfP: The Politics of Data (Science) 

    The Politics of Data (Science)

    This special issue of Discover Society will explore the political implications of ‘big data’ and the systems of expertise emerging around it, including though not limited to Data Science. In doing so it will aim to bridge the gap between the methodological discourse surrounding data science and the political discourse beginning to emerge around ‘big data’. Here are some of the questions the issue will address:

    • How is ‘big data’ understood and acted upon? How should we understand its cultural power?
    • How is ‘big data’ reconfiguring the social sciences? Do we risk all science becoming data science?
    • How and why has the ‘data scientist’ come to be seen as the ‘sexiest job of the 21st century’?
    • Is the ‘data scientist’ just a ‘Statistician who lives in Shoreditch’? Or is this a genuinely new intellectual role?
    • Can ‘big data’ address ‘big questions’? If not, is this a problem?
    • What are the precursors of ‘data science’ within the academy and/or within corporations?
    • What implications does corporate data science have for the relationship between corporations & consumers?
    • What implications does national security data science have for the relationship between the state & citizens?
    • Can the use of digital data lead to efficiency savings in public services? How does this relate to the politics of austerity?
    • How could predictive privacy harms emerging from data analytics be addressed politically?
    • Can the opacity of algorithmic processes be challenged? Or are we heading inexorably for a ‘black-box society’?
    • How are new forms of digital data reconfiguring activity in particular social environments?

    However these are just suggestions and ideas beyond the scope of this list are very welcome.

    The deadline for contributions is June 15th. Contact mark@markcarrigan.net to discuss a potential contribution.

    The articles will constitute the July issue of Discover Society. Most articles will be 1500 words; however, there are a number of special sections in the online magazine.

    Front line – 1500 words
    View point – 1500 words
    Policy briefing – 1500-2000 words

    If you would be interested in writing one of these thematic sections, please get in touch asap.

    The issue will follow the usual formatting guidelines of Discover Society. Please consult the notes for contributors.

  • Mark 8:10 am on December 3, 2014 Permalink | Reply
    Tags: data

    Call for papers: Internet of You: Data Big and Small 

    This looks really interesting – if I wasn’t drowning under the weight of existing writing commitments, I’d love to try and write something for the final topic suggestion:

    Call for papers for special issue of IEEE Internet Computing: http://www.computer.org/portal/web/computingnow/iccfp6

    Internet of You: Data Big and Small

    Final submissions due:  1 March 2015
    Publication issue:  November/December 2015

    Please email the guest editors a brief description of the article you plan to submit by 1 February 2015.
    Guest editors: Deborah Estrin and Craig Thompson (ic6-2015@computer.org).

    We are at a great divide. Where our ancestors left behind few records, we are creating and preserving increasingly complete digital traces and models of almost every aspect of our lives. This special issue of IEEE Internet Computing aims to explore technologies and issues from small user-centric models of individuals to real-time analytics on huge aggregations of user data. At present, some are aspiring to create immortal avatars by letting you record everything about yourself and convert it into a model that’s queriable, conversant, and possibly even active in gaining new experiences for itself. Meanwhile, others are equally concerned with stemming the tide of third-party data aggregation of individual models to mitigate risks that can evolve from this kind of near total information awareness.

    This special issue seeks original articles that explore both small data (individual-scale data sources, processing, and modeling) and big data (community level aggregation and analytics). Topics include

    • diverse data sources and digital traces, including email, Facebook, financial, health, location, images, sound, consumer transactions, and interests;
    • methods to combine trace data into complete models; data liberation; kinds of user models, such as the physical self, memories, aspect-limited versus comprehensive models; and data quality, including managing history, change, comprehensiveness, and accuracy;
    • methods to aggregate and process heterogeneous data sets, stages of life ontologies, the scope and purpose of these data collections, available public data sources;
    • usage models for experience sampling — proximity, context, activity sensing, quantified self, situation-aware modeling, activities of daily living, my immortal avatar, workflows, and pattern learning;
    • representation technologies such as agents, smartphones, wearable computing, personal sensing networks, pattern representation and adaptation, and natural language;
    • new kinds of applications that draw insights from data analytics, including recommendation systems, personalized health, real-time marketing, and predicting elections from Twitter feeds;
    • open architectures for personalization, the role of cloud computing, relevant emerging standards;
    • concerns regarding privacy and surveillance, the extent of privacy erosion, taxonomy of privacy threats, and incentives and disincentives for sharing, the right to forget, and status of legal safeguards;
    • privacy and security technology safeguards, including identity management, disclosure control, privacy-preserving data mining, de-identification, new security models, mechanisms that audit and control personal information flows and usage; and
    • social and philosophical implications for humans’ conception of self. Is there a natural boundary between user models and world models?

    Submission Guidelines

    All submissions must be original manuscripts of fewer than 5,000 words, focused on Internet technologies and implementations. All manuscripts are subject to peer review on both technical merit and relevance to IC’s international readership — primarily practicing engineers and academics who are looking for material that introduces new technology and broadens familiarity with current topics. We do not accept white papers, and we discourage strictly theoretical or mathematical papers. To submit a manuscript, please log on to ScholarOne (https://mc.manuscriptcentral.com:443/ic-cs) to create or access an account, which you can use to log on to IC’s Author Center (http://www.computer.org/portal/web/peerreviewmagazines/acinternet) and upload your submission.

  • Mark 8:01 pm on October 3, 2013 Permalink | Reply
    Tags: data, GCHQ, nsa   

    “When I first read Foucault’s account of the panopticon, I thought it was brilliant but overheated. Now, it actually seems like somebody’s plan” 

    An absolute must read essay in the Guardian by the novelist John Lanchester who was given access to the GCHQ files by the Guardian:

    The totalitarian state in Orwell’s Nineteen Eighty-Four would need no broader legal justification than that: it really does allow a government to do anything it likes. It was at this point that I became convinced that Snowden’s revelations are not just interesting or important but vital, because the state is about to get powers that no state has ever had, and we need to have a public debate about those powers and what their limits are to be.

    Lanchester reads Snowden files:

    At a moment of austerity and with a general sense that our state’s ability to guarantee prosperity for its citizens is in retreat, that same state is about to make the biggest advance ever in its security powers. In public, the state is shrinking; in private, it is shrinking until it gets just small enough to fit into our phones, our computers, our cars, our fridges, our bedrooms, our thoughts and intentions.


    People misunderstand what a police state is. It isn’t a country where the police strut around in jackboots; it’s a country where the police can do anything they like. Similarly, a security state is one in which the security establishment can do anything it likes.

    We are right on the verge of being an entirely new kind of human society, one involving an unprecedented penetration by the state into areas which have always been regarded as private. Do we agree to that? If we don’t, this is the last chance to stop it happening. Our rulers will say what all rulers everywhere have always said: that their intentions are good, and we can trust them. They want that to be a sufficient guarantee.

