1. Turn the camera off at the request of the participant (Rachel Boynton, DOC NYC, 2018).
  2. Show the participant the finished film before the public (Rachel Boynton, DOC NYC, 2018).
  3. Love the people that you film. (Rachel Boynton, DOC NYC, 2018).
  4. Respect the participant’s wishes at all times; it’s important to maintain a positive relationship with the participant, so that they don’t walk away disappointed or unhappy with you.
  5. Always offer the participant food or a beverage if they are in your space.
  6. Ensure your facilities (i.e. toilets, change-rooms, etc.) are acceptable if the participant is in your space.





DOC NYC PRO: Casting Case Studies 2016, streaming video, DOC NYC, New York, viewed INSERT DATE HERE 2019, <>.



Jessie Caesar
The assessment declaration. [I declare that in submitting all work for this assessment I have read, understood and agree to the content and expectations of the assessment declaration.]
ANNOTATED BIBLIOGRAPHY (word count: 1,786)
Norman, D 1999, ‘Affordance, conventions and design (Part 2)’, Nielsen Norman Group, viewed 11 March 2020,

This article explores the concepts of ‘affordances’, ‘constraints’ and ‘conceptual models’, and is particularly concerned with attempting “to understand how we managed in a world of tens of thousands of objects, many of which we would encounter only once”. It provides an insight into Norman’s concept of ‘affordances’, a term which “was invented by the perceptual psychologist J. J. Gibson (1977, 1979) to refer to the actionable properties between the world and an actor (a person or animal). To Gibson, affordances are relationships. They exist naturally: they do not have to be visible, known, or desirable.” Norman distinguishes these from ‘perceived affordances’, “which are not at all the same as real ones… The designer cares more about what actions the user perceives to be possible than what is true. Moreover, affordances, both real and perceived, play very different roles in physical products than they do in the world of screen-based products… In product design, where one deals with real, physical objects, there can be both real and perceived affordances”.

The article also defines ‘conventions’ (“A convention is a constraint in that it prohibits some activities and encourages others. Physical constraints make some actions impossible: there is no way to ignore them.”) and ‘constraints’, which Norman breaks into three types: physical (“Physical constraints are closely related to real affordances: For example, it is not possible to move the cursor outside the screen: this is a physical constraint.”); logical (“Logical constraints use reasoning to determine the alternatives. Thus, if we ask the user to click on five locations and only four are immediately visible, the person knows, logically, that there is one location off the screen. Logical constraints are valuable in guiding behavior.”); and cultural (“Cultural constraints are conventions shared by a cultural group. The fact that the graphic on the right-hand side of a display is a ‘scroll bar’ and that one should move the cursor to it, hold down a mouse button, and ‘drag’ it downward in order to see objects located below the current visible set (thus causing the image itself to appear to move upwards) is a cultural, learned convention.”).

Furthermore, the article argues that “understanding how to operate a novel device ha[s] three major dimensions: conceptual models, constraints, and affordances”, and that the “most important part of a successful design is the underlying conceptual model”. Norman states that the conceptual model “is the hard part of design: formulating an appropriate conceptual model and then assuring that everything else be consistent with it… The power of constraints has largely been ignored.”

This article is posted to a website in a blog-type format, making the information within it extremely condensed, yet packed with content and concepts, and so extremely accessible to readers. As the article is published on Don Norman’s (the author’s, a professor’s) own website, the information within it is reliable. The article makes no mention of SNSs, due both to its publication date and to its focus on the foundational concepts of affordances, conventions and constraints, which enable us to understand and analyse the difference between well and poorly made products and software – concepts we can then apply to software such as Instagram itself.

While the article makes no reference to Instagram or any other SNS, it nevertheless provides the foundations required to understand and respond to the course’s prompt – How do the affordances of Instagram affect the way photos and videos are authored, published, and distributed in the network?

Lister, M et al 2009, New Media: A Critical Introduction, 2nd edn, Routledge, New York. (Sections: ‘Networks, Users and Economics’, pp. 163–169; ‘Wiki Worlds and Web 2.0’, pp. 204–209; ‘The Long Tail’, pp. 197–200; ‘User-generated content, we are all fans now’, pp. 221–232.)

This text is an introduction to ‘new media’ (which “actually refers to a wide range of changes in media production, distribution and use. These are changes that are technological, textual, conventional and cultural… a number of concepts have come to the fore which offer to define the key characteristics of the field of new media as a whole… These are: digital, interactive, hypertextual, virtual, networked, and simulated.”) and technology, presenting us with the “conceptual frameworks for thinking through a range of key issues which have arisen over two decades of speculation on the cultural implications of new media” and the “questions, the ideas and debates – the critical issues – that the emergence of new media technologies have given rise to”. The selected chapters explore the concepts of ‘Web 1.0’ (the original form of the web, built on networking research funded through DARPA); ‘Web 2.0’ (which “is allegedly characterised by co-creativity, participation and openness, represented by softwares that support, for example, wiki based ways of creating and accessing knowledge, social networking sites, blogging, tagging and ‘mash ups’”); ‘networks’ (which trace back to the “Defense Advanced Research Projects Agency (DARPA) in the US, and it was this Pentagon-funded agency that eventually developed the protocols… in order to allow computers to form networks that could send small packets of data to one another.”); ‘the Long Tail’ (“the economic basis of production is changing in ways that unlock market diversity on an unprecedented scale.”); ‘users’ (“this shift from ‘audience’ to ‘user’ in media studies… it also brings within our reach the possibility of becoming producers in our own right.”); and ‘user-generated content’ (e.g. “every SNS post, or conversation in a chat room, every home page and downloaded MP3 playlist facilitates the individual communicating in a pseudo public mode of address. What is clear is that a great deal of web use facilitates a feeling of participation in media space”). The text presents the theory and definitions behind these concepts as a means to evaluate how technology has developed over time, and the ways it may continue to develop in the future. The authors place more importance on Web 2.0 than on Web 1.0, and are particularly concerned with “the ways in which the desire for communication and the pressures of commercialisation have interacted to bring us Web 2.0 and its expression in the form of social networking sites (SNS)”. To make these concepts easier to grasp, the authors break the information into accessible chapters, with in-depth summaries of each chapter provided in the introduction of the text.

Thanks to the aforementioned chapter summaries, this second-edition text is not only highly accessible but also reliable. It is written in clear, concise language that is easy to understand, and as a guide/introduction to ‘new media’ studies for students, it was written in a way that is easy to digest. The text also provides an explanation for almost every concept, introducing each one in the order it appears in web history or as it arises in earlier explanations, and offers a fairly up-to-date analysis of ‘new media’ concepts and the technology that surrounds them. While the text does analyse some SNSs (such as Flickr), its release predates the launch of Instagram, so no mention of that social networking site is made. However, the theories presented can be applied to Instagram and other newer SNSs, as the authors note that the studies presented within the text can be used in future as a means of understanding newer technologies, if paired with newer resources that research the platform (Instagram) itself in depth.

Despite not referencing Instagram, the text offers a wealth of information relevant to the concepts presented in the course, as well as to the course’s prompt – How do the affordances of Instagram affect the way photos and videos are authored, published, and distributed in the network? The selected chapters ‘Networks, Users and Economics’, ‘Wiki Worlds and Web 2.0’ and ‘User-generated content, we are all fans now’ in particular provide plentiful information and discussion relevant to the course’s prompt and concepts. This, paired with the easily accessible chapters and chapter guides at the beginning of the text, makes it a great starting point when commencing further research into the course’s prompt.

Hinton, S & Hjorth, L 2013, Understanding Social Media, Sage Publications, London. (Section: pp. 1–31.)

This text explores the concepts of ‘new media’ and ‘social media’ in relation to the relationship between technology, society and economics. It is particularly concerned with how social media arose out of ‘new media’; how social media can redefine our idea of what ‘social’ is; “the commercialisation of the web… marked by this change in attitude, which is described in business literature as the emergence of Web 2.0” (“Web 2.0 doesn’t refer to any changes in the internet’s architecture. Rather, it refers to the types of software employed and changes at the level of user practices”); and the “changing nature of what is public and what is private, and where work ends and life begins, as social media infiltrates every facet of everyday life”. To do this, the authors look at ‘Web 1.0’ (“nobody talked about Web 1.0 until the term Web 2.0 emerged. The tag ‘2.0’ evokes the idea of software versioning and its associated marketings, and so suggests that Web 1.0 was less evolved, less sophisticated and less refined”), ‘Web 2.0’, ‘new media’ and ‘social networking sites (SNSs)’; at how technology continues “to move into mainstream everyday life in many urban settings globally”; and at how, as “SNSs evolve, the term ‘social media’” (“social media bleeds across platforms (desktop computers, mobile phones, tablets and on modern network-capable televisions), across social and media contexts, and creates various forms of presence”) “is developing to encompass the growing and often unwieldy sphere of contemporary online media practice”.

This text is mildly clunky, as the information you are looking for is embedded among related but not strictly necessary information. The text itself, however, is presented neatly, in chapters that make it easier to find what you are looking for (e.g. ‘Web 2.0’ on p. 16), making it quite accessible to the reader. The information presented is up to date and applicable to the course’s prompt, as its publication postdates the launch of Instagram in 2010. It is also a reliable source: the text is a first-edition book from an established publisher, written by a lecturer and a professor from the University of Canberra and RMIT University respectively.

The authors make reference to many newer platforms, including Google and Facebook, but not to Instagram. The text does, however, provide critical information regarding social media and a great foundational point from which to commence further research on the platform. It explores concepts directly related to the course’s content and prompt – How do the affordances of Instagram affect the way photos and videos are authored, published, and distributed in the network? Despite making no reference to the SNS in question, it explores concepts such as ‘new media’, ‘Web 1.0’, ‘Web 2.0’ and even the acronym SNS itself, which are vital to understanding the course’s core concepts – especially those related to the prompt.




Hinton, S & Hjorth, L 2013, Understanding Social Media, Sage Publications, London. (Section: pp. 1–31.)

Chun, W.H.K 2016, Updating to Remain the Same: Habitual New Media, MIT Press, Massachusetts, United States. (Section: ‘Introduction: Habitual New Media, or Updating to Remain (Close to) the Same’, pp. 1–20.)

This week we began to take a deeper look into Social Media. 

Taking a look at the first reading of the week, Understanding Social Media, authors Hinton and Hjorth begin to unpack the history of the internet, social media and online media practice. They open with the image that the fleeting moments of connection (on social media) take place through flashes of light, across deep-sea cables and microwave pulses that bounce invisibly between orbiting satellites; if we could see these connections plotted around the globe, the world would be illuminated like an exquisite decoration, shimmering with the mediated social interactions of many of its nearly seven billion inhabitants. Today, social media in its many forms accounts for a great deal of this mediated activity, bleeding across platforms (desktop computers, mobile phones, tablets and modern network-capable televisions) and across social and media contexts, creating various forms of presence. As smartphones continue to move into mainstream everyday life in many urban settings globally, the demographics of social media are also changing (e.g. older as well as younger users), and as Social Networking Services (SNSs) evolve, the term ‘social media’ is also developing to encompass the growing and often unwieldy sphere of contemporary online media practice. Hinton and Hjorth make a reference to Melissa Gregg, who says that social media accompanies a movement towards ‘presence bleed’, where boundaries between personal and professional identities no longer apply, noting that with the ‘context collapse’ of social media it is not uncommon to see users with more than one Twitter or Facebook account for different ‘public’ and ‘private’ contexts – friendship and intimacy can be both amplified and commodified through social media. Underlying this concept, Hinton and Hjorth explain, is an array of questions about the changing nature of what is public and what is private, and where work ends and life begins, as social media infiltrates every facet of everyday life.

Hinton and Hjorth cover many major themes that help us understand social media and its fabric today throughout the rest of the text, and summarise these themes nicely in chapter one, making it simple to find exactly what you are looking for:

  • Chapter Two explores the rise of Web 2.0 as a way of contextualising the ideological environment in which social media operates. Rather than presenting Web 2.0 as a revolution in the way the web ‘works’, as some have, we take a more critical stance, examining the way Web 2.0 functions as an ideology that declares the corporate world’s growing understanding that the internet is not only mass but also social media. We also look at the both empowering and controlling elements that go hand in hand with Web 2.0.
  • Chapter Three engages with the undisputed icons of social media, social network sites or SNSs. Here we contextualise SNSs within a tradition of internet studies which has been conducting research and enquiry into the nature of online communities for more than two decades. Without attempting to deny their novelty, we emphasise that SNSs represent a continuity with earlier ways of thinking about the social aspects of the internet. In this chapter, we look at how, in the field of internet studies, early notions of the internet as a series of virtual communities have yielded to more recent ideas of SNS practices and discourses as involving ‘networked publics’ and ‘intimate publics’. We also engage with a number of other research approaches to SNSs in order to define the field.
  • Chapter Four returns to the themes of control and empowerment introduced in Chapter Two, and applies them to practical examples of social media in action. In this chapter we look at how internet participation can yield user-created content (UCC) and how it has allowed the figure of the ‘produser’ to emerge. We describe this as a potentially empowering outcome of social media, and examine how produsage is present in online activism and citizen journalism. We then consider how this turn has challenged traditional models of journalism, and how participative media is destabilising the established power structure of governments and what Dan Gillmor refers to as ‘Big Media’.
  • In Chapter Five we examine cultural production and focus on how social media is precipitating and reflecting changes in the arts as a specific site of cultural production. We look at how cultural institutions such as galleries and museums are responding to new challenges and embracing Web 2.0-inspired notions of social media, and how this is displacing their traditional roles as arbiters of taste. We examine how artists are responding to social media, and how the emergence of art-themed SNSs like DeviantArt is challenging conceptions of art production and consumption, as well as distinctions between the amateur and the professional artist. Finally, we examine the cultural complications of what Jean Burgess has termed ‘vernacular creativity’.
  • In Chapter Six we look at games, a realm which has always been associated with the social. Here we look at how social media and games intersect in the form of SNS games – games that are played within SNSs and take advantage of features such as friends lists to add a social dimension to their practice. While social games offer new types of places to play and socialise, they also involve two forms of labour. First, time spent playing online games raises money for the SNSs and game companies through advertising and in-game purchases. Second, since social games are a way to socialise, playing games is also a way to maintain contacts and thus provides the means to maintain social capital. In this chapter, we explore the role of intergenerational ties in China as a case study. Specifically, we examine how social games act as a way for youth who have moved away from home for work or study to maintain relationships with their families.
  • In Chapter Seven we reflect upon the convergence between social, locative and mobile media, and upon the uneven journey of the mobile phone’s role in this. In particular we look at how location-based services (LBS), such as Google Maps and Facebook Places, have converged with mobile and social media through the smartphone. We look at how mobility has become about more than the ability to take your social media with you as the popularity of such devices has grown; specifically, there are two results: the expansion of cartographies enabled by LBS devices and mobile apps, and the development of location-based social apps that blend social relationships with geography. These changes reflect broader shifts in the relationships between identity, place and community, and raise important issues about privacy, but also about how we narrate and attach meaning to place. This chapter also considers the changing role camera phones play in our understandings and visualisations of place, especially as they become entangled in locative media practices.

Hinton and Hjorth state that the major themes they cover in the book – empowerment and control, online and offline, Anglophonic contexts and intimacy – are woven into and across the chapters. They take us through a background of the web throughout the rest of this section; some worthy highlights/important points include:

  • While the internet was developed from the late 1960s, it wasn’t until the early 1990s that the web evolved into what it is today and what we understand as “online”
  • TCP/IP (Transmission Control Protocol/Internet Protocol) – the basic data-transfer protocols of the internet
  • HTTP – Hypertext Transfer Protocol
  • Web browsers (the interface), which constitute our experience of the web
  • The web server – a computer program that is constantly running on a computer that is always connected to the internet
  • Commercialising the web, in reference to the Jenkins 2006 reading, and how an initial understanding of the web as a kind of TV station with many channels has given way to a more nuanced definition
  • Web 1.0 and 2.0 – history
  • Web 2.0 and business – Web 2.0 is a platform in which customers play an active role in building one’s business
  • Web 2.0 and creative production – Web 2.0 encapsulates the idea of making it easy for anyone to publish information on the internet; this is clearly linked to the new ways Web 2.0 was to work as a business
  • Using or being used? – The term ‘user’ has two connotations: controller and controlled. In computer parlance, the user is in charge of the machine – the user is (apparently) in control of the computer’s operations. On the other hand, within software development circles the user is often derided. When we think about users in the context of social media, particularly within the construct of Web 2.0, which of these categories is most applicable?
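The TCP/IP and HTTP bullets above can be made concrete with a small sketch. This is a minimal illustration only, not anything from the reading: it assumes nothing beyond Python’s standard string handling, composes the plain-text HTTP request a browser would send to a web server over a TCP connection, and parses the first line of a canned (hypothetical) response, without touching a real network.

```python
# Minimal sketch of the HTTP layer a browser and web server speak over TCP/IP.
# "example.com" and the canned response below are hypothetical values.

def build_http_request(host: str, path: str = "/") -> str:
    """Compose the plain-text HTTP/1.1 GET request a browser would send."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
        f"Host: {host}\r\n"          # which site we want from this server
        "Connection: close\r\n"      # ask the server to close after replying
        "\r\n"                       # blank line ends the header section
    )

def parse_status_line(response: str) -> tuple[str, int, str]:
    """Split the first line of an HTTP response into version, code, reason."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_http_request("example.com")
print(request.splitlines()[0])   # GET / HTTP/1.1

# A canned response, roughly as a web server might return it:
canned = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
print(parse_status_line(canned))  # ('HTTP/1.1', 200, 'OK')
```

In Hinton and Hjorth’s layering, TCP/IP would carry these bytes between machines, while HTTP gives them meaning; the browser is the interface that hides both.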

Moving to reading two of this week, Updating to Remain the Same: Habitual New Media, author Chun takes us through what I’d personally say – despite her references to other scholars and texts, and the avoidance of the first person – is very much her own view of new media.

Chun begins by stating that new media exist at the bleeding edge of obsolescence: they are exciting when they are demonstrated, boring by the time they arrive; even if a product does what it promises, it disappoints. (See what I mean? That’s a very broad statement to make – I digress…) If an analysis is interesting and definitive, it is too late: by the time we understand something, it has already disappeared or changed. We are forever trying to catch up, updating to remain (close to) the same; bored, overwhelmed and anxious all at once. Chun continues: in response to this rapidly cycling and possibly manic-depressive timescale, much analytic, creative and commercial effort has concentrated on anticipating or creating the future, the next big thing – from algorithms that sift through vast amounts of data in order to suggest or predict future purchases, to scholarly analyses that assess the impact of technologies that do not yet exist. What really matters is figuring out what will spread, and who will spread it fastest. “Is this really the best approach?” Chun asks – what does this constant move to the future, which dismisses the present as already past, erase? What do we miss if we assume new media are simply viral or disruptive?

Chun states that Habitual New Media counters this trend of analysing the present through soothsaying by revealing that our media matter most when they seem not to matter at all – that is, when they have moved from the new to the habitual. Search engines are hardly new or exciting, but they have become the default mode of knowledge acquisition. Smartphones no longer amaze, but they increasingly structure and monitor the lives of their so-called owners. Chun continues, explaining that through habits users become their machines: they stream, update, capture, upload, share, grind, link, verify, map, save, trash and troll. Repetition breeds expertise, even as it breeds boredom. Habituation dulls us to the new; because of the shelter – the habits – offered by habituation, the new is barely noticed. Habituation and the new are the dreams and nightmares of new media companies.

Chun says that new media live and die by the update: the end of the update, the end of the object. Things no longer updated are things no longer used, usable, or cared for, even though updates often ‘save’ things by literally destroying – that is, writing over – the things they resuscitate. In order to remain, Chun argues, nothing remains, so now nothing remains even as everything does (…excuse me?). Things and people not updating are things and people lost or in distress, for users have become creatures of the update. To be is to be updated: to update and to be subjected to the update. The update is central to disrupting and establishing context and habituation, to creating new habits of dependency. To put it in a formula: Habit + Crisis = Update.

Chun says that if “networks” have become the dominant concept, deployed to explain everything new about this current era – from social to military formations, from global capital to local resistance – it is because of what they are imaged and imagined to do. As she explains in further detail in Chapter 1, “networks” render the seemingly complex and unmappable world of globalisation trackable and comprehensible by transforming time-based interactions and intervals into spatial representations: they spatialise temporal durations and repetitions. Networks embody “glocal” combinations by condensing complex clouds of interactions into definite, traceable lines of connection (or connections imagined to be so) between individual nodes across disparate locales. Network maps mediate between the local and the global, the detail and the overview. Chun continues: imaged and imagined connections, this book reveals, are most often habits – things potentially or frequently repeated. Habit is formation: it forms and connects. Habits are creative anticipations based on past repetitions that make network maps the historical future. Through habits, networks are scaled, for individual tics become indications of collective inclinations. Through the analytic of habits, individual actions coalesce bodies into a monstrously connected chimera. Habitual repetition, however, as Chun explains in Chapter 2, is constantly undone by the other temporality of networks: crisis. As many others have argued, neoliberalism thrives on crisis: it makes crisis ordinary. It creates super-empowered subjects called on to make decisive decisions, to intervene, to turn things around. Crises are central to habit change (interesting in the times we are in, huh). Chun references Lauren Berlant, who wrote of the impasse, an affectively intense cul-de-sac. Crises make the present a series of updates in which we race to stay close to the same, and in which information spreads not like a powerful, overwhelming virus, but rather like a long undead thin chain (SPOOKIER).

Chun then proceeds to explain her concept that new media are N(YOU) media; new media are a function of you. New media relentlessly emphasise you: What’s on your mind? Chun states that habits are strange, contradictory things: they are human-made nature or, more broadly, culture become (second) nature. To outline some of the contradictions habits embody (and habits are all about embodiment): they are mechanical and creative; individual and collective; human and nonhuman; inside and outside; irrational and necessary. Habits are both inflexible and creative, Chun continues – are habits what endure as society within collectives in which there is no society? Chun proceeds to reference Charles Duhigg, who has most famously argued that habit is a loop, initially provoked by a cue and a reward. However, once a body is habituated, the person anticipates the reward, so that craving drives the loop. This explanation of habit, Chun says, reveals that something very strange is happening, cloaked within this apparent renewal of habit: habit is becoming addiction – to have is to lose. Habits are central to understanding neoliberalism, in particular to comprehending its simultaneous dissemination and contraction of privacy.

This is where Chun begins a rant about neoliberalism, saying that neoliberalism, to repeat a cliché, destroys the public by fostering the private. It leads to the rampant privatisation of all public services and, at least in the United States, gives private corporations the rights of citizens. She references David Harvey, who said neoliberalism is a “theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterised by strong private property rights, free markets and free trade”. In a neoliberal society, Chun proceeds, the logic of the market has become its ethics; all interactions, from love to education, become economic transactions to be analysed in terms of costs and benefits. Neoliberalism emphasises individual empowerment and volunteerism. Its emphasis on individual interest and market transactions spreads the private (as market) and in doing so apparently destroys the private (as the intimate, darkened space necessary for growth and freedom). Yet privacy was traditionally considered a state of deprivation: in monarchical systems, ‘private’ subjects, unlike public ones, could hold no power. Chun then references John Stuart Mill, who most famously maintained that liberty depends on the separation of public and private spheres, effectively transforming privacy from something privative to something sacred. To all these thinkers, Chun says, in reference to people like Mill, the domestic was key to defining the boundaries between public and private, for the walls of the home sealed the private from the public. Chun continues: privatisation is destroying the private, while also fostering state surveillance and security as house arrest.
New media call into question the separation between publicity and privacy at various levels: from technical protocols to the internet’s emergence as a privately owned public medium, from the privatisation of surveillance to social networking’s redefinition of “friends”. Social media are driven by a profound confusion of the private and public. Internet users are curiously inside out – they are framed as private subjects exposed in public. New media erode the distinction between the revolutionary and the conventional, public and private, work and leisure, fascinating and boring, hype and reality, amateur and professional, democracy and trolling. The combination of gossip with politics is not an unfortunate aspect of new media and digital culture, but the point. New media blur these distinctions because they are part of the postindustrial/neoliberal economy.

Chun states that the second half of the book elaborates on how this logic thrives via an epistemology of outing, which constantly exposes open secrets. To break from this logic of leaking and outing, Chun proceeds, the book contends that we need to embrace the fundamentally non-personal nature of our networked communications. Habit, with all its contradictions, is central to grasping the paradoxes of new media: its enduring ephemerality, its visible invisibility, its exposing empowerment, its networked individuation, and its obsolete ubiquity. Chun explains that the book is divided into two parts – “Imagined Networks, Global Connections” and “Privately Public: The Internet’s Perverse Subjects”. The first part unpacks how and why ‘the network’ has become the defining concept of our era, revealing that networks have become key because they are imagined as ending postmodern confusion; these chapters trace how networks make possible groupings based on individual and connectable YOUs, and also elaborate on crisis as structuring the temporality of networks. The second part develops more fully the inversion of privacy and publicity that drives neoliberalism and networks, highlighting how networks capture subjects through users like YOU – that is, users who like YOU (‘friends’) and those determined to be like YOU (‘neighbours’).

These chapters, Chun further explains, both document the epistemology of outing that drives this logic and outline ways of inhabiting this outing. Further, each chapter of the book is framed around one of the following aphorisms of habitual new media:

  1. Always Searching, Never Finding
  2. Habit + Crisis = Update
  3. The Friend of My Friend Is My Enemy (and thus my friend) – investigates the odd transformation of the default internet user from the lurker to the friend, as indicative of a larger encroachment and recession of the private.
  4. I Never Remember; YOUs Never Forget

These aphorisms highlight both the dilemmas and opportunities YOU face as a small sovereign, but also the dilemmas of and opportunities for shifting the YOU, for keeping this shifter shifty. Each chapter revisits the question of habit through a critical lens. Rather than return to the alleged primary source, each chapter seeks to understand the current resurgence of habit by exploring how habits resuscitate certain critical thinkers; in other words, each chapter investigates how both new media and critical theory remain. How has "it's a network" become a valid answer, the end, rather than the beginning, of an explanation? Chun closes by arguing that networks have been central to the emergence, management and imaginary of neoliberalism, in particular to its conception of individuals as collectively dissolving society. Although they enable a form of cognitive mapping that links the local to the global, networks produce new dilemmas: neoliberal subjects are now forever mapping, but more precarious than ever; they are forever searching, but never finding. Further, networks are belatedly too early: they are both projections and histories, both theory and empirically existing entities.




Niederer, S 2018, Networked Images: Visual Methodologies for the Digital Age, Amsterdam University of Applied Sciences, Amsterdam (read pp. 1-20).

Lister, M et al 2009, New Media: A Critical Introduction, Routledge, New York (sections: Networks, Users and Economics, pp. 163-169; Wiki Worlds and Web 2.0, pp. 204-209; The Long Tail, pp. 197-200; User-generated Content: We Are All Users Now, pp. 221-232).

This week we began to look at the concept of "The Network".

In the first reading of this week, Networked images: visual methodologies for the digital age, author Niederer takes us through networked images, visual methodologies and the surge of online video culture.

Niederer opens the reading with the idea of Video Vortex, a network of visual artists, filmmakers and theorists interested in the surge of online visual culture following the 2005 rise of YouTube, a new platform used for video sharing. Video Vortex imagined a future online visual culture whose imaginaries still resonate today, and many more visual platforms have since been launched (i.e. Tumblr, Instagram, Snapchat, Pinterest), becoming bigger than we ever imagined. Researchers who study online culture have noticed that many of the current concepts surrounding online images and digital visual culture acknowledge a visual turn, also referred to as a pictorial turn.

Niederer states that many still focus on a theorisation of the single image and how the image becomes unstable through its digitalisation. This is where Niederer introduces us to the idea of Visual Methodologies. The Visual Methodologies programme proposes a different entry point: online images become "networked" when users like, share, comment on or tag them, and also when platforms and engines format, filter, feed and recommend them to others. Images may also be networked across platforms through their circulation, when the same image is fed to, or otherwise resonates on, different platforms and websites. For any research, this means the researcher has to consider ways of demarcating content that go beyond the single image and take into account the entire network of related content: the actors, platforms and websites that surround that content and its images. Niederer also explains that the interface is the set of opportunities presented to the user by the software, and that each platform and engine handles images in distinct ways, thereby revealing platform-specific techniques. Both the networked-ness and the technicity of online images call for an approach attuned to the medium. Taking networked-ness and technicity as a methodological entry point, it becomes clear that images should not be studied as separate from their network. Niederer references information designer Gabriele Colombo, who pointed out that much of the image analysis work done today starts with a folder of images along with information and metadata about their location, user engagement and other variables. This point of departure means that ongoing research questions cannot be answered through the study of a single image stripped from its context.
For example, we may need to study how images circulate, are engaged with, appropriated, made into memes and changed over time. This kind of research opens up important questions about both the over- and under-representation of certain works of art in the writing and circulating of content. Images have not only made it to the foreground of digital culture; they have also made a mark on a wide range of research practices. Niederer then explains Visual Methodologies further: captured by the term 'Visual Methodologies', such research encompasses both the study of images, their interpretation and meaning-making, and the use of images for research. Niederer outlines the strands of research:

  • The first strand of research looks at four sites wherein the meanings of images are made: The site of production (how was the object made?), the site of the image or object itself (what does the image look like?), the site of its circulation (where and how does the image travel?) and the site of its audience (how is or how was the image seen and by which audiences?).
  • The second strand of research examines images as tools and instruments for research. Here we can situate methods like photo elicitation, which uses photographic imagery to evoke information, feelings and memories from an interviewee by presenting them with pictures and asking for their associations, or, conversely, by presenting an issue or concept and asking for their associated images. Niederer references Gillian Rose, who calls for a mixed-methods approach that will enable the researcher to explore in more depth and detail the role and meaning of images, their audiences (who is included and excluded), and their circulation.

Niederer then explains that the programme of the Visual Methodologies Collective being developed in Amsterdam zooms in on visual methodologies for the digital age. This demarcation calls for a mix of the disciplines, skills and knowledge necessary to do this kind of research, including digital research methods, design, (visual) storytelling, new media theories and concepts, programming, and critical making. The programme follows the same categories introduced by Rose, spanning the study of images, users and platforms.

Niederer ends this portion of the reading by discussing image research, images as content, and image research and platform vernaculars. When discussing image research, Niederer states that we are moving from images as data to images as content. As Flickr became a place of abundant image production and sharing, we, as users, only get to witness the occasional example: a picture posted by a friend or family member, perhaps. We no longer have the illusion that we can keep up fully with what is being shared. In 2012, Flickr uploads already amounted to 350,000 images per day, on average. In 2018, to mention a few visual platforms and their volumes of use:

  • Instagram: 95 million images per day
  • Facebook: 350 million images per day
  • Snapchat: 9,000 snaps per second
  • YouTube: 300 hours of video uploaded per minute

A massive amount of visual content is also watched and shared by users; in fact, content that includes images is much more likely to be shared and engaged with on social media today:

  • Instagram and Snapchat users collectively watch 6 billion videos daily
  • YouTube users watch 5 billion videos per day

As those figures illustrate, the 'pictorial turn' is not so much a theoretical shift from text to image as a practice driven by users and facilitated by platforms, in which users increasingly share and engage with visual content. These practices have sparked a range of new research methods as well. Niederer explains that the methodological entry point is that images can be studied through their networked-ness. The diffusion of networked images also opens up a way to study affective publics that are rendered through shared sentiment, opinion or affect. Studying the ways in which images are repurposed (memes, filters and other visual-textual elements) provides new insights into the dynamic user cultures of a particular platform. Platforms are not only carriers of content and channels of its distribution, but also sites of image production and an entry point to visual research methods as well as novel conceptualisations. Looking at images as content, Niederer states that research rooted in the arts and humanities can create speculative and experimental inquiries into software and its inputs, interfaces and outputs. This focus provides fertile ground for the study of digital culture and its artefacts. For the quantitative study of images as content, we can draw from the field of content analysis, developed in communication science and known for its unobtrusive methods and its conducive approach to all content types (text, image, sound and audiovisual). Traditionally, content analysis has focused on existing data sets such as a collection of TV broadcasts, the photographs in newspaper articles on a particular topic, or a set of comic books. Digital media, however, can be published or created online and enriched with new opportunities for navigation and interaction.
Digital media images can be placed in news articles that are networked through in-text hyperlinks and recommendations to similar articles, or pulled into social media using social media buttons. Lastly, Niederer discusses image research and platform vernaculars. In addition to researching special collections on a particular issue or theme, research can also address how platforms as a whole may have a particular visual language. In line with 'platform vernaculars', which refers to the different narrative patterns that shape content and information flows across platforms, we can speak of visual vernaculars as distinct visual patterns and practices on different platforms. Visual vernaculars research contrasts images from different platforms, treating each platform as offering a "window" on a particular topic or issue. Niederer finishes by stating that this approach offers researchers who think critically about the limitations of studying social media content, and rightly so, a productive way forward, by asking: what is this topic according to Twitter? What is it according to Instagram? Do they provide identical, similar, or distinct views and descriptions of the same topic? Such questions help create an understanding of both the textual and visual vernaculars as well as the cultures of use for each platform.

Moving to the second reading for this week, New Media: A Critical Introduction, we focused on the sections Networks, Users and Economics (pp. 163-169); Wiki Worlds and Web 2.0 (pp. 204-209); The Long Tail (pp. 197-200); and User-generated Content: We Are All Users Now (pp. 221-232). Author Lister takes us through these concepts and breaks down exactly what they are.

Beginning with Networks, Users and Economics (pp. 163-169), Lister states that what we now understand to be the internet in general, and the web in particular, is the product of a number of factors:

  • Its method of development has not been by design;
  • Instead its protean identity is reproduced by a mix of fandom, community, commerce and business, linked by technologies that are both privately and publicly owned and variously regulated; in other words,
  • The internet came into existence as a result of numerous factors, accidents, passions, collisions and tensions;
  • Its ongoing development must therefore be seen as the interaction between economic and regulatory factors and the communicative practices discussed further in this section.

Lister notes the way in which the desire for communication and the pressures of commercialisation have interacted to bring us Web 2.0 and its expressions in the form of social networking sites; the ways in which the interaction between the culture of open source and commercially produced and protected software gives the development of networked digital media a distinctive character; and the ways in which the development of the internet has given rise to new cultural practices that have become a threat to the interests and business practices of huge corporations, while at the same time giving rise to new media behemoths in online distribution, retailing and services. An obvious example is the way in which the ownership of intellectual property in media, and the desire to protect that ownership, competes with the ongoing enthusiasm of users to swap files via a myriad of technologies, some of which developed in a particular way directly as a result of the need to get around legal prohibitions on earlier methods. Lister explains that it's this type of interaction between enthusiasm, politics, commerce and technology that he wishes to explore across the myriad of geeks, businessmen, students, housewives, children, adults, gamers and gardeners that make up the web. To put it simply, he states, to understand networked media it's necessary to understand their development as an ongoing product of the tension between culture and commerce. Lister adds that 'the internet' simply describes the collection of networks that link computers and servers together, and includes the definition of the internet given by the Federal Networking Council (US, 1995):

  • 'Internet' refers to the global information system that (i) is logically linked together by a globally unique address space based on the 'Internet Protocol (IP)' or its subsequent extensions/follow-ons; (ii) is able to support communications using the 'Transmission Control Protocol/Internet Protocol (TCP/IP)' suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high-level services layered on the communications and related infrastructure described herein.

Lister goes on to say that this primarily technical definition argues for an internet defined by the ways in which computers are able to send and receive data through the globally agreed protocols that permit computers to link together. The important aspect of such a definition is how minimal it is: the internet is here simply a means for computers to communicate in order to provide (undefined) 'high-level services'. The definition is intended to facilitate the flow and exchange of data. Built into such a definition is the concept of 'open architecture'; there is no attempt here to prescribe how or where such data flows. Previous 'mass media' (e.g. newspapers, film, TV) were designed as systems to send messages from a centre to a periphery; here is a system designed from the outset to provide circulation of information. This 'open architecture' model was envisioned as early as 1962 by the visionary J.C.R. Licklider, who wrote a series of memos at MIT describing his 'Galactic Network' concept. Lister explains that Licklider became the first head of computer research for the Defense Advanced Research Projects Agency (DARPA) in the US, and it was this Pentagon-funded agency that eventually developed the protocols referred to above in order to allow computers to form networks that could send small packets of data to one another. This is where Lister introduces the history of the internet, stating that the Internet Society records the growth of computer-based communications from a system based around four hosts/servers in 1969 to 200,000,000 hosts by 2002. These hosts supported an enormous variety of networks, all of which developed from the initial scientific and defence-oriented networks of the original internet. These computer engineering histories determine much of the character of the internet as we experience it today, especially the idea of an open architecture. The internet has a history that stretches back to the Second World War.
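The packet idea behind those DARPA protocols can be sketched in a few lines. This is a purely illustrative toy, not real TCP/IP: a message is split into small, numbered packets that may travel independently, and the sequence numbers let the receiver reassemble the original in order.

```python
# Toy illustration of packet switching (not real TCP/IP): a message is
# split into numbered packets that may arrive in any order, and the
# sequence numbers let the receiver reassemble the original.
def to_packets(message, size=4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore the original message, whatever order packets arrived in."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("open architecture")
scrambled = list(reversed(packets))  # simulate out-of-order arrival
print(reassemble(scrambled))  # prints "open architecture"
```

Because no single route or arrival order is prescribed, this sketch mirrors the 'open architecture' principle: the network only agrees on how packets are addressed and reassembled, not on how they travel.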
The discursive, technological and economic developments of the internet all serve to shape our experience today. The history of the internet draws upon a range of approaches, some of which are synthesised as the study of Computer Mediated Communication (CMC). The study of CMC has primarily developed as a socio-linguistic discipline based in communications theory and sociology. While there is some overlap with media studies in a common concern for understanding forms of technologically mediated communication, it was for many years by no means clear how the internet was a medium in the same way that TV, film or photography were distinct media. It has become increasingly clear, following Bolter and Grusin's model of remediation, that as existing media find new distribution channels online, they in turn change their cultural form. Hybridising forms of new media emerge through the interaction between existing forms and the new distribution technologies of the net. All media producers now have to consider what TV executives call '360-degree programming': how a TV text will have an online life, how audiences will be offered additional interactive experiences, and how a media product might become 'transmedial' by generating revenue across a range of audiences and platforms linked by internet marketing and distribution. Lister explains that leading scholar Steve Jones (1994) summed up the inflated claims for the impact of what was then termed Computer Mediated Communication (CMC); he observed that popular and critical writing claimed the net would:

  • Create opportunities for education and learning
  • Create new opportunities for participatory democracy
  • Establish countercultures on an unprecedented scale
  • Ensnarl already difficult legal matters concerning privacy, copyright and ethics
  • Restructure man/machine interaction

Lister further notes that David Gauntlett's review of 'some of the main issues' in 2004 displays strong continuities with the fundamental issues identified by a previous generation of CMC research. Gauntlett summarises the research in the field as:

  1. The web allows people to express themselves (via putting up their own sites through social networks)
  2. Anonymity and play in cyberspace (Gauntlett extends the earlier CMC-based work that seized on the possible anonymities of net-based communications as a living embodiment of post-structuralist identity theory, and asserts that this is where queer theory can really come to life, because the internet breaks the connection between outward expressions of identity and the physical body)
  3. The web and big business (here Gauntlett makes the excellent point that throughout the early phase of the net's development the dominant discourse on the economics of the web was that business interests would destroy the culture of the web, but that nowadays the bigger panics run in the opposite direction: big businesses are scared that the internet will ruin them)
  4. The web is changing politics and international relations

Moving to Wiki Worlds and Web 2.0 (pp. 204-209), Lister introduces us to the idea of Web 2.0, a term coined in 2003 by media consultant Tim O'Reilly. The idea of Web 2.0 is that a particular assemblage of software, hardware and sociality has brought about the widespread sense that there is something qualitatively different about today's web. This shift is allegedly characterised by co-creativity, participation and openness, represented by software that supports, for example, wiki-based ways of creating and accessing knowledge, social networking sites, blogging, tagging and 'mash-ups'. Web 2.0 has a clear economic goal: O'Reilly introduced the idea as a phoenix to resurrect the internet economy from the still-smouldering ashes of the 'dotcom bubble' crash of 2000. This collapse marked a turning point for the web, as well as a call to action in the form of 'Web 2.0'.

Next, in The Long Tail (pp. 197-200), Lister gives us a rundown of the concept of 'The Long Tail'. One of the ways in which the new dynamics of global economics both shape and reflect the tension between economic determination and media cultures can be seen in the theory of Long Tail economics. Originally argued in Wired in October 2004 and then developed as a book in 2006, Chris Anderson's work on the Long Tail is, Lister explains, one of the most compelling accounts of the ways in which conventional media economics have changed in post-network cultures. The implications of the Long Tail analysis are far-reaching, arguing that the economic basis of production is changing in ways that unlock market diversity on an unprecedented level. Anderson argues that the capacity of networked communications to connect with a multiplicity of niche markets ensures that lower-volume products can attain a sustainable margin of profitability. Existing media economics have had two major characteristics:

  • One is the 'hit-driven' economy: producers (of TV, film, music, etc.) have to produce several flops or mid-ranking products to achieve the one hit that will sustain the enterprise.
  • The other is the 'first copy' cost principle: the costs of producing the first copy of a newspaper or film are very high, but thereafter margins of profit depend on the cost of distribution; in newspapers this has traditionally been low, as each paper is cheap to print, while in film each print is expensive to make.

Lister goes on to state that successful mass media economics depended upon highly capitalised businesses able to spread the risks of the hit-seeking market as well as to mass-produce products and get them to the right kind of consumers. These conditions had the effect of making media production a high-cost enterprise; low-volume products were unlikely to get made since they lingered, relatively invisible, in the 'Long Tail' of the demand curve.
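Anderson's demand curve can be given a rough numerical sketch. The figures below are invented for illustration (they are not Anderson's data): demand is modelled as a Zipf-like power law, and the aggregated sales of the many low-volume titles in the tail are compared with the hits at the head.

```python
# Hypothetical Long Tail sketch: model demand as a Zipf-like power law
# and compare the "hits" at the head with the aggregated niche tail.
def zipf_demand(rank, scale=1_000_000, exponent=1.0):
    """Modelled sales for the title at a given popularity rank."""
    return scale / rank ** exponent

CATALOGUE = 100_000  # total titles available online (invented figure)
HITS = 100           # the head of the curve (invented cut-off)

head = sum(zipf_demand(r) for r in range(1, HITS + 1))
tail = sum(zipf_demand(r) for r in range(HITS + 1, CATALOGUE + 1))

print(f"Top {HITS} hits: {head:,.0f} modelled sales")
print(f"Remaining {CATALOGUE - HITS:,} titles: {tail:,.0f} modelled sales")
```

Under these assumptions the tail collectively outsells the head, which is Anderson's point: once networked distribution makes the whole catalogue addressable at negligible marginal cost, the niches become economically significant.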

Finally, in User-generated Content: We Are All Users Now (pp. 221-232), Lister explains that one of the ways in which the internet has become so central to contemporary media culture is the participatory opportunities it has offered audiences. The history of the take-up of these opportunities shows how the marginal possibilities offered by the net for audiences to interact with media are now refashioning the whole enterprise of what is termed 'Big Media'. Audiences have become 'users', and user-generated content has started to become a real competitor to traditional media; the impact of the internet on traditional media institutions is stronger than anyone might have predicted. The growth of the blogosphere, the impact of peer-to-peer music distribution and the explosion of YouTube in 2006 have all challenged the foundations of the news media industries. The traditional gatekeepers of culture, the filters of news and the guardians of quality have all had to adjust to the realities of participatory culture. Studies of fans and fan cultures have spearheaded the theoretical construction of this shift from 'audiences' to 'users' in media studies. The incursion of the 'ordinary person' into the bastions of media privilege is experienced as both opportunity and threat by the industries themselves, and has been understood by academic researchers primarily through the history of active audience studies. 'Fans' were the first groups to avail themselves of the mass of website material that exists in a symbiotic relationship with other media. This seamless lattice of mediation can be seen as the extension of mass mediation into more and more of our time and space; it also brings within our reach the possibility of becoming producers in our own right. Every SNS post or conversation in a chatroom, every home page and downloaded MP3 playlist, facilitates the individual communicating in a pseudo-public mode of address.
What is clear is that a great deal of web use facilitates a feeling of participation in media space.



In Tuesday's Real to Reel class, Eileen, Ollie and I went out into the CBD to take like-subject images of people's smiles. While I titled this piece "Smiles All Round", it is quite obvious that there are, in fact, not all smiles around. I think these images tell us a lot about people, because people's expressions tell us a lot about them. It's interesting to see how many people were reluctant to show us their smile and felt awkward in the process of having the photos taken. Most people felt quite vulnerable showing us their true smile, so much so that some didn't smile at all. This signifies that not all people are open to strangers. It signifies that not everyone wants to show their emotions. It signifies that there is so much more going on behind that smile than we know.

These last images are particularly interesting because of the AirPods in the boys' ears. We approached them thinking that they were open to conversation because they were engaged in their own. However, we quickly realised that they couldn't hear us because of these tiny things, or at least that they used them to pretend so; I know this because they were engaged in their own conversation prior to us approaching them. This signifies how closed off our society is, and how easy it is to shut people out. Is this because technology enables us to be this way?




Norman, D 1998, The Design of Everyday Things, Basic Books, New York (sections: Preface, pp. vii-xv; Chapter 1, pp. 1-13; Chapter 4 (constraints), pp. 81-87; (computers), pp. 177-186).

Norman, D 1999, 'Affordance, conventions and design (Part 2)', Nielsen Norman Group, viewed 1 April 2012.

The Design of Everyday Things presents us with the idea that far too many items in the world are designed, constructed and foisted upon us with no understanding of, or even care for, how we will use them. Norman states that while the problem might sound trivial, it is the difference between pleasure and frustration. Norman makes reference to perceptual psychologist J.J. Gibson's original notion of "affordances", applied to design practice, which focuses on making the end users of a product or service an integral part of the design process. The Design of Everyday Things provides an insight into everyday things that are unremarkable, outrageous or otherwise ill-designed and unintuitive to use. Norman was perplexed and fascinated by the fact that people come across new things every day, yet know how to use them. Norman argues that if an object is well designed, you don't have to relearn how to use it after learning how to use it once; yet this isn't the case with a lot of everyday items. Norman wished to improve the design of everyday objects by putting forward some principles for designers to follow. There were a number of concepts, but it was affordances that stuck out to most designers. In The Design of Everyday Things we also learn the concept of the "Norman door": a door whose design misleads users about how it should be operated, reflecting the designer's lack of attention to the user.

What are affordances? Norman outlines that the term affordance refers to the perceived and actual properties of a thing, primarily those that determine how the thing could possibly be used. Affordance refers to the relationship between an object and a person: what the person can do with the object. The affordance is jointly determined by the properties of the object and the abilities of the person interacting with it. Norman suggests that a person forms a 'conceptual model' of how things are used, and this model is shaped by the affordances.

Affordance, conventions and design (Part 2) further develops the ideas presented in The Design of Everyday Things, diving into explanations of affordances, constraints and conceptual models.

Norman explains in this article that the word "affordance" was originally invented by J.J. Gibson to refer to the actionable properties between the world and an actor (a person or animal). To Gibson, affordances are relationships. They exist naturally; they do not have to be visible, known or desirable.

Norman explains that his previous work was really about "perceived affordance", stating: "When I get around to revising POET, I will make a global change, replacing all instances of the word 'affordance' with the phrase 'perceived affordance'." Norman notes that the designer cares more about what actions the user perceives to be possible than what is true. Moreover, affordances, both real and perceived, play very different roles in physical products than they do in the world of screen-based products. In product design, where one deals with real, physical objects, there can be both real and perceived affordances, and the two sets need not be the same.

When Norman moved to England to work at the Applied Psychology Unit in Cambridge, his struggles with British water taps, light switches and doors inspired him to write The Psychology of Everyday Things. A major theme of this was the attempt to understand how we manage in a world of tens of thousands of objects, many of which we encounter only once. When you first see something you have never seen before, how do you know what to do? Norman decided the answer was that the required information was in the world: the appearance of the device could provide the critical clues required for its proper operation. Norman argued that understanding how to operate a novel device has three major dimensions: conceptual models, constraints and affordances. Norman's conceptual models point out that good design is also an act of communication between the designer and the user. This can be broken down as follows:

  • Feedback: In design, it’s important to show the effect of an action.
  • Constraints: The surest way to make something easy to use, with few errors, is to make it impossible to do otherwise. To do this, we constrain our choices.
  • Affordances: A good designer makes sure that appropriate actions are perceptible and inappropriate ones invisible.

Norman then moves on to explain constraints and conventions. Norman's conventions and constraints are explained and understood in five sections:

  • Physical constraints: are closely related to real affordances: For example, it is not possible to move the cursor outside the screen: this is a physical constraint. Locking the mouse button when clicking is not desired would be a physical constraint. Restricting the cursor to exist only in screen locations where its position is meaningful is a physical constraint.
  • Logical constraints: use reasoning to determine the alternatives. Thus, if we ask the user to click on five locations and only four are immediately visible, the person knows, logically, that there is one location off the screen. Logical constraints are valuable in guiding behaviour. It is how the user knows to scroll down and see the rest of the page. It is how users know when they have finished a task. By making the fundamental design model visible, users can readily (logically) deduce what actions are required. Logical constraints go hand-in-hand with a good conceptual model.
  • Cultural constraints: are conventions shared by a cultural group. The fact that the graphic on the right-hand side of a display is a “scroll bar” and that one should move the cursor to it, hold down a mouse button, and “drag” it downward in order to see objects located below the current visible set (thus causing the image itself to appear to move upwards) is a cultural, learned convention. The choice of action is arbitrary: there is nothing inherent in the devices or design that requires the system to act in this way. The word “arbitrary” does not mean that any random depiction would do equally well: the current choice is an intelligent fit to human cognition, but there are alternative methods that work equally well.
  • Convention: is a constraint in that it prohibits some activities and encourages others. Physical constraints make some actions impossible: there is no way to ignore them. Logical and cultural constraints are weaker in the sense that they can be violated or ignored, but they act as valuable aids to navigating the unknowns and complexities of everyday life. As a result, they are powerful tools for the designer. A convention is a cultural constraint, one that has evolved over time. Conventions are not arbitrary: they evolve, they require a community of practice. They are slow to be adopted, and once adopted, slow to go away. So although the word implies voluntary choice, the reality is that they are real constraints upon our behaviour. Use them with respect. Violate them only with great risk.
  • Symbols and constraints: are not affordances. They are examples of the use of a shared and visible conceptual model, appropriate feedback, and shared, cultural conventions.
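Norman's own example of a physical constraint in screen-based products is that the cursor cannot be moved outside the screen. A minimal sketch of how such a constraint is typically enforced (the function name and screen dimensions are my own, assumed for illustration):

```python
# Assumed screen dimensions for the sketch.
SCREEN_W, SCREEN_H = 1920, 1080

def move_cursor(x, y, dx, dy):
    """Return the new cursor position, clamped to the screen bounds."""
    new_x = max(0, min(SCREEN_W - 1, x + dx))
    new_y = max(0, min(SCREEN_H - 1, y + dy))
    return new_x, new_y


# The constraint makes the invalid action impossible, not an error:
print(move_cursor(10, 10, -50, -50))  # prints (0, 0)
```

This is the sense in which physical constraints cannot be ignored: the user can try to push the cursor past the edge, but the system simply stops it there, unlike a logical or cultural constraint, which can be violated.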

Norman closes the article by asking, “How do you know if the user shares the conventions? Why, with data, of course. This is something that cannot be decided by arguments, logic, or theory. Cultural constraints and conventions are about what people believe and do, and the only way to find out what people do is to go out and watch them. Not in the laboratories, not in the usability testing rooms, but in their normal environment… I still hear far too much dogmatism about what people really ‘want,’ what they ‘believe,’ or how they ‘really’ behave, but I see very little data. It doesn’t take much data. My partner, Jakob Nielsen, has long argued that you can get these data at a discount: three to five people will give you enough for most purposes (Nielsen, 1993, 1994). But they need to be real people, doing real activities. Don’t speculate. Don’t argue. Observe.”




Khoo, E, Hight, C, Torrens, R & Cowie, B 2017, ‘Introduction: Software and other Literacies’, in Software Literacy: Education and Beyond, Springer, Singapore, pp. 1–12.

‘Introduction: Software and other Literacies’ outlines the role and significance of software in contemporary society, presenting key concepts relevant to the study of software, including affordances, agency, human-machine assemblages and performance. Authors Khoo, Hight, Torrens and Cowie argue that software has become the engine of the contemporary information society, with code now part of the infrastructure of modern societies; they further state that we are living in a software culture, one which is fundamentally reshaping all areas of modern life. The authors state that if we wish to understand the contemporary techniques of control, communication, representation, simulation, analysis, decision-making, memory, vision, writing and interaction, our analysis cannot be complete until we consider the software layer that underlies all of these things. This means that all disciplines which deal with contemporary society and culture – architecture, design, art criticism, sociology, political science, humanities, science and technology studies, and so on – need to account for the role of software and its effects in whatever subjects they investigate.

The chapter explains that despite the diversity of hardware involved (e.g. the devices on which Instagram runs), at base these all involve us as users engaging with different kinds of applications, platforms and infrastructures constituted through software code. At an infrastructural level, for example, the internet and the World Wide Web (the Web hereon) are themselves organised through software-based protocols that govern largely automated processes that are rarely visible to everyday users unless they fail. Once we peel back the layers of contemporary society, we recognise that software also runs in the background of many of our key institutions and systems, from the information systems of a hospital, to the planning and organisation of schools, to the disturbingly sophisticated communication and targeting capabilities of the military-industrial complex, to the automated financial exchanges that drive global share markets. The authors advise that it is useful to think of these kinds of everyday practices as coded, in the sense that they are deeply embedded within and enabled by computer code; they have been translated into software and augmented or transformed into something else because of this fact. This has profound implications for understanding aspects of culture such as the emergence of the internet and the Web in the 1990s. The Web is not just a means of distribution, but also constitutes a medium in itself with distinctive characteristics, and underlies more recent and interconnected developments within mobile and gaming devices. These developments constitute a rapidly expanding universe of software culture fed by recombinatory, evolutionary growth: as a capability, function or set of tools becomes coded, it becomes available to be recombined in new ways for different platforms and contexts. As new spheres of human activity become coded, they become part of the broader emergence and dominance of software culture.

The chapter then presents the field of software studies. Software studies adopts the perspective that the study of software partly involves investigating the cultural discourses that are embedded in code, together with the broader implications for users of how these discourses operate through the application of that code. Coding (or programming) is a form of writing which inscribes types of actions to be performed using a computer. Software is a neglected part of the digital revolution: it was not recognised as a distinct industry until the 1960s, developing in the wake of the PC age in the 1970s. Practitioners of this industry are software engineers, who describe software development as an advanced writing technique that translates a text or group of texts written in natural language into a binary text or group of texts. No form of code is perfect; it emerges from human endeavour and is inscribed with the conditions of its creation, as with all cultural artefacts. Software is also an evolving part of culture, with the authors identifying cultural software as that which is central to cultural production. In its broadest terms it includes:

  • The popular forms of software that we use ourselves (word processors, image-manipulation tools, internet browsers, media players)

The authors define an affordance as an action possibility or an offering. Possible actions on a computer system include physical interactions with devices such as the screen, keyboard and mouse. Affordances allow us to do particular things: to select, to view, to manipulate in specific ways.

If we look at a software application as providing a set of possible actions, it is vital to map these affordances in a specific hierarchy. The interface for a piece of software embodies that hierarchy of affordances: the interface is the set of default tools we find most easily, for example on the ribbons or drop-down menus of one of the MS Office programs. There are key aspects of software culture that are useful to consider when seeking to understand the contention that software entails a form of agency. A core premise of software studies is the need to move away from seeing software platforms and applications as neutral, as simply things that you do something with. Programming code needs to be understood broadly as engendering both forces of empowerment and discipline. Software applications and platforms are attractive precisely because they are designed toward increasing efficiencies and productivity, generating entirely new markets, and providing new forms of play and creativity. However, they also serve as a broad range of technologies that ever more efficiently and successfully represent, collate, sort, categorise, match profiles and regulate people, processes and places. This tension between empowerment and discipline offers a broad frame for understanding the layered and complex role which software plays at a variety of levels, especially within networked media. Human agency operates in a complex way within software culture; we become part of human-machine assemblages where agency becomes contingent on a range of human and non-human factors. Networked software, in particular, encourages a communicative environment of rapidly changing feedback mechanisms that tie humans and non-humans together into new aggregates. We co-create with cultural software, exploring and negotiating its potential to enable and constrain specific new practices.
Contemporary media is experienced, created, edited, remixed, organised and shared with software… to understand media today we need to understand media software — its genealogy, its anatomy (interfaces, operations) and its practical and theoretical effects. How does media authoring software shape the media being created, making some design choices seem natural and easy to execute while hiding other design possibilities? How does media viewing/managing/remixing software shape our experience of media and the actions we perform on it? How does software change what “media” is conceptually?

The authors argue that, at a fundamental level, we are collaborating with programming code when we engage with, respond to, or create content using an application or platform. But this is also the point where the empowering or disciplining possibilities of software are actualised. When using the most basic operations of a word-processing application, for example, nothing happens without the active intervention of the user. Generating textual content, as with any creative work involving software, involves human users in collaborative performances with a machine. In this sense, there is assumed to be a complex interplay between affordances and performance, which potentially plays out in a unique way each and every time a user engages with any application. We now interact with dynamic software performances, because what we are experiencing is constructed by software in real time (e.g. a video game or website). As an example of how an interface imposes its own logic on media, consider “Cut and Paste” operations. Digital literacy is the awareness, attitude and ability of individuals to appropriately identify and access digital tools and facilities; to identify, access, manage, integrate, evaluate, analyse and synthesise digital resources; to construct new knowledge, create media expressions and communicate with others in the context of specific life situations, in order to enable constructive social actions; and to reflect upon this process.
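The cut-and-paste example can be made concrete with a small sketch. Everything here (class, method names, sample text) is my own illustration of the general idea, not code from the chapter: the interface lifts content out of its context into a clipboard buffer and re-inserts it elsewhere, regardless of what that content is, and nothing happens without the user's intervention.

```python
class Clipboard:
    """A minimal clipboard: cut removes a span, paste re-inserts it."""

    def __init__(self):
        self.buffer = None

    def cut(self, document, start, end):
        # Lift the selected span out of its context into the buffer.
        self.buffer = document[start:end]
        return document[:start] + document[end:]

    def paste(self, document, position):
        # Nothing happens without prior user action (an empty buffer).
        if self.buffer is None:
            return document
        return document[:position] + self.buffer + document[position:]


clip = Clipboard()
doc = "software shapes culture"
doc = clip.cut(doc, 0, 9)    # doc is now "shapes culture"
doc = clip.paste(doc, 0)     # doc is back to "software shapes culture"
```

The operation treats text as an interchangeable block of characters; the same cut/paste logic applies equally to images or audio in other applications, which is one sense in which the interface imposes its own logic on the media it handles.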

In the end, the authors of ‘Introduction: Software and other Literacies’ hypothesise that there are three progressive tiers of development towards software literacy:

  1. A foundational skill level, where a learner can use a particular piece of software
  2. An ability to troubleshoot and problem-solve when an issue is faced within the software
  3. The ability to critique the software

REAL TO REEL: W1 – EVE ARNOLD INSPIRED MINI-ESSAY – “Strangers in the City (2020)”


When you’re walking around the City of Melbourne, it’s easy to find yourself getting lost in the sea of people – all walking at the same fast pace, eager to get to their next destination. No one stays still in the City. Or at least it may seem that way. It seems rare to find someone sitting, doing nothing at all, truly nothing. Even if they are still, they are still waiting for something or someone.

When I first approached these boys, they were standing around, laughing and joking. When I asked to take their picture, they were quick to look at each other for affirmation before saying “yes”. None of them would exchange their names with me or engage in conversation. While what appealed to me about these boys was that they seemed carefree, they immediately took up a position sitting down for the photograph and were very quick to close off, despite the fact that they had oozed confidence before this.

It’s interesting how reluctant people are to converse with strangers in the City in this modern day. I wonder if it has always been this way, or if it’s something that has changed over time.

This man never gave me a name, but he did give me a warm “yes” when I asked if I could take his portrait. “Can I finish my ciggie first?”, “You don’t need to, it’s fine to have it in there”, “I’d rather not”.

I used to live by the Sea.

It’s different there. People are more open. They walk down the street without shoes, and seemingly without a care in the world. I used to go for walks with my grandfather when I was a little girl, and I took pleasure in saying hello to the locals, giving them a loving smile, and asking how their day was. I don’t do that anymore. People are uncomfortable with that here. And to be frank – so am I.

This girl smiled at me. Before I even approached her. I’m happy that she did.