Thinking seriously about post-humanism

I am beginning to think that these discussions of humanism, and of what it means to be human, are fundamentally informed by a Cartesian notion of dualism. When Descartes said ‘I think, therefore I am’ he created the potential for the human to be something more than the physical body, and this same understanding seems to have permeated definitions of humanity.

It has also allowed Freud to go on to talk about mind and body in complicated ways and opened the door to Marx’s notion of the dialectic and to an expansion of ethical philosophy. But what if it is simply nonsense?

To put it bluntly, where is the evidence that mind exists apart from body? Increasingly, we understand that the brain is a lump of meat (I enjoyed that video) with some electrical charges whizzing through it, and although we might imagine we feel things in our head we probably don’t. All that we have, and maybe Keats understood this, is a life of sensation.

I have argued before in this blog that Descartes could have got out of his dilemma by talking to the man in the next cubicle and agreeing to agree that they both existed in material form. They could have gone on from there to construct quite a satisfactory universe and drawn in other people to help build reality. This would have made it easy to see off any malignant demons which appeared. Unfortunately, he didn’t, and the outcome is dualism – the idea that we need two things inside ourselves to agree among themselves about what is real.

When we look at the various ways in which humanity has been defined subsequently, the definitions constantly refer back to this idea of dualism. They talk about having other capacities which overcome the animal, physical side of our being: thinking before we act and not being driven by base desires, understanding the psychology that drives our animal selves around, and having a capacity for religious belief and the mystical which sits above our mundane everyday lives. The fact is that these are all chemical and electrical processes.

What provides them with some undeniable significance, either as sensations or values, is the social world and the way that social beings construct reality. Dualism, undoubtedly useful in some circumstances, is simply one of these constructs, but it manifestly isn’t a necessity or a prerequisite for being human. That is determined by what value we give to the sensations we experience while working collaboratively with others to share, define and normalise them. This seems a fair description of everyday life. I have a feeling that I haven’t had before, so I share it with somebody else and they tell me when they felt the same and we give it a name. If it’s a good feeling, we’re probably in love, and if it is a bad feeling they give us some antibiotics instead. It’s a simple process which happens throughout our lives and it starts with our mothers. It is not reliant on any notion of mind and body, although most mothers will introduce one of these as a social convenience or perhaps to underline why Kyle down the road is a bit of an animal while her son is undeniably human.

That makes for a better definition of post-humanism, which is the awareness that humanist approaches based on a flawed dualism are problematic although, as Derrida points out, they are also inescapable in terms of our ways of thinking. That makes post-humanism something much more positive, and if we can use technology to play around with these understandings then so much the better.

Badmington, Neil (2000) ‘Introduction: Approaching Posthumanism’, Posthumanism. <http://www.palgrave.com/PDFs/0333765389.Pdf>

Week 2 Artefact

An imaginary promotion for the logical next step in technology.

Defining Humanity

I can’t help but feel that Steve Fuller is really having an argument about how we define anything and then trying to hang some of his own ideas about why university education is important as the froth on the top of that debate. I got the feeling I was being led rather than challenged or questioned in that presentation!

It seems absurd to suppose there is some kind of emerging historical notion of humanity. His argument appears to be that there is some kind of intellectual elite leading this attempt down the ages to define humans by, in no particular order, additional non-animal characteristics (like the love of God!), bigger intellects and less hair (like Linnaeus), and greater goals (like Darwin and after). All of these approaches appear to overlook the nastiness of man’s inhumanity rather than his humanity. The Crusades were not an exercise in educational clubbishness, as Fuller appears to imply, but an excuse, fuelled by ideology and greed, to wage war on some people. The same is true of Cortés, Attila the Hun and Adolf Hitler, not to mention the slave trade and its imitations in any number of cultures down the ages.

It isn’t any wonder that people in the twentieth century challenged some of the more comfortable notions about humanity given the backdrop of two appalling world wars and the emergence of institutionalised racism and prejudice. Neither is it really any wonder in the 21st century that humanity with a capital H is criticised as a male club, characterised as ignoring the environment, gender and economics, and now sinking into an obsession with technology to create an alternative reality having ruined the one we’ve got.

Having said that, any time you see anyone espousing that tired old ‘things are getting worse’ argument and asserting that there used to be a golden age, you can assume they are on shaky ground: either getting on in years and nostalgic, or politically dangerous. It is a point of view which appears to assert a kind of flow in history which rolls backwards and forwards or, for them, from good to bad.

However, it is still fair to ask a question about what humanity might be. The answer, for a lot of people, is what it isn’t. Being human is not being entirely animal, not being entirely driven by instant gratification and unconstructed desire, not being stupid or uncaring, and so on. This is boundary theory, the place where we get the interesting idea that dirt is simply material in the wrong place. Foucault, whom Steve Fuller doesn’t appear to like, talks about normalising judgements, the everyday way that we speak about things which confirms the world as we build it. In other words, we don’t simply socially construct reality but we also work quite hard to maintain it. We don’t do this by asking each other what humanity is; we do it by reinforcing everyday value judgements and by disapproving of what lies beyond the boundaries. The recent gay marriage debate is a good example of people setting boundaries, and many of those who do it are neither gay, churchgoing nor much bothered about marriage, but since they have Twitter accounts or are members of Parliament they feel obliged to join in. Frequently, in the media, this process is made quite explicit when behaviours and individuals are characterised or described as inhuman. Child killers, suicide bombers and paedophiles, for example.

What we need to remember in this discussion is that just because humanity is a leaky concept, fuzzy around the edges, that doesn’t make it somehow flawed or inapplicable. If you think about it, your idea of a chair, of red or of a holiday is just as fuzzy. To me it seems entirely right that we should be working constantly to define our own humanity without the help of popes, dictators or anyone else for that matter who would like to tell us what being human is. That discussion is a place where we can express, exercise and develop our own moral values, principles and purposes as well.

As part of this, it is worth mentioning that you cannot be human if you are the only individual left on the planet. Humanity is a socially ascribed and developed characteristic, which is why we tend to view people who describe themselves as especially humane with some suspicion. Best to judge people by what they do rather than what they say! Obviously, the last man on earth is going to dispute this, but all he is holding onto are the vestiges of what people used to say. There are plenty of examples of how humanity can be made more flexible when the social bit is underplayed, which is why stranded sailors in lifeboats were happy to eat the cabin boy and why people in high-density housing are more likely to kill each other.

So, where does that leave humanity and technology? More to the point, why is technology seen as a threat to the notion of being human when a variety of individuals and ideologies, including Pol Pot, the Catholic Church and extreme Islam, have provided a much greater threat down the years?

In theory, it is possible to argue that if we meet virtually we meet face-to-face less, but given the billions of Facebook and Twitter postings each week and the exponential expansion of social media it would be absurd to suggest that we are communicating less. We are not just talking rubbish either. Twitter is a highly politicised forum for many people and an information exchange. Amazon reviews are an excellent source of shared wisdom about potential purchases and, increasingly, politicians are taking notice of mass petitions. Talk to the people who blog seriously with their primary school classes about the positive impact this has on their learning, and then try to say technology is damaging humanity!

Is there any evidence that the technology is driving us? In terms of the use debate, which we have already had, it is certainly offering some attractive options and products but that doesn’t make us passive consumers or users. Television, as probably the most influential technology of the twentieth century, was far more dangerous in terms of indoctrination and social control.

I have got a feeling that Steve Fuller would have concluded, if he had had more time, that believing in God was an excellent way forward and that MOOCs are not real learning. I think he’s wrong on both counts.

Different Views of Technology

Lincoln Dahlberg discusses the danger of treating technologies as ‘things’. There are plenty of examples in the discussion forums of people doing this and anecdotally recounting their need to have the latest iPhone or other technology. In one sense, they are feeling guilty about being controlled by technology and the secret desire to have a Samsung Galaxy or whatever!

Of course, it is patently obvious that many technologies address themselves to us as objects or artefacts and that isn’t a new process. I could identify a similar response to the ones discussed in the forums in enjoying the possession of a first Kenwood Chef food mixer, a Black & Decker Hammer drill and a Garrard SP25 Mark 3 (less challenged students will have to google this one). I’m not sure that technology has to have plugs on either. I think my first set of metric socket spanners (for fiddling with cars) might also qualify.

What all these technologies have in common is that they enabled me to do something I couldn’t do before or to do something more effectively, efficiently and enjoyably – tastier, lighter cakes and smarter shelf fittings. And that, I think, is where the need to have them came from. Exactly the same analysis applies to a new Apple Mac.

However, that doesn’t mean that the fact that we experience them as things says much about their status and creation. It does allow people to draw some fairly easy and probably false conclusions. When people say that Bill Gates is using his massive wealth and influence to coerce them into buying Windows 8, that is only part of the story. The production of Windows 8 has a long history and arises from what Marx calls ‘a complex social and institutional matrix’, which I suppose would involve the reputation of the product and the producer, technological development, commercial attractiveness, perceived need and demand, social reputation and status and its situation in a culture. In terms of almost any technological artefact, we can see these factors in play to various degrees.

One of the writers suggests a note of caution here. We shouldn’t start to assume that this is a kind of unknowable mush out of which the product emerges or, alternatively, a process which is inaccessible to us consumers – akin to the folk myth that alien technology from the Roswell landing gave us non-stick saucepans. We can usually track a clear history for any technology if we make the effort.

There is another tendency to see all of these things as part of a conspiracy where technology is a huge black cloud which is seeking to envelop us. I think that this is the technological determinism model in its dystopian form. So, every innovation is viewed with suspicion. People don’t read enough and they don’t get out enough and they are passive recipients rather than active participants, and technology is to blame. It is easy to see why people take this line, particularly those couch potatoes who might consider themselves to be digital immigrants.

Social determinism doesn’t quite follow the conspiracy line but seems to assume that there are elements that control technology – political, economic and social forces – fighting for control and ownership. It would be foolish not to admit that there are political discussions around technology and that they reflect the balance of power in particular societies. There are also more subtle debates about what is proper and appropriate.

I lean towards this analysis personally because I do think that culture and meanings are socially constructed. I also think there is something around here about what Foucault says about disciplinary practices, so that within the discussions there are normative judgements and practices that set up boundaries to exclude or include. I also think it would be daft to pretend that technologies do not impact on people. What I am less sure about is this sense in determinism that things are determined. There is not a destiny that shapes our ends – we do it!

MOOC Week One: Thinking about the Films

All of this week’s short films (linked alongside) could be seen as dystopian in some way, and some are much more so than others. Like all dystopias, they set the external threat alongside some kind of indomitable human, or at least biological, spirit, as if the two are different and lead to either good or bad outcomes. You cannot portray the dystopia without some sort of counter to it.

In terms of the digital culture, most of these filmmakers are still stuck in the person and machine metaphors, and the threat in these films is typically mechanical, automated, mass-produced and inclined not to work too well. When you think about that, it isn’t dystopian at all. Most people have this relationship with their iPod, their washing machine and their lawnmower. Not going down that road is what gets New Media described as ‘disturbing’. Is a digital culture located in the social space between people and technologies?

There are some interesting issues around responsibility. Are we to assume that the icons in Bendito Machine III come from space? If they did, they might work better! The closer link is to some kind of cargo cult. This stuff gets washed up, this society tries to make sense of it and, of course, it mostly gets it wrong. This is a very neat way of making sure that the encounter with technology is entirely innocent, which helps with the moral values in a lot of dystopian science fiction. Aliens that come from outside are easier to deal with than ones we created. That is what makes Blade Runner a more complicated film than Star Wars. We like to pretend that we aren’t accountable for the creation of iTunes but I’m afraid it’s down to all of us that it is so irritating to use.

With the aliens from Mars in War of the Worlds there is not a lot you can do except run, but in the real world the main thing about dystopian technology is that it isn’t outside of us or even outside of our control. We are not controlled by the machines or new technologies except to the extent to which we make use of them, and then there is a sense in which we can become dependent on them, but that dependency is entered into voluntarily. I have a car and I can’t do without it, blah blah, but if I didn’t have it I would do without it.

The next thing to say is that the relationship is not between you and me and the technology. The relationship is between us and in that sense I take the point that technology is neutral. Even a drone is neutral. It is simply how it is used and negotiated socially which gives it any additional property or value. I think perhaps that the ‘us’ is important here. The same goes for all technology. Facebook would be pretty crap if you were the only user! A lot of dystopias suggest the isolation of the individual in order to make the dystopia worse.

There is also a tendency to make a space for escape from the dystopia. In Thursday, everybody looks pretty stupid, especially the woman, which I don’t think is an accident. I think that might be done to allow a sophisticated viewer to feel a bit smart in comparison. In this short it is other people who are the slaves of the technology and who live in the dystopia. There seems to be a kind of space, it might be called an additional textuality, for the viewer to live outside it and fly above it like the birds. The same is true of the couple in Inbox who apparently transcend the threat of the dystopia through true love – not exactly a 21st-century theme!

Presumably, therefore, these two films avoid technological determinism and the inevitability of consequences arising from technology. It is happening, but you don’t have to join in. I haven’t gone on to the reading as yet but it looks as if technological determinism is pretty much the same as any historical determinism. The spread of fascism in Europe in the 1920s and 30s is akin to the spread of mobile phones in the 1990s at least in the way it is discussed. It is worth bearing in mind how disciplinary practices like education have a tendency to resurface and to do the same things in spite of historical change. Anyway, that’s the next bit.

Digital Cultures and e-learning

It is very easy to assume that the whole world has gone digital. To listen to my Twitter feed, you would assume that schools are constantly wired, blogging and teleconferencing with their flipped investigative curriculum informed by an exciting range of media. However, when I go into schools as I often do for work, I see old whiteboards used as blackboards, computer rooms with no children in them, textbooks, spelling tests, worksheets and homework journals. As always, the truth lies somewhere in between and we shouldn’t get too carried away.

The idea that there is a digital culture sounds good as well. That is where we digital natives live, sneering at the immigrants (interesting how old metaphors that ought to be discredited can resurface) with our iPads and operating at a higher level where we know things before other people do and we know things that other people don’t even know they don’t know. It is a culture with interesting boundaries as well, where Pearltrees and Pinterest are welcome but we would like to exclude the Daily Mail online! At yet another level, it is a culture where the individual can count, or can allegedly count, in the increasingly globalised world, and we like being those people.

Probably the oddness and personality of that mix is simply explained by the fact that we are discussing a culture and cultures are slippery things. We all have very different understandings of what culture is but, broadly, I would accept that it is a collection of social attitudes, assumptions and practices arising as a consequence of deeper social, economic, political and technological processes. Somewhere along the line, these loose conglomerations take on a more granular and networked structure and they begin to mutually reinforce one another so that some of the shared understandings take on more importance than others. Then, there is a point at which the culture becomes self-aware and reflexive and that changes the way that it understands itself and that may lead to a harder formulation of practices.

Think about living in revolutionary France in the 1790s. At what point do you start thinking that you are not just one of a mob burning down palaces and guillotining priests and the nobility, however enjoyable that is? Where and when do you realise that you are actually part of a revolutionary culture? Is it when somebody gives you a coloured sash or a manifesto to read? If you were told then that your activities were going to lead to an appalling musical and a worse film, would you have gone back to the allotment? The point is that the social behaviours come first and they begin to define the culture, which then becomes self-knowing. Presumably, along the way, people who exhibited the right behaviours but did not quite fit in with the emerging culture were guillotined as well.

It is also important to make the point that cultures are serious things. Without something like them you do not get changed behaviours and without changed behaviours you do not get genuine change. The so-called Arab Spring provided some good examples of where the cultural drivers began to emerge alongside the unfolding events. Sometimes, of course, it was hard to say which came first.

So, what kind of culture is a digital world entitled to? The first thing to say is that if by digital we mean the encroachment of new technologies on human activity then it is important to be inclusive. After all, why would we limit a discussion to media and ignore shopping? And it is not just shop assistants who are being put out of work by the digital culture but also architects, bankers, lawyers and estate agents so this digital culture certainly has its own problems.

That is worth mentioning because we sometimes look at the digital culture as being entirely beneficial. It breaks down the notions of them and us in the media, revolutionises publishing, encourages new forms of news, and invents slippery crossovers between art and visual electronics, and it is easy sometimes to see these areas of media consumption as being where the digital culture is placed. They are, of course, areas where the changes are particularly significant and that perhaps goes some way to explaining the emphasis, but when you hear people talking enthusiastically about cyberculture you can bet that the emphasis is fairly narrow.

One thing that the analysis of culture often appears to leave out is what I would call the epistemological shift. Rene Descartes spent a lot of time sitting on his own wondering how he could prove what he knew and suffering a lot of trouble from malignant demons on the way. Eventually, he decided that he existed because he was thinking (cogito ergo sum) and then, somewhere down the line, he gave God a look in. If only he had taken a walk down the corridor, then the chap in the next cubicle could have told him what he knew and they could have agreed to share it and that would have invented reality for them without any need for God, or mind and matter dualism. The next two hundred years could have been much simpler for everyone.

In essence, our reality, and that includes our culture, is social. So, our knowledge is social as well and so is the process of knowing. Descartes was right that we can’t know anything on our own but, with other people, we can agree that we know an awful lot more. There are still things we don’t know but, for the most part, we know what they are or, perhaps, we know of someone who does.

Philosophers panic that this leads to a terrible relativism but, of course, it doesn’t. It is the wisdom of the crowd which guarantees reality and the shared understanding that this is the table which you eat your meals off and that is a workbench. The distinction is to do with the understandings and not some innate, unknowable properties of tables. This might seem obvious today but English philosophers in the twentieth century spent a very long time fretting about the innate properties of all sorts of things.

That might seem like a diversion but I tend to think that cultures are knowledge-based and therefore you need to have a bit of a view of knowledge in order to talk about them. So, what are the characteristics of our new digital culture?

The first thing you have to talk about is participation. Participation is what gives us Twitter and Facebook as social movements and that is undoubtedly what they are. It has also given us a different conception of news which, for the first time in its history, is increasingly participatory. The official message that the London Olympic opening ceremony in 2012 was overtly lefty and liberal would probably have prevailed if it hadn’t been for nine million tweets saying something different. We are also now much more inclined to believe the tweeter or the individual with a camera on their phone rather than the official news station correspondent who, it turns out, is broadcasting from an adjacent country!

This kind of participation has also given people more rights over what can be known. The fact that people know stuff now and can tell each other what they know has rocked the establishment. Greedy politicians, lying journalists, corrupt policemen, cheating bankers and perverted priests are simply symptomatic of how far this uncovering can go once it gets started. More worryingly, these stories could not see the light of day in the pre-digital culture and were simply suppressed by the forms of media which existed then. It is important to make that point. They were not suppressed by governments or government agencies but simply by the ways in which the world could be represented at that time. This is a paradigmatic shift in anybody’s terms and important in terms of political and social changes as well.

Next, there is some interesting stuff in the digital culture about how close we want to be to the knitting. Intimacy and immediacy are important in the participatory media. We like to hear stuff from the horse’s mouth and from the main players, not the mouthpieces. We also want everything to be live and immediate. We don’t want to be told what happened any more but, instead, we want to smell the flames, so the aim of the digital culture is to take us closer to the reality. Of course, in a sense that is illusory and, however close you get, there is a line of pixels in between.

So, linked to this need for immediacy there is something else which is sometimes called hypermediacy. This is the creation of a sense of immediacy without being immediate. Pornographic film and music videos work in this area, not actually delivering more reality but creating a sense of it through disparate images and music which claim in some way to be more immediate than the pixel reflections. This is now so endemic we are barely aware of it. We look at multiple screens without a thought and don’t think of them as confusing or unreal.

In the digital culture, therefore, there is a mixture of the participatory which engenders the need for immediacy and a technology which enables the transformation of the media. The key principle here seems to be remediation (re-mediation) which is the creation of new forms of expression to meet these needs by rehashing, integrating and cannibalising the old media. At the same time, working in the old media gets easier so that video and music mixing, slow motion sequencing, stop-pause animation, montages and mash-ups have all moved into the public domain where anyone with the right equipment can access them. Marshall McLuhan talked about how this process worked in the 1960s and how the visual media reprocess content.

So we have increased participation and immediacy and remediated technology as part of the digital culture, but it is much harder to unpack what drives what. The shift from the 78 record to the LP, the cassette and the CD, and then to other forms of digital media is based in technological invention and has led to changes in the cultural ways in which we purchase, share and engage with music, but how far is this to do with the pursuit of immediacy and how far to do with convenience and economy?

There is one other element which we ought to mention. Bricolage is concerned with the recycling process and how new forms of media are generated from old and new sources. News gathering, as opposed to reporting, is a good example, where the modern online newspaper will bring together eyewitness reports in all sorts of media, Twitter feeds, social network responses and background information in a new mix, largely accredited by its readership and by the networks that support it. Apparently, this is not quite the same as remediation.

This is one model of the digital culture but it may not be the only one. It doesn’t seem to include television or the wider ramifications of commercial changes, although it could apply to them. There may also be other processes in play, but we have to remember we are talking about a culture which, like yeast, is alive, growing and constantly in a state of flux.

The next stage is to look at how this kind of culture incorporates learning and e-learning in particular.

RSA on Outrospection

Interesting stuff about empathy – an essential requirement for a MOOC. What I can’t help noticing is that this is essentially the social construction of reality. In other words, people cannot help engaging in this process of empathy because it is how they construct the world and their reality. What is at issue is the extent to which they do this, and I think the notion here is that empathy is constructing the world in a nice friendly way. After all, you can presumably construct it in a pathological way. Worthy stuff, and the interesting question would be: how do you create a digital literacy based on empathy?

The MOOC under attack

After all the excitement about MOOCs (Massive Open Online Courses) we are now beginning, unsurprisingly, to see the backlash. Those who live in the more elitist shining ivory towers of higher education are realising that they don’t like these things, and it was interesting to see the Khan Academy given something of a put-down, along with the flipped classroom, in the Times Educational Supplement last week. There is also plenty of covert political support for cynicism from a government in the UK which is focused on back to basics and an establishment which espouses what might be called a classical education. Both view most technologies in education as a diversion.

The central argument about MOOCs is that they don’t deliver, but the people who put this argument forward do not have a clear idea of what they are supposed to be delivering anyway. Of course, they are neither alternatives to nor the equivalent of research-based university learning, but they might be something else quite interesting. So what are people afraid of?

At a basic level, elements in higher education worry about the crowd sourcing of knowledge and information. Universities have always been the repositories of knowledge and have, at the same time, explicitly controlled access to it. If you think that is unfair, think about how the numbers of law and medicine degrees have been controlled for generations in order to restrict entry to the professions. It is something that higher education does as second nature. Think about the way universities limit free access to their research findings (in other words, to things which expand the sum total of human knowledge), brutally top-slice their collaborations and are mean in giving anyone else credits, however duff their own courses are.

They also worry about changed notions of learning where the boundaries between learning and other activities, including leisure, are slowly being eroded. Universities still hold on to lectures despite the research which shows that most of the audience, if they have all arrived, are multitasking or deliberately tasking themselves elsewhere. The reason the lecturers persist, instead of flipping the learning, publishing materials online and developing activity-based sessions, is that the lecture has a symbolic purpose: it represents the dissemination of expert knowledge, and that is where the status of a higher educator is often located.

The other thing that higher ed. is gradually losing touch with is the assessment process. If you have a highly disciplined model of learning, in the Foucault sense, you clearly have to possess the threat of assessment and the inclusion or exclusion that it supplies. So, while there is lots of evidence in schools that peer assessment helps children to learn, we don’t see it happening much in higher education.

So, all in all, higher education has a lot to worry about in developing MOOCs and, as interested parties, we have to make sure that universities don’t set them up in order to marginalise them. If universities start to act as the proprietors of these courses and begin to claim the right to own and run them, we will start to see the reinforcement of that old disciplinary power which says that the guru and the lecture room are best.

I’m not saying that that is happening at the moment. I think there are some fantastic democratic initiatives around in the widest sense, but it is important to be aware that the critics are waiting, ready first to expect too much from MOOCs and then to discredit them on the basis of their own false premises. After all, that is pretty much what they do in most of their dealings with knowledge!

Something about Me

Hello, my name is Jim Sweetman and I have been experimenting with new technologies and e-learning for the best part of twenty years. I started a long time ago with a BBC computer and a cassette deck so I am conscious that things have moved on quite a long way!

I currently do quite a lot of work for an organisation called the National College for School Leadership in the United Kingdom which has a thriving set of online communities and a leadership curriculum based in online learning. I not only do learning design for them but I also provide online facilitator training.

I don't know how far that is going to equip me to take part in one of these courses but it will be good to find out. I also do online learning design for a range of clients so I have worked on both specific events and programmes.

I like playing with new technologies. I didn't know that I needed an iPad until I got one!