Monthly Archives: February 2013

Thinking seriously about post-humanism

I am beginning to think that these discussions of humanism and also of what it means to be human are fundamentally informed by a Cartesian notion of dualism. When Descartes said ‘I think therefore I am’ he created the potential for the human to be something more than the physical body and this same understanding seems to have permeated definitions of humanity.

It has also allowed Freud to go on to talk about mind and body in complicated ways and opened the door to Marx’s notion of the dialectic and to an expansion of ethical philosophy. But what if it is simply nonsense?

To put it bluntly, where is the evidence that mind exists apart from body? Increasingly, we understand that the brain is a lump of meat (I enjoyed that video) with some electrical charges whizzing through it and although we might imagine we feel things in our head we probably don’t. All that we have, and maybe Keats understood this, is a life of sensation.

I have argued before in this blog that Descartes could have got out of his dilemma by talking to the man in the next cubicle and agreeing to agree that they both existed in material form. They could have gone on from there to construct quite a satisfactory universe and drawn in other people to help build reality. This would have made it easy to see off any malignant demons which appeared. Unfortunately, he didn't, and the outcome is dualism – the idea that we need two things inside ourselves to agree between themselves about what is real.

When we look at the various ways in which humanity has been defined subsequently the definitions constantly refer back to this idea of dualism. They talk about having other capacities which overcome the animal physical side of our being, thinking before we act and not being driven by base desires, understanding the psychology that drives our animal selves around and having a capacity for religious belief and the mystical which sits above our mundane everyday lives. The fact is that these are all chemical and electrical.

What provides them with some undeniable significance, either as sensations or values, is the social world and the way that social beings construct reality. Dualism, undoubtedly useful in some circumstances, is simply one of these constructs but it manifestly isn't a necessity or a prerequisite for being human. That is determined by what value we give to the sensations we experience while working collaboratively with others to share, define and normalise them. This seems a fair description of everyday life. I have a feeling that I haven't had before so I share it with somebody else and they tell me when they felt the same and we give it a name. If it's a good feeling, we're probably in love, and if it is a bad feeling they give us some antibiotics instead. It's a simple process which happens throughout our lives and it starts with our mothers. It is not reliant on any notion of mind and body although most mothers will introduce one of these as a social convenience or perhaps to underline why Kyle down the road is a bit of an animal while her son is undeniably human.

That makes for a better definition of post-humanism, which is the awareness that humanist approaches based on a flawed dualism are problematic although, as Derrida points out, they are also inescapable in terms of our ways of thinking. That makes post-humanism something much more positive and if we can use technology to play around with these understandings then so much the better.

Badmington, Neil (2000) Introduction: Approaching Posthumanism. Posthumanism.


Week 2 Artefact


An imaginary promotion for the logical next step in technology.

Defining Humanity

I can’t help but feel that Steve Fuller is really having an argument about how we define anything and then trying to hang some of his own ideas about why university education is important as the froth on the top of that debate. I got the feeling I was being led rather than challenged or questioned in that presentation!

It seems absurd to suppose there is some kind of emerging historical notion of humanity. His argument appears to be that there is some kind of intellectual elite leading this attempt down the ages to define humans by, in no particular order, additional non-animal characteristics (like the love of God!), bigger intellects and less hair (like Linnaeus), and greater goals (like Darwin and after). All of these approaches appear to overlook the nastiness of man’s inhumanity rather than humanity. The Crusades were not an exercise in educational clubbishness as Fuller appears to imply but an excuse, fuelled by ideology and greed, to wage war on some people. The same is true of Cortes, Attila the Hun and Adolf Hitler not to mention the slave trade and its imitations in any number of cultures down the ages.

It isn't any wonder that people in the twentieth century challenged some of the more comfortable notions about humanity given the backdrop of two appalling world wars and the emergence of institutionalised racism and prejudice. Neither is it really any wonder in the twenty-first century that humanity with a capital H is criticised as a male club, characterised as ignoring the environment, gender and economics and now sinking into an obsession with technology to create an alternative reality having ruined the one we've got.

Having said that, any time you see anyone espousing that tired old 'things are getting worse' argument and asserting how there used to be a golden age you can assume they are on shaky ground, getting on in years and nostalgic, or politically dangerous. It is a point of view which appears to assert a kind of flow in history which rolls backwards and forwards or, for them, from good to bad.

However, it is still fair to ask a question about what humanity might be. The answer, for a lot of people, is what it isn't. Being human is not being entirely animal, not being entirely driven by instant gratification and unconstructed desire, not being stupid or uncaring, and so on. This is boundary theory, the place where we get the interesting idea that dirt is simply material in the wrong place. Foucault, whom Steve Fuller doesn't appear to like, talks about normalising judgements, the everyday way that we speak about things which confirms the world as we build it. In other words, we don't simply socially construct reality but we also work quite hard to maintain it. We don't do this by asking each other what humanity is but we do it by reinforcing everyday value judgements and by disapproving of what lies beyond the boundaries. The recent gay marriage debate is a good example of people setting boundaries, and many of those who do it are neither gay, churchgoing nor much bothered about marriage, but since they have Twitter accounts or are members of Parliament they feel obliged to join in. Frequently, in the media, this process is made quite explicit when behaviours and individuals are characterised or described as inhuman. Child killers, suicide bombers and paedophiles, for example.

What we need to remember in this discussion is that just because humanity is a leaky concept and fuzzy around the edges that doesn't make it somehow flawed or inapplicable. If you think about it, your ideas of a chair, of red or of a holiday are just as leaky. To me it seems entirely right that we should be working constantly to define our own humanity without the help of popes, dictators or anyone else for that matter who would like to tell us what being human is. That discussion is a place where we can express, exercise and develop our own moral values, principles and purposes as well.

As part of this, it is worth mentioning that you cannot be human if you are the only individual left on the planet. Humanity is a socially ascribed and developed characteristic, which is why we tend to view people who describe themselves as overly humane with slight suspicion. Best to judge people by what they do rather than what they say! Obviously, the last man on earth is going to dispute this, but all he is holding onto are the vestiges of what people used to say. There are plenty of examples of how humanity can be made more flexible when the social bit is underplayed, which is why stranded sailors in lifeboats were happy to eat the cabin boy and why people in high-density housing are more likely to kill each other.

So, where does that leave humanity and technology? More to the point, why is technology seen as a threat to the notion of being human when a variety of individuals and ideologies including Pol Pot, the Catholic Church and extreme Islam have provided a much greater threat down the years?

In theory, it is possible to argue that if we meet virtually we meet face-to-face less, but given the billions of Facebook and Twitter postings each week and the exponential expansion of social media it would be absurd to suggest that we are communicating less. We are not just talking rubbish either. Twitter is a highly politicised forum for many people and an information exchange. Amazon reviews are an excellent source of shared wisdom about potential purchases and, increasingly, politicians are taking notice of mass petitions. Talk to the teachers who blog seriously with their primary school classes about the positive impact this has on their pupils' learning, and then try to say technology is damaging humanity!

Is there any evidence that the technology is driving us? In terms of the use debate, which we have already had, it is certainly offering some attractive options and products but that doesn’t make us passive consumers or users. Television, as probably the most influential technology of the twentieth century, was far more dangerous in terms of indoctrination and social control.

I have a feeling that, had he had more time, Steve Fuller would have concluded that believing in God is an excellent way forward and that MOOCs are not real learning. I think he's wrong on both counts.

Different Views of Technology

Lincoln Dahlberg discusses the danger of treating technologies as ‘things’. There are plenty of examples in the discussion forums of people doing this and anecdotally recounting their need to have the latest iPhone or other technology. In one sense, they are feeling guilty about being controlled by technology and the secret desire to have a Samsung Galaxy or whatever!

Of course, it is patently obvious that many technologies address themselves to us as objects or artefacts and that isn’t a new process. I could identify a similar response to the ones discussed in the forums in enjoying the possession of a first Kenwood Chef food mixer, a Black & Decker Hammer drill and a Garrard SP25 Mark 3 (less challenged students will have to google this one). I’m not sure that technology has to have plugs on either. I think my first set of metric socket spanners (for fiddling with cars) might also qualify.

What all these technologies have in common is that they enabled me to do something I couldn’t do before or to do something more effectively, efficiently and enjoyably – tastier, lighter cakes and smarter shelf fittings. And that, I think, is where the need to have them came from. Exactly the same analysis applies to a new Apple Mac.

However, that doesn’t mean that the fact that we experience them as things says much about their status and creation. It does allow people to make some fairly easy and probably false conclusions. When people say that Bill Gates is using his massive wealth and influence to coerce them into buying Windows 8 that is only part of the story. The production of Windows 8 has a long history and arises from what Marx calls ‘a complex social and institutional matrix’ which I suppose would involve the reputation of the product and the producer, technological development, commercial attractiveness, perceived need and demand, social reputation and status and its situation in a culture. In terms of almost any technological artefact, we can see these factors in play to various degrees.

One of the writers suggests a note of caution here. We shouldn't start to assume that this is a kind of unknowable mush out of which the product emerges or, alternatively, a process which is inaccessible to us consumers – akin to the folk myth that alien technology from the Roswell landing gave us non-stick saucepans. We can usually track a clear history for any technology if we make the effort.

There is another tendency to see all of these things as part of a conspiracy where technology is a huge black cloud which is seeking to envelop us. I think that this is the technological determinism model in its dystopian form. So, every innovation is viewed with suspicion. People don't read enough and they don't get out enough and they are passive recipients rather than active participants, and technology is to blame. It is easy to see why people take this line, particularly those couch potatoes who might consider themselves to be digital immigrants.

Social determinism doesn’t quite follow the conspiracy line but seems to assume that there are elements that control technology – political, economic and social forces – fighting for control and ownership. It would be foolish not to admit that there are political discussions around technology and that they reflect the balance of power in particular societies. There are also more subtle debates about what is proper and appropriate.

I lean towards this analysis personally because I do think that culture and meanings are socially constructed. I also think there is something around here about what Foucault says about disciplinary practices, so that within the discussions there are normative judgements and practices that set up boundaries to exclude or include. I also think it would be daft to pretend that technologies do not impact on people. What I am less sure about is this sense in determinism that things are determined. There is no destiny that shapes our ends – we do that ourselves!