Ernst Troeltsch (1931) used the term ‘church’ to refer to a large, hierarchically organised religious institution with an inclusive, universal membership, typically with close links to the state.
According to Troeltsch, churches have five characteristics:
Churches tend to have very large, inclusive memberships.
Churches tend to claim a monopoly on the truth.
Churches have large, bureaucratic, hierarchical structures.
Churches have professional, paid clergy.
Churches tend to be closely tied to the state.
Criticisms of the ‘concept’ of the church
Steve Bruce (1996) suggests that the above definition of a church may have held true in pre-modern Christian societies, but since the Reformation, and especially since the growth of religious pluralism, it no longer applies to organisations which formally call themselves churches in modern societies – organisations such as the Church of England.
There are several examples of ‘churches’ which do not fit the above definition:
The Church of England does not have universal membership.
Many churches today do not claim a monopoly on the truth, they tend to be tolerant of other faiths.
The links between the church and the state are not as strong as they once were.
It seems, then, that the only way in which modern churches resemble Troeltsch’s definition lies in their organisational structure.
Haralambos and Holborn: Sociology Themes and Perspectives
Chapman et al: Sociology AQA A-Level Year 2 Student Book
Durkheim’s view of religion implied that a truly religious society could only have one religion. In Durkheim’s analysis this was the situation in small-scale Aboriginal societies, where every member of society came together at certain times of the year to engage in religious rituals. It was based on observations of such societies that Durkheim theorized that when people worshipped, they were really worshipping society.
However, in more modern societies, especially postmodern societies, there is no one dominant religion: there are many religions, or a plurality of religions. Sociologists describe such a situation as religious pluralism.
According to Steve Bruce (2011) modernization and industrialization in Northern Europe and America brought with them social fragmentation, such that a plurality of different cultural and religious groups emerged. We see religious pluralism most obviously in the growth of sects and cults and in the increase in ethnic diversity of religion in societies.
Two processes happen as a result of this: people find that their membership of their particular group or religion no longer binds them to society as a whole; and the state finds it difficult to formally support one ‘main religion’ without causing conflict.
Bruce thus argues that ‘strong religion’, which influences practically every area of people’s lives, shaping their beliefs and practices, cannot exist in a religiously plural society. Strong religion can only exist in isolated pockets, such as Amish communities, but these have isolated themselves from society as a whole.
Religiously plural societies are thus characterized by ‘weak religion’ – which is a matter of personal choice and does not dominate every aspect of people’s lives. Weak religions accept that there is room for other religious belief systems and have little social impact.
Examples of weak religions include modern Protestantism, the ecumenical movement and New Ageism.
Arguments against increasing religious pluralism as evidence of secularization
It is possible that religion is just changing to fit a postmodern society rather than it being in decline. Why does a society need to have one dominant religion for us to be able to say that religion is important?
It might be that diverse religions which preach tolerance of other religions are the only functional religions for a diverse postmodern society.
There are societies which have more than one religion where religious beliefs are still strong: for example Northern Ireland and Israel.
The concepts of ‘normal’, and ‘normality’, and the question of what counts as ‘normal behaviour’ has long been of interest to sociologists. Sociologists from different perspectives have very different approaches to answering the basic, but fundamental question, ‘what is normal’?
For the early positivists such as Auguste Comte and Emile Durkheim, uncovering the existence of social norms (or typical patterns of behaviour) was central to their sociology. However, contemporary sociologists are more likely to question whether or not there is such a thing as ‘normal’ in our postmodern society.
Interest in the word ‘normal’ started to grow in line with early Positivist sociology, peaked during the ‘heyday’ of structuralist sociology in the 1940s-70s and has been in decline since the (contested) shift to postmodern society from the 1980s…
What is Normal?
‘Normal’ can be defined as any behaviour or condition which is usual, expected, typical, or conforms to a pre-existing standard.
‘Normal behaviour’ may be defined as any behaviour which conforms to social norms, which are the expected or typical patterns of human behaviour in any given society.
It follows that in order to establish what ‘normal’ behaviour is, sociologists firstly need to establish what social norms are present in any given society.
This is actually more difficult than it may sound, because social norms exist at ‘different levels’ of society (at least for those sociologists who believe social norms actually exist!)
Some social norms exist at the level of society as a whole, known as ‘societal level norms’, which tend to be very general norms, such as ‘obeying the law most of the time’ or ‘children being expected to not talk to strangers’.
Other norms are context-dependent, and are specific to certain institutions – for example the specific norms associated with sitting a formal examination within an educational setting, or those associated with a funeral. (In some respects the two examples are quite similar!)
Social norms can also vary from place to place and by time of day, and different norms may be expected of people depending on their social characteristics: their age or gender, for example.
Given all of the above problems with establishing the existence of social norms, postmodern sociologists have suggested that we need to abandon the concept of normality altogether, and just accept the fact that we live in a society of individuals, each of whom is unique.
However, many contemporary sociologists disagree with this postmodern view, given the fact that there do appear to be certain patterns of behaviour which the vast majority of people in society conform to.
The remainder of this post will consider a range of examples of behaviours which might reasonably be regarded as ‘normal’ in the context of contemporary British society….
How might sociologists ‘determine’ what is ‘normal’?
As far as I see it, there are a number of places sociologists can look, for example:
They can simply start out by making observations (possibly backed up by ‘mass observation’ data) of daily life, which will reveal certain general norms of behaviour.
They can use statistical data to uncover ‘life events’ or actions that most people will engage in at some point during their ‘life-course’.
They can look at statistical averages.
They can look at attitude surveys and field experiments to find out about typical attitudes towards certain objects of attention and typical behaviours in specific contexts.
They can simply look at the most popular tastes and actions which the majority (or ‘largest minority’) of people engage in.
Below I discuss the first three of these…
Normal behaviour in daily life….?
Simple observations of daily life (backed up with a few basic surveys) reveal there are several social norms that the vast majority of the public conform to. For example:
Wearing clothes most of the time
Despite the fact that, according to one survey, as many as 1.2 million people in the UK define themselves as naturists (about as many as there are members of the Church of England), only 2% of people report that they would ‘get their kit off too’ if they came across a group of naked people playing cricket on a beach while on a coastal ramble.
You probably don’t think about it very much, but nearly all of us do it – ignoring other people on public transport. So much so that if you type in ‘avoiding people on public transport’ to Google, then the first search return is actually a link to ‘how to do it‘… from ‘sitting by yourself and putting a bag on the seat next to you’ to (most obviously) using your mobile phone or eating something. There’s even advice on how to ‘disengage’ from conversation, just in case some deviant is socially unaware enough to talk to you.
The limitations of establishing ‘normality’ from such ordinary, everyday behaviours…
While most of us engage in such behaviours, is this actually significant? Do these ‘manifestations of similarity’ actually mean anything? Most of us brush our teeth, most of us ignore each other on public transport, most of us wear clothes, but so what?
All of these manifestations of ‘normality’ are quite passive, they don’t really involve much of a ‘buy in’, and there’s still scope for a whole lot of differences of greater significance to occur even with all of us doing all of these ‘basic’ activities in unison…
Life Course Norms…?
It’s probably not as simple as ‘normal life in the U.K.’ equating to having a 9-5 job, a mortgage, a fuck off big television, walking the dog, paying taxes and having a pension….
But it is possible to identify some ‘life-events’ that the vast majority of people in the United Kingdom (or at least England, in some of the examples below) will experience at some point in their lives. All of the examples below are taken from across the A-level sociology syllabus…
Most children in the United Kingdom will go to school….
According to World Bank data, 98.9% of children in the United Kingdom are enrolled in secondary school, so it’s reasonably fair to say that ‘it is normal for children in the UK to go to secondary school’.
NB – it’s probably worth pointing out that secondary school enrollment is much more common in the UK compared to the United States, and especially Uruguay, and various other less economically developed countries.
Of course the fact that nearly 99% of children are enrolled in secondary school in the UK tells us nothing about their experience of education, or how long they actually spend in school, but nonetheless, being enrolled and being subjected to the expectation to attend secondary school in the UK is one of the most universal experiences through the life-course.
Most people in the UK will engage in paid work, or live with someone who has engaged in paid work, at some point in their lives
Only 0.8% of 16-64 year olds live in households where all members have never worked. These figures don’t actually tell us how many people have never worked, but we can say that 99.2% of the adult population has either worked, or is currently living with someone who has, at some point in their lives, worked.
Limitations of establishing ‘normal’ behaviour from these trends
One limitation of deriving an idea of ‘normality’ from life-course data is that you are much more likely to find norms within one specific age-cohort than across the generations. Moreover, one of the main reasons postmodernists argue that it is no longer appropriate to talk about social norms today is that there is a trend away from shared norms in many areas of social life and a movement towards greater diversity.
Social Norms based on statistical averages
A third method of determining what is ‘normal’ is to look at the ‘median’ value of a distribution, that is, the value which lies at the midpoint.
In social statistics, it is very likely that the median will provide a more representative average figure than the mean, because a higher percentage of people will cluster around the median than around the mean.
Median disposable household income in the UK in 2017 was £27,300
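The gap between the two averages is easy to see with a quick sketch. The income figures below are hypothetical, made-up numbers chosen only to illustrate the point (a real analysis would use survey data):

```python
# Hypothetical annual incomes (£) for ten households - a typically
# right-skewed distribution with one very high earner.
incomes = [15_000, 18_000, 22_000, 25_000, 27_000, 28_000,
           30_000, 35_000, 45_000, 250_000]

# Mean: total income divided by number of households.
mean = sum(incomes) / len(incomes)

# Median: the midpoint of the sorted values (average of the
# middle two, since there is an even number of households).
ordered = sorted(incomes)
n = len(ordered)
median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

# How many households fall within 50% of each average?
near_mean = sum(1 for x in incomes if abs(x - mean) <= 0.5 * mean)
near_median = sum(1 for x in incomes if abs(x - median) <= 0.5 * median)

print(f"mean = £{mean:,.0f}, median = £{median:,.0f}")
print(f"{near_mean} of {n} households lie within 50% of the mean")
print(f"{near_median} of {n} households lie within 50% of the median")
```

The single very high income drags the mean (£49,500 here) well above what most households actually receive, while the median (£27,500) sits where incomes genuinely cluster: eight of the ten households lie within 50% of the median, against only six within 50% of the mean.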
Limitations of establishing ‘normal behaviour’ from medians or means
Is the median the ‘best’ way of establishing ‘what is normal’? Even though it’s the figure around which most people cluster, there can still be enormous differences between those at the two ends of the distribution.
As for the mean, take the oft-quoted average household size of 2.4 people: it might be useful for establishing trends over time, but when we look at ‘today’ it is fairly meaningless… there are no actual households with 2.4 people in them!
So… is there such a thing as normal?
While it is possible to identify ‘norms’ using various methods, hopefully the above examples at least demonstrate why postmodernists are so sceptical about the concept of normality today!
A simple definition of secularization is the declining importance of religion in a society.
Wilson (1966) provided a ‘classic’ definition of secularization which has been widely adopted by A-level text book authors, teachers and students for decades:
Wilson (1966) defined secularization as “the process whereby religious thinking, practices and institutions lose social significance”.
This ties in nicely with Clement’s definition of ‘religiosity’ as consisting of the three Bs of belonging (institutions), behaving (practices) and believing (thinking), and has led to something of a tradition within A-level sociology of assessing the nature and extent of secularization by looking at three broad indicators:
The power and influence of religious institutions in society – e.g. how much of a say do religious leaders have in making political decisions in a nation state.
The extent to which people practice their religions – e.g. how many people get off their backsides and attend a religious ceremony once in a while (or, in the case of Buddhists, how many of them stay sitting on their backsides).
The strength of religious beliefs within a society – e.g. how many people believe in some kind of concept of God or an afterlife.
As can be seen from the above indicators of secularization, measuring the significance of religion in a society, and thus measuring its decline (or otherwise) is far from simple: not only do you need to decide which indicators to use to measure each of the above ‘aspects’ of religion (at the institutional, behavioural and personal-belief levels), but you also need to decide on the relative importance of each of these in determining the social significance of religion.
On top of this, further problems in measuring the nature and extent of secularization lie in the fact that measurements have to somehow take into account the fact that religion does not stand still: it has changed considerably over the last 100 years or so. Finally, sociologists need to decide how far back they go, or what the most appropriate time scale is, to make a judgement as to the nature and extent of secularization.
A fuller definition of secularization is provided by Steve Bruce (2002), who defines secularization as a “social condition manifest in (a) the declining importance of religion for the operation of non-religious roles and institutions such as the state and the economy; (b) a decline in the social standing of religious roles and institutions; and (c) a decline in the extent to which people engage in religious practices, display beliefs of a religious kind, and conduct other aspects of their lives in a manner informed by such beliefs”.
Professor Steve Bruce, aka Brucey Baby, not to be confused with Steve Bruce, manager of the football club Aston Villa (or with either Bruce Dickinson, lead singer of the heavy metal band Iron Maiden, or Bruce Forsyth, recently deceased host of both ‘Play Your Cards Right’ and ‘Strictly Come Dancing’), is worth a mention as he is one of the main historical contributors to the ‘secularization debate’.
Sociologists use the term ‘religiosity’ to refer to the significance of religion within a society.
There are numerous indicators which sociologists use to measure the degree of ‘religiosity’ within society, such as the strength of religious beliefs, the number of people who actively engage in religious practices, and the amount of power religious institutions have within a society.
“Scholars of religion often analyse how faith influences individuals’ experiences, attitudes and values by looking at the three Bs: belonging (identification and membership), behaving (attendance) and believing (in God)”.
Belonging – this aspect of religiosity is typically measured by how many people are members of religious organisations and actually identify with formal religious institutions.
Behaving – this aspect of religiosity can be measured by the religious activities people actually engage in… such as how often they attend places of religious worship, whether they get married via a religious institution, and how often they pray in private.
Believing – this is the most subjective aspect of religiosity, and can include whether people believe in God, the afterlife, spirits etc…
Problems with measuring ‘religiosity’
Someone’s faith is largely subjective, which makes it very difficult to measure quantitatively.
Changes in religious belief and practice make religiosity difficult to measure… now that we have moved into a postmodern society, which is more individualistic, people are more likely to practice religion privately and individually, and less likely to engage with traditional religious institutions such as the Christian church – but does this shift mean society is necessarily less religious?
Sociologists disagree over how we should define religion, which will influence how religious they perceive a society to be. For example, someone who uses a substantive definition of religion, and says that a religion must involve a belief in God, would probably believe that religiosity is in decline. However, someone who uses a functional definition could argue that religiosity is just as strong as ever, it has just changed – with civil religion having taken over from ‘traditional’ religions, for example.
Weber argued that the values of the protestant religion led to the emergence of Capitalism in Western Europe around the 17th century.
Weber observed that Capitalism first took off* in Holland and England in the mid 17th century. He asked himself the question: ‘why did Capitalism develop in these two countries first?’
Protestant Individualism and the Emergence of Capitalism
Based on historical observation and analysis, Weber theorized this was because these were the only two countries in which Protestantism was the predominant religion, rather than Catholicism, which was the formal religion of every other European country.
Weber theorized that the different value systems of the two religions had different effects: the values of Protestantism encouraged ways of acting which (unintentionally) resulted in capitalism emerging, over a period of many decades, even centuries.
Protestantism encouraged people to ‘find God for themselves’. Protestantism taught that silent reflection, introspection and prayer were the best ways to find God. This (unintentionally, and over many years) encouraged Protestants to adopt a more ‘individualistic’ attitude to their religion by seeking their own interpretations of Christianity.
In contrast, Catholicism was a religion which encouraged more conservative values and thus was resistant to such changes. The Catholic Church has a top-down structure: from God to the Pope to the Senior Bishops and then down to the people. Ultimate power to interpret Catholic doctrine lies with the Pope and his closest advisers. Practicing Catholics are expected to abide by such interpretations, they are generally not encouraged to interpret religious scripture for themselves. Similarly, part of being a good Catholic means attending mass, which is administered by a member of the Catholic establishment, which reinforces the idea that the church is in control of religious matters, rather than spirituality being a personal matter as is more the case in Protestant traditions.
Part of Weber’s theory of why Capitalism first emerged in Protestant countries was that the more individualistic ethos of Protestantism laid the foundations for a greater sense of individual freedom, and the idea that it was acceptable to challenge ‘top down’ interpretations of Christian doctrine, as laid down by the clergy. Societies which have more individual freedom are more open to social changes.
Calvinist Asceticism and the Development of Capitalism
Weber argued that a particular denomination of Protestantism known as Calvinism played a key role in ushering in the social change of Capitalism.
Calvinism preached the doctrine of predestination: God had basically already decided who was going to heaven (‘the saved’) before they were born. Similarly, he had also already decided who the damned were – whether or not you were going to hell had already been decided before your birth.
This fatalistic situation raised the question of how you would know who was saved and who was damned. Fortunately, Calvinism also taught that there was a way of figuring this out: there were indicators which could tell you who was more likely to be saved, and who was more likely to be damned.
Simply put, the harder you worked, and the less time you spent idling and/or engaged in unproductive, frivolous activities, the more likely it was that you were one of those pre-chosen for a life in heaven. This is because, according to Calvinist doctrine, God valued hard work and a pure, non-materialistic life.
According to Weber this led to a situation in which Calvinist communities encouraged work for the glory of God, and discouraged laziness and frivolity. Needless to say there was quite a motivation to stick to these ethical codes, given that hell was the punishment if you didn’t.
Over the decades, this ‘work-ethic’ encouraged individuals and whole communities to set up businesses, and re-invest any money they earned to grow these businesses (because it was a sin to spend the money you’d made on enjoying yourself), which laid the foundations for modern capitalism.
Weber argued that over the following centuries, the norm of working hard and investing in your business became entrenched in European societies, but the old religious ideas withered away. Nonetheless, if we take the longer term view, it was still the Protestant work ethic which was (unintentionally) responsible for the emergence of Capitalism.
On the plus side, Weber’s theory of social change recognizes that we need to take account of individual motivations for action in order to understand massive social structural changes.
On the negative side, critics have pointed out that the emergence of Capitalism doesn’t actually correlate that well with Protestantism: there are plenty of historical examples of Capitalist systems having emerged in non-Protestant countries – such as Italian Mercantilism a couple of centuries earlier.
Find out More
This post is a very brief summary of Max Weber’s theory of religion and social change. For a much more detailed account, including more specific historical details of Calvinism, please see this post (forthcoming!)
*Weber recognized that features of the Capitalist system were present in other parts of Europe prior to the 17th century, but Holland and England were the first societies to really adopt capitalist values at the level of society as a whole, rather than capitalism just existing in relatively isolated pockets.
There is an enormous variety of religious beliefs and practices globally, and the main problem with defining religion is to find a definition which encompasses this variety without including beliefs or practices which most people do not regard as religious.
There are two general approaches to defining religion: functional approaches, which tend to have broader, more inclusive definitions of religion, and substantive approaches, which tend to have narrower, more exclusive definitions.
Functional definitions of religion
Functional definitions define religion in terms of the functions it performs for individuals and/ or society. For example, Yinger (1995) defines religion as ‘a system of beliefs and practices by means of which a group of people struggles with the ultimate problems of human life.’
Problems with functional definitions of religion
They are too inclusive: almost any movement with a belief system of any kind and a committed group of followers would be classified as a religion – for example, communism, nationalism, and even atheism.
Because functional definitions are too inclusive, it makes the growth/ decline/ impact of religion impossible to measure.
Functional definitions are based on subjective opinions and assumptions about what the role of religion is. Rather, the role of the sociologist should be to uncover what the functions of religion are through empirical investigation.
Substantive definitions of religion
Substantive definitions of religion define religion in terms of its content rather than its function.
Emile Durkheim’s approach to defining religion can be regarded as a substantive definition – Durkheim argued that religion was the collective marking off of the sacred from the profane.
A common approach to defining religion substantively is to define religion in terms of a belief in a higher power such as a god or other supernatural forces. For example, Robertson (1970): ‘Religion refers to the existence of supernatural beings that have a governing effect on life’.
Problems with substantive definitions of religion
They can be too exclusive. For example, definitions which are based on a belief in God would exclude Buddhism.
Substantive definitions might still be too inclusive. For example, people who believe in fate, magic, or UFOs might be included as religious according to the above definition.
Defining religion: why it matters
The definition of religion that a sociologist subscribes to will have a profound impact on their conclusions about the role and impact religion has in society. This is most obviously the case where the secularization debate is concerned: if one adopts a more exclusive definition of religion, then it would appear that religion is in decline. However, if one adopts a more inclusive definition of religion, this decline will not be so apparent.
The social processes through which new members of society develop awareness of social norms and values and achieve a distinct sense of self. It is the process which transforms a helpless infant into a self-aware, knowledgeable person who is skilled in the ways of a society’s culture.
Socialization is normally discussed in terms of primary socialization, which is particularly intense and takes place in the early years of life, and secondary socialization, which continues throughout the life course.
Stages of Socialization
Socialization takes place through various agencies, such as the family, peer groups, schools and the media.
The family is the main agent during primary socialization, but increasingly children attend some kind of nursery schooling from a very young age. It is in the family that children learn the ‘basic norms’ of social interaction – in Britain such norms include learning how to walk, speak, dress in clothes, and a whole range of ‘social manners’, which are taught through the process of positive and negative sanctions, or rewarding good and punishing bad behaviour.
In modern societies, class, gender and ethnic differences start to affect the child from a very young age, and these influence patterns of socialization. Where gender is concerned, for example, children unconsciously pick up on a range of gendered stereotypes which inform the actions of their parents, and they typically adjust their behaviour accordingly.
In adulthood, socialization continues as people learn how to behave in relation to new areas of social life, such as work environments and political beliefs. Mass media and the internet are also seen as playing an increasing role in socialization, helping to shape opinions, attitudes and behaviour. This is especially the case with the advent of new media, which enable virtual interactions via chatrooms, blogs and so on.
Taken together, agencies of socialization form a complex range of contrary social influences and opportunities for interaction and it can never be an entirely directed or determined process: humans are self-aware beings capable of forming their own interpretations of the messages with which they are presented.
Criticisms of the Concept
The main criticism of theories of socialization is that they tend to exaggerate its influence. This is particularly true of functionalism which tended to see individuals as cultural dopes, at the mercy of socializing agencies.
Dennis Wrong (1961) took issue with what he saw as the ‘oversocialized concept of man’ in sociology, arguing that it treats people as mere role-players, simply following scripts.
Today, theories of society and cultural reproduction are much more likely to recognize that individuals are active players and that socialization is a conflict-ridden and emotionally charged affair, and the results of it are much less predictable than functionalist theories suggested in the 1950s.
The systematic domination of women by men in some or all of society’s spheres and institutions
Origins of the Concept
Ideas of male dominance have a very long history, with many religions presenting it as natural and necessary.
The first theoretical account of patriarchy is found in Engels’ theory of women’s subservience under capitalism. He argued that capitalism resulted in power being concentrated in the hands of fewer people, which intensified the oppression of women as men passed on their wealth to their male heirs. (I’ve outlined this theory in more detail in this post: the Marxist perspective on the family.)
The main source of patriarchal theory stems from Feminism, which developed the concept in the 1960s, highlighting how the public-private divide and the norm of women being confined to the domestic sphere was the main source of male dominance and female oppression, highlighted by the famous Feminist slogan ‘the personal is the political’.
Subsequent Feminist theory and research explored how patriarchy operates across different spheres of social life.
Today, there is much disagreement over the concept’s usefulness within the various different Feminist traditions (for the purposes of A-level sociology, typically divided into Liberal, Marxist and Radical).
Meaning and Interpretation
The concept of Patriarchy forms the basis for radical forms of Feminism which has focused on how Patriarchy is reproduced in many different ways such as male violence against women, stereotypical representations in the media and even everyday sexism.
Sylvia Walby re-conceptualized Patriarchy in the 1990s, arguing that the concept failed to take account of increasing gender equality, but that it should still remain central to Feminist analysis, suggesting that there are six structures of patriarchy: Paid Work, Household Production, Culture, Sexuality, Violence and the State.
Walby also argued that analysis should distinguish between public and private forms of patriarchy.
The concept of patriarchy has been criticized from both outside and within Feminism.
The concept itself has been criticized as being too abstract: it is difficult to pin it down and find specific mechanisms through which it operates.
Many Feminists argue that Patriarchy exists in all cultures, and thus the concept itself is too general to be useful, as it fails to take account of how other factors such as class and ethnicity combine to oppress different women in different ways.
Black Feminists have criticized the (mainly) white radical Feminist critique of the family as patriarchal as many black women see the family as a bulwark against white racism in society.
Postmodern Feminism criticizes the concept as it rests on the binary distinction between men and women, the existence of which is open to question today.
Much contemporary research focuses on discourse and how language can reproduce patriarchy. For example, Case and Lippard (2009) analysed jokes, arguing that they can perpetuate patriarchal relations, although Feminists have developed their own 'counter-jokes' to combat them. They conclude that humor can act as a powerful ideological weapon.
Alienation
Working definition: the separation or estrangement of human beings from some essential aspect of their nature, or from society, often resulting in feelings of powerlessness or helplessness.
Today, the concept of alienation has become part of ordinary language, much used in the media. We may be told, for example, that whole groups are becoming alienated from society, or that young people are alienated from mainstream values. Such usage conveys a feeling of separation of one group from society, but the concept has traditionally been used in sociology, mainly by Karl Marx, to express a much more profound sense of estrangement than most contemporary usage suggests.
Origins of the concept
Sociological usage of the term stems from Marx's concept of alienation, which he used to analyse the effects of capitalism on the experience of work in particular, and on society more generally.
Marx developed his theory of alienation from Feuerbach's philosophical critique of Christianity. Feuerbach argued that the concept of an all-powerful God, a spiritual being to whom people must submit in order to reach salvation, was a human construction: the projection of human power relations onto a spiritual being. Christianity effectively disguised the fact that it was really human power relations which kept the social order going, rather than some higher spiritual reality, thus alienating people from the 'truth' of how power was really maintained.
Marx applied the concept of alienation to work in industrial capitalist societies, arguing that emancipation for workers lay in wresting control away from the small, dominant ruling class.
Later, Marxist-inspired industrial sociologists used the concept to explore working relations under particular management systems in factories.
Marx’s historical materialist approach began with the way people organise their affairs together to produce goods and survive. For Marx, to be alienated is to be in an objective condition which has real consequences; to change it, we need to actually change the way society is organised rather than merely changing our perception of it.
Work in the past may well have been more physically demanding, but Marx argued that it was also less alienating because workers (craftsmen for example) had more control over their working conditions, work was more skilled and it was more satisfying, because workers could ‘see themselves in their work’.
However, in 19th century industrial factories, workers effectively had no control over what they were doing, their work was unskilled and they were effectively a ‘cog in a machine’, which generated high levels of alienation – or feelings of powerlessness, helplessness, and of not being in control.
It doesn’t take too much of a leap to apply this analysis to late-modern working conditions – in fast food outlets such as McDonald’s or call centers, for example.
Marx’s theory suggests capitalist production creates alienation in four main areas:
Workers are alienated from their own labour power – they have to work as and when required and to perform the tasks set by their employers.
They are alienated from the products of their labour, which are claimed by capitalists and sold on the market for profit, while workers receive only a fraction of this value back as wages.
Workers are alienated from each other – they are encouraged to compete with each other for jobs.
They are alienated from their own species being – according to Marx, satisfying work is an essential part of being human, and capitalism makes work a misery, so work under capitalism thus alienates man from himself. It is no longer a joy, it is simply a means to earn wages to survive.
Marx’s well known (but much misunderstood) solution to the ills of alienation was communism – a way of organizing society in which workers would have much more control over their working conditions, and thus would experience much less alienation.
Marx’s concept of alienation was very abstract and linked to his general theory of society, with its revolutionary conclusions, and as such it was not especially easy to apply in social research.
However, in the 20th century some sociologists stripped the concept of its theoretical origins in order to make it more useful for empirical research.
One example is Robert Blauner’s ‘Alienation and Freedom’ (1964), in which he compared the alienating effects of working conditions in four industries, focusing on workers’ experience of four key aspects of alienation: powerlessness, meaninglessness, isolation and self-estrangement.
Blauner developed ways of measuring these different types of alienation incorporating the subjective perceptions of the workers themselves, arguing that routine factory workers suffered the highest levels of alienation. However, he found that when production lines became automated, workers felt less alienated as they had more control over their working conditions.
Blauner’s work ran counter to existing theory that technological innovation and deskilling would lead to ever greater levels of alienation. It also suggested alienation could be reduced without destroying capitalism.
While the collapse of Communism suggests that Marx’s general theory of alienation is no longer relevant, many firms today seem to have taken on board some aspects of the theory. For example, it is well established that increasing worker representation and participation reduces worker ‘alienation’, as outlined in the Taylor Review of Modern Working Practices. Another example of how firms combat alienation is the way various media and tech companies design work spaces to be ‘homely and comfortable’.
Other sociologists have attempted to apply the concept of alienation to criminology (Smith and Bohm, 2008) and even the study of health and illness (Yuill 2005).
Giddens and Sutton (2017) Essential Concepts in Sociology