Look at the top 10 countries, the bottom 10 countries, and look at ten in the middle.
NB you may need to screen out certain anomalous countries, such as islands with very small populations.
Using your own knowledge, and further research on these countries if necessary, try to find out if any of the above three groups (top 10, middle 10, bottom 10) have anything in common.
Can you come up with a theory for why some countries are more gender equal than others?
Why do some countries perform better in the PISA tests than others?
The Programme for International Student Assessment assesses students from dozens of countries on their ability in maths, reading and science. All students sit the same test, and so we get national league tables as a result.
This is the hub page for the 2018 PISA results (results are only released every three years). Have a look at the countries at the top of the league tables compared to those at the bottom – can you think of a theory for why students in some countries do better than students in others?
The strengths and limitations of autobiographies as a source of data
Whether they have a readership of millions or tens, autobiographies are selective in the information they provide about the life of the author.
They thus tell you what the author wants you to know about themselves and their life history.
However, you have no way of knowing whether the events outlined in an autobiography actually happened as described, and I wouldn’t even trust an autobiography to give an accurate view of the author’s own interpretation of what the most significant events in their life history were.
The author may exaggerate certain events, either because they mis-remember them, or because they want their book to sell, thus they are selecting what they think their audience will want to read.
In some cases, events may even be fabricated altogether.
As a rule, I’d say that the more famous someone is, the less valid the contents of their autobiography are likely to be. An exception to this might be less famous ‘positive thinking’ lifestyle gurus, whose income may depend more on their book sales than that of really famous people, who could possibly afford to be honest in their autobiographies!
Either way, there are so many reasons why an autobiography might lack validity, I wouldn’t trust the content of any of them – think about it, how honest would you be in your autobiography, if you knew anyone could read it?
Using autobiography sales data may be more useful…
IMO the value of autobiographies lies in telling us what people want to hear, not necessarily in getting to the truth of people’s personal lives.
If you want to know what people want to hear, look at the sales volumes – there are really no surprises.
It is possible to analyse qualitative social media data to reveal social trends in attitudes.
Twitter recently released an analysis of the content of 4 billion tweets made over the past three years, from users based in the United States. (Source)
They claim that the content of tweets reveals that the U.S. population has become increasingly interested in six major cultural themes over the last four years (from 2016 to 2020).
The fastest growing theme Twitter users are talking about is ‘creator culture’, with people tweeting about products they create and sell in order to make a living…
Tweets about ‘Creator Culture’ are up 462% – which includes tweets about creative currency, ‘hustle life’ and connecting through video.
Tweets about ‘One Planet’ are up 285% – includes tweets on the themes of the ethical self, sustainability, and clean corporations.
Tweets about ‘Well Being’ are up 225% – digital monitoring, holistic health and being well together.
Tweets about ‘Tech Life’ are up 166% – blended realities, future tech and ‘tech angst’.
Tweets about ‘My Identity’ are up 167% – fandom, gender redefined and ‘representing me’.
Tweets about ‘Everyday Wonders’ are up 161% – a theme which includes DIY spirituality, awe of nature and cosmic fascination.
The 2020 report by Twitter (here) was produced for marketing purposes, but it nonetheless reveals what Twitter users are becoming increasingly interested in, and there are no real surprises here.
The report is broken down into several sections, which include the nice infographics I’ve put up in this post; there are many more available in the report itself.
Intuitively I’m not surprised to see any of the above trends emerging from this analysis – I’m sure that as a population as a whole, we are generally more interested in all of the above in 2020, compared to 2016.
The limitations of using Twitter data to reveal cultural trends
There may be a lot of data, but there are possible problems with representativeness – Twitter users tend to be younger and more educated than the wider population. (Source).
There’s also a problem with the motivations behind the data being collected – this was done for marketing purposes, to be useful to companies wishing to advertise on Twitter – so this analysis wouldn’t show any of the more negative trends which may have been tweeted about.
A limitation of the way this data is published is that we’re not told the raw numbers – so we know how much more a particular trend is being tweeted about in percentage terms, but we don’t know the actual numbers. Some of these themes may have started from a very low base in 2016, in which case a 250% increase over four years still wouldn’t be that significant!
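To see why the missing raw numbers matter, here’s a quick illustrative sketch – the tweet counts below are entirely made up, not Twitter’s actual figures:

```python
# Made-up numbers (not Twitter's real counts) showing why percentage
# growth figures can mislead without the raw base numbers.

def pct_increase(old, new):
    """Percentage increase going from old to new."""
    return 100 * (new - old) / old

# a niche theme starting from a tiny base in 2016...
niche_2016, niche_2020 = 1_000, 3_500
# ...versus a mainstream theme starting from a huge base
big_2016, big_2020 = 10_000_000, 12_000_000

print(pct_increase(niche_2016, niche_2020))   # 250.0 (but only +2,500 tweets)
print(pct_increase(big_2016, big_2020))       # 20.0 (but +2,000,000 tweets)
```

The niche theme shows the far more impressive percentage, even though in absolute terms it is a tiny change – which is exactly why the report’s headline percentages, on their own, tell us less than they appear to.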
This analysis paints Twitter as a wholly positive place where people are full of wonder and fascination, and are creative and positive. In reality we all know there’s a darker side to Twitter!
Personal documents are those which are intended only to be viewed by oneself or by intimate relations, namely friends or family. They are generally (but not always) not intended to be seen by a wider public audience.
For the purposes of A-level sociology, the two main types of personal document are diaries and personal letters.
Today, I’m inclined to include personal emails and certain intimate chat groups – such as circles of close friends chatting on WhatsApp – in this definition, because the data produced here reveals personal thoughts and feelings, and isn’t intended for wider public consumption.
I think we can also include some personal blogs and vlogs in this definition, as some of these do reveal personal thoughts and feelings, even if they are written to be viewed by the general public – people sharing aspects of their daily lives on YouTube, or people writing more focused blogs about their travel experiences or how they are coping with critical illnesses, all have something of the ‘personal’ about them.
We could also include ‘naughty photos’ intended only to be shared with an intimate partner, but I think I’ll leave an analysis of those kind of documents out of this particular post!
Just a quick note on definitions – I think you need to be careful with the distinction between personal and private documents.
Personal documents = anything written which reveals one’s personal thoughts and feelings. These can either be written for consumption by oneself, by close others, or sometimes for public consumption.
Private documents – these are simply not intended to be viewed by a wider public audience, and can include someone’s personal diary or intimate letters/ photos between two people, but company accounts and strategy can also count as private documents, even if shared by several dozens of people, if not intended for consumption by a wider audience.
As with all definitions, just be clear what you’re talking about.
To be safe, for the sake of getting marks in an A-level sociology exam question on the topic, personal diaries and intimate letters are certainly both types of personal document.
Examples of sociological research using Personal Documents
Thomas and Znaniecki, The Polish Peasant (1918/ 1921)
Ozana Cucu-Oancea argues that this remains the most significant work using personal documents in the history of the social sciences (source).
The study used a range of both personal and public documents, and the former included hundreds of letters between Polish immigrants and their families back home in Poland, as well as several personal diaries.
In all, the work consisted of 2,200 pages in five volumes, so it’s pretty extensive, focusing on the cultural consequences of Polish migration.
The documents touched on themes such as crime, prostitution and alcoholism, and the problem of social happiness in general.
What was significant about this study from a theoretical point of view is that it put the individual at the centre of social analysis and stood in contrast to Positivism which was popular at that time.
The limitations of using personal documents in social research
There is a problem of interpretation. The researchers might misinterpret the meaning of the documents. The less contextual information the researchers have, the more likely this is to happen.
Practically it takes a long time to sift through and organise the information.
Who cares? Let’s face it, are you really going to go and read a 2,200-page work analysing letters from Polish immigrants, written over 100 years ago?
Voices of Guinness: An Oral History of the Park Royal Brewery (2019) is a recent academic work by Tim Strangleman which explores the experience of work in one Guinness factory from the 1940s to the early 2000s.
The research took place over several years and consists of oral histories (presumably based on in-depth structured, or even unstructured interviews) with people who used to work in the factory and the use of a range of secondary documents such as photos, pictures and the Guinness factory magazine.
Strangleman puts together a kind of collage of life histories to present various stories about how workers made sense of going to work: what work meant to them and how they coped with its challenges.
This is a useful example of ‘work in modernity’ – Strangleman describes how the Guinness company established a kind of ‘industrial citizenship’ – their aim was to build workers who were fully rounded humans with a sense of ownership over their work, a concept which may seem very alien now in the era of ‘zero hours contracts’.
For the most part, the workers in the 1940s to 1970s at least bought into this – they felt at home in the workplace and, because of this, they felt able to criticize the management: a situation which may have been uncomfortable for the management, but which helped keep the workers happy enough.
In the 40s-60s – leisure was broadly focused around the factory and with work colleagues – there were several social clubs such as sports clubs, even theatre clubs, but this started to change in the 1960s when rising incomes led to more privatised forms of leisure.
The workers in modernity also expected to be employed for life, which marks one of the most notable changes to date – most students today don’t want a job for life, and you can see the idea of ‘temporary employment’ built into the modern-day site of the factory. NB the Guinness factory is now closed; it has been replaced with logistics warehouses, the kind of temporary structures which stand in contrast with the more permanent nature of work in modernity.
This is an excellent study to show what work used to be like in Modernity, and as Strangleman says, it reminds us what we have lost in Postmodernity.
It’s also interesting to contrast how the solidness of the factory then ties in with the stable idea of ‘jobs for life’ whereas now people no longer expect or even want jobs for life, we see more temporary buildings forming the basis for working class jobs, most obviously the prefab Amazon warehouses.
Official statistics are numerical data collected by governments and their agencies. This post examines a range of official statistics collected by the United Kingdom government and evaluates their usefulness.
The aim of this post is to demonstrate one of the main strengths of official statistics – they give us a ‘snap shot’ of life in the U.K. and they enable us to easily identify trends over time.
Of course, the validity and thus the usefulness of official statistics varies enormously between different types of official statistic, and this post also looks at their relative strengths and limitations. Some of these statistics are ‘hard statistics’: they are objective, and there is little disagreement over how to measure what is being measured (the number of schools in the U.K., for example). Others are ‘softer statistics’, because there is more disagreement over the definitions of the concepts being measured (the number of pupils with Special Educational Needs, for example).
If you’re a student working through this post: after you’ve read through the material, do the ‘U.K. official statistics validity ranking exercise’ below.
Please click on the images below to explore the data further using the relevant ONS data sets and analysis pages.
Ethnic Identity in the United Kingdom According to the U.K. 2011 Census
U.K. Census 2011 data showed us that 86% of people in the United Kingdom identified themselves as ‘white’ in 2011.
How valid are these statistics?
To an extent, ethnic identity is an objective matter – for example, I was ‘born white’ in that both my parents are/were white, all of my grandparents were white, and all of my great-grandparents were white, so I can’t really claim to belong to any other ethnic group. However, although I ticked the ‘white’ box when I did the U.K. Census, this personally means very little to me, whereas to others (probably the kind of people I wouldn’t get along with very well) their ‘whiteness’ is a very important part of their identity. There is thus a whole range of different subjective meanings attached to whichever ethnic identity box people ticked, and Census data tells us nothing about this.
Religion according to the U.K. 2011 Census
In the 2011 Census, 59% of people identified as ‘Christian’ in 2011, the second largest ‘religious group’ was ‘no religion’, which 25% of the U.K. population identified with.
Statistics on religious affiliation may also lack validity – are 59% of people really Christian? And if they really are, then what does this actually mean? Church attendance is significantly lower than 59% of the population, so the ‘Christian’ box covers everything from devout fundamentalists to people that are just covering their bases (‘I’d better tick yes, just in case there is a God, or gods?’)
The British Humanist Association presents a nice summary of why statistics on religious belief may lack validity, based largely on ‘harder’ statistics such as church attendance, which show a much lower rate of committed religious practice.
The United Kingdom Employment Rate
The employment rate is the proportion of people aged from 16 to 64 in work.
The lowest employment rate was 65.6% in 1983, during the economic downturn of the early 1980s. Employment rates for all people, for men and for women have been generally increasing since early 2012. As of December 2016, the employment rate for all people was 74.6%, the highest since records began in 1971.
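The definition above is simple arithmetic – here’s a quick sketch, with invented population figures purely for illustration:

```python
# The employment rate: people aged 16-64 in work, as a percentage of the
# whole 16-64 population. The numbers below are invented for illustration.

def employment_rate(employed, working_age_population):
    """Percentage of 16-64 year olds in work."""
    return 100 * employed / working_age_population

# e.g. 74,600 people in work out of a 16-64 population of 100,000
print(employment_rate(74_600, 100_000))  # 74.6
```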
Household Income Distribution in the United Kingdom
Household income statistics are broken down into the following three broad categories:
original income is income before government intervention (benefits)
gross income is income after benefits but before tax
disposable income is income after benefits and tax (income tax, National Insurance and council tax).
In the year ending 2016, after cash benefits were taken into account, the richest fifth had an average income that was roughly 6 times the poorest fifth (gross incomes of £87,600 per year compared with £14,800, respectively)
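The three income definitions, and the ‘roughly 6 times’ claim, are easy to sketch in code – note the benefit and tax amounts in the example below are invented purely for illustration; only the two gross figures come from the text:

```python
# The three household income measures described above.

def gross_income(original, benefits):
    """Original income plus cash benefits, before tax."""
    return original + benefits

def disposable_income(original, benefits, taxes):
    """Income after benefits and after tax."""
    return original + benefits - taxes

# illustrative only: original income of 10,000 plus 4,800 in benefits
print(gross_income(10_000, 4_800))            # 14800

# checking the 'roughly 6 times' claim with the figures from the text
richest_fifth_gross = 87_600
poorest_fifth_gross = 14_800
print(round(richest_fifth_gross / poorest_fifth_gross, 1))  # 5.9
```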
Reasons why household income data may lack validity
While measuring income does appear to be purely objective (you just add and minus the pounds), the income data above may lack validity because some people might not declare some of the income they are earning. Cash in hand work, for example, would not be included in the above statistics, and some money earned via the ‘gig economy’ might not be declared either – how many people actually pay tax on their YouTube revenue for example, or from the goods they sell on Ebay?
The United Kingdom Crime Rate
Below I discuss data from the Crime Survey for England and Wales (CSEW), which is a victim survey conducted by structured interviews with 35,000 households. It seems pointless discussing the crime rate according to police recorded crime because it’s such an obviously invalid measure of crime (and the police know it), simply because so many crimes go unreported and hence unrecorded by the police.
Latest figures from the Crime Survey for England and Wales (CSEW) show there were an estimated 6.1 million incidents of crime experienced by adults aged 16 and over based on interviews in the survey year ending December 2016.
The green dot shows the figure if we include computer based crimes and online fraud, a new type of crime only recently introduced to the survey (so it wouldn’t be fair to make comparisons over time!) – if we include these the number of incidents of crime experienced jumps up to 11.5 million.
Reasons why even the CSEW might lack validity
Even though it’s almost certainly more valid than police recorded crime, there are still reasons why the CSEW may not capture all crimes – domestic crimes may go under-reported because the perpetrator might be in close proximity to the victim during the survey (it’s a household survey), people might mis-remember crimes, and there are certain crimes the CSEW does not ask about, such as corporate crime.
The U.K. Prison Population
The average prison population has increased from just over 17,400 in 1900 to just over 85,300 in 2016 (a five-fold increase). Since 2010, the average prison population has again remained relatively stable.
Prison Population Statistics – Probably have Good Validity?
I’ve included this because it’s hard to argue with the validity of prison population statistics. Someone is either held in custody or they are not at the time of the population count (which is done weekly!) – a good example of a truly ‘hard’ statistic! This does of course assume open and due process where the law and courts are concerned.
Of course you could argue, for the sake of it, that they lack validity – what about hidden prisoners, or people under false imprisonment? I’m sure in other countries (North Korea?) the prison statistics are totally invalid, if they keep any at all!
United Kingdom Population and Migration Data
Net migration to the U.K. stood at 248 000 in 2016, lower than the previous year, but still historically high compared to the 1980s-1990s.
There are a number of reasons why UK immigration statistics may lack validity
According to this migration statistics methodology document only about 1/30 people are screened (asked detailed questions about whether they are long term migrants or not), on entering the United Kingdom, and only a very small sample of people (around 4000) are subjected to the more detailed International Passenger Survey.
Then of course there is the issue of people who enter Britain legally but lie about their intentions to remain permanently, as well as people who are smuggled in. In short, the above statistics are just based on the people the authorities know about, so while I’m not one to go all ‘moral panic’ on the issue of immigration, there is sufficient reason to be sceptical about the validity of the official figures!
You might like to rank the following ‘official statistics’ in terms of validity – which of these statistics is closest to actual reality?
Immigration statistics – Net migration in 2016 was 248 000
Prison statistics – There are just over 85 000 people in prison
Crime statistics – There were around 6 million incidents of crime in 2016
The richest 20% of households had an average income of around £87,600 in 2016
Please click the pictures above to follow links to sources…
The United Kingdom Census is a survey of every person in the United Kingdom, carried out every 10 years, the last one being in March 2011. It asks a series of ‘basic’ questions about sex, ethnicity, religion and occupation. It is the only survey which is based on a ‘total sample’ of all U.K. households. You might also like this summary – What is a Census?
Secondary data has already been collected so should be easier to use, but you have to factor in bias!
There is a huge amount of secondary data available, and it is often easier to work with than people are in primary research; however, you are limited to what is available, and you are subject to the biases of the people who produced it!
What is secondary data?
Information which has been collected previously by someone other than the researcher. Secondary data can either be qualitative, such as diaries, newspapers or government reports, or quantitative, as with official statistics such as league tables.
Strengths of using secondary data in social research
There is a lot of it! It is the richest vein of information available to researchers in many topic areas. Also, some large data sets might not exist if it wasn’t for the government collecting data.
Sometimes documents and official statistics might be the only means of researching the past.
Official statistics may be especially useful for making comparisons over time. The U.K. Census, for example, goes back to 1801.
At a practical level, many public documents and official statistics are freely available to the researcher.
Limitations of using secondary data
Official statistics may reflect the biases of those in power – limiting what you can find out.
Official statistics – the way things are measured may change over time, making historical comparisons difficult (As with crime statistics, the definition of crime keeps changing.)
Documents may lack authenticity – parts of the document might be missing because of age, and we might not even be able to verify who actually wrote it, meaning we cannot check whether it’s biased or not.
Representativeness – documents may not be representative of the wider population –especially a problem with older documents. Many documents do not survive because they are not stored, and others deteriorate with age and become unusable. Other documents are deliberately withheld from researchers and the public gaze, and therefore do not become available.
This was a brief post, for revision purposes, designed as last minute revision for the AS and A Level sociology exams.
For more detailed posts on research methods, including secondary data, please see my page on research methods.
‘Who Tweets’ is an interesting piece of recent research which attempts to determine some basic demographic characteristics of Twitter users, relying on nothing but the data provided by the users themselves in their twitter profiles.
Based on a sample of 1470 twitter profiles* in which users clearly stated** their age, the authors of ‘Who Tweets’ found that 93.9% of twitter users were under the age of 35. The full age-profile of twitter users (according to the ‘Who Tweets’/ COSMOS data) compared to the actual age profile taken from the UK Census is below:
Compare this to the Ipsos MORI Tech Tracker report for the third quarter of 2014 (which the above research draws on) which used face to face interviews based on a quota sample of 1000 people.
Clearly this shows that only 67% of Twitter users are under the age of 35, quite a discrepancy with the user-defined data!
The researchers note that:
‘We might… hypothesise that young people are more likely to profess their age in their profile data and that this would lead to an overestimation of the ‘youthfulness’ of the UK Twitter population. As this is a new and developing field we have no evidence to support this claim, but the following discussion and estimations should be treated cautiously.
Looking again at the results from the Technology Tracker study conducted by Ipsos MORI, nearly two thirds of Twitter users were under 35 years of age in Q3 of 2014 whereas our study clearly identifies 93.9% as being 35 or younger. There are two possible reasons for this. The first is that the older population is less likely to state their age on Twitter. The second is that the age distribution in the survey data is a function of sample bias (i.e. participants over the age of 35 in the survey were particularly tech-savvy). This discrepancy between elicited (traditional) and naturally occurring (new) forms of social data warrants further investigation…’
This comparison clearly shows how we get some very different data on a very basic question (‘what is the age distribution of Twitter users?’) depending on the methods we use, but which is more valid? The Ipsos face to face poll is done every quarter, and it persistently yields results which are nothing like COSMOS’s, and it’s unlikely that you’re going to get a persistent ‘tech-savvy’ selection bias in every sample of over-35-year-olds – so does that mean it’s a more accurate reflection of the age profile of Twitter users?
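Out of interest, we can put a rough number on the sampling uncertainty in the ‘Who Tweets’ figure. This sketch assumes a simple random sample and the standard binomial approximation, which a sample of self-declared profiles almost certainly doesn’t satisfy – so treat it as a back-of-the-envelope check only:

```python
# Rough 95% margin of error for the 'Who Tweets' estimate that 93.9% of
# Twitter users are under 35, based on n = 1,470 self-declared profiles.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p, sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe_pct = margin_of_error(0.939, 1470) * 100
print(round(moe_pct, 1))  # about 1.2 percentage points
```

Even allowing a margin of error of around ±1.2 percentage points, 93.9% is nowhere near Ipsos MORI’s 67% – which supports the idea that the gap comes from who chooses to state their age (a selection effect), not from sampling noise.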
Interestingly, the Ipsos data shows a definite drift to older users over time; it’d be interesting to know if more recent COSMOS data reflects this. More interestingly, the whole point of COSMOS is to provide us with more up-to-date, ‘live’ information – so where is it?!? Sort of ironic that the latest public reporting is already 12 months behind good old Ipsos.
At the end of the day, I’m not going to be too harsh about the above ‘Who Tweets’ study, it is experimental, and many of the above projects are looking at the methodological limitations of this data. It would just be nice if they, err, got on with it a bit… come on Sociology, catch up!
One thing I am reasonably certain about is that the above comparison certainly shows the continued importance of terrestrial methods if we want demographic data.
Of course, one simple way of checking the accuracy of the COSMOS data is simply to do a face-to-face survey and ask people what their age is and whether they state it in their Twitter profiles; then again, I’m sure they’ve thought of that… maybe in 2018 we’ll get a report?