
Why is the NHS in Crisis? Yes, it’s neoliberalism – AGAIN!

The Daily Mail and its Tory beneficiaries would have you think that the current crisis within the NHS is caused mainly by a combination of the following factors:

  • Winter Viruses
  • Inefficiency
  • Immigrants
  • Lazy Staff
  • Drunks

HOWEVER, this is not the case according to some more in-depth analysis by Ravi Jayaram, an NHS consultant (writing in The Guardian), who instead blames several years of chronic underfunding by the Tory government, which has had the following effects:

  • Firstly, primary care services have been decimated by funding cuts; as a result there are fewer GPs per patient, and so people feel they have to go to A&E rather than seeking help from their local GP.
  • Secondly, the recent conflict over junior doctors’ pay and the removal of the nurses’ bursary has soured morale in the NHS, with those who are able to do so retiring early or leaving the country, meaning that the staff left behind struggle to provide safe and effective care.
  • Thirdly, whole wards of some hospitals have been closed by hospital trusts in order to stay in the black, meaning there is a decrease in the supply of beds.

NB – all of this has been going on while, as is well known, demand for NHS services from an ageing population is increasing!

And the deeper cause of all of this… well, it’s a blinkered commitment to a neoliberal ideology which champions lower taxation and tight control of public spending…


Big Data: Controlling its Use

Changes in the way we interact and communicate lead to changes in the way we govern ourselves. Just as the invention of the printing press resulted in the evolution of copyright and libel laws, so the emergence of big data will result in new laws to govern the new ways in which this information is collected, analysed and utilised.

In this final chapter of the main section of Viktor Mayer-Schonberger and Kenneth Cukier’s (2017) ‘Big Data: The Essential Guide to Work, Life and Learning in the Age of Insight’, the authors suggest four ways in which we might control the use of big data in the coming years…

Firstly, the authors suggest we will need to move from ‘privacy by consent’ to ‘privacy by accountability’. Because old consent-based privacy laws don’t work in the big data age (see here for why), we will effectively have to trust companies to make informed judgements about the risks of re-purposing the data they hold. If they deem there to be a significant risk of harm to people, they may have to administer a second round of ‘consent of use’; if the risk is very small, they can just go ahead and use it.

It is also possible to deliberately blur data so that it becomes fuzzy and you cannot see individuals in it – for example by setting analytical programmes to return aggregate results only – an approach known as differential privacy.
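To make this concrete, here is a minimal sketch in Python of the standard textbook approach to differential privacy (the ‘Laplace mechanism’). This is my own illustration rather than anything from the book, and the customer records are invented:

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Return a deliberately 'fuzzy' count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one individual
    changes it by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy. Smaller epsilon = more noise = more privacy.
    """
    true_count = sum(1 for record in records if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical aggregate query over invented customer records:
customers = [{"age": 23}, {"age": 41}, {"age": 19}, {"age": 57}, {"age": 33}]
print(private_count(customers, lambda r: r["age"] < 25))  # e.g. 2.7, not exactly 2
```

The analyst only ever sees a noisy aggregate, so you cannot tell whether any particular individual is in the data, while the overall statistics remain roughly accurate.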

Comment: NB – this sounds dubious – we just trust companies more… the problem here being that we can only really trust them to do one thing: put their profits before everything else, including people’s privacy rights.

Secondly, we will also need to ensure that we do not judge people based on propensity by aggregate. In the big data era of justice, we need to hold people to account for their individual actions – i.e. for what they have actually done as individuals, rather than what the big data says people like them are likely to do.

Comment: NB – all the authors seem to be saying here is that we should carry on doing what we already do (in most cases at least!)

Thirdly (and this stems from the problem that big data can be something of a ‘black box’ – that is to say, the number of variables which go into making up predictions, and the algorithms which calculate them, defy ordinary human understanding) – we will need a new series of experts called ‘algorithmists’ to be on hand to analyse big data findings if and when individuals feel wronged by them. The authors argue that these will take a ‘vow of impartiality’ in monitoring and reviewing the accuracy of big data predictions, and they see a role for both internal and external algorithmists.

Comment: this doesn’t half sound like something Auguste Comte, the founding father of positivism, would say!

The authors argue this is just the same as the new specialists who emerged in law, medicine and computer security as those fields developed in complexity.

Fourthly and finally, the authors suggest we will need to develop some sort of new anti-trust laws to ensure that no one company comes to have a monopoly on data.

Comment: Fair enough!

Overall Comment 

I detect a distinct pro-market tone in the authors’ analysis of big data – basically, we should trust companies to use it (while avoiding monopoly power) but mistrust governments – precisely what you’d expect from the Silicon Valley set!


Tax avoidance – supporting evidence for the Marxist Perspective on Crime

One of the key ideas of Marxist criminologists is that the law is made by the property-owning capitalist class and serves their interests.

(NB You might like to review the perspective by reading this long-form post on the Marxist theory of crime more generally before continuing…)

The issue of tax avoidance – legally bending the rules to avoid paying tax – is one of the best examples of how the legal system surrounding tax is structured in such a way as to allow the wealthy to set up ‘shell companies’ in tax havens and avoid paying tax on their income and investments…

Such methods can only ever benefit the rich, as you need to be quite wealthy to afford the legal and accountancy fees involved, so they are not really available to average, or even moderately high-income, individuals.


To my mind, the most notorious example of a tax avoider from 2017 was Lewis Hamilton, who used the offshore method to get a £3 million VAT rebate on his £16 million private jet.

The Lewis Hamilton story was revealed as part of the ‘Paradise Papers’ leak – 13.4 million documents from offshore legal service providers such as Appleby, covering seven decades from 1950 to 2016. Tax-dodging is a very common practice among the wealthy!

Focussing on Corporate Tax Dodgers rather than individuals…

Corporate Tax Dodgers: the UK’s Worst Offenders – this article lists Google and Gary Barlow (or rather the corporate entity ‘Take That’) as among the UK’s worst tax-dodgers, although it doesn’t distinguish between tax evasion (which is illegal) and tax avoidance (which isn’t)… I especially love the fact that it was put together (as basically an advert) by an accountancy firm in the North East of England – one of England’s poorest regions, and thus the one most likely to suffer from government revenue lost to tax dodging.

On a similar theme, this Daily Mail article outlines with more clarity the corporations avoiding tax – including some very big names such as Caffè Nero and Vodafone, and LOTS more!



Validity in Social Research

Validity refers to the extent to which an indicator (or set of indicators) really measures the concept under investigation. This post outlines five ways in which sociologists and psychologists might determine how valid their indicators are: face validity, concurrent validity, convergent validity, construct validity, and predictive validity.

As with many things in sociology, it makes sense to start with an example to illustrate the general meaning of the concept of validity:

When universities question whether or not BTECs really provide a measure of academic intelligence, they are questioning the validity of BTECs as a measure of the concept of ‘academic intelligence’.

When academics question the validity of BTECs in this way, they might be suspicious that BTECs are actually measuring something other than a student’s academic intelligence; BTECs might instead be measuring a student’s ability to cut, paste and modify just enough to avoid being caught out by plagiarism software.

If this is the case, then we can say that BTECs are not a valid measurement of a student’s academic intelligence.

How can sociologists assess the validity of measures and indicators?


There are a number of ways of testing measurement validity in social research:

  • Face validity – on the face of it, does the measure fit the concept? Face validity is simply achieved by asking others with experience in the field whether they think the measure seems to be measuring the concept. This is essentially an intuitive process.
  • Concurrent validity – to establish the concurrent validity of a measure, researchers simply compare the results of one measure to another which is known to be valid (known as a ‘criterion measure’). For example, with gamblers, betting accounts give us a valid indication of how much they actually win or lose, but differently worded questions designed to measure ‘how much they win or lose in a given period’ can yield vastly different results. The questions whose results come closest to the hard financial statistics can be said to have the highest degree of concurrent validity (see the sketch after this list).
  • Predictive validity – here a researcher uses a future criterion measure to assess the validity of existing measures. For example, we might assess the validity of BTECs as a measure of academic intelligence by looking at how well BTEC students do at university compared to A-level students with equivalent grades.
  • Construct validity – here the researcher is encouraged to deduce hypotheses from a theory that is relevant to the concept. However, there are problems with this approach as the theory and the process of deduction might be misguided!
  • Convergent validity – here the researcher compares her measure to measures of the same concept developed through other methods. Probably the most obvious example of this is using the British Crime Survey to test the validity of police-recorded crime statistics. The BCS shows us that different crimes, as recorded by the police, have different levels of convergent validity – relatively high for vehicle theft, relatively low for vandalism, for example.
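To illustrate the gambling example above, here is a minimal sketch in Python of how concurrent validity might be checked in practice. This is my own illustration, not Bryman’s, and all the figures are invented: we correlate two differently worded survey questions with the ‘criterion measure’ of actual betting-account records.

```python
import numpy as np

# Invented data: actual monthly gambling losses (£) from betting accounts
# (the criterion measure), plus self-reports from two survey questions.
account_records = np.array([120, 45, 300, 80, 210, 15])
question_a = np.array([110, 50, 280, 90, 200, 20])   # neutral wording
question_b = np.array([200, 40, 100, 150, 90, 60])   # leading wording

# Concurrent validity: how strongly does each question track the criterion?
r_a = np.corrcoef(question_a, account_records)[0, 1]
r_b = np.corrcoef(question_b, account_records)[0, 1]
print(f"Question A: r = {r_a:.2f}")  # close to 1: high concurrent validity
print(f"Question B: r = {r_b:.2f}")  # much lower: poor concurrent validity
```

The question whose answers correlate most strongly with the criterion measure is the one with the highest concurrent validity.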

Source 

Bryman (2016) Social Research Methods



Playing the SENCO Game…

According to the latest Department for Education data, the number of pupils receiving extra time in exams in England and Wales has increased by 35.8% since 2013/14.

However, at the same time there has been a 20.4% decrease in pupils identified as having Special Educational Needs (SEN).

This represents a real-terms four-year increase of 51.2% in pupils receiving extra time, relative to those pupils identified as SEN (which should give us an indication of the underlying ‘pool’ of pupils who are potentially eligible for extra time).

Here are the statistics (full sources below):

[Chart: SEN pupils]

So what’s going on here? How do we explain this?

This Telegraph article points to the fact that a disproportionate amount of the increase in pupils receiving extra time is driven by kids (or rather parents) in independent schools… they are twice as likely to receive extra time as kids in state-funded schools.

This alone has to push you towards a combination of cultural capital theory and labelling theory to explain what’s going on here – it’s extremely unlikely that kids in independent schools have objectively (i.e. really) suddenly become more in need of extra time relative to kids in state schools – and, as the article alludes to, it’s probably down to middle-class parents getting their kids assessed for extra time (and maybe those kids gaming the system?).

NB – the number of kids in state schools receiving extra time in exams has also increased, but not as fast as those in independent schools. (Might be interesting to subject this to regional analysis to see if it’s linked to income?)

VERY INTERESTINGLY, if you dig into the Access Arrangements data below, this aspect of the data no longer appears in the DfE release (I assume it did once, otherwise said article wouldn’t have been written).

As to the increasing number of kids receiving extra time AT THE SAME TIME AS A DECREASE IN KIDS WITH SEN – this might reflect a polarisation – i.e. overall there are fewer kids with any SEN, but more kids with ‘more serious’ SEN that require such exam concessions…

HOWEVER, once you dig even deeper into the stats below, what do you find…

Statemented kids are on the increase within state-funded schools (where you get extra funding, such as the Pupil Premium, for taking on statemented kids), while non-statemented SEN kids are on the decrease (you don’t get funding for them, but you do have to spend school resources on them to keep OFSTED happy).

In independent schools, by contrast, statemented kids are on the decrease, while non-statemented kids are on the increase. How do we explain the difference? These schools don’t get extra money for taking on statemented SEN kids like state schools do, while they can get their kids extra time by doing their own ‘in-house’ SEN assessments.

NB – this is only one possible interpretation, and I’m prepared to stand corrected if anyone wants to pull me up on my less than perfect understanding of SEN funding and access arrangement policy!

Sources of Data

SEN data

https://www.gov.uk/government/statistics/special-educational-needs-in-england-january-2017

Access Arrangements

https://www.gov.uk/government/statistics/access-arrangements-for-gcse-and-a-level-2016-to-2017-academic-year

Telegraph Article

http://www.telegraph.co.uk/education/2017/11/30/one-six-children-now-given-extra-time-public-exams-official/


What is Reliability?

Reliability refers to the consistency of a measure of a concept. There are three factors researchers generally use to assess whether a measure is reliable:

  • Stability (aka test-retest reliability) – is the measure stable over time, or do the results fluctuate? If we administer a measure to a group and then re-administer it later and there is little variation in the results, the measure can be said to have test-retest reliability.
  • Internal reliability – are the indicators which make up the scale or index of a measurement consistent? If the scores respondents achieve according to one indicator of a measure are consistently related to the scores they achieve according to the other indicators of that same measure, then the measure can be said to have internal reliability (see the sketch after this list).
  • Inter-rater reliability – how much agreement is there over which observed empirical phenomena fit which indicator? If researchers have a high level of agreement over how observed behaviour ‘maps onto’ the indicators of a measure, then we can say the measure has a high level of inter-rater reliability.
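As a concrete illustration of internal reliability, here is a minimal sketch in Python computing Cronbach’s alpha, the statistic most commonly used for this. The sketch and the respondent data are my own invention, not from Bryman:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items. Values near 1 suggest the items are
    consistently measuring the same underlying concept.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items in the scale
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented data: 5 respondents answering a 4-item attitude scale (scored 1-5).
scores = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # ~0.95: internally reliable
```

A common rule of thumb is that an alpha of 0.7 or above indicates acceptable internal reliability, though the exact threshold is a matter of judgement.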

Three ways to assess reliability

Source

Bryman, Alan (2016) Social Research Methods

 


‘Station’ based lessons for A level sociology

Station-based lessons are those in which the teacher sets up a number of different (and differentiated) tasks on different tables in the classroom, and students spend a set time at each table, moving from task to task.

I find these are most useful at the very beginning of the Winter and Easter terms, after students have done sufficient sociology to enable them to work through said tasks largely on their own, with the teacher acting only as a facilitator…

This is precisely what I’ll be doing with my Upper sixth groups when I face the horror and terror of going back to school on Thursday…. Station lessons make things a little easier…

Here’s one to try out, based on recapping consensus theories of crime and deviance, links to the resources are below.

Overview plan:

  • students spend about 30-40 minutes working through the five stations, 5-7 minutes at each.
  • students spend about 20-30 minutes ‘writing up’ the answers in the attached booklets.

Resources 

  • Consensus Theories of Crime Recap Lessons.
  • Whiteboard for station 1
  • A3 photocopies of pages 2-4 above for stations 2, 3, and 5.
  • Card sorts for station 4 (I don’t have these to hand, but you simply need cards with concepts, pictures and perspectives – this is more of a general recap than a consensus-theories-of-crime recap).

Station 1: Whiteboard Station (AO1 – Knowledge)

  • Explain one of the consensus theories of crime in picture form – you may also use up to three words.

Station 2: Concepts Station (AO1 – Knowledge)

  • Research and write in the definitions for two or three of the concepts
  • If you finish, add in an example or piece of supporting evidence which illustrates the concept

Station 3: Data Response Station (AO2 – Application)

  • Read the item, then for one theory write in how that theory would explain the case study in the item. 

Station 4: Card Game Station (AO3 – Analysis)

  • Game 1: Shuffle the concepts and theories cards – pick two (or three!) at random, suggest a link between them.
  • Game 2: Rank the ‘case studies cards’ – rank them in order of how well they support your assigned theory. 

Station 5: Evaluation Station (AO3 – Evaluation)

  • Add in as many evaluation points as possible for one theory
  • If you finish, then add in counter-evaluation to the previous evaluations of theories

Further comments

There’s not a lot else to say really… this was just a New Year’s post for all the sociology teachers out there, happy new year!