Changes in the way we interact and communicate lead to changes in the way we govern ourselves. Just as the invention of the printing press resulted in the evolution of copyright and libel laws, so the emergence of big data will result in new laws to govern the new ways in which this information is collected, analysed and utilized.
In this final chapter of the main section of Viktor Mayer-Schonberger and Kenneth Cukier’s (2017) ‘Big Data’: The Essential Guide to Life and Learning in the Age of Insight, the authors suggest four ways in which we might control the use of big data in the coming years.
Firstly, the authors suggest we will need to move from ‘privacy by consent’ to ‘privacy by accountability’. Because the old consent-based privacy laws don’t work in the big data age (see here for why), we will effectively have to trust companies to make informed judgements about the risks of re-purposing the data they hold. If they deem there to be a significant risk of harm to people, they may have to administer a second round of ‘consent of use’; if the risk is very small, they can just go ahead and use the data.
It is also possible to deliberately blur data so that it becomes fuzzy and individuals cannot be picked out of it – for instance by setting analytical programmes to return aggregate results only, with a controlled amount of random noise added – an approach known as differential privacy.
Comment: NB – this sounds dubious – we just trust companies more… the problem here being that we can only really trust them to do one thing: put their profits before everything else, including people’s privacy rights.
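To make the differential privacy idea mentioned above more concrete, here is a minimal sketch of its most common mechanism: answer only aggregate queries, and add random (Laplace-distributed) noise calibrated so that no single individual's presence or absence noticeably changes the result. The function names and the toy data are my own illustration, not anything from the book.

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via the inverse CDF.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Return a differentially private count of records matching predicate.

    A count query has 'sensitivity' 1: adding or removing one person
    changes the true answer by at most 1, so Laplace noise with scale
    1/epsilon hides any individual's contribution. Smaller epsilon means
    more noise and therefore stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy example: how many people in the dataset are over 40?
people = [{"age": 30}, {"age": 45}, {"age": 60}]
noisy_answer = private_count(people, lambda r: r["age"] > 40, epsilon=0.5)
```

The analyst only ever sees `noisy_answer`, never the underlying records, which is the sense in which the data has been made "fuzzy" while remaining useful in aggregate.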
Secondly, we will need to ensure that we do not judge people based on propensity by aggregate. In the big data era of justice, we need to hold people to account for their individual actions – i.e. for what they have actually done as individuals, rather than for what the big data says people like them are likely to do.
Comment: NB – all the authors seem to be saying here is that we carry on doing what we already do (in most cases at least!)
Thirdly – and this stems from the problem that big data can be something of a ‘black box’, in that the number of variables which go into making up predictions, and the algorithms which calculate them, defy ordinary human understanding – we will need a new class of experts called algorithmists to be on hand to analyse big data findings if and when individuals feel wronged by them. The authors argue that these experts will take a ‘vow of impartiality’ in monitoring and reviewing the accuracy of big data predictions, and they see a role for both internal and external algorithmists.
Comment: this doesn’t half sound like something Auguste Comte, the founding father of Positivism, would say!
The authors argue this is just the same as the way new specialists emerged in law, medicine and computer security as these fields developed in complexity.
Fourthly and finally, the authors suggest we will need to develop new anti-trust laws to ensure that no one company comes to have a monopoly on data.
Comment: Fair enough!
I detect a distinct pro-market tone in the authors’ analysis of big data – basically we trust companies to use it (so long as they avoid monopoly power), but we mistrust governments – precisely what you’d expect from the Silicon Valley set!