
Last Friday I held my inaugural lecture as Professor of Privacy, Cybersecurity, and IT Contract Law at the Faculty of Law – European Centre for Privacy & Cybersecurity at Maastricht University. During my lecture I introduced the research activities that I would like to further develop at Maastricht University in a multidisciplinary setting that includes law, technology, data science, social science and economics, leveraging both my academic background and my practical experience as a business lawyer.
My lecture touched specifically on the economy of personal data and on issues and observations concerning personal data and data protection as a part of Corporate Social Responsibility. My objective in this area is to trigger virtuous competition between companies concerning data protection, creating an environment that identifies and promotes data protection as an asset (as opposed to merely using “data” as an asset) and helping those companies, through their adherence to the principles of data protection, to responsibly further their economic targets.
Data is the driving factor in the paradigmatic change from a product-based to a service-based economy that we are currently witnessing, where the processing and analysis of personal information lie at the core of what we call the data-centric economy. In this context, the European Commission has recently highlighted that
“Data is rapidly becoming the lifeblood of the global economy. It represents a key new type of economic asset. Those that know how to use it have a decisive competitive advantage in this interconnected world, through raising performance, offering more user-centric products and services, fostering innovation – often leaving decades-old competitors behind.”
(European Commission, Enter the Data Economy – EU Policies for a Thriving Data Ecosystem, 11 Jan 2017)
Data are now a valuable resource for many companies and, with the proper technology, companies can harness large amounts of information in order to draw relevant conclusions regarding past and current activities, as well as make predictions about the future (i.e., Big Data & Analytics). This can potentially allow them to better develop, advertise and sell products and services. In the words of the European Commission:
“The main advantage of Big Data is that it can reveal patterns between different sources and data sets, enabling useful insights. Let’s think for example of health, food security, intelligent transport systems, energy efficiency and urban planning. These ultimately allow higher productivity and improved services, which are the source of economic growth. The use of Big Data by the top 100 EU manufacturers could lead to savings worth €425 billion and, by 2020, Big Data Analytics could boost EU economic growth by an additional 1.9%, which means a GDP increase of €206 billion.”
(European Commission, The EU Data Protection Reform and Big Data – Factsheet, Jan 2016)
It cannot be ignored that the five most valuable companies in the Fortune 500 – Apple, Amazon, Alphabet, Microsoft and Facebook – are data companies. This is in stark contrast to just a few years ago, when energy companies topped the rankings.
Netflix is a good example that allows us to understand how the data economy works in practice. Netflix uses big data analysis to anticipate customer demand, mapping previous offerings against customer preferences and the success of those offerings, in order to predict the next big hits. Other examples include Amazon, which uses big data analysis, e.g., to tailor the advertisements served to online customers and to improve customer experience, and Starbucks, which uses big data analysis, e.g., to determine the potential success of each new location it plans to open, taking into account information available on traffic, demographics and local customer behaviour.
Another example is the Google search engine, which relies extensively on big data analysis to sift through millions of websites in order to locate the most relevant results it can for its users. Sometimes, however, something can go wrong with the algorithms, and this can have a huge economic impact on companies. For example, in 2017 Coca-Cola, Walmart and General Motors announced plans to suspend, or move spending away from, YouTube because ads (in some cases their own) were appearing alongside offensive content. It is noteworthy that, according to The Economist, Google’s own brand has suffered, with damage to its sales that could be as high as $1bn in 2017, or around 1% of its gross advertising revenue. Shares of its parent company, Alphabet, have fallen by around 3%.
Algorithms not only process personal data but may (increasingly with AI) take decisions that can affect people in terms of their
- financial circumstances, such as their eligibility to obtain credit (creditworthiness);
- insurance premiums (insurance risk assessment – it may lead to higher premiums);
- employment suitability, or may otherwise put individuals at a serious disadvantage (scoring), as the sketch below illustrates.
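To make this concrete, here is a minimal, purely hypothetical sketch of how such a scoring algorithm might reduce a person to a single number and an automated decision. Every feature, weight and threshold below is invented for illustration and does not reflect any real credit model.

```python
# Hypothetical toy "creditworthiness" score: all features, weights and the
# threshold are invented for illustration; real scoring systems are far more
# complex and increasingly machine-learning based.

def credit_score(monthly_income: float, existing_debt: float, missed_payments: int) -> float:
    """Collapse a few applicant attributes into a single score."""
    score = 0.0
    score += min(monthly_income / 1000.0, 10.0)  # income contributes, capped at 10 points
    score -= existing_debt / 5000.0              # outstanding debt lowers the score
    score -= 2.0 * missed_payments               # payment history weighs heavily
    return score

def decision(score: float, threshold: float = 5.0) -> str:
    """The automated decision that actually affects the data subject."""
    return "credit granted" if score >= threshold else "credit refused"

applicant = {"monthly_income": 2800, "existing_debt": 12000, "missed_payments": 1}
s = credit_score(**applicant)
print(round(s, 2), "->", decision(s))  # one number decides eligibility
```

However simplistic, the sketch shows the core concern: a data subject on the wrong side of the threshold may never learn which inputs, or which weights, produced the refusal.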
From the data protection point of view, compliance with the privacy principles is instrumental to the creation of the value that qualifies “data as an asset”. This specifically concerns data quality, purpose limitation and accountability; the provision of transparent, easily accessible and intelligible information to individuals; and the identification of an appropriate legal basis, e.g., consent, performance of a contract, legitimate interest (if applicable), etc. Therefore, it is more accurate to state that data per se are not an asset. They can only become an asset if collected and processed in a way that lawfully enables a company to extract value from them.
From the perspective of a company, it is important to have a strategic and accurate approach to data protection compliance in order to collect personal data from the start in a way that enables further lawful processing activities. Effectively, the difference for a company between dying buried under personal data and harnessing their value is directly related to privacy compliance management.
A strategic and accurate approach to data protection can generate a significant return on investment (ROI). However, what about REAL consumer/data subject protection? Two fundamental pillars of data subjects’ protection are still privacy policies and the data subject’s consent.
On privacy policies, interestingly enough, the New York Times recently published the following observations:
“[t]he average person would have to spend 76 working days reading all of the digital privacy policies they agree to in the span of a year. Reading Amazon’s terms and conditions alone out loud takes approximately nine hours. Why would anyone read the terms of service when they don’t feel as though they have a choice in the first place? It’s not as though a user can call up Mark Zuckerberg and negotiate his or her own privacy policy. The ‘I agree’ button should have long ago been renamed ‘Meh, whatever.’”
(How Silicon Valley Puts the ‘Con’ in Consent, The New York Times, Feb. 2, 2019)
When considering consent back in 2012, I came to understand that the law itself wasn’t enough and started to consider the possibility of framing data protection as corporate social responsibility. Already then, before the GDPR came into force, it was evident that by publishing a privacy policy (which nobody reads) and collecting consent (which is very easy to obtain) companies were allowed to lawfully carry out very intrusive profiling activities (capable also of producing legal effects or similarly significant effects on individuals) without data subjects effectively being aware of it. And this, unfortunately, still largely holds true.
No present or forthcoming legal framework (whether it be new competition rules or the EU’s much-discussed General Data Protection Regulation) will ever be able to effectively regulate our data-centric society while also perfectly maximising the benefits for citizens and effectively minimising risks for individuals (consumers/data subjects). Regulators and institutions can no longer be the police of the Internet.
The European Data Protection Supervisor, the European Commission and the Council of Europe have confirmed the necessity of data processing that takes ethical and value-based models into consideration, pointing towards the development of virtuous compliance that goes beyond what is strictly prescribed in the law.
More precisely, the European Data Protection Supervisor recently stressed that: “[t]he extent to which humans can enjoy their fundamental rights depends not only on legal frameworks and social norms, but also on the features of the technology at their disposal. Recent discoveries of inappropriate use of personal data have driven the public debate on data protection to an unprecedented level. It is necessary that the shaping and the use of technology takes account of the need to respect the rights of individuals, rather than being driven exclusively by economic interests of few businesses.” (Opinion 5/2018)
Examples which can help us to better understand the situation and the relevant key concepts include Cambridge Analytica and Digital Out-of-Home (DOOH) advertising. I will explain both examples in more detail below.
In 2018 the UK Data Protection Authority (the ICO) published its “Investigation into the use of data analytics in political campaigns – investigation update” report, which details Information Commissioner Elizabeth Denham’s investigation into the widespread use of data analytics in electoral campaigns. The report largely focuses on Facebook and Cambridge Analytica as targets of the investigation as a result of their failure to safeguard the information of individuals, which allowed the data of an estimated 87 million users to be harvested without their express knowledge.
The relationship between ethics and data protection can be illustrated through the Cambridge Analytica case. The profiling system originally developed by the Psychometrics Centre of Cambridge University had the potential to produce useful academic, societal and business insights concerning psychological targeting as a tool to influence behaviour, something that is undoubtedly important to understand. Such technology, and specifically its algorithms, however, also presents significant risks: it was effectively transformed into a means to influence democratic processes, undermining fundamental principles of democratic society which must be protected. Indeed, as Melvin Kranzberg wisely pointed out, technology is neither good nor bad; nor is it neutral. It is now possible to create an algorithm capable of predicting individuals’ behaviour, and the logical consequence of this technology is improved regulation on the part of legislators and responsibility on the part of governments and businesses. The duty of law makers, politicians and digital businesses is to ensure that such incredible technologies are used in a fair way, so as to respect fundamental rights and freedoms and to grant dignity to the digital society.
We are all familiar with the concepts of Data Protection by Design and by Default, as laid down in Article 25 and Recital 78 of the GDPR. They encourage organisations to build technical and organisational measures into the design of their processing operations so as to safeguard privacy and provide data protection from the start (by design). In addition, organisations must by default ensure that personal data are processed with the highest possible level of privacy protection: only necessary data are processed, data are not stored for longer than necessary, and only relevant authorised people have access to them (by default).
Data Protection by Design/Default is indeed one of the most effective ways to fully achieve compliance with the fundamental data protection principles as they are established in Article 5 of the GDPR. The time, however, has arrived to go one step further, towards a concept of “Fairness by design” where fairness relates to balanced and proportionate data processing. In line with this principle, organisations should take into account the interests and reasonable expectations of privacy of data subjects. The processing of personal data should not intrude unreasonably upon the privacy, autonomy and integrity of data subjects, and organisations should not exert pressure on data subjects to provide personal data.
Fairness goes beyond what is strictly prescribed by the law, taking into consideration an ethical dimension as discussed above. Like Data Protection by Design, it should be built into the very design of data processing activities, whether they be products, services, or applications and – most importantly – the algorithms that underpin the information/data processing should be designed and developed in a way that is compatible with the concept of “fairness by design”.
Fairness by design may be seen as a further specification of the principle of data protection by design aimed at complementing the legal with the ethical dimensions of privacy and protection of personal data for the development of a healthy and democratic digital society.
With respect to Digital Out-of-Home advertising, incredible progress has been made thanks to the digitalisation of something as simple as the billboard. Shopping malls, airports and train stations are full of strategically placed DOOH boards that provide you, the traveller or consumer, with real-time targeted messages whose impact companies are able to measure in order to drive return on investment (“ROI”).
DOOH has experienced constant revenue growth over the past years and has successfully revolutionised advertising as consumers become increasingly mobile, in terms of both devices and time spent outside the home. DOOH software is capable of monitoring how many people are in front of a display and how long they look at it, and is able to infer data such as gender and approximate age, identify features such as eyeglasses, beards and moustaches, and, perhaps most incredibly, gauge emotions, all without bystanders even knowing that their data are being collected.
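As a minimal illustration of the kind of processing involved, the sketch below simply counts the faces visible to a camera attached to a display, the basic building block of such audience measurement. It is only an assumption-laden example (the camera index and the pre-trained detector are placeholders); commercial DOOH analytics goes much further, estimating age, gender and emotion.

```python
# Illustrative sketch of DOOH audience measurement: count the faces visible
# to a display-mounted camera. The camera index and detector model are
# assumptions for the example; real systems also estimate age, gender and
# emotion from the same frames.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)   # assumed: the camera built into the billboard
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detection is a bystander whose image is analysed without them
    # ever being asked, or even made aware of it.
    print(f"People currently in front of the display: {len(faces)}")
```

Even this trivial version analyses every passer-by without their knowledge, which is precisely why the call for transparency below matters.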
Companies such as large retailers take advantage of this technology, which can help them understand local consumer behaviour. New technologies do not only create challenges to citizens’ fundamental rights and freedoms; they also create societal benefits. For example, the NHS launched a digital out-of-home campaign to highlight the life-saving power of blood donation, amplified through online and social activity including a Canvas advert for Facebook, sponsored posts on Facebook, Instagram and Twitter, and organic social activity.
More transparency is needed concerning these and similar technologies which are increasingly ubiquitous.
The point that I want to make here, however, is not about mere legal compliance or the fact that DOOH advertising is challenging citizens’ privacy and data protection rights. This is obvious. As I already mentioned, we need to go one step further!
Data protection compliance should be understood as part of Corporate Social Responsibility (“CSR”).
Corporate Social Responsibility is defined as the commitment of businesses to contribute to economic and industrial development while at the same time improving the quality of life of the workforce, families, communities, and society as a whole. The time when companies were able to consider data protection as a mere legal compliance obligation is in the past. Instead, in this data-centric world businesses need to consider privacy and data protection as assets that can help them to responsibly further their economic targets.
As the White House stressed in its 2014 Big Data and Privacy: A Technological Perspective report to President Obama, the effective use of technology can successfully leverage the benefits of big data while at the same time limiting risks to privacy. This, however, in my opinion can only be done at the company level. Sound corporate policy can allow for data processing in a responsible and sustainable way, furthering the potential of data to improve human existence. It can be used to combat climate change and to create medical cures we never thought possible. Data, if correctly used, has the power to change the world and make it more respectful of human beings, animals and the environment.
Therefore, all companies participating in the data-centric society need to act in a socially responsible way, by complying with five main rules of Socially Responsible Data Protection, regardless of normative control:
1. embed data protection and security in the design of processes;
2. be transparent with citizens about the collection of their data;
3. balance profits with the actual benefits for citizens;
4. publish relevant findings based on statistical/anonymized data to improve society;
5. devote a portion of revenues to awareness campaigns for citizens with regard to the data-centric society.
I propose that companies acting according to such principles be awarded a seal to be displayed on their sites, media, materials, and products, demonstrating that they act responsibly in the data-centric society. In this way they will be recognised by consumers/data subjects, turning data protection as CSR into a competitive edge.
Data is the present and the future, and instead of relying on regulation alone, we must push for data protection to be considered a genuine aspect of CSR.
We need to do this not only for the rights to privacy and data protection, but also to safeguard freedom of movement and freedom of speech in our society, especially for youngsters and future generations!