Engineering trust: Why tech needs privacy by design

Nowhere is the aversion to transparency deeper than in the tech industry. The result of this corporate culture has been a massive breakdown of public trust. What management cannot fix, engineering must, argues leading privacy campaigner Simon Davies.

The US industry magnate Harvey Firestone once observed that ‘fundamental honesty is the keystone to business’. A century on, the new information and communication industries would do well to heed his words. There is a crisis of public trust brewing that only a dose of honesty will cure. Since 2012, opinion polls have suggested an overall decline in trust and respect for big business, but in recent times ICT companies have suffered disproportionately. In traditional markets, product satisfaction is more easily measured, and trust can readily be linked to merchandise or service delivery. Organisations in the online world depend on a much more robust foundation of trust, one that can withstand the legal and technical turbulence of virtual space.

In the post-Firestone world, honesty is often wrongly interpreted as ‘brand integrity’. And as far as most organisations are concerned, the trick to ensuring brand integrity is never to get caught out being dishonest. This is a cynical and disingenuous position that flies in the face of public expectation.

In 2013, the Privacy Surgeon and the London School of Economics published a report on emerging trends in privacy.1 Its clear conclusion was that transparency and accountability will become important factors in the public consciousness. This means people will increasingly expect companies to be straight with them.

The key word is ‘fundamental’. Fundamental honesty isn’t the same as ‘not lying’. Firestone wasn’t talking about ‘not lying’; he was referring to something greater. Fundamental honesty is a quality, not a statement, and it is what business needs to build within its core environment, as well as with the customer. All solid personal relationships depend on an intrinsic honesty, rather than a contrived set of communication filters. Business should behave no differently.

A company can be dishonest without lying. Indeed, many are institutionally dishonest. They deceive by telling half-truths, massaging language or creating imagery, rather than nurturing a solid foundation of evidence and reason. Most of the successful consumer campaigns against corporations are based on this circumstance.

In the Firestone view, honesty is the path that navigates between deception and ambiguity. Honesty has the characteristics of clarity, truth and introspection. The organisation that exhibits those qualities will engender public trust, although being so honest can be internally painful for an organisation.

Trust is one consequence of honesty, and privacy advocates have a key role to play in affirming that trust. They do this because customer expectations have shifted, and leading companies often recognise that their role must shift too. Most large enterprises have plenty of lawyers to help focus on data protection compliance, but ‘trust is a big issue’ and activists can help with that agenda, Microsoft’s chief privacy officer, Brendon Lynch, said in an interview. ‘People will not use technology they do not trust.’ Advocacy organisations ‘have direct connections with the media,’ Lynch said, ‘and they can really set the tone for how your privacy practices are going to be received.’2


Corporate culture, PR and the mask of honesty

All the same, being frank and open with the public is alien to the nature of large organisations. Within corporations there is rarely such a thing as a ‘definitive view’. Many major decisions are resolved organically through a complex process of doubt and challenge – just as they are in all human relationships. With friendships and partnerships, a discussion of doubt is often more important than a contest of definite views. Disclosure is usually more nurturing than secrecy. People display their vulnerability, openness and uncertainty to help strengthen trust, whereas corporations believe such a display would raise the stigma of weakness and risk.

Governments are much the same. They enforce ‘cabinet solidarity’ with the threat of dismissal for dissident views. Corporations enforce corporate solidarity with the threat of sacking or excommunication from the core. Either way, the strategy is pointless: anyone in the media knows that enforced solidarity under pressure leaks like a ripped sieve. Corporations, generally speaking, are terrified of appearing uncertain or transitional. However, the prevailing policy of smothering public discussion of internal debate is increasingly unsustainable in a world where the building of trust is critically important to success.

Large corporations manage their relationship with the public through the triangulation of three mighty dynamics. The first is risk, which is managed by their lawyers and spin doctors. The second is profit, which is steered by the relevant business model. The third is corporate culture, which determines how a company establishes a process (or, more often, fails to establish one). Corporate culture is often measured, at least in theory, through indices such as a Corporate Social Responsibility framework.

The problem in the new ICT age is that few organisations have been able to create a corporate consciousness that can build enduring trust. In the past – on issues such as the environment, employee care or global responsibility – some corporations managed to embed a belief system within the corporate culture. This permeated all elements of company decision making and thus created a resilient bedrock of trust.

Most ICT corporations haven’t yet been able to enshrine such a process. They swerve chaotically between the dynamics of risk mitigation, profit and a constantly shifting ethical compass. They speak through PR agencies, who are paid by the column inch to make the companies look good. They ratchet the risk sensitivity so high that every public utterance is a compromise that never quite tells the whole truth.

Let’s explore for a minute the dynamics of trust. Going back to what Firestone observed, there is a powerful bond between trust and honesty. In the world of real human relationships trust is built through a narrative of thought – not a set of assertions.

But where is the narrative in the modern corporation? Well, it does vaguely exist in the form of plausible deniability. Different departments are often free to commission pieces of conflicting research, though in reality only those pieces that are in harmony with company policy will ever see the light of day. This is only natural. Organisations do not want to inspire an image of conflict.

What if we were to turn the conventional wisdom on its head? What if – instead of seeing a variance of views as a threat – corporations publicly exposed the entire process of decision making within the context of a reasoned framework? That would create the ultimate level of honesty and accountability, and possibly the highest ever level of public trust. To provide benchmarks, let’s look for a moment at two major industry players, Google and Microsoft.

At a superficial level, Google is an honest organisation. It is thoroughly transparent about the fact that it has little intention of respecting privacy regulation. Its corporate compass proclaims a higher ethic of data unrestricted by control. In this context the company has little interest in abiding by restrictive legal conditions, but it is endearingly blatant in its rebellion against legal control.

While this ethic has instinctive appeal to some, I doubt it is sustainable. The company’s problem is that it continues to mask its internal narrative. There is little or no knowledge of Google’s processes – if indeed there are any processes – at an ethical, intellectual, engineering or management level. The high ideal is reduced to dogma. Thus, when Google is caught with its pants down over such issues as the Wi-Fi scandal or the Safari circumvention, it goes into PR meltdown in a way that corrodes public trust even further. The fact that the ethical compass is linked to advertising revenue has created widespread cynicism about the positions the company has taken with regard to legal compliance.

Microsoft – perhaps as an older and more war-torn organisation – has a more evolved intellectual and ethical framework, but the company is often too risk-averse to promote its full potential. More than a decade ago Microsoft created a substantial initiative called Trustworthy Computing and has sought to build on that foundation to improve its internal compass. It appears, however, that the corporation is dependent on an internal committee system that precludes brave choices. Trustworthy Computing was a way to ensure that the entire company was in tune with a set of standards that would nurture public trust. Given the company’s parlous state pre-2002, this was an essential move. However, the quest for internal harmony has resulted in nervousness about alienating any part of the corporation.

The aphorism ‘Error will slip through a crack, while truth will stick in a doorway’ might well be the inhouse motto of many of the world’s major IT companies.

If management can’t fix the trust issue, can engineers?

On 10th June 2000, amidst great ceremony, Her Majesty Queen Elizabeth cut the tape to open the first new Thames river crossing in more than a century. The Millennium Bridge had won acclaim for its sleek shape and elegant design – and London was buzzing with excitement about its new landmark.

Then… unexpected drama. The bridge lasted less than 48 hours before a fatal design flaw caused it to be closed for nearly two years. The problem came down to people’s refusal to respect the engineering limitations of the structure. As soon as the bridge was traversed by pedestrians it started to sway unnervingly. Everyone was at a complete loss to explain why such a beautiful and efficient design had gone haywire. The authorities, however, showed no hesitation in shutting it down.

It turned out that the newly named ‘Wobbly Bridge’ was the victim of a positive feedback phenomenon known as synchronous lateral excitation. The natural sway motion of people walking caused small sideways oscillations in the bridge, which in turn caused people on the bridge to sway in step, increasing the amplitude of the bridge oscillations, thus continually magnifying the effect. This resonance had been known to engineers for decades, but the Millennium Bridge had been conceived for submission to a prestigious design competition, so elegance was the primary driver. The interface between human behaviour and engineering was never addressed.
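The mechanism can be sketched in a few lines of code. In the standard engineering account, each pedestrian who falls into step pushes sideways in phase with the deck’s velocity, effectively subtracting from the bridge’s damping; once enough people synchronise, the effective damping turns negative and the sway grows instead of dying away. The Python sketch below illustrates that tipping point; all parameter values (modal mass, damping, stiffness, per-pedestrian force) are illustrative assumptions, not the Millennium Bridge’s real figures.

```python
# Minimal sketch of synchronous lateral excitation: the bridge is modelled as
# a damped oscillator, and each synchronised pedestrian acts as 'negative
# damping' by pushing in phase with the deck's sideways velocity.
# All parameter values are illustrative assumptions.

M = 1.0e5      # modal mass (kg) -- assumed
C = 5.0e3      # structural damping (N*s/m) -- assumed
K = 4.0e5      # stiffness (N/m) -- gives a ~0.32 Hz lateral mode
K_PED = 300.0  # lateral force per pedestrian per unit deck velocity -- assumed

def peak_sway(n_pedestrians, t_end=120.0, dt=0.001):
    """Integrate the oscillator from a tiny disturbance; return the peak sway."""
    c_eff = C - n_pedestrians * K_PED   # structural damping minus crowd feedback
    x, v, peak = 0.001, 0.0, 0.0        # 1 mm initial disturbance
    for _ in range(int(t_end / dt)):
        a = (-c_eff * v - K * x) / M    # acceleration from damping + stiffness
        v += a * dt                     # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Below the critical crowd size the disturbance dies away; above it, it grows
# (the linear model grows without bound; a real bridge saturates -- or closes).
for n in (5, 15, 30, 60):
    print(f"{n:3d} pedestrians -> peak sway {peak_sway(n):8.4f} m")
```

The point of the sketch is not the numbers but the threshold behaviour: no single component is ‘broken’, yet the interaction between human behaviour and the structure produces failure – exactly the class of problem that component-by-component design review misses.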

The same phenomenon is all too common in the world of information and communication technologies. Those who design the machines that enable the invasion of privacy are often oblivious to such outcomes, while privacy advocates and data protection regulators are often a million miles from understanding the dynamics and priorities of engineers. While ‘Human-Computer Interaction’ and ‘Security Usability’ are taught in many security and information systems courses, the reality is that the interface between users and machines is still a niche interest. Engineers will design the most ingenious systems, but it is usually only in the latter stages of development that someone asks the difficult question: ‘How will people interact with this device?’

How people behave is of course crucial to privacy. Will users generate vast amounts of sensitive data that machines will unlawfully process? Will they understand the risks associated with information technologies? Will the design attract privacy-crunching apps that are allowed to exploit personal information?

These are of course critically important considerations for concepts such as Privacy by Design (PbD), which seeks to embed privacy protection at every level, from conception to deployment. PbD is one of the main pillars of future privacy protection, but it currently exists mostly in the theoretical realm. As a concept, PbD was known to the architecture and building sectors from as early as the 1960s. Within the information arena, however, the expression ‘Privacy by Design’ appears to have emerged only in the late 1990s – and not before another phrase, ‘Surveillance by Design’,3 was coined during the debates over the US Communications Assistance for Law Enforcement Act (CALEA) in 1994. This, and related legislation globally, was intended to ensure that surveillance capability was embedded into communications design by mandating that systems be constructed in such a way that law enforcement agencies could access whatever data they wanted.

In an effort to counter this trend, researchers and regulators started to develop countermeasures that might provide a higher standard of privacy protection, built from the core rather than as bolt-on measures. Privacy by Design is amongst the most important of these. This emerging approach is intended to ensure that privacy is maximised by embedding protection seamlessly across every strand of the design and deployment of a product or service. As one prominent contributor to the field, Ann Cavoukian, observed: ‘How we get there is through Privacy by Design. Where PETs [Privacy Enhancing Technologies] focused us on the positive potential of technology, Privacy by Design prescribes that we build privacy directly into the design and operation, not only of technology, but also of operational systems, work processes, management structures, physical spaces and networked infrastructure. In this sense, Privacy by Design is the next step in the evolution of the privacy dialogue.’4

Some organisations now appear to be more open to the argument that data minimisation is a sensible approach to risk mitigation and that giving users a degree of data autonomy is central to nurturing trust. In both respects, PbD can be an invaluable aid in finding practical alternative approaches. But is the world of engineering ready for it yet? Yes, there are celebrated examples of privacy awareness in the world of engineering, but the key question is whether this awareness has permeated the mainstream of IT development. The answer appears to be a resounding ‘no’.
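What data minimisation means in practice can be shown in a few lines. The sketch below is hypothetical – the field names, the signup scenario and the key handling are invented for illustration – but it captures the core move: keep only the fields a service actually needs, and replace the direct identifier with a keyed pseudonym before anything is stored.

```python
import hashlib
import secrets

# Illustrative sketch of data minimisation at the point of collection.
# Field names and key handling are invented for the example; a real
# deployment would manage the pseudonymisation key far more carefully.

PSEUDONYM_KEY = secrets.token_bytes(32)   # held separately from the data store

def minimise(raw_signup: dict) -> dict:
    """Keep only needed fields and pseudonymise the direct identifier."""
    needed = {"email", "country"}          # an explicit whitelist, not a blacklist
    record = {k: v for k, v in raw_signup.items() if k in needed}
    # store a keyed hash instead of the raw email address
    record["user_ref"] = hashlib.blake2b(
        record.pop("email").encode(), key=PSEUDONYM_KEY
    ).hexdigest()
    return record

raw = {"email": "alice@example.org", "country": "NL",
       "birthdate": "1990-01-01", "shoe_size": 38}
print(minimise(raw))   # only user_ref and country survive
```

The design choice is the whitelist: anything not explicitly justified never enters the system at all, which is a far stronger position than deleting data after the fact.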

The reality check for me occurred a few years ago. I was visiting a city with a very good university and a large, strong Computer Science department – one of the top-rated in Britain. I had been in touch with the department to let them know I planned to visit the area, and to ask whether a small gathering over coffee could be organised to discuss emerging data protection and privacy issues. Amazingly, there appeared to be no interest in privacy in the department. The meeting never took place.

This got me thinking that there may be a real disconnect in the academic world between engineering and data protection. The interface between human behaviour, personal data and privacy rules seems to exist mainly in the theoretical realm (Information Systems is the closest we get, but even that field is largely theoretical).

Is it that pure engineering, design and coding are still a world removed from the discussions my colleagues have about legal rights? This is a lost opportunity. People might not trust management, but mathematics and engineering could provide some certainty of protection. Some of the experts in my alma mater Privacy International seem to believe this is the case. One experienced IT professional observed: ‘There is generally some ethical red tape associated with new projects, but once that fence is cleared, then anything goes. In my experience, legal issues are obstacles to be overcome after a novel IT solution has been built, and it is to be rolled out.’

Some exciting tools are being adopted that might support a more dependable PbD formula across organisations. The notion of Differential Privacy, in which a mathematical approach is taken to determining the privacy value of data, may in time create a common standard that could form the basis of agreement on a common foundation for PbD techniques. The Differential Privacy approach itself is currently little understood in the general business community, as it challenges more traditional legal and political approaches to privacy protection by instituting a hard ceiling, at a mathematical and engineering level, on the storage and exploitation of data.5
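To make the idea concrete: the textbook route to differential privacy is the Laplace mechanism, in which calibrated noise is added to a query result so that any one person’s presence or absence changes the output distribution by at most a factor of e^ε. A minimal sketch, using a toy dataset and an illustrative ε of my own choosing, might look like this:

```python
import numpy as np

# Minimal sketch of the Laplace mechanism for an epsilon-differentially
# private count (after Dwork et al.). The dataset and epsilon are toy values.

def dp_count(values, predicate, epsilon):
    """Noisy count: one person's record shifts a count by at most 1, so
    Laplace noise with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0                      # max change from one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 41, 29, 56, 62, 38, 45]        # toy dataset
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # true answer is 4, plus noise
```

The ‘hard ceiling’ falls out of the mathematics: a privacy budget of ε is consumed by every query, which is why the approach sits so uneasily with business models built on unlimited reuse of data.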

Falling victim to fashion

One of the most striking features of Privacy by Design is the contrast between the popularity of the concept and the actual number of systems and infrastructures that use the technique. PbD has become a fashionable idea, and in the wake of fashion come the pretenders who falsely claim that their organisations or products have a genuine commitment to the PbD process.

Many PbD efforts are false, selectively assessing a particular strand of the organisation to lower the risk of criticism, or creating a modular approach that fits the organisation’s existing structure. There are some notable exceptions, but the overriding challenge is to identify instances where a PbD effort has been undertaken with the full consent of all stakeholders within an organisation.

PbD appears to be increasingly adopted at the level of principle by large companies and sectors. The mobile network operators’ association, the GSMA, for example, has announced that it is attempting to develop a set of global privacy principles for mobile based on a PbD process. Collaboration with handset manufacturers and app stores is central to a PbD approach in this instance, but such seismic positioning is fraught with the logistical problems that would confront any sector attempting an integrated approach to privacy protection.

However, the key messages embraced by PbD have not been lost on regulators. In Europe, for example, the RFID industry is required by the European Commission to establish a PbD process that will bind the industry to a set of privacy conditions, providing assurance that privacy is embedded seamlessly throughout the design and deployment of the technology.6 An industry-led first draft of these principles was rejected by the Article 29 Working Party of privacy commissioners.7

The European Data Protection Supervisor has also signalled a possible embedding of PbD into the basis of data protection law, which might ultimately create a general requirement: ‘It would be important to include the principle of “Privacy by Design” among the basic principles of data protection, and to extend its scope to other relevant parties, such as producers and developers of ICT products and services. This would be innovative and require some further thinking, but it would be appropriate and only draw the logical consequences of a promising concept.’8

There is a vast gulf to traverse. In a paper titled ‘What IT Professionals Think About Surveillance’, noted privacy expert Ivan Szekely observed: ‘It can be concluded that the attitudes of the IT professionals only marginally influence their actual behaviour … Those who care more about privacy do not appear to be using more PETs [Privacy Enhancing Technologies – the forebear of PbD] in their own online activities or in the products they have helped develop.’

This parlous situation needs urgent attention. The challenges are, however, not insurmountable. The theory and practice behind PbD are commonplace and should not be seen as controversial. The concept of embedding protection through sensitive, seamless design has been embraced over the years in numerous environments. In the field of forensics, investigators have for many decades known that the forensic chain of events (collection of material, recording, processing, analysis, reporting and so on) is only as reliable as its weakest link, and that a ‘total design’ approach should be taken across the entire chain to reduce the risk of failure.

According to this rationale, the ‘spaces’ between events and processes pose as much of a risk of failure as the component parts themselves. With this threat model in mind, a system or infrastructure can be designed from the ground up to ensure a seamless approach to risk reduction. The same approach has been pursued, to varying extents, in environmental protection, workplace safety, urban planning, product quality assurance, child protection, national security, health planning, infection control and information security.
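The forensic analogy can be made concrete. The sketch below is a hypothetical chain-of-custody log, with invented record fields rather than any forensic standard: each step commits to the hash of the previous one, so tampering with any link – or with the ‘space’ between links – breaks verification for everything downstream.

```python
import hashlib, json, time

# Illustrative 'total design' chain-of-custody log: every step commits to the
# hash of the previous step. The record fields are invented for the example.

def add_step(chain, actor, action):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "action": action,
              "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False            # the break is detected, wherever it occurs
        prev = rec["hash"]
    return True

log = []
add_step(log, "officer-1", "collected drive")
add_step(log, "lab-2", "imaged drive")
print(verify(log))                  # True
log[0]["action"] = "altered"
print(verify(log))                  # False: the whole chain fails, not one link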

This approach is rooted in a belief that reliable protection in a complex ecosystem can only be achieved through an integrated design approach. It is reasoned that unless a system is developed from the ground up with protection at its core, failure will emerge through unexpected weaknesses. The three key perspectives in PbD – regulatory, engineering and managerial – intersect substantially. However, while the PbD concept continues to run along divergent paths, there is a substantial risk that the technique will be characterised by difference rather than convergence. More interaction and dialogue are required between regulators, business managers and engineers.

Currently, the evolution of PbD is sporadic, as is true of the early development of all such techniques. If proponents of PbD argue for an integrated and seamless adoption of systems, then they must argue with equal vigour for an integrated approach to developing PbD as a practical framework. Without such an approach, investors will remain uneducated and unmotivated, and PbD will remain a largely theoretical concept, adopted by a small number of the ‘good’ privacy actors.

The big question – not just for the Googles and Facebooks of this world but also for startups and governments – is whether they are prepared to adopt an engineering approach as a means of garnering greater public trust. As the public becomes more sensitive to and educated about the privacy issue over the coming years, such an approach may well become the defining element of trust.

 

This is an excerpt from the final chapter of Simon Davies’ new book Privacy: A Personal Chronicle, published by EPIC (2018). In it, the leading privacy advocate recalls three decades of path-breaking campaigning, from his battle against the SWIFT banking system and CCTV to his investigations into the privacy-violating practices of state intelligence agencies and Silicon Valley. For more information see: https://www.epic.org/bookstore/

1. Predictions for Privacy: A report on the issues and trends that will dominate the privacy landscape in 2013. Available at www.privacysurgeon.org (accessed 1 August 2018).

2. Eliza Krigman, ‘Privacy Activism: Turning Threat into Opportunity’, Data Privacy Leadership Council, 2016.

3. R. Samarajiva, ‘Surveillance by Design: Public Networks and the Control of Consumption’, in R. Mansell and R. Silverstone (eds), Communication by Design: The Politics of Information and Communication Technologies, Oxford: Oxford University Press, 1996, 129–56.

4. Ann Cavoukian, ‘Privacy by Design: The Definitive Workshop. A Foreword’, http://www.springerlink.com/content/d318xq4780lh4801/fulltext.html

5. For general background on Differential Privacy, see the work of Cynthia Dwork and others, summarised at http://en.wikipedia.org/wiki/Differential_privacy

6. Commission Recommendation of 12 May 2009 on the implementation of privacy and data protection principles in applications supported by radio-frequency identification: http://www.rfidjournal.com/article/view/4890

7. Opinion 5/2010 on the Industry Proposal for a Privacy and Data Protection Impact Assessment Framework for RFID Applications: http://ec.europa.eu/justice/policies/privacy/workinggroup/wpdocs/2010_en.htm

Published 29 January 2019
Original in English
First published by Eurozine

© Simon Davies / Eurozine
