Lina Khan’s 93-page article about Amazon’s business practices suggests that an antitrust legal intervention may be required, and it has sustained an ongoing public debate about trust in “big tech”. However, Amazon customers are not giving up their Prime accounts, including, by Khan’s own account, her husband, who is a regular user. Over at Facebook, the fallout from the Cambridge Analytica scandal has not had a major impact on the company’s bottom line, despite an increase in distrust.
According to a recent survey, 81% of respondents reported that they ‘have little or no confidence Facebook will protect their data and privacy’, which is in line with Business Insider’s annual Digital Trust survey (Business Insider, 2018). Yet Facebook reports that its “daily active users” and “monthly active users” have not declined, and analysts suggest advertisers are not looking elsewhere (Business Insider, 2018a). An independent study by the Pew Research Center (2018) showed that more people are changing their privacy settings on Facebook. However, a mass exodus has not taken place. Have Amazon and Facebook users entered a state that we might call ‘digital trust dissonance’?
This concept tries to explain why research shows that millions of people express distrust of major technology corporations (Google, Facebook, Microsoft, Amazon), yet continue to use these platforms with little or no restraint, even when a company has admitted to losing users’ personal data in a major hack, as in the recent British Airways breach. This behaviour echoes the privacy paradox, a form of cognitive dissonance identified by academics such as Susan B. Barnes (2006) when studying early use of social networking sites like Myspace. Barnes found that people express genuine concerns about their online privacy, yet continue to broadcast personal details in public forums and on websites that warn them their data is being collected. Are we seeing a similar effect with trust in digital technologies?
Distrust in technology is nothing new (e.g. the Luddites). One of the more vocal groups in society, older adults, will “frequently deploy the concept of distrust” (Knowles & Hanson, 2018) when talking about technology as a reason for non-use. However, these are likely to be outliers. Digital trust dissonance could have several causes. A primary cause is likely that individuals are simply ‘locked in’ to specific technology platforms, whether by their employer or their family: the time cost and compatibility issues associated with switching to an alternative platform are too high. For example, try using OpenOffice instead of Microsoft Office when all your colleagues use the Microsoft platform. You may trust the makers of OpenOffice (Apache) more than Microsoft, but look out for the inevitable email from a friend or co-worker who cannot open your documents.
Another reason for digital trust dissonance could be a lack of visibility of the negative impacts of technology usage. It is hard to see how one’s trust has been betrayed if one cannot observe any real-world impact, or ‘direct betrayal’, of that trust. This sets a dangerous precedent, as individuals may become resigned to the idea that their trust in a technology inevitably comes with some downsides, leading to a dependency that becomes hard to break.
An explanation for digital trust dissonance may also follow from an explanation for the privacy paradox proposed by Hallam and Zanella (2017). They suggest “a temporally discounted balance between concerns and rewards”. In other words, the more distant a privacy breach feels to the individual, the more that individual will discount it. The same may be true of trust. If we knew the specific details of what data we lost in a breach, and which agents received that data, our trust would break more profoundly. If we are bundled in with millions of others, with few or no details about our individual data, we may discount our distrust.
Entities like the EU are showing how regulation can place a ‘check’ on large multinational tech companies (e.g. Google), which might actually increase our trust in them. However, can regulation go far enough when companies lose millions of user details to hacking by criminals or hostile regimes? How can we build healthy levels of trust and distrust: trust in technology to improve our lives, combined with the right amount of distrust to lobby for better security, regulation and fair use of our data? The major tech companies seem to have learned that major user data breaches, negative press and wider breaches of social trust have little effect on their business. Perhaps we have become so dependent on technology in the 21st century that when you’ve got ’em by the app, their clicks and minds will follow.
Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First Monday. Available at: http://firstmonday.org/article/view/1394/1312
Knowles, B. & Hanson, V. L. (2018). Older Adults’ Deployment of ‘Distrust’. ACM Trans. Comput.-Hum. Interact., Article 1 (March 2018).
Business Insider (2018). Here’s a sneak peek at just how big Facebook’s trust problem is. BusinessInsider.com, USA. Available at: http://uk.businessinsider.com/consumers-dont-trust-facebook-at-all-new-survey-data-2018-4
Business Insider (2018a). One third of Facebook US users say they use social network less than in 2017. BusinessInsider.com, USA. Available at:
Hallam, C. & Zanella, G. (2017). Online self-disclosure: The privacy paradox explained as a temporally discounted balance between concerns and rewards. Computers in Human Behavior, 68, 217–227.
Meaker, M. (2018). Europe is using smartphone data as a weapon to deport refugees. Wired.com. Available at: https://www.wired.co.uk/article/europe-immigration-refugees-smartphone-metadata-deportations
Pew Research Center (2018). Americans are changing their relationship with Facebook. Pew Research Center, USA. Available at: