
Trust and Digital Trust: Looking Toward the Future

The issue of digital trust may not loom large for users and purveyors of digital business in the short-term “yes, I’ll adopt” sense, but it matters a great deal to long-term business strategy. That is one of the reasons Facebook used the proceeds of a class-action privacy settlement several years ago to establish the Digital Trust Foundation, which helps fund proposals on digital privacy and security.

Digital business needs to be trusted, just as any other business does.

As public trust in digital methods erodes, digital businesses will falter and eventually fail.

This is no idle construct. PricewaterhouseCoopers (PwC) issued a report entitled “10 Digital Trust Challenges” in which it compared digital trust to the Magna Carta. If it has been a while since you brushed up on the latter: the Magna Carta established rights and privileges, transparently stated, with the eventual agreement of all parties.

Although the analogy is implicit, the report observes that trust, in general, rests on “three pillars”:

  • Legitimacy
  • Effectiveness
  • Transparency

How well does your digital world meet those standards of trust?

How Private Is Data? Who Gets To Use It, and Why?

One of the key challenges to legitimacy, effectiveness, and transparency going forward will be data privacy and security. In a nutshell: How private will data be? Which data, for which purposes, will not be private? Who gets to use it? Why?

Digital innovation has opened up these issues on several different fronts.

First, of course, collecting data on everyone, in their roles as both businesspeople and consumers, is not only increasingly possible but already widespread.

Second, massive data collection has opened up areas of analysis that were impossible without it. Big data and algorithms can and do sort and sift, and they may well be opening some roads and closing others in areas like employment, healthcare, and more.

As the PwC report points out, the general public may not realize how much data is being collected, and very few realize its deployment potential. That potential cuts both ways: it can serve good ends (pooling data on disease conditions to develop a cure, for example) and adverse ones (setting variable health insurance rates based on certain genetic markers, for another).

It is possible that if the public did realize the extent of it, they would see widespread data collection as neither legitimate nor transparent.

Yet the potential for future public debate also exists. Many consumer-facing sites, for example, require users to agree to terms and conditions before proceeding. Can an agreement primarily designed to set terms of service for, say, a retail store or a personal-messaging platform legitimately be used to gather data with consequences like these? What are the ethical implications of doing so? What would an after-the-fact realization do to digital trust? Ultimately, would a lack of public debate cause such data collection to be viewed as illegitimate, and thus ineffective for sustaining long-term consumer trust?

How might the perception of business legitimacy be affected going forward? What level of transparency regarding disclosure is standard? What level should be standard?

These are large questions, and how they are answered will shape the long-term, continued acceptance and use of the digital world.