
How shall we live? On trust and paying attention in the 21st century

In closing the NLnet-sponsored project “Trust, semantic learning, and monitoring”, I want to end with a summary of what I believe the study may tell us about the role of trust in our increasingly online and cyber-enhanced society.

(For readers interested in the full research: the whole trust series is here.)

Few subjects receive more attention than trust in the social science literature, yet it wasn’t always so. Two books, published in the 1990s, kicked off an avalanche of writing:

  • Robert Putnam’s Making Democracy Work: Civic Traditions in Modern Italy (1993), and

  • Francis Fukuyama’s Trust (1995).

These works helped to make trust part of a larger and more complex notion of social capital, but they didn’t obviously result in a clearer description of what trust does. Why are we still talking about something so apparently obvious? Even after thirty years of subsequent work, no one has really gotten to the bottom of what trust is on a scientific level.

During this period, I was developing Promise Theory for something entirely different: studying agent-based software systems. The two topics met only tangentially, but the meeting resulted in a fortuitous series of papers that ended with the NLnet project this year. Let me try to summarise the state of that work and what I think it might mean for the future.

A muddle of our own making

Why study trust? Let’s simply say: we should distrust what we read about it. Unable to formalise the subject convincingly, no field of research has made much progress in defining what trust is. We trust that we know it when we see it!

The social science literature relies heavily on questionnaires about good or moral behaviour to measure when people believe they trust. Apart from feeling trite and superficial, these fail to plumb the depths of what’s going on with social capital. In the world of Information Technology (IT), ideas about trust are equally superficial and vacuous. Trust is something of a dirty word in IT: “thou shalt not trust anyone” is the prevailing narrative, and trust is proclaimed the mortal enemy of security.

Making trust into either angel or daemon feels equally vapid. It surely evolved for a reason, so what was it? That’s not to say that certain essayists on the matter haven’t come close to an answer, yet their polemics seem to have been largely ignored–research is now an industry that doesn’t always profit from ending discussions. The natural sciences, like neuroscience, haven’t accomplished much either (see the selected literature). It was only through an unexpected series of accidents that I ended up working on the issue from the perspective of Promise Theory and found a simple way of summarising all those ideas, one that resulted in some measurable predictions.

Over a decade ago, in 2006, Jan Bergstra and I started to discuss the role of trust in Promise Theory, but we never finished the work. We knew, from the precepts of Promise Theory, that trust had to be built on the idea of individual assessments, involving intentions in some way, and that assessment was a completely ad hoc judgement made by each individual in turn. Such a view was particularly hard for IT folks to swallow, as the standard lore in IT is that all answers are absolutely determined and must be either true or false. Over the years, others too have had the same essential ideas about trust on a heuristic level–but it’s only with the formalities of Promise Theory that a quantitative picture of trust seems to show consistency.

The short version is this: trust appears to be something much simpler than a moral concept (just as promises turned out to be simpler than obligations): indeed, it takes the form of a heuristic regulator for attentiveness. If we assess something to be trustworthy, we don’t look too hard, so it saves us the work of attending to what is going on; it frees up resources for better use. Trust promotes and is promoted by continuity (steady promised behaviour) and is demoted by unexpected transients or surprises (impositions, accusations, etc).

The trust system, as described, fits neatly into what Daniel Kahneman would call cognitive “System 1”. It’s a quick and approximate rush to judgement, which can later be dissected by more laborious “System 2” methods if it should become necessary.
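The regulator described above can be sketched in a few lines of code. This is only an illustration of the idea, not the project’s actual model: the function names, the update rule, and the constants (`gain`, `penalty`) are all assumptions chosen to show the qualitative behaviour–continuity nudges trust up, a surprise knocks it down sharply, and attention is whatever trust doesn’t cover.

```python
def update_trust(trust, promise_kept, surprise, gain=0.05, penalty=0.4):
    """Nudge a trust score in [0, 1] up for steady promised behaviour,
    and sharply down for unexpected transients (impositions, surprises)."""
    if surprise:
        trust -= penalty                  # surprises demote trust quickly
    elif promise_kept:
        trust += gain * (1.0 - trust)     # continuity promotes trust, saturating
    return max(0.0, min(1.0, trust))

def attention_needed(trust):
    """The more we trust, the less we attend: trust forgoes verification."""
    return 1.0 - trust

# A run of kept promises slowly raises trust; one surprise undoes it.
t = 0.5
for _ in range(3):
    t = update_trust(t, promise_kept=True, surprise=False)
after_surprise = update_trust(t, promise_kept=False, surprise=True)
```

The asymmetry between the slow gain and the large penalty is deliberate: it mirrors the “System 1” rush to judgement, where trust is cheap to spend and expensive to rebuild.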

How trust works

Most of the writing about trust concerns why and when we trust. It’s tacitly assumed that everyone knows what it means — but what’s it for? What’s the function of trust?

Let’s not try to repeat what makes us assess something as trustworthy. Enough has been said about that–indeed, Promise Theory deliberately stays out of how such assessments are made. Rather, let’s focus only on how the mechanisms of trust shape decisions that build on the economy of attention for each agent in a system.

It’s a social capital–a common currency–to be sure. Just as money behaves like an action potential, similar to energy (or vice versa), acting as a rough predictor of action, so trust has those qualities for higher-level processes between “agents”. But there is a better analogy than capital available in the natural sciences. It’s perhaps disconcerting to think that something made of counting tokens can predict our behaviour, even in the broadest of terms, but it seems to. Whatever we think about free will, there are constraints at work. As an assessment of activity, trust is more like the concept of energy, with its kinetic and potential forms. Its moral semantics are of secondary importance.

In the NLnet project work, trust emerges as an accounting system for investing our attention. It has two essential parts: what we commonly call trust and trustworthiness (Trust and Trustability, if you like). Trustworthiness (or potential trust) is a passive assessment of another person or phenomenon; it summarises the cumulative reliabilities of past experiences (the continuity of our experience of them). Kinetic or active trust, on the other hand, is a policy we formulate ourselves: how much effort are we willing to invest, or forgo, in watching over a relationship with something or someone? Everyone makes up their own mind about it. Our decision varies according to how busy we are and how much spare capacity we have.
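The two parts can be separated cleanly in a toy sketch. Again, this is a hypothetical illustration, not the project’s formalism: trustworthiness is modelled as the cumulative fraction of promises kept (a passive, potential quantity), while active trust is each agent’s own policy, bounded by whatever spare capacity it happens to have.

```python
def trustworthiness(outcomes):
    """Passive (potential) side: cumulative reliability, here simply the
    fraction of past promises that were kept."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def attention_invested(tw, spare_capacity):
    """Active (kinetic) side: a policy. We would like to watch in
    proportion to our mistrust (1 - tw), but a busy agent can only
    spend the capacity it has spare."""
    return min(1.0 - tw, spare_capacity)

# The same trustworthiness leads to different active trust for a
# busy agent versus an idle one.
tw = trustworthiness([True, True, False, True])   # 0.75
idle = attention_invested(tw, spare_capacity=1.0)
busy = attention_invested(tw, spare_capacity=0.05)
```

The point of the `min` is that active trust is not derived mechanically from trustworthiness: a busy agent is forced to trust more, simply because it cannot afford to verify.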


If you can’t trust, you have to pay attention in order to verify–and verifying is expensive: it costs our time and our cognition. Verifying means sampling the ongoing process of interest regularly, like a tax. A simplistic summary goes like this:

  • Trustworthiness is our assessment of past reliability in keeping promises (advertised behaviour). Because we don’t have data on everyone all the time, it’s treated casually and is used like an exchange currency. We borrow impressions and reputational hints from anywhere. The key is that assessment should be quick and cheap.

  • Mistrust, on the other hand, is the amount of attention we invest as a result of that assessment of trustworthiness. The more we mistrust, the more effort we invest in verifying and validating behaviour. The more we trust, the more we look away and save ourselves that effort. Trust then is an important work saver–the decision to forgo attentiveness.
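The “tax” in the two points above can be made concrete with a small sketch. The linear interpolation, the interval bounds, and the cost model are all illustrative assumptions: the only claim carried over from the text is that higher assessed trustworthiness lets us sample less often, and so pay less verification tax over the same period.

```python
def sampling_interval(tw, base=1.0, longest=60.0):
    """Higher trustworthiness -> longer gaps between checks.
    A fully mistrusted process is sampled every `base` time units,
    a fully trusted one only every `longest` units."""
    return base + tw * (longest - base)

def verification_cost(tw, horizon=3600.0, cost_per_check=1.0):
    """Total verification 'tax' over a horizon: one unit of attention
    per check, paid as often as our mistrust makes us sample."""
    return cost_per_check * horizon / sampling_interval(tw)

# Mistrust is expensive: compare the tax on a distrusted process
# with that on a trusted one over the same hour.
tax_on_mistrust = verification_cost(0.1)
tax_on_trust = verification_cost(0.9)
```

Seen this way, trust is literally a work saver: the difference between the two costs is the attention freed up for better use.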

Studies tend to muddle active trust with passive trustworthiness, perhaps because our language for them (in English) is ambiguous. It’s normal to comment mainly on trustworthiness, yet it shouldn’t escape our notice that active trust also spans a continuum of degrees, from innocent curiosity to apparently more toxic mistrust. In essence, the two are the same phenomenon; only our somewhat fluid semantic interpretation of them differs.

The key insight from Promise Theory was that the two kinds of trust play basically the same kind of predictive role as potential and kinetic energy play for physical processes. Trustworthiness is a summary of past predictability, i.e. reliability or continuity in keeping promises, while mistrust is a level of kinetic activity or busy work induced by that assessment. It’s about an economy of attention.

Where we focus attention

Trust clearly plays a role in shaping society, but it turns out that it doesn’t work in quite the way we think. Our preoccupation with moral issues has left us confused. We’d like to believe that goodness brings us together, but that’s not what the data reveal. According to group studies, we don’t come together because we trust: we come together because we align our intent to mistrust. Often we are curious or driven to pay attention to some unifying element around us. It might be a leader, a common threat, a promise or a shared task. We stick together because our interests align and we suffer the mistrust of others until we can no longer justify it.

In the past, society meant dancing around fires and enduring feudal rulers; we embraced slavery and hailed emperors. We’ve been through peaks of civility, rigid in protocol, and low points of savagery, none of which were heights of morality. Trust has been a part of us all this time. So did trust change? Whatever trust does, it has allowed us humans to manage the invention of society: the scaling of stable and coherent action in groups that don’t (immediately) disintegrate into conflict. It overcomes group pressures and maintains continuity of collective action–but it does so with widely varying semantics.

Attention is not just for humans. As we build more proxy technologies and depend on them as infrastructure, attention becomes a machine concern too. Not only do we have to balance when a self-driving car is paying attention against when its backup driver is paying attention; all the so-called “generative AI” technologies use selective attention to contextualize situations and frame appropriate actions.

Are we still paying attention?

