Infrastructures for cooperation

Ellie Rennie
Jan 18, 2021 · 9 min read

Blockchain is often referred to as a trust machine because it minimises uncertainty. People can perform various actions online using blockchain applications, knowing that those actions will not be undermined by others. The consequence is that we do not need to rely on external agents or processes to provide us with assurance that things will go according to plan. To borrow social theorist Diego Gambetta’s definition of trust, blockchain is “a device for coping with the freedom of others” (Gambetta, 1988, p. 219).

I prefer to think of blockchain as a cooperation machine. Cooperation occurs when those involved are incentivised to behave in a way that meets each other’s interests. The opposite of cooperation is the failure to achieve a shared goal because self-interest reigns or where incompatible objectives cannot be reconciled. Throughout history, people have created organisations, institutions and infrastructures to enable them to cooperate more easily, including in situations where trust is unobtainable, such as when we cannot know the other party or where there are unequal power relationships (Cook, Hardin & Levi, 2005; North, 1991). Cooperation often entails access to information and shared standards, which do not involve trust. Blockchain uses various mechanisms — incentives at the code layer, standards, a shared ledger of information — to remove uncertainty and make it easier for us to cooperate.
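As a concrete illustration of the "shared ledger" mechanism, the short sketch below (in Python) shows the simplest form of the idea: each entry commits to the hash of the previous one, so any party holding a copy can detect retroactive changes. It is a toy illustration of the general principle, not a description of any particular blockchain, and it leaves out consensus rules and the incentive layer; the example records are hypothetical.

```python
# A minimal sketch of a hash-linked, append-only ledger: tampering with any
# past record is detectable by anyone who recomputes the chain.
import hashlib
import json


def append(ledger: list, record: dict) -> None:
    """Add a record that commits to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"record": record, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)


def is_consistent(ledger: list) -> bool:
    """Recompute every hash and check that each entry links to the one before it."""
    prev_hash = "genesis"
    for entry in ledger:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


ledger: list = []
append(ledger, {"party": "A", "action": "pledged supplies"})
append(ledger, {"party": "B", "action": "confirmed receipt"})
print(is_consistent(ledger))              # True
ledger[0]["record"]["action"] = "edited"  # a retroactive change...
print(is_consistent(ledger))              # ...is detected: False
```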

Cooperation is important in the humanitarian sector, particularly in emergency response work that requires getting assistance and resources to people when infrastructures and institutions have broken down (such as during a natural disaster or war). If blockchain is an infrastructure of cooperation, what does it do to humanitarian work? Before approaching this question, I first discuss the main critiques of technology in humanitarian work: privacy and safety; the assumption that technology is more neutral than other processes; the replacement of people-centred processes with automation; and the tendency of technology projects to complicate rather than simplify information flows. I then use the Trust Alliance to illustrate what cooperation through blockchain looks like for the humanitarian sector.

Photo by Max Bender on Unsplash

How technologies can change humanitarian work

Humanitarian organisations often work with people who are displaced, unbanked and without identification. They must determine who is vulnerable in a given situation, whether temporary or long-term, and do so with fewer data points than are available in other domains of social administration. Organisations are also under pressure to achieve faster response times and to allocate resources where they are needed most, which may prompt them to use technologies to automate or streamline some processes. This gives rise to the first significant problem for humanitarian work: processes of social sorting (Lyon, 2006) — classifying some individuals as requiring different treatment from others — can result in some groups being marginalised or victimised. When technology is used to collect and sort personal information, it can introduce bias and increase privacy risks. In humanitarian work, if that personal information ends up in the wrong hands, it can be used for the further persecution of particular groups, for state surveillance, and for discrimination against individuals.

Secondly, technologies can present as neutral when the design decisions that go into their creation are not. In her critique of the humanitarian sector’s use of technology, Katja Lindskov Jacobsen (2015) observes a temptation to see automated processes as incorruptible, objective machines with superior processing power. Neutrality, in the sense of not taking sides, is one of the core principles of humanitarian work, alongside humanity, impartiality (assistance according to need) and independence (Fast, 2015, p. 111). Jacobsen gives the following example of how this principle can be carried over into assumptions about technology:

If it is an objective, neutral and impartial machine that ‘decides’ whether a given refugee claimant can be recognised as a legitimate recipient of assistance, then this presumably exempts humanitarians from claims that their practices are partial and influenced by donor policies (Jacobsen, 2015, p. 28).

The concept of neutrality in humanitarian work is contested (Anderson, 1999). Technologies, too, are not neutral and can produce unintended consequences, such as the privacy problems that arise when so-called ‘personal’ devices are used in cultures that prioritise reciprocity (Rennie et al., 2018). Harm can also arise from design decisions even when technologies are used in predictable ways. In essence, technologies produce actions and outcomes that have a direct bearing on people’s capabilities and freedoms.

A third major critique of the use of technology in humanitarian work concerns the reasons it is introduced in the first place. The creation of technological tools for humanitarian action can be a response to the shrinking of the humanitarian arena due to denial of access and disregard for international humanitarian law (Sandvik, 2016, p. 18). Duffield (2013) considers this akin to ‘remote management’: the reduction of face-to-face encounters as staff withdraw from the field.

Finally, a problem with technology projects to date is that they can complicate data and processes rather than simplify them. In their review of information flows during the humanitarian response to the 2010 Haiti earthquake, Altay and Labonte (2014) found that decision-making was hampered “by the technical aspects of HIME [humanitarian information management and exchange], including accessibility, formatting inconsistency and storage media misalignment” (p. S52) and that there was an “unwillingness of international humanitarian actors to match rhetoric with reality regarding participatory approaches to information sharing and exchange” (p. S65).

The humanitarian arena and cooperation

The operational aspects of humanitarian work fall within a humanitarian arena which, as Hilhorst and Jansen write, is often crowded: “humanitarian situations are not blank slates to be occupied by lone agencies, but are shaped by social negotiations over inclusion and exclusion” (Hilhorst & Jansen, 2010, p. 1133). Recent explorations of the humanitarian arena have demonstrated that beneficiaries also have considerable agency, and this agency extends through their own uses of communication technologies (Wall, Campbell & Janbek, 2017). What does it mean to bring blockchain-based applications into situations characterised by complexity?

A report by the Overseas Development Institute notes that humanitarian work requires clear links between assessment and assistance in order to demonstrate “appropriate and principled” responses (Coppi & Fast, 2019, p. 25). The authors argue that the principles underlying humanitarian work are fundamentally different to those that underpin blockchain, which is designed on the premise that subjectivity and interference are inferior to automated systems that enact pre-determined rules. At risk are “shared humanitarian principles, collaborations over time and across emergencies, relationships between individuals working for an organisation, or in the nature of the giver–receiver transaction” (p. 25).

However, if we take cooperation as the starting point, then minimising uncertainty in some processes can enable other processes. Blockchain technology, like institutions, does not make us cooperate; it simply removes some of the uncertainties that can get in the way. Douglass C. North (1993) described institutions as “the humanly devised constraints that structure human interaction and they exist to reduce the ubiquitous uncertainty arising from that interaction” (p. 15). If blockchain is an institutional technology (Berg et al., 2019), then its devised constraints can make other interactions easier.

To take the Trust Alliance as an example, using blockchain for verified claims (including credentials and other identity-related information) provides workers and those interacting with services with a means to share information in a privacy-preserving manner (Byrne et al., 2020). The Trust Alliance therefore explicitly addresses the first problem with technology use in the humanitarian sector: privacy and safety. One consequence is that there may be a reduction in misconduct among workers, as the scope for deceit is reduced (bad actors are unable to falsify documents and apply for work). Secondly, those receiving assistance may be empowered to use more than one service, as they do not need to repeat the onboarding process. In both instances, organisations are not sharing information about the worker or the receiver; they are simply providing that individual with information that concerns them, in a form that makes it easier for others to accept (to trust). Cooperation is made easier because the ability to verify claims from another organisation can mean that individuals have more options open to them, and organisations help each other by overcoming information gaps that constrain responsiveness.
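To make the verified-claims pattern concrete, here is a minimal sketch in Python using the open-source cryptography package. It illustrates the general approach of signed, verifiable claims rather than the Trust Alliance's actual implementation, and the organisation, worker and claim names are hypothetical: an issuing organisation signs a claim about a worker, and any other organisation holding the issuer's public key can verify it without contacting the issuer or exchanging further personal data.

```python
# A hedged sketch of the verified-claims pattern (not the Trust Alliance's
# actual system). Requires the third-party 'cryptography' package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_claim(issuer_key: Ed25519PrivateKey, claim: dict) -> dict:
    """The issuing organisation signs a claim so the holder can present it elsewhere."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": issuer_key.sign(payload).hex()}


def verify_claim(issuer_public_key: Ed25519PublicKey, credential: dict) -> bool:
    """A receiving organisation checks the claim against the issuer's public key."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Hypothetical example: Organisation A certifies a worker's background check;
# Organisation B later verifies it using only A's public key.
org_a_key = Ed25519PrivateKey.generate()
credential = issue_claim(
    org_a_key, {"holder": "worker-42", "background_check": "cleared", "year": 2020}
)
print(verify_claim(org_a_key.public_key(), credential))  # True
```

The design point is that trust moves from documents, which can be forged, to keys: a receiving organisation only needs to know which issuers it trusts, while the personal information itself stays with the individual who presents the claim.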

As the Trust Alliance example suggests, to fully understand whether blockchain increases cooperation, we need to look beyond its institutional properties to the capabilities that arise from those properties. For instance, workers and recipients may end up with greater choice, including the ability to move through the humanitarian arena as needed. Understanding capabilities involves looking beyond what the platform itself does or does not do and observing the actions it makes possible or closes off. In other words, we cannot assume that automation erases human processes. In some situations, it may restore human agency and enable new adaptive processes to emerge in non-technology domains of activity.

What might go wrong? For the Trust Alliance, it will be important to observe the extent to which local organisations are able to have their own workers and credentials recognised. If local organisations are not using the technology, the Trust Alliance needs to understand why and consider what barriers might exist. While there are increasing calls for ‘user-led’ design in humanitarian projects, in this instance that might be less about user experience and more about open standards, so that others can build complementary applications. It would also mean ensuring that systems work in places with poor connectivity, so that local actors can participate on an equal footing with international organisations. Moreover, the Trust Alliance does not address issues of information-sharing between organisations, which can be important for decision-making, as the Haiti example above shows. The extent to which this radically different approach to information coordination influences decision-making processes between organisations will be important to measure.

It should also go without saying that using experimental technologies on vulnerable populations is a bad idea. So far, this project and others in the humanitarian sector have been cautious with blockchain pilots, for good reason. However, we also need to be alert to the fact that blockchain is often used in conjunction with other technologies. A frequently cited example is a project that used blockchain for transactions in a refugee camp store but relied on biometric iris scanners to identify people so that they could access their store credit (a technology that was already in use before the blockchain component commenced). While the project was found to reduce costs and administrative processes, the biometric scanners posed a privacy risk and data mismatches locked people out of assistance (Latonero, 2019).

In summary, the way to understand cooperation through blockchain is to look beyond the technology itself and observe the capabilities that arise from it. That requires looking at the cultural and social context within which these technologies are being used. As Amartya Sen observes, institutions are an important input into a just society, but they are only one input: “The nature of the society that would result from any given set of institutions, must, of course, depend also on non-institutional features, such as actual behaviours of people and their social interactions” (2009, p. 6). While blockchain can help set the terms for cooperation, it is the capabilities that arise from its use that really matter.

Ellie Rennie is a Professor at RMIT University and an Australian Research Council Future Fellow (2020–2025, FT190100372). She is conducting research with the Trust Alliance and was a member of the Trust Alliance Steering Committee and Research Working Group in 2020. For more information on the Trust Alliance, see the Trust Alliance Lite Paper. Thanks to the Trust Alliance steering committee for feedback on this article.

References

Altay, N., & Labonte, M. (2014). Challenges in humanitarian information management and exchange: Evidence from Haiti. Disasters, 38(S1), S50–S72.

Anderson, M. B. (1999). Do No Harm: How Aid Can Support Peace — or War. Boulder: Lynne Rienner.

Berg, C., Davidson, S. & Potts, J. (2019). How to Understand the Blockchain Economy. Cheltenham: Edward Elgar.

Byrne, N., Jurko, I., Kraguljac, D., Rennie, E., Robinson, A., & Southall, K. (2020). Trust Alliance Lite Paper. Version 1, November 2020. Melbourne: Trust Alliance.

Cook, K. S., Hardin, R., & Levi, M. (2005). Cooperation without Trust? New York: Russell Sage Foundation.

Coppi G. & Fast, L. (2019). Blockchain and distributed ledger technologies in the humanitarian sector. Humanitarian Policy Group (ODI) report. London: Overseas Development Institute.

Duffield, M. (2013). Disaster-Resilience in the Network Age: Access-Denial and the Rise of Cyber-Humanitarianism. DIIS Working Paper 23, Copenhagen: Danish Institute for International Studies.

Fast, L. (2015). Unpacking the principle of humanity: Tensions and implications. International Review of the Red Cross, 97 (897/898), 111–131. doi:10.1017/S1816383115000545

Gambetta, D. (1988). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relationships (pp. 213–238). Oxford: Basil Blackwell.

Hilhorst, D., & Jansen, B. J. (2010). Humanitarian Space as Arena: A Perspective on the Everyday Politics of Aid. Development and Change, 41(6), 1117–1139.

Jacobsen, K.L. (2015). The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity. London: Routledge.

Latonero, M. (2019). Stop Surveillance Humanitarianism, The New York Times, July 11. Available from: https://www.nytimes.com/2019/07/11/opinion/data-humanitarian-aid.html

Lyon, D. (2006). The search for surveillance theories. In David Lyon (Ed.) Theorizing Surveillance: The panopticon and beyond, pp. 3–20. Abingdon: Routledge.

North, D.C. (1991). Institutions. Journal of Economic Perspectives, 5(1), 97–112.

North, D.C. (1993). Institutions and credible commitment. Journal of Institutional and Theoretical Economics / Zeitschrift für die gesamte Staatswissenschaft, 149(1), 11–23.

Rennie, E., Yunkaporta, T., & Holcombe-James, I. (2018). Privacy versus Relatedness: Managing Device Use in Australia’s Remote Aboriginal Communities. International Journal of Communication, 12.

Sandvik, K.B. (2016). The humanitarian cyberspace: shrinking space or an expanding frontier? Third World Quarterly, 37(1), 17–32. doi:10.1080/01436597.2015.1043992.

Sen, A. K. (2009). The idea of justice. Harvard University Press.

Wall, M., Campbell, M., & Janbek, D. (2017). Syrian refugees and information precarity. New Media & Society, 19(2), 240–254.
