Knowledge is power. The knowledge that online platforms hold about you – the user – can be monetised by allowing third parties to target advertisements, influence opinions and even swing democratic elections. As technology companies grow bigger and smarter, the scales tip further in their favour, and users become more vulnerable to exploitation and profiling by algorithms that tie their data points together. So how does the average user stand up to these omniscient behemoths to protect their interests and their personal information?
The idea of data ‘trusts’ or ‘intermediaries’ is not a new one, but the push for enhanced privacy rights and legislation (introduced, for example, in the form of the EU’s Digital Services Act) has coincided with a growing desire to rethink how data is governed in modern society, where a handful of organisations hold a near-monopoly on personal data. A data trust is defined by the Data Trusts Initiative as ‘a mechanism for individuals to take the data rights that are set out in law and pool these into an organisation — a trust — in which trustees make decisions about data use on their behalf.’ The idea that a collective should advocate on behalf of the individual is a proven one, perhaps most clearly demonstrated by trade unionism. When individuals unite around a shared purpose they undoubtedly have greater bargaining power, so data trusts represent a viable way of redressing the balance of power between users and the online platforms that hoard their data.
If the data trust model is to become the preferred way for citizens to govern their data, there must be thorough consideration of what those trusts will look like, how they will act, and whether they will be truly accountable and answerable to their members. The Ada Lovelace Institute has advanced research on ‘data stewardship’ and suggests that stewardship could be based on Elinor Ostrom’s principles for ‘Governing the Commons’ (i.e. common-pool resources). Those principles — applied by a data trust on behalf of its members — would be a good basis on which to begin conceptualising these novel entities.
The danger in pursuing a totally new governance model such as this is that the trusts themselves may have conflicting interests. The TRUSTS Project — funded by the EU’s Horizon 2020 research programme — will create a pan-European pool of personal and non-personal data that will become a marketplace for technology companies: organisations will pay for access to the data, and data subjects will receive a ‘data dividend’. Herein lies the problem with trusting an intermediary to act on your behalf. If a citizen’s personal data becomes an asset, the trust becomes a ‘broker’, and the citizen’s interests may be sacrificed on the altar of data capitalism. A further argument against data trusts is that citizens will still not be making informed choices about how their information is used, and could be unduly influenced by the incentive of larger ‘dividends’.
The need to address the imbalance of power between tech companies and users grows with each new Netflix exposé, but solving the key issues will not be easy. The solutions to the data governance problems society now faces are clearly moving targets, and they will require a great deal of careful thought and pragmatism.
Further reading: ODI — What are data institutions and why are they important?