what it is
The concept of digital ethics is often attributed to Luciano Floridi, and I think one of the more accessible explanations is here. Floridi describes digital ethics as part of the digital governance process that an organisation might implement.
Digital ethics is an important part of digital governance because it describes decisions that a company might take that go beyond what it is legally obligated to do, and which are driven by motivations other than profit.
Trying to work out how to have digital services that operate ethically, morally, or within expected parameters is a really big issue for users, governments, and platform operators. Companies like Alphabet, Apple, and Meta have all put together boards or panels to try to establish internal standards for their digital operations.
This hasn’t always gone down well. Different people have different ethics, and this leads to tensions. As such, the expectations of what a platform should implement are complicated, because users and observers with different ethical stances come into conflict over which ethical principles should be emphasised.
It’s easy enough to imagine a tension between a strong ‘free speech’ libertarian and an anti-racist activist in terms of how content should be moderated. Equally, firms don’t exactly want to introduce their own restrictive limits on what they must do/can do/can’t do.
In my experience, and from what I can read in the news, the people involved in creating digital ethics within organisations really do believe in it. They have a genuine belief that they can help the company to do better, be better, and overcome hurdles.
We can see technologies like cryptocurrencies being established around a set of ethical principles that are bound up into the technology. We can debate about how effective this implementation is or how valuable the ethics are, but the direct line between ethical stances and implementation is clear.
It is easy to see that different digital ethical values are already well established in computer science and IT. The principles of security, privacy, appropriate use, timesharing, and so on are implemented technically into many of our systems.
If we set aside our own morality and ethics for a second, it is easy to see that digital ethics is not obviously useful for companies seeking to compete in a marketplace. If we are seeking to quantify how digital ethics is good for a business in its own terms, we’ll probably start talking about second-order features – ‘risk’, ‘consumer confidence’, ‘branding’, ‘public relations’. If you talk to folk in or adjacent to digital ethics decisions, this is often how digital ethics is justified.
This means that digital ethics can suffer from a lack of sincerity, where the ethics are never given priority over other values, such as revenue, brand identity, or insurance. It also means that when we do hear revenue-driven organisations talking about digital ethics, we need to ask ourselves how this is justified internally.
The idea that ‘code is law’ is one of those conceptual phrases that people pick up and recycle without investigating the deeper framing and its acknowledged limits, but the fact that people do invest in it matters in any case: it’s an important idea even if people don’t engage with the specifics.
What’s important about the idea of ‘code is law’ is that it demonstrates the degree to which people believe that technology can implement human social systems – like the legal system or an ethical system. This matters because when new legal systems are introduced that require services to operate differently, we see intelligent folk in the tech community raise concerns. This doesn’t stop the laws being implemented, but it does show that these concerns are already endemic to some parts of the tech community; we’re just seeking to establish them more widely.
Data ethics is a particular kind of digital ethics. As is probably obvious from its name, data ethics is more specifically about developing organisational principles and practices (and at times governance and policy) around the use of data.
This can affect a great many things: from the holotype entry for a database and the management of data transactions, through to willingness to comply with legal obligations. Even decisions about how to represent and aggregate data are important parts of thinking about data ethics.
I think there are two big questions for data ethics that could do with more attention:
- how data is produced.
- how the terms of data production can be stamped onto a dataset so that they are retained into the future.
These two questions have shaped my thinking in terms of data ethics, and have generally come together in the idea of data provenance, which I’ve worked on with my good friends Suneel Jethani and Kate Mannell.
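To make the second question a little more concrete, here is a minimal sketch of what ‘stamping’ the terms of data production onto a dataset might look like in practice. The field names (`source`, `collection_method`, `consent_terms`, `licence`) are illustrative assumptions rather than any established provenance standard; the point is simply that the conditions of production can travel with the data itself.

```python
# Illustrative sketch only: the provenance schema below is a made-up
# example, not an established standard for data provenance.
import hashlib
import json

def stamp_provenance(records, source, collection_method, consent_terms, licence):
    """Bundle a dataset with a provenance record and a checksum,
    so the terms of its production travel with the data."""
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return {
        "provenance": {
            "source": source,
            "collection_method": collection_method,
            "consent_terms": consent_terms,
            "licence": licence,
            # Checksum ties the provenance record to this exact data.
            "checksum": hashlib.sha256(payload).hexdigest(),
        },
        "records": records,
    }

dataset = stamp_provenance(
    [{"user": "anon-1", "clicks": 12}],
    source="example survey",
    collection_method="opt-in web form",
    consent_terms="research use only",
    licence="CC BY 4.0",
)
print(dataset["provenance"]["consent_terms"])
```

A downstream user who receives `dataset` can check the checksum against the records and read the consent terms before reusing the data, which is one simple way the conditions of production can be retained into the future.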