Technology is creating massive change in society. In many cases, that’s for good, but there is certainly the potential for great harm. A lot of press coverage has focused on anecdotal incidents of bad outcomes. These are sometimes outliers or red herrings (deaths from autonomous vehicles, for example, are terrible for those involved, but autonomous driving is still likely to be much safer in the long run). Some of the most serious problems go woefully under-reported, though: Virginia Eubanks’ book Automating Inequality, for example, shows how systemic errors can lead to awful outcomes for large groups of people.
The challenge for technologists and society is how to enable an open discourse on technology risk and ethics, and how to derive real, actionable guidelines from it. Engineers need a frame of reference for the kinds of things they need to think about when building new technology.
A couple of great existing resources for this are the IEEE Code of Ethics, a Computer.org version, and Robert C. Martin’s Programmer’s Oath. These provide great high-level thoughts on “doing the right thing”. This is a necessary grounding, but beyond it the question often becomes: what is the right thing? How do I know when I might be doing something “wrong”?
Last August, a collaboration between the Institute for the Future and the Tech/Society Solutions Lab launched one of the first toolkits/guides that tries to be more specific about what types of risks to look for. The toolkit is called EthicsOS and aims to help engineers anticipate and prevent bad actors from using tech in harmful ways. It does this by identifying some key areas where unintended consequences could emerge, the types of scenarios to watch out for, and finally some strategies for combating these risks.
Obviously, this is a huge undertaking, and it’s questionable whether it’s even possible to map out enough risk areas to be complete. On the other hand, the attempt is genuinely worthwhile: simply making it spurs creative thinking and awareness.
The key risk zones EthicsOS identifies are shown in the figure below (the full kit is here):

Going through the risk zones and scenarios is actually very thought-provoking. Each of the 8 zones is highly significant, and it’s great to see broad topics such as “Economic & Asset Inequalities” identified. It’s well worth reading through and considering how one’s own technology work might impact these areas.
On the more negative side, these challenge areas are in some cases very broad, and are often connected not just with technology issues but with societal ones. Certain technologies can and do accelerate inequality, but to some extent this has always been the case with technology. The question is really how to stop the flywheel of change from being too dramatic, and how to spread the benefits rather than stopping development.
The future-proofing strategies in the guide are also clearly actionable:
- Tech Ethics 101: a basic grounding for everybody in tech.
- A Hippocratic Oath for Data Workers: specific guidelines and “do no harm” commitments from all data workers.
- Ethical Bounty Hunters: rewarding the highlighting of ethical issues.
- Red Flag Rules: clear pathways to report risk.
- Healthy platforms: commitments from major platforms to support open, fair, healthy discourse.
- License to Design: industry standards and qualifications for certain types of tech roles.
Each of these would involve debate on exactly how to implement it; however, they are all excellent seeds for organizations and individuals in tech to begin improving things.
Lastly, lest we think the scenarios in the guide fanciful, some are already playing out in real time. The digital surveillance state scenarios are already far advanced in some countries, such as China. The orientation of today’s primary tech platforms toward garnering eyeballs and attention also clearly drives addictive tendencies in many people.
One of the scenarios, “Venge” (Scenario 13, aptly), is eerily reminiscent of Bruce Sterling’s novel Distraction, in which social media is used to incite violence anonymously. We’re seeing the beginnings of this already with hate campaigns against prominent political figures in the United States and Britain.
While it’s a simple start, EthicsOS is well worth looking at as a starting point for discourse within your own organization.
Here is another post on Medium that digs deeper into EthicsOS.
[1]: There are some great talks by Uncle Bob on YouTube on this topic, including this one on expecting professionalism.