The EthicsNet Guardians' Challenge aims to enable kinder machines.
The Challenge seeks ideas on how to create the best possible set of examples of prosocial behaviour for AI to learn from (i.e. a machine learning dataset).
We welcome entrepreneurs, researchers, scientists, students, and anyone eager to contribute to jump into this challenge and help propose a solution. To register for the challenge, click the “ACCEPT CHALLENGE” button above. The EthicsNet Forum is your space to share thoughts and ideas, and to connect with others who share similar visions.
EthicsNet has been established to help accelerate the development of machine ethics technologies, primarily through encouraging the crowdsourced co-creation of a public dataset, or set of datasets, to empower machine ethics systems.
EthicsNet is modelled after ImageNet, a dataset for machine vision which has been instrumental not only in providing actionable data for new machine vision algorithms to use, but also in providing a rallying-point and benchmarking tool for rapid development within this space.
At present, although there are many organisations focussed on AI Alignment or AI Bias, there appear to be no professional organisations devoted to Machine Ethics (or Computational Ethics) per se, and no official or professional qualifications in this space either.
There are many organisations that advocate for attention to be given to risks from AI, or that advocate for more international collaboration, or that attempt to forge general principles of how technology should be applied.
However well-intentioned, discussion and principles alone cannot actually effect a leap forward in the state of the art. Advancing the state of the art absolutely requires a dataset (or datasets) of moral examples through which to inform machines. Without a dataset, all discussion of this space remains purely academic.
There are multiple potential ways that we could proceed in making a dataset of prosocial behaviours, and many of those could lead to a dead-end. Before we commit further resources to developing a dataset, we decided to pause and ask the global community for advice.
Thus, the EthicsNet Guardians' Challenge was born.
We talk of ‘Parents and Guardians’ with respect to socializing children.
At EthicsNet, we believe that humans have a similar responsibility to act as Guardians for Artificial Intelligence: to protect, to nurture, and to safeguard.
The EthicsNet Guardians’ Challenge invites contributors to share ideas on how to create the best possible set of examples through which to teach intelligent machines how to behave nicely.
You may be the one who unlocks a crucial technique in our journey towards the ‘raise of the machines’.
This dataset is not designed to answer difficult philosophical questions. This dataset is designed to be a description of pro-social (nice, polite, socially welcomed) behaviours, especially broadly universal ones.
The imagined goal is to create a system that can make simple social decisions that one might expect of a well-raised young human child or dog. We want to capture simple social rules like the following:
We have some ideas already of how an efficient machine ethics dataset might be constructed. Perhaps some of these ideas might fit with your own:
Other, more wacky ideas:
EthicsNet, a non-profit, is building a community with the purpose of co-creating a dataset for machine ethics algorithms.
Machine intelligence requires large, well-documented datasets to be trained on. Datasets often matter more than algorithms per se, though they rarely get credit for the value they create.
Datasets such as Fei-Fei Li's ImageNet have enabled the recent expansion in capability of machine intelligence in powerful new ways that otherwise would be impossible.
We want to do the same for the space of ethics: a living dataset that can expand in scope and nuance over time, and empower socially aware thinking machines for generations to come.
We have a vision of building a safer and more just world, by enabling humans and machines to make better decisions, and creating trust through verifiable behavioural rulesets.
We recognise that ethics vary across time, geography, and culture. We intend to democratize access and the opportunity to contribute to this world-changing system, which is currently concentrated in the hands of a few AI researchers and engineers.
This project is indeed challenging. However, it is less challenging than it may at first appear. We are not trying to ‘solve ethics’ per se, or answer abstract questions such as trolley problems.
Even very young children and toddlers are capable of regulating their behaviour based upon social cues, and both babies and non-human animals can apply a sense of fairness to situations. This hints that the underlying principles are not necessarily any more complex than the sophisticated machine vision systems that we have seen debut in recent years.
However, deploying such capability will require a dataset of examples to work from, one we seek to help create. Rather than attempt to model ethics per se in discrete mathematics, this approach is more focussed on capturing good examples of kindness and care.
It is therefore somewhat ‘woolly’ and less rational or computational than might be ideal. However, it ought to be enough to simulate a very basic level of human (or dog) morality, and perhaps even be more akin to biological brains.
We reason that this seems to be a simple way to begin the process of teaching machines how to be friendly. Think MNIST, but for social interactions. A dataset that is highly simplistic, yet effective and useful within its narrow domain.
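To make the ‘MNIST for social interactions’ analogy concrete, here is a minimal sketch of what one labelled record in such a dataset might look like. Every field name and value here is an illustrative assumption, not an EthicsNet specification.

```python
from dataclasses import dataclass


# Hypothetical schema for one labelled example in an MNIST-style dataset
# of social interactions: a simple scenario, a candidate behaviour, and a
# binary prosocial/antisocial label within a narrow domain.
@dataclass(frozen=True)
class SocialExample:
    scenario: str        # short natural-language description of the situation
    action: str          # the behaviour being labelled
    prosocial: bool      # simple binary label, in the spirit of MNIST's digits
    context_tags: tuple  # e.g. ("public", "with_children")


example = SocialExample(
    scenario="A stranger drops their wallet on the street.",
    action="Pick it up and return it to them.",
    prosocial=True,
    context_tags=("public",),
)
```

As with MNIST, the point of such a record is not philosophical depth but a simple, consistent format that learning algorithms can be benchmarked against.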
Think ‘Wright Brothers’, not ‘747’. Sometimes simply proving the fundamental aspects is more than enough to change the world.
Much of human cognition is social, and humans and animals learn a great deal from each other in an indirect manner. Children do not have the level of experience of adults, but they do not require direct experience in order to learn good rules of thumb.
Machines need to learn from the social interactions of humans, and from their own social interactions with humans. By giving them opportunities to learn from us, machine intelligence systems will become much more sophisticated, and also much more human-like in their behaviour.
Furthermore, it may be the case that many human beings only try to be about as moral as their peers. This enables a ‘good enough’ level of behaviour for most human beings most of the time, and this model should suffice for teaching machines to behave decently also. We do not need machines to act like saints; in fact, if they are highly scrupulous they may find it more difficult to integrate.
The EthicsNet dataset should primarily act as a census of Descriptive Ethics (i.e. how people typically act in certain contexts) across contemporary societies in multiple geographic locations and of varying creeds.
The process of mapping Descriptive Ethics should be reasonably accessible and easy for most people to contribute to: contributors simply project their own cultural awareness and ethical intuitions into the system. Results can then be aggregated, and cross-referenced with demographic information obtained anonymously.
Having a map of the space of ‘common’ ethical rules in given scenarios should be sufficient for most agents to integrate safely into most situations, most of the time.
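The aggregation step described above can be sketched in a few lines: contributors judge a scenario, and their judgements are pooled per scenario and demographic group to yield a toy ‘map’ of common ethical intuitions. The data rows and field names below are illustrative assumptions only.

```python
from collections import defaultdict

# Hypothetical crowd annotations: each contributor labels a scenario as
# acceptable or not, alongside anonymous demographic information.
annotations = [
    {"scenario": "queue_jumping", "acceptable": False, "region": "EU"},
    {"scenario": "queue_jumping", "acceptable": False, "region": "EU"},
    {"scenario": "queue_jumping", "acceptable": True,  "region": "EU"},
    {"scenario": "queue_jumping", "acceptable": False, "region": "US"},
]


def approval_rates(rows):
    """Fraction of contributors, per (scenario, region), who judged the
    behaviour acceptable: a toy map of descriptive ethics."""
    counts = defaultdict(lambda: [0, 0])  # (scenario, region) -> [yes, total]
    for r in rows:
        key = (r["scenario"], r["region"])
        counts[key][0] += int(r["acceptable"])
        counts[key][1] += 1
    return {k: yes / total for k, (yes, total) in counts.items()}


rates = approval_rates(annotations)
# e.g. rates[("queue_jumping", "EU")] == 1/3
```

A real system would need many more scenarios, careful anonymisation, and weighting for sampling bias, but the basic shape (judgements in, per-group rates out) is this simple.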
Normative Ethics (how people should ideally act) would be a secondary, longer-term objective. This would be more about applying techniques broadly similar to the scientific method (prediction and observation), and also deriving knowledge from first principles. This is likely to be far more cognitively challenging for a lot of people. Ethics derived through such tools are intended to converge towards (though perhaps not upon) something objective. This is much more challenging, and contentious, as it is less likely to fit with common expectations of behaviour. EthicsNet is not concerned with Normative Ethics, at least at this stage.
We consider that possessing (or creating) a secondary dataset of emotional valence may also be valuable. This would have examples of images or situations that would likely provoke an emotional response in a human (for example, scary, cute, or outrageous images).
Understanding which situations are likely to provoke an emotional response is an important aspect of empathy, one that human sociopaths typically lack full awareness of.
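The secondary emotional-valence dataset described above could start as something as simple as stimuli paired with the response they are likely to provoke. The records and the lookup helper below are illustrative assumptions, not a proposed EthicsNet format.

```python
# Hypothetical records for a secondary "emotional valence" dataset:
# each stimulus is paired with the emotional response it would likely
# provoke in a human (scary, cute, outrageous, ...).
valence_examples = [
    {"stimulus": "a growling dog baring its teeth",   "response": "scary"},
    {"stimulus": "a kitten playing with yarn",        "response": "cute"},
    {"stimulus": "someone kicking a vending machine", "response": "outrageous"},
]


def likely_response(stimulus, dataset):
    """Look up the expected emotional response for a known stimulus."""
    for row in dataset:
        if row["stimulus"] == stimulus:
            return row["response"]
    return None  # unknown stimulus: no prediction


likely_response("a kitten playing with yarn", valence_examples)  # -> "cute"
```

In practice such a dataset would pair images or richer situation descriptions with graded labels rather than exact-match strings, but even this lookup illustrates the empathy-relevant mapping the text describes.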
Any and all IP is open-sourced, as per the competition Guidelines.
This project has already been generously supported by grants from Stichting SIDNfonds, and work by team members on this project has been sponsored by Innogy. We have also received amazing support from Stockholm School of Entrepreneurship Ventures, Y Combinator’s Startup School, and The IBM AI X-Prize.
Yes, but it’s quick and easy. Just click the “Join Us” button at the top of the page and follow the instructions to complete your registration. All you need to provide is your name and email address.
If you have a question not answered in the FAQ, we recommend that you post it in the EthicsNet Forum where someone will respond to you. This way, others who may have the same question will be able to see it.