The EthicsNet Guardians' Challenge aims to enable kinder machines.
The Challenge seeks ideas on how to create the best possible set of examples of prosocial behaviour for AI to learn from (i.e. a machine learning dataset).
We welcome entrepreneurs, researchers, scientists, students, and anyone eager to contribute to jump into this challenge and help propose a solution. To register for the challenge, click the “ACCEPT CHALLENGE” button above. The EthicsNet Forum is your space to share thoughts, exchange ideas, and connect with others who share similar visions.
- Our interactions with intelligent machines are becoming ever more intimately entwined. Machines are capable of remarkable learning simply through observation. However, if we are to enjoy the benefits of AI, trust is essential. One aspect of trust is consistent behaviour over time. However, that behaviour must also be acceptable: it must be possible to modify objectionable behaviour into something more preferable.
- Socialisation is the process by which we teach someone how they should behave if they want to interact with us. We demand that certain boundaries be maintained in exchange for accepting someone within our social circle.
- For all of human history, people have taught their children and pets how to behave, and given feedback to family members and peers. We believe it is only natural that humans should interact with intelligent machines in a similar way. People tend to prefer autonomous systems whose behaviour they can influence; such systems are much easier to trust, accept, and work with.
- As human societies have evolved, they have developed rules and taboos to help manage social interactions. If machines are to be socialised, they must learn these behavioural codes, in order to respond appropriately to certain situations.
The Data-driven Approach
- The term Machine Ethics describes a variety of techniques designed to help machines such as computers and robots to make decisions that are more in line with the cultural and moral expectations of society. Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality.
- Machine Ethics is sometimes compared to terms such as AI Alignment. However, Machine Ethics (Computational Ethics) is somewhat broader than AI Alignment per se, as this space also has many important interactions with human ethics and culture. Machine Ethics may in fact be described as an ‘industrialisation of values and morality’. Such processes are, however, naturally interwoven with the task of teaching intelligent machines how they ought to interact with other agents.
- EthicsNet believes that Machine Ethics has the potential to become as significant to global society as the World Wide Web has been. Machines that understand human values can enable a profound shift in social welfare, economics, and mass psychology in the coming decade.
- This is an exciting new space, but one that currently has few or no deployable engineering solutions. We believe that the key missing component is having the right data to train Machine Ethics technologies upon. We observe that datasets (sets of virtual experiences for machine intelligence) have generally made the most significant contribution to the deployment of machine intelligence technologies in recent years, and we expect this trend to continue.
EthicsNet has been established to help accelerate the development of machine ethics technologies, primarily through encouraging the crowdsourced co-creation of a public dataset, or set of datasets, to empower machine ethics systems.
EthicsNet is modelled after ImageNet, a dataset for machine vision which has been instrumental not only in providing actionable data for new machine vision algorithms to use, but also in providing a rallying-point and benchmarking tool for rapid development within this space.
At present, although there are many organisations focussed on AI Alignment or AI Bias, there appear to be no professional organisations devoted to Machine Ethics (or Computational Ethics) per se, and no official or professional qualifications in this space either.
There are many organisations that advocate for attention to be given to risks from AI, or that advocate for more international collaboration, or that attempt to forge general principles of how technology should be applied.
However well-intentioned, discussion and principles alone cannot actually effect a leap forward in the state of the art. Advancing the state of the art absolutely requires datasets through which to inform machines via moral examples. Without a dataset, all discussion of this space remains purely academic.
Designing A Dataset
- We want to gather basic ethical intuitions in a fast and simple way. The faster and simpler we can make the process, the more inclusive we can make it, and the more quickly we can gather this information.
- We are trying to provide sufficient good-enough examples of pro-social behaviour to teach machines to behave in a manner comparable to that of a well-raised young human child or friendly dog.
- How can we best create a collection of data that can meaningfully teach machines, and that we can use to benchmark the relative progress of algorithms in interpreting that data?
- The dataset should focus primarily on pro-social behaviours, with less focus on anti-social behaviours, though incorporating both into the dataset is important.
The Breakthrough We Are Asking For
There are multiple potential ways that we could proceed in making a dataset of prosocial behaviours, and many of them could lead to a dead end. Before committing further resources to developing a dataset, we decided to pause and ask the global community for advice.
Thus, the EthicsNet Guardians' Challenge was born.
- By harnessing the power of the crowd, we hope to find the right idea, or combination of ideas, that will enable us to help create an ideal dataset for machine ethics.
We speak of ‘Parents and Guardians’ with respect to socialising children.
At EthicsNet, we believe that humans have a similar responsibility to act as Guardians for Artificial Intelligence – to protect, to nurture, and to safeguard.
The EthicsNet Guardians’ Challenge invites contributors to share ideas on how to create the best possible set of examples through which to teach intelligent machines how to behave nicely.
You may be the one who unlocks a crucial technique in our journey towards the ‘raise of the machines’.
What You Can Do To Cause A Breakthrough
- Click ACCEPT CHALLENGE above to sign up for the challenge.
- Read the Challenge Guidelines to learn about the requirements and rules.
- Share this challenge on social media using the icons above. Please show your friends, your family, or anyone you know who has a passion for building a better society through AI.
- Join the conversations in our Forum, ask questions, and connect with other innovators.
Things to consider
This dataset is not designed to answer difficult philosophical questions. It is designed to describe pro-social (nice, polite, socially welcomed) behaviours, especially those that are broadly universal.
The imagined goal is to create a system that can make simple social decisions that one might expect of a well-raised young human child or dog. We want to capture simple social rules like the following:
- It’s not nice to stare at people.
- If you see trash lying around and you aren’t doing anything particularly time sensitive, pick it up and put it in a wastebin.
- If you see someone accidentally drop something (e.g. their wallet), alert them, and return it.
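Simple rules like those above could be captured as structured records. A minimal sketch of what a single dataset entry might look like; the field names and validation logic here are illustrative assumptions, not a fixed EthicsNet schema:

```python
# Hypothetical record format for one prosocial example.
record = {
    "situation": "A passer-by drops their wallet without noticing.",
    "behaviour": "Alert them and return the wallet.",
    "label": "prosocial",            # prosocial vs. antisocial
    "universality": 0.95,            # assumed annotator-agreement score
    "context_tags": ["public space", "lost property"],
}

def is_valid(rec):
    """Minimal sanity check that a record carries the expected fields."""
    required = {"situation", "behaviour", "label"}
    return required <= rec.keys() and rec["label"] in {"prosocial", "antisocial"}

print(is_valid(record))  # True
```

A shared, machine-readable schema of this kind is what would let crowdsourced contributions be pooled into a single benchmarkable dataset.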
Dataset Design Suggestions
We have some ideas already of how an efficient machine ethics dataset might be constructed. Perhaps some of these ideas might fit with your own:
- Apply a model similar to ReCAPTCHA to enable mass collaboration on small Human Intelligence Tasks, similarly to how such systems have assisted with text and object recognition in the past.
- Apply Swarm Intelligence (synchronous or asynchronous) mechanics to the process of human decision making on this dataset. Then perhaps train an ML system to mimic the swarm, or apply a Wisdom of the Artificial Crowd-type algorithm.
- Explicitly break down ethical dilemmas into more or less preferable outcomes, in a manner akin to GenEth.
- Use Generative Adversarial Networks to amplify datasets, to automatically generate nuances between various edge-cases, or to generate example dilemmas or moral guesses that can then be verified by humans.
- Perhaps such a system could act as an automated ethical Turing test self-simulator, given enough human training.
- Generate best and worst possible outcomes from a situation.
- Play the debate game (https://debate-game.openai.com/) and try either to improve the method or to implement parts of it with GANs or other existing ML techniques.
- Find the edge cases, and use machine learning to fill in the gaps.
- Tokenize ethics, so that an AI earns tokens for good behaviour (and perhaps forfeits some for poor behaviour), similarly to what we do for problem children, or to some experimental methods of assisting persons with psychopathy.
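As a rough illustration of the tokenizing idea above, a toy ledger might credit an agent for behaviours judged prosocial and debit it for antisocial ones. The class name, reward values, and labels are assumptions made for this sketch only:

```python
class EthicsLedger:
    """Toy token ledger: the agent earns a token for each behaviour
    judged prosocial and forfeits one for each antisocial behaviour.
    Reward magnitudes here are illustrative, not prescribed."""

    def __init__(self):
        self.balance = 0

    def record(self, label):
        if label == "prosocial":
            self.balance += 1
        elif label == "antisocial":
            self.balance -= 1  # forfeit a token for poor behaviour
        return self.balance

ledger = EthicsLedger()
for judgement in ["prosocial", "prosocial", "antisocial", "prosocial"]:
    ledger.record(judgement)
print(ledger.balance)  # 2
```

In a real system the token stream would presumably feed a reinforcement-learning reward signal rather than a simple counter.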
Other, more wacky ideas:
- Apply energy (caloric usage) or entropy mechanics. Perhaps something akin to Causal Entropic Forcing (maximising future freedom of action) could be applied to pro-social behaviour.
- A multi-dimensional vector space or generative manifold of ethics, to help analyse cultures and to check whether a given behaviour is likely to be culturally acceptable/permissible.
- Mine ethics from internet conversational analysis, flagged content, or content that caused a user’s account to be suspended.
- Mine internet content for things that people think ‘should’ happen, and then present those to humans to check if those are indeed worthy of being debated or analysed.
- How could we apply a mass collaboration model like ReCAPTCHA or Duolingo to get people to contribute to and curate the dataset?
- Should we apply Swarm Intelligence techniques to collecting ethical examples, applying techniques such as those of Unanimous.ai? What might be the best way to implement this?
- Should we ask people to create a passport, so that they can define their personal values, and thereby perhaps influence the provision of products or services, or perhaps more easily connect to others similar to themselves?
- Are examples of prosocial behaviours enough, or should we include examples of anti-social behaviour also?
- In what ways can we ensure that demographic, geographic, and cultural context is taken into account?
- There are a variety of possible source datasets which could include:
- 3D data (from games or simulations)
- Pictures or photos
- Some combination
- Emotional valence describes whether an emotion is agreeable or disagreeable, and can provide cultural context for emotion. Is a model of emotional valence required for machines to fully understand why certain stimuli evoke certain responses from humans?
- Common Law is, in a sense, a genetic process with recombination over many generations. Could something similar be done for machines?
- Is there a place for specific role-models or for humans to act as behavioural exemplars?
- Could decisions made whilst roleplaying ‘good’ characters in RPG games inform a dataset?
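Several of the questions above concern combining many people's judgements into one signal. As a much simpler stand-in for full Swarm Intelligence mechanics such as those of Unanimous.ai, a majority-vote aggregator over annotator labels might look like this (the label names and agreement metric are illustrative assumptions):

```python
from collections import Counter

def aggregate_judgements(judgements):
    """Majority-vote aggregation of annotator labels for one scenario.

    judgements: list of labels such as "prosocial" / "antisocial".
    Returns (winning_label, agreement_ratio). A low agreement ratio
    could flag a scenario as culturally contested and worth review.
    """
    counts = Counter(judgements)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(judgements)

# Example: five annotators rate "returning a dropped wallet".
label, agreement = aggregate_judgements(
    ["prosocial", "prosocial", "prosocial", "prosocial", "antisocial"]
)
print(label, agreement)  # prosocial 0.8
```

A model could then be trained to mimic the aggregated labels, in the spirit of the Wisdom of the Artificial Crowd idea mentioned earlier.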