How would you teach AI to be kind?

Help us design the best possible set of examples for teaching machines how to behave in pro-social ways.

Prize: $10,000
Overview

The EthicsNet Guardians' Challenge aims to enable kinder machines.

The Challenge seeks ideas on how to create the best possible set of examples of prosocial behaviour for AI to learn from (i.e. a machine learning dataset).

We welcome entrepreneurs, researchers, scientists, students, and anyone eager to contribute to jump into this challenge and help propose a solution. To register for the challenge, click the “ACCEPT CHALLENGE” button above. The EthicsNet Forum is your space to share thoughts and ideas, and to connect with others who share a similar vision.

 

The Problem

  • Our interactions with intelligent machines are becoming ever more intimately entwined. Machines are capable of remarkable learning simply through observation. However, if we are to enjoy the benefits of AI, trust is essential. One aspect of trust is consistent behaviour over time, but that behaviour must also be acceptable: it must be possible to steer objectionable behaviour towards something more preferable.

 

  • Socialization is the process by which we teach someone how they should behave if they want to interact with us. We demand that certain boundaries are maintained in exchange for accepting someone within our social circle.

 

  • For all of human history, people have taught their children and pets how to behave, and given feedback to family members and peers. We believe that it is only natural that humans should interact with intelligent machines in a similar way. People tend to prefer when they have influence over how autonomous systems behave. Such systems are much easier to trust, accept, and work with.

 

  • As human societies have evolved, they have developed rules and taboos to help manage social interactions. If machines are to be socialised, they must learn these behavioural codes, in order to respond appropriately to certain situations.

 

The Data-driven Approach

  • The term Machine Ethics describes a variety of techniques designed to help machines such as computers and robots to make decisions that are more in line with the cultural and moral expectations of society. Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality.

 

  • Machine Ethics is sometimes compared to terms such as AI Alignment. However, Machine Ethics (Computational Ethics) is somewhat broader than AI Alignment per se, as the field also has many important interactions with human ethics and culture. Machine Ethics may in fact be described as an ‘industrialisation of values and morality’. Such processes are, however, naturally interwoven with the task of teaching intelligent machines how they ought to interact with other agents.

 

  • EthicsNet believes that Machine Ethics has the potential to become as significant to global society as the World Wide Web has been. Machines that understand human values can enable a profound shift in social welfare, economics, and mass psychology in the coming decade.

 

  • This is an exciting new space, but one that currently has few or no deployable engineering solutions. We believe that the key missing component is the right data on which to train Machine Ethics technologies. We observe that datasets (sets of virtual experiences for machine intelligence) have generally made the most significant contribution to the deployment of machine intelligence technologies in recent years, and we expect this trend to continue.

 

Our Vision

  • EthicsNet has been established to help accelerate the development of machine ethics technologies, primarily through encouraging the crowdsourced co-creation of a public dataset, or set of datasets, to empower machine ethics systems.

  • EthicsNet is modelled after ImageNet, a dataset for machine vision which has been instrumental not only in providing actionable data for new machine vision algorithms to use, but also in providing a rallying-point and benchmarking tool for rapid development within this space.

  • At present, although there are many organisations focussed on AI Alignment or AI Bias, there appear to be no professional organisations devoted to Machine Ethics (or Computational Ethics) per se, and no official or professional qualifications in this space either.

    There are many organisations that advocate for attention to be given to risks from AI, or that advocate for more international collaboration, or that attempt to forge general principles of how technology should be applied.

  • However well-intentioned, discussion and principles alone cannot effect a leap forward in the state of the art. Advancing the state of the art requires datasets of moral examples from which machines can learn. Without such a dataset, all discussion of this space remains purely academic.

 

Designing A Dataset

 

  • We want to gather basic ethical intuitions in a fast and simple way. The faster and simpler we can make the process, the more inclusive it can be, and the more quickly the information can be gathered.

 

  • We are trying to provide sufficient good-enough examples of pro-social behaviour to teach machines to behave in a manner comparable to that of a well-raised young human child or friendly dog.

 

  • How can we best create a collection of data that can meaningfully teach machines, and that we can use to benchmark the progress of different algorithms in interpreting that data?

 

  • The dataset should focus primarily on pro-social behaviours, with less focus on anti-social behaviours, though incorporating both into the dataset is important.

 

The Breakthrough We Are Asking For

  • There are multiple potential ways that we could proceed in making a dataset of prosocial behaviours, and many of those could lead to a dead-end. Before we commit further resources to developing a dataset, we decided to pause and ask the global community for advice.

    Thus, the EthicsNet Guardians' Challenge was born.

  • By harnessing the power of the crowd, we hope to find the right idea, or combination of ideas, that will enable us to help create an ideal dataset for machine ethics.
  • We talk of ‘Parents and Guardians’ with respect to socializing children.

    At EthicsNet, we believe that humans have a similar responsibility to act as Guardians for Artificial Intelligence – To protect, to nurture, and to safeguard.

    The EthicsNet Guardians’ Challenge invites contributors to share ideas on how to create the best possible set of examples through which to teach intelligent machines how to behave nicely.

    You may be the one who unlocks a crucial technique in our journey towards the ‘raise of the machines’.

 

What You Can Do To Cause A Breakthrough

  • Click ACCEPT CHALLENGE above to sign up for the challenge.
  • Read the Challenge Guidelines to learn about the requirements and rules.
  • Share this challenge on social media using the icons above. Please show your friends, your family, or anyone you know who has a passion for building a better society through AI.
  • Join the conversations in our Forum, ask questions, and connect with other innovators.

 

Things to consider

This dataset is not designed to answer difficult philosophical questions. This dataset is designed to be a description of pro-social (nice, polite, socially welcomed) behaviours, especially generally universal ones.

The imagined goal is to create a system that can make simple social decisions that one might expect of a well-raised young human child or dog. We want to capture simple social rules like the following:

  • It’s not nice to stare at people.
  • If you see trash lying around and you aren’t doing anything particularly time sensitive, pick it up and put it in a wastebin.
  • If you see someone accidentally drop something (e.g. their wallet), alert them, and return it.
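To make rules like these concrete, here is a minimal sketch of how such examples might be encoded as dataset records. All field names and example records are our own invention for illustration, not part of any EthicsNet specification:

```python
from dataclasses import dataclass, asdict

@dataclass
class SocialExample:
    """One labelled example of (anti-)social behaviour."""
    situation: str        # natural-language description of the context
    action: str           # the behaviour taken in that context
    prosocial: bool       # crowd-sourced label: is this behaviour welcome?
    tags: tuple = ()      # optional context tags (setting, virtue, ...)

# The three rules above, encoded as labelled examples:
examples = [
    SocialExample("A stranger is sitting nearby",
                  "stare at them", False, ("public", "politeness")),
    SocialExample("Trash is lying around and you are not busy",
                  "pick it up and put it in a wastebin", True,
                  ("public", "tidiness")),
    SocialExample("Someone accidentally drops their wallet",
                  "alert them and return it", True, ("public", "honesty")),
]

for ex in examples:
    print(asdict(ex))
```

A flat record format like this keeps contributions simple to collect and easy to aggregate, while the optional tags leave room for the cultural context discussed later.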

 

Dataset Design Suggestions

We have some ideas already of how an efficient machine ethics dataset might be constructed. Perhaps some of these ideas might fit with your own:

  • Apply a model similar to ReCAPTCHA to enable mass collaboration on small Human Intelligence Tasks, similarly to how such systems have assisted with text and object recognition in the past.
  • Apply Swarm Intelligence (synchronous or asynchronous) mechanics to the process of human decision making on this dataset. Then perhaps train an ML system to mimic the swarm, or apply a Wisdom of the Artificial Crowd-type algorithm.
  • Explicitly break down ethical dilemmas into more or less preferable outcomes, in a manner akin to GenEth.
  • Use Generative Adversarial Networks to amplify datasets, to automatically generate nuances between various edge-cases, or to generate example dilemmas or moral guesses that can then be verified by humans.
    • Perhaps such a system could act as an automated ethical Turing test self-simulator, given enough human training.
    • Generate best and worst possible outcomes from a situation.
  • Play the debate game (https://debate-game.openai.com/) and try either to improve the method or to implement parts of it with GANs or other existing ML techniques.
  • Find the edge cases, and use machine learning to fill in the gaps.
  • Tokenize ethics, so that an AI earns tokens for good behaviour (and perhaps forfeits some for poor behaviour), similarly to reward systems used with troubled children, or to some experimental methods of assisting persons with psychopathy.
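The last idea, tokenizing ethics, can be sketched as a toy ledger. The class name, reward values, and agent labels below are hypothetical; a real system would tie token awards to crowd-sourced judgments rather than hard-coded calls:

```python
from collections import defaultdict

class EthicsLedger:
    """Toy token ledger: agents earn tokens for behaviour judged
    prosocial, and forfeit more for behaviour judged antisocial."""

    def __init__(self, reward=1, penalty=2):
        self.reward, self.penalty = reward, penalty
        self.balance = defaultdict(int)

    def record(self, agent, prosocial):
        """Record one judged behaviour and return the new balance."""
        self.balance[agent] += self.reward if prosocial else -self.penalty
        return self.balance[agent]

ledger = EthicsLedger()
ledger.record("robot-1", True)    # returned a lost wallet: +1
ledger.record("robot-1", True)    # picked up litter: +1
ledger.record("robot-1", False)   # stared at a stranger: -2
print(ledger.balance["robot-1"])  # prints 0
```

Making the penalty larger than the reward reflects the asymmetry of social trust: a single antisocial act can undo the goodwill of several prosocial ones.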

 

Other, more wacky ideas:

  • Apply energy (caloric usage) or entropy mechanics. Perhaps something akin to Causal Entropic Forcing (maximising future freedom of action) could be applied to pro-social behaviour.
  • A multi-dimensional vector space or generative manifold of ethics, to help analysis of cultures, to check if a given behaviour is likely to be culturally acceptable/permissible.
  • Mine ethics from internet conversational analysis, flagged content, or content that caused a user’s account to be suspended.
  • Mine internet content for things that people think ‘should’ happen, and then present those to humans to check if those are indeed worthy of being debated or analysed.
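The vector-space idea above might work as follows: embed behaviours and cultural norms in a shared space, then test a candidate behaviour's acceptability by its similarity to a culture's norm vector. The embeddings below are hand-written placeholders; in practice they would be learned from annotated data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d embeddings; axes have no fixed meaning here.
culture_norm = [0.9, 0.8, 0.1]     # centroid of accepted behaviours
candidate_ok = [0.8, 0.9, 0.2]     # e.g. "greet a neighbour"
candidate_bad = [-0.7, 0.1, 0.9]   # e.g. "stare at strangers"

def acceptable(behaviour, norm, threshold=0.5):
    """Is the behaviour close enough to the cultural norm?"""
    return cosine(behaviour, norm) >= threshold

print(acceptable(candidate_ok, culture_norm))   # True
print(acceptable(candidate_bad, culture_norm))  # False
```

Separate norm vectors per culture would let the same behaviour embedding be checked against different contexts, which speaks directly to the cross-cultural questions below.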

 

Open Questions

  • How could we apply a mass collaboration model like ReCAPTCHA or Duolingo to get people to contribute to and curate the dataset?
  • Should we apply Swarm Intelligence techniques to collecting ethical examples, applying techniques such as those of Unanimous.ai? What might be the best way to implement this?
  • Should we ask people to create a passport, so that they can define their personal values, and thereby perhaps influence the provision of products or services, or perhaps more easily connect to others similar to themselves?
  • Are examples of prosocial behaviours enough, or should we include examples of anti-social behaviour also?
  • In what ways can we ensure that demographic, geographic, and cultural context is taken into account?
  • There are a variety of possible source datasets which could include:
    • 3D data (from games or simulations)
    • Pictures or photos
    • Text
    • Some combination
  • Emotional Valence describes whether an emotion is agreeable or disagreeable, and can provide cultural context for emotion. Is a model of emotional valence required for machines to fully understand why certain stimuli evoke certain responses from humans?
  • Common Law is, in a sense, a genetic process with recombination over many generations. Could something similar be done for machines?
  • Is there a place for specific role-models or for humans to act as behavioural exemplars?
  • Could decisions made whilst roleplaying ‘good’ characters in RPG games inform a dataset?
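One of the questions above concerns emotional valence. As a toy illustration, a lexicon-based scorer can label a description as agreeable or disagreeable. The lexicon and its scores are invented for demonstration; a practical model would be learned from labelled data:

```python
# Hypothetical valence lexicon: +1 agreeable, -1 disagreeable.
VALENCE = {"cute": 0.8, "kind": 0.7, "puppy": 0.9,
           "scary": -0.8, "outrageous": -0.6, "theft": -0.9}

def valence(text):
    """Mean valence of known words; 0.0 when no word is known."""
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

print(valence("a cute puppy"))              # positive: agreeable
print(valence("a scary outrageous theft"))  # negative: disagreeable
```

Even a crude scorer like this makes the design question concrete: should valence be a separate dataset, or an extra label on the main behavioural examples?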

 


EthicsNet, a non-profit, is building a community with the purpose of co-creating a dataset for machine ethics algorithms.

Machine Intelligence requires large, well-documented datasets for it to be trained upon. Datasets often matter more than algorithms per se, though they rarely get credit for the value that they can create.

Datasets such as Fei-Fei Li's ImageNet have enabled the recent expansion in capability of machine intelligence in powerful new ways that otherwise would be impossible.

We want to do the same for the space of ethics – a living dataset that can expand in scope and nuance over time, and empower socially-aware thinking machines for generations to come.

We have a vision of building a safer and more just world, by enabling humans and machines to make better decisions, and creating trust through verifiable behavioural rulesets.

We recognise that ethics vary across time, geography, and culture. We intend to democratize access and the opportunity to contribute to this world-changing system, which is currently concentrated in the hands of a few AI researchers and engineers.

However, there are multiple potential ways that we could proceed, and many of those are likely to lead to a dead-end. Before we commit further resources on developing a dataset, we decided to pause, to ask the global community for advice. 

Thus, the EthicsNet Guardians' Challenge was born.

This project is indeed challenging. However, it is less challenging than it may at first appear. We are not trying to ‘solve ethics’ per se, or answer abstract questions such as trolley problems.

 

Even very young children and toddlers are capable of regulating their behaviour based upon social cues, and both babies and non-human animals can apply a sense of fairness to situations. This hints that the underlying principles are not necessarily any more complex than the sophisticated machine vision systems that we have seen debut in recent years.

 

However, deploying such capability will require a dataset of examples to work from, one we seek to help create. Rather than attempt to model ethics per se in discrete mathematics, this approach is more focussed on capturing good examples of kindness and care.

 

It is therefore somewhat ‘woolly’, and less rational or computational than might be ideal. However, it ought to be enough to simulate a very basic level of human (or dog) morality, and may even be more akin to how biological brains work.

 

We reason that this is a simple way to begin teaching machines how to be friendly. Think MNIST, but for social interactions: a dataset that is deliberately simple, yet effective and useful within its narrow domain.

 

Think ‘Wright Brothers’, not ‘747’. Sometimes simply proving the fundamental aspects is more than enough to change the world.

Much of human cognition is social, and humans and animals learn a great deal from each other in an indirect manner. Children do not have the level of experience of adults, but they do not require direct experience in order to learn good rules of thumb.

 

Machines need to learn from the social interactions of humans, and by their social interactions with humans. By giving them opportunities to learn from us, machine intelligence systems will become much more sophisticated, and also much more human-like in their behaviour.

 

Furthermore, it may be the case that many human beings only try to be about as moral as their peers. This enables a ‘good enough’ level of behaviour for most human beings most of the time, and this model should suffice for teaching machines to behave decently also. We do not need machines to act like saints; in fact, if they are highly scrupulous they may find it more difficult to integrate.

The EthicsNet dataset should primarily act as a census of Descriptive Ethics (i.e. how people typically act in certain contexts) across contemporary societies in multiple geographic locations and of varying creeds.

 

The process of mapping Descriptive Ethics should be reasonably accessible and easy for most people to contribute to: they simply project their own cultural awareness and ethical intuitions into the system. Results can then be aggregated and cross-referenced with demographic information obtained anonymously.
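A minimal sketch of that aggregation step follows. The scenarios, judgments, and demographic regions are hypothetical placeholders for whatever the real contribution pipeline would collect:

```python
from collections import defaultdict

# Hypothetical contributions: (scenario, judgment, contributor region).
judgments = [
    ("queue-jumping", "antisocial", "EU"),
    ("queue-jumping", "antisocial", "US"),
    ("queue-jumping", "acceptable", "US"),
    ("loud-phone-call-on-train", "antisocial", "EU"),
    ("loud-phone-call-on-train", "acceptable", "US"),
]

# Tally votes per (scenario, region) pair.
tally = defaultdict(lambda: defaultdict(int))
for scenario, judgment, region in judgments:
    tally[(scenario, region)][judgment] += 1

def consensus(scenario, region):
    """Most common judgment for a scenario within one region."""
    votes = tally[(scenario, region)]
    return max(votes, key=votes.get) if votes else None

print(consensus("queue-jumping", "EU"))            # antisocial
print(consensus("loud-phone-call-on-train", "US")) # acceptable
```

Keeping region (or any demographic key) in the tally is what makes it a map of descriptive ethics rather than a single global average: the same scenario can carry different consensus labels in different contexts.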

 

Having a map of the space of ‘common’ ethical rules in given scenarios should be sufficient for most agents to integrate safely into most situations, most of the time.

 

Normative Ethics (how people should ideally act) would be a secondary, longer-term objective. This would be more about applying techniques broadly similar to the scientific method (prediction and observation), and deriving knowledge from first principles. This is likely to be far more cognitively challenging for many people. Ethics derived through such tools are intended to converge towards (though perhaps not upon) something objective. This is much more challenging, and more contentious, as it is less likely to fit with common expectations of behaviour. EthicsNet is not concerned with normative ethics, at least at this stage.

We consider that possessing (or creating) a secondary dataset of emotional valence may also be valuable. This would have examples of images or situations that would likely provoke an emotional response in a human (for example, scary, cute, or outrageous images).

 

Understanding which situations are likely to provoke an emotional response is an important aspect of empathy, one that human sociopaths typically lack full awareness of.

Any and all IP is open-sourced, as per the competition Guidelines.

This project has already been generously supported by grants from Stichting SIDNfonds, and work by team members on this project has been sponsored by Innogy. We have also received amazing support from Stockholm School of Entrepreneurship Ventures, Y Combinator’s Startup School, and the IBM AI X-Prize.

Do I need to register to participate? Yes, but it’s quick and easy. Just click the “Join Us” button at the top of the page and follow the instructions to complete your registration. All you need to provide is your name and email address.

If you have a question not answered in the FAQ, we recommend that you post it in the EthicsNet Forum where someone will respond to you. This way, others who may have the same question will be able to see it.