Every day we make decisions about whether the people and information sources around us are reliable, honest, and trustworthy – a person, their actions, what they say, a particular news source, or the actual information being conveyed. Often, the only tools to help us make those decisions are our own judgments based on current or past experiences.
For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, asking specific questions, looking at a user badge or rating system, asking other people for corroborating information, or, in more formal settings, verifying biometrics or recording someone's physiological responses, as with the polygraph. Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized, rigorous tests to evaluate how accurate such tools really are.
Countless studies have tested a variety of credibility assessment techniques and have attempted to use them to rigorously determine when a source and/or a message is credible and, more specifically, when a person is lying or telling the truth. Despite the large and lengthy investment in such research, a rigorous set of valid methods that are useful in determining the credibility of a source or their information across different applications remains difficult to achieve.
This challenge is focused on the methods used to evaluate credibility assessment techniques or technologies, rather than on the techniques or technologies themselves. In this context, a method is a detailed plan or set of actions that can be easily followed and replicated.
In this challenge, we ask that your solution is a method for conducting a study, which includes background information, the objectives of the research, study design, the logistics and means for running the study, and details about what data would be collected if your solution were implemented.
The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the US Government’s Intelligence Community (IC). An important challenge facing the IC is knowing who, or what, is credible. By sharing this challenge with the Hero-X community, IARPA seeks to motivate good ideas from people with diverse backgrounds. Successful solutions could be used to inform future research efforts, to help the Government evaluate new tools, and to develop a deeper understanding of what it means to be credible and how we can evaluate credibility across diverse domains – in person, in virtual spaces, and in the information and media we consume.
The challenge of developing a useful evaluation of credibility assessment techniques and technologies lies in the method’s design for drawing out real behavior, credible or not, from an individual or other source and then having a valid means to test it. This can be difficult as many current techniques involve actors or games where individuals may not feel that they need to be honest or do not truly act as they would in a real-life scenario.

How to Get Involved
The IARPA Credibility Assessment Standardized Evaluation (CASE) Challenge seeks novel methods to measure the performance of credibility assessment techniques and technologies.
Credibility Assessments are tests or tools used to evaluate how credible a source of information is and/or the credibility of specific information or claims. These assessments include written tests, personal observations, checking different corroborating sources, and other tools and technologies (for example, a background check, polygraph examination, interviews, automated language analysis tools, or analyzing metadata).
We are looking for participants from a wide range of backgrounds and professions, who can create and submit a complete and creative solution that meets the challenge criteria.
Please follow this link to register: https://www.iarpa.gov/challenges/casechallenge/docs/CASE_Challenge_Teaming.pdf
All Solvers will need to register for the HeroX platform, and then register for the CASE Challenge at the following link. If you are already registered on the HeroX platform then you only need to register for the CASE Challenge.
Yes, all participants can either work alone or with a team; however, each submission must designate one main point of contact.
The CASE Challenge is seeking a method to evaluate current and future credibility assessment techniques or technologies. In this context, a method is a detailed plan or set of actions that can be easily replicated or followed. A complete solution will include information such as background information, the objectives of the research, study design, the logistics and means for running the study, and details about what data would be collected if the solution were implemented. There are four main criteria that will be used to evaluate each solution and participants should review these criteria to understand what would be included in a good solution. Note: The CASE Challenge involves submitting a description of your solution. The solution does not need to be tested or implemented at this time.
No, both field and lab protocols are of interest.
No. The CASE Challenge is interested in solutions that could be used to evaluate any credibility assessment techniques / technologies – even ones that haven’t been developed yet!
No. The CASE Challenge is not focused on assessing whether specific credibility assessment techniques or technologies can be “beat.” The CASE Challenge is focused on the development of protocols to evaluate the performance of both existing and new credibility assessment techniques and technologies.
Submissions should not assume the level of knowledge that study participants have about the goals and purpose of the study, or their specific role in it. Protocols can choose to inform, or opt not to inform, study participants about certain aspects of the study, such as the fact that their credibility is being assessed and evaluated. However, protocols that do not inform participants of study details a priori can be challenging to implement and may require extensive participant debrief at the end of the study.
Yes, Solvers or teams can submit more than one Solution to the challenge. Each Solution is considered standalone and will be evaluated independently.
Please refer to the CASE Challenge Rules document for information regarding creating your solution.
Solvers will have until 4/14/2019 at 10pm EST to submit their solution for review.
Winning submissions are expected to be shared. The intent of the CASE Challenge is for solutions to serve as a public good in order to inspire and advance credibility assessment research and practice going forward.
The CASE Challenge Rules can be found at the following link: https://www.iarpa.gov/challenges/casechallenge/docs/CASE_Challenge_Rules.pdf
The members of the CASE Challenge evaluation panel have not been announced. The panel will be comprised of qualified technical experts from diverse backgrounds who understand credibility assessment protocols and credibility assessment techniques and technologies. It is anticipated that the panel will consist of credibility assessment researchers, individuals who evaluate credibility assessment techniques and technologies, and those who use these tools.
The following Prizes will be awarded for the challenge. Stage 1 Prizes, except for the Credibility Champions, will be paid within 60 days of Stage 1 winner announcements. Stage 2 Prizes, to include Credibility Champions, will be paid within 60 days of Stage 2 winner announcements.
Yes, but it’s quick and easy. Just click the “Accept Challenge” button on this page and follow the instructions to complete your registration. All you need to provide is your name and email address.
If you have a question not answered in the FAQ, we recommend that you post it in the Forum where someone will respond to you. This way, others who may have the same question will be able to see it.