Every day we make decisions about whether the people and information sources around us are reliable, honest, and trustworthy – the person, their actions, what they say, a particular news source, or the actual information being conveyed. Often, the only tools to help us make those decisions are our own judgments based on current or past experiences.
For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, asking specific questions, looking at a user badge or rating system, asking other people for confirming information, or – in more formal settings – verifying biometrics or recording someone's physiological responses, as with the polygraph. Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized, rigorous tests to evaluate how accurate such tools really are.
Countless studies have tested a variety of credibility assessment techniques and have attempted to use them to rigorously determine when a source and/or a message is credible and, more specifically, when a person is lying or telling the truth. Despite the large and lengthy investment in such research, a rigorous set of valid methods that are useful in determining the credibility of a source or their information across different applications remains difficult to achieve.
This challenge is focused on the methods used to evaluate credibility assessment techniques or technologies, rather than on the techniques or technologies themselves. In this context, a method is a detailed plan or set of actions that can be easily followed and replicated.
In this challenge, we ask that your solution be a method for conducting a study, including background information, the objectives of the research, the study design, the logistics and means for running the study, and details about what data would be collected if your solution were implemented.
The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the US Government's Intelligence Community (IC). An important challenge facing the IC is knowing who, or what, is credible. By sharing this challenge with the HeroX community, IARPA seeks to draw good ideas from people with diverse backgrounds. Successful solutions could be used to inform future research efforts, to help the Government evaluate new tools, and to develop a deeper understanding of what it means to be credible and how we can evaluate credibility across diverse domains – in person, in virtual spaces, and in the information and media we consume.
The challenge of developing a useful evaluation of credibility assessment techniques and technologies lies in designing a method that draws out real behavior, credible or not, from an individual or other source, and then provides a valid means of testing it. This can be difficult because many current techniques involve actors or games in which individuals may not feel they need to be honest, or may not act as they truly would in a real-life scenario.

How to Get Involved
Note: Other Government Agencies, Federally Funded Research and Development Centers (FFRDCs), University Affiliated Research Centers (UARCs), and similar organizations that have a special relationship with the Government giving them access to privileged or proprietary information, or access to Government equipment or real property, will not be eligible to participate in the prize challenge.
The methodology you present in your solution, that is, the way the solution's method is implemented, will affect the type of information or personal attributes being evaluated. Depending on the approach, the motivations to be credible, as well as the penalties for failing to be, will vary. In some cases, the ground truth about an individual's credibility, or the credibility of their information, will be difficult, if not impossible, to establish.
This highlights one of the roadblocks to testing a new credibility assessment method for practical applications: there is no universally accepted way to establish the performance of a new technique or to compare across methods.
A key difficulty in validating that a new method can be used for practical, everyday purposes is moving research findings into the real world. Current methods have several limitations, including, but not limited to:
These limitations are particularly impactful when considering applications for national security, where an individual may feel that their livelihood, core values and beliefs, safety, or freedom are in jeopardy if someone doesn’t believe that they are credible. It is difficult to build such motivation and jeopardy into a replicable methodology that is safe, ethical, AND could be used as a common standard across some or all credibility assessment applications.
Another limitation of methods used today is the large amount of work devoted to retrospective evaluations, or situations in which a participant is evaluated based on past experiences. This contrasts with prospective evaluations, in which credibility must be assessed against future intent or behaviors, which are very challenging to measure objectively. Although prospective screening represents the majority of the US Government's uses of credibility assessment methods, for example in screening during employee hiring, it is relatively underrepresented in credibility assessment research.
Stage 1 Prizes
Stage 2 Prizes
All solvers will submit their solutions in the prescribed format via the HeroX CASE Challenge site. You may find the CASE Challenge Solution Submission Template here. A Word version of the Solution Submission Template can be found here.
CASE Challenge solution scoring will consist of three phases: Compliance Review, Stage 1 Scoring, and Stage 2 Scoring. When a solution submission is received, the 'Solver Names' field and associated content will be removed, and each submission will be given a unique ID. Additionally, throughout the scoring process each solution will be evaluated separately (i.e., multiple solutions submitted by the same respondent will be separated and evaluated individually rather than as a group).
Please see the CASE Challenge Rules for additional details.
The CASE Challenge is the first concerted effort to invite interested individuals to develop credibility assessment evaluation methods that can be used to objectively evaluate both existing and future credibility assessment techniques and technologies. In doing so, the CASE Challenge strives to incentivize a broad range of new ideas while still ensuring their utility in real-world applications. To meet this goal, a scoring panel of experts will evaluate each solution based on the background and strength of its method, how well it reflects realistic conditions, how creative it is compared with currently used methods such as the mock crime scenario, and how well it ensures the responsible care and consideration of participants.
Please see the CASE Challenge Microsite for more information.