MicRic
April 8, 2019
3:34 p.m. PDT
edited

Clarification: Meta-Method Evaluation or New CA Method?

My clarification question arises from reviewing my notes on the Pollina webinar and comparing them with the challenge details. To wit:

In that webinar, Dr. Pollina lists a number of strategies and techniques/technologies that could be brought to bear in a challenge solution: he mentions a 'compliance gaming strategy', sentiment analysis, NLP, and human-machine interaction, speculates about the viability of a "neuropolygraph", and wonders whether a "neurobiology of deception" might emerge. These mentions, along with his oft-stated interest in "technology-driven versus theory-driven" solutions, seem to be asking for new methods (or theories) of credibility assessment (with or without comparison to existing/conventional CA tools).

On the other hand, the IARPA challenge details state that the solution "is a method for conducting a study" and, in a Q&A section, that "The CASE Challenge is interested in solutions that could be used to evaluate any credibility assessment techniques / technologies – even ones that haven’t been developed yet!" In another section, the IARPA folks list some standard CA methods (noting that there are different tools for each) and then state that there "are no standardized and rigorous tests to evaluate how accurate such tools really are." This would seem to indicate that the challenge is seeking methods for evaluating how accurate each CA tool/method (alone or in combination) really is, and that it is NOT seeking any NEW tool or technique for assessing credibility. Later on, another passage states that the challenge is looking for participants to "develop a novel and creative method for measuring the accuracy of current and future credibility assessment techniques."

So the challenge-details quotes seem to be asking for what I have called a 'meta-method' (a method to evaluate present or future methods), NOT new techniques/technologies. That said, there is the reference to "future" technologies, and again, Dr. Pollina states he is interested in seeing "new theoretical" arguments (or models) for CA.

Sorry for the lengthy setup, but can you clearly and concisely state what a solution MAY address (i.e., content that will be considered for an award) and what a solution MAY NOT address (i.e., content that will be rejected as non-compliant with the rules)? How flexible may a solution be in terms of content: a 'meta-method' of evaluation, a new method of CA, or BOTH?

MicRic
April 8, 2019
4:10 p.m. PDT
I should note that, in the webinar, Dr. Pollina states (to the effect) that a solution need NOT be "full end to end" but need only "address[] some aspect of CA". So, is Dr. Pollina's webinar to be viewed as a 'refinement' (or, really, a 'broadening') of the IARPA challenge details?