Dynamic Risk

Cognitive Computing Challenge

Build a cognitive system that can read a document, then load a database with what it finds.

This challenge is closed

Stage: Closed
Prize: $200,000


Summary

Overview

Dynamic Risk, a leader in pipeline integrity engineering, is excited to sponsor this challenge, which could fundamentally change many industries.
 
Imagine if computers could read and interpret documents. Humans could focus their efforts on understanding what analysis results mean to make better decisions. Our interest, at Dynamic Risk, is to improve the safety and reliability of energy pipeline networks by taking full advantage of the vast amounts of data locked in cumbersome formats, handwritten documents, drawings, photographs, and in paper archives.
 
We want to fundamentally change how we ask questions and receive answers. Today, we ask questions based on the data we have available in structured databases. In the future we want to ask questions and not worry about having the data readily available in a structured format.
 
To move forward towards this grand vision, we are sponsoring a challenge to solve the first part of this puzzle. We want a solution that can not only learn how and what data to extract from data sources like spreadsheets, word processor files, and computer generated PDF files but also learn how to map the data found in these documents to specified target fields in a database.


The Problem

The user will be presented with a document to be "read". They will manually identify the locations in the document that contain the required data and teach the system how each location maps to the target fields in the database, including any manipulations of the data (i.e., unit conversions) that are necessary. The software will need to learn from a series of documents that vary in how they format the same target content. For example, the learning process will need to handle reports that contain the same target information but come from different service providers, so the formats can be quite different.
 
Ideally, the solution will also be able to empirically rate its level of confidence in its results and identify what it cannot process.
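As an illustration only (the field names, labels, and conversion below are hypothetical, not part of the challenge data), the kind of user-taught mapping described above might be represented like this:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FieldMapping:
    source_label: str                      # label the user pointed at in the document
    target_field: str                      # column in the target database
    transform: Optional[Callable] = None   # optional cleanup/conversion step

def psi_to_kpa(value: str) -> float:
    """Example manipulation: convert a pressure reading from psi to kPa."""
    return float(value) * 6.894757

# Hypothetical mappings a user might "teach" the system
mappings = [
    FieldMapping("MAOP", "max_operating_pressure_kpa", psi_to_kpa),
    FieldMapping("Pipe Grade", "pipe_grade"),  # copied verbatim, no transform
]

def apply_mapping(m: FieldMapping, raw_value: str):
    """Apply the taught transformation, or pass the value through unchanged."""
    return m.transform(raw_value) if m.transform else raw_value
```

The point of the sketch is the separation between *where* the data lives in the document, *which* database field it feeds, and *what* manipulation is applied on the way in.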


The Challenge Breakthrough

If computers could read and analyse vast amounts of information, we could focus our efforts on understanding what the results mean and make better decisions. The Cognitive Computing Challenge will break the barriers that prevent us from accessing the majority of information in this world, locked in written documents that require humans to interpret and extract the useful information. Our interest is to change how energy pipeline networks are managed, improve their safety, and ultimately save lives. However, this technology has broad application to multiple industries.
 
These types of tasks are currently done manually by humans. A solution to this challenge will automate the repetitive tasks required for complex analysis, eliminating most of the time required and minimizing errors. Ideally, there should be as little human intervention as possible.
 
 

Challenge Guidelines

The winning solution for this challenge will be the one that is not only the most accurate, but also the most flexible, the easiest to use, and trainable with minimal documents to extract any type of data required.
 
 

Challenge Structure

The Cognitive Computing Challenge has four stages:
 
1. Qualifying Challenge
This is a qualifying problem which requires you to process MLS (Multiple Listing Service) data from 300 MLS training records into a database. You will be provided with a document summarizing the correct format of each field to be extracted. 
 
The files required for this Qualifying Challenge can be downloaded here:
URL: ftp.dynamicrisk.net
User: CCChallenge
PW: @@@halleng@
Filename: DRAS_sample_v1_20150605.zip
 
The document format is identical for all of the training documents. You must train your system to process and load the data into a database with a structure of your choice. Teams must submit a successful solution for this Qualifying Challenge before they can compete for the Cognitive Computing Challenge.
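Since the database structure is your choice, even a single-table SQLite database would satisfy this stage. A minimal sketch with a hypothetical schema and records (the real field set comes from the format document provided with the challenge files):

```python
import sqlite3

# Hypothetical extracted MLS records -- illustrative field names only
records = [
    {"listing_id": "MLS001", "price": 250000, "city": "Calgary"},
    {"listing_id": "MLS002", "price": 310000, "city": "Edmonton"},
]

conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
conn.execute(
    "CREATE TABLE listings (listing_id TEXT PRIMARY KEY, price INTEGER, city TEXT)"
)
# Named placeholders let each extracted record load directly from a dict
conn.executemany(
    "INSERT INTO listings VALUES (:listing_id, :price, :city)", records
)
conn.commit()
```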
 
The submitted system will be tested by Dynamic Risk by processing a separate set of documents in the same format. The resulting data will be scored, and each team's total score and the components of their score will be posted on a leaderboard visible to the public. Only scores will be published; the teams' methodologies and questionnaire responses will not be posted. Submissions for all teams will be tested with the same documents to derive their scores.
 
Feedback will be provided to all teams regarding the scoring of their submission with suggestions on where to focus their efforts to improve their scores. Teams whose approaches, in our opinion, are not suitable for the Cognitive Computing Challenge will be encouraged to resubmit an entry for the Qualifying Challenge with a different approach.
 
2. Cognitive Computing Challenge - Similar to the Qualifying Challenge, a set of training documents and a full description of the target attributes will be provided. The difference with this challenge is the following:
  • A smaller number of training documents (approx. 100)
  • A larger variance for the responses in each target field. This will require greater emphasis on cleaning the data extracted
  • Variances in the document format and structure. It will not be one consistent format as in the Qualifying Challenge
Materials required to complete this stage of the challenge can be downloaded here:
URL: ftp.dynamicrisk.net
User: CCChallenge
PW: @@@halleng@
Filename: DRAS_sample_v2_20151104.rar
 
3. Submissions for the Cognitive Computing Challenge
 
4. Judging and announcement of the winner
 
 

Challenge Criteria and Challenge Prize award

One prize will be awarded at the end of the Cognitive Computing Challenge. The team or individual whose submission best meets the judging criteria will be the sole winner.
 
 

Who Can Participate?

The challenge is open to individuals, teams and organizations globally. To be eligible to compete, innovators must comply with all the terms of the challenge as defined in the Challenge Specific Agreement.
 
 

Judging Panel

The challenge will be judged by the official Judging Panel. The Judging Panel holds responsibility for evaluating all submissions against the winning criteria and the guidelines and rules for the challenge. The panel will be responsible for evaluating compliance with these rules and guidelines and will have sole and absolute discretion to select the challenge prize recipient. All decisions made by the Judging Panel shall be rendered by a majority of the judges, are final and binding on both the competitors and Dynamic Risk Assessment Systems, Inc., and are not subject to review or contest.

 

A panel of highly qualified individuals will be selected to serve as the judges. All members of the Judging Panel will be required to sign Non-Disclosure Agreements acknowledging that they make no claim to the Intellectual Property developed by individual competitors, teams, team sponsors, or partners.

 
 

Submission Requirements and Rules

All submissions must meet the following requirements to be included in the judging process for the challenge. A submission that does not meet these requirements will be considered incomplete and will not be eligible for judging.

 

  1. The submission must be original work and be owned by the competitor.
  2. All platform use agreements must be satisfied; if third-party technology is used, the competitor must have the right to use the technology in their submission.
  3. For all submissions, competitors must assign a royalty-free, irrevocable, perpetual, transferable, assignable, worldwide license for the Intellectual Property and intellectual property rights to Dynamic Risk Assessment Systems, Inc. for commercial use. The license assigned will be exclusive for applications in the Energy industry. Competitors will own the Intellectual Property and intellectual property rights for their submissions.
  4. Award of the prize will be subject to international laws, regulations, withholding and reporting requirements where required.
  5. If any provision of the Challenge Specific Agreement is held unenforceable, the competitor agrees that such provision shall be modified to the minimum extent necessary to render it enforceable (including the adoption of equivalent terms that are specific to the jurisdiction applicable to the competitor).
  6. By completing the registration for this challenge, the competitor is deemed to have read, accepted and agreed to be bound by the Challenge Specific Agreement.

 

Challenge submissions must include:
  • URL, remote access instructions, or download link with credentials to utilize the working system
  • all set up instructions where necessary
  • instructions for the use of the system
  • completed questionnaire
 
All submissions for the Cognitive Computing Challenge must be downloadable and will be installed by Dynamic Risk locally for judging.
 
 

Winning Criteria

Judging in each of the two stages will be based on:

  • Accuracy (60% of the score) - Your trained solution will be tested on a separate set of records reserved for judging. The data set produced by your solution will be compared to the data set that has been correctly processed. The comparison will be based on the following parameters:
    • Precision*
    • Recall
    • F-measure with Precision weighted 2 to 1 vs. Recall
A simple definition of these parameters can be found here:
http://en.wikipedia.org/wiki/Precision_and_recall
*Note: Precision will be based on the correct insertion of a clean record into the database, which includes conversion to the appropriate units, correct number of significant digits, removal of abbreviations in text, correction of spelling errors, correct capitalization, etc.
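The "2 to 1" weighting corresponds to a weighted F-measure with beta below 1 (beta = 0.5 weights Precision twice as heavily as Recall). A minimal sketch assuming the standard F-beta formula; the judges' exact computation may differ:

```python
def f_measure(precision: float, recall: float, beta: float = 0.5) -> float:
    """Weighted F-measure: (1 + b^2) * P * R / (b^2 * P + R).
    beta < 1 favors precision; beta = 0.5 matches a 2-to-1 weighting
    of Precision over Recall (our reading, not the official formula)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, with equal precision and recall the score equals both; when they differ, the score is pulled toward the precision value.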

 

  • Usability Testing (15% of the score) - Internal engineers at Dynamic Risk will assess the user interface and the practical usability of the resulting database when they apply the judging data set. The speed of processing will also be noted. A score from 0-10 will be assessed.

     

  • Questionnaire (25% of the score) - The Judging Panel will assess the innovators' approach to solving the problem and score the submissions from 0 to 10 based on their flexibility and extensibility to accommodate other types of documents and document formats, and their performance when limited training documents are available.


 

Selection of a Winner

Based on the above criteria, a single submission will be selected with the highest overall score as the winning innovation and will receive the prize. In case of a tie, the winner will be selected at the discretion of the Judging Panel.

 

Challenge Guidelines are subject to change. Registered competitors will receive notification when changes are made, however, we highly encourage you to visit the Challenge Site often to review updates.

 

 

Schedule

Milestone  Date
Challenge is live  May 15, 2015
Registration is open  June 8, 2015
Submission questionnaire is available  June 18, 2015
Leaderboard updates for the Qualifying Challenge  ongoing (June through December 2015)
Competition closes/Final Challenge submissions due  April 11, 2016
Judging  April 12 to May 31, 2016 (may be extended depending on the number of teams competing)
Winner Announced  June 1, 2016

 

Questions

Teams/Innovators will have an opportunity to check in with the Challenge Team upon request via video conference. Details will be announced after registration opens.

 


Challenge Updates

The Go Champion, the Grandmaster, and Me

April 8, 2016, 8:48 a.m. PDT by Nidhi Chaudhary

Hey guys -- 

Machines won again.

A few weeks ago, AlphaGo, a program developed by Google, defeated Korean grandmaster Lee Sedol in four out of five Go matches. Ken Jennings, the winningest player in Jeopardy! history, said, "The nightmarish robot dystopias of science-fiction movies just got one benchmark closer."

Knowing all too well what it is to face a computer (he lost head-to-head to IBM's Watson), he continues: "...in a very real way, your opponent isn’t just a room full of servers or a few thousand lines of code. It’s the Future, the possibility that your own individual talent, the thing that’s made you special your whole life, can now be replaced by a sufficiently clever algorithm....I assume that Lee, like Kasparov and me before him, will eventually make it through the five stages of automation obsolescence and accept his pioneering role in the early history of “thinking” machines. But what about all those newly replaceable souls who come after us, in a seismic shift that seems about to reshape our entire economy? For now, it’s just a handful of chess and Go and Jeopardy! champions who no longer feel needed and useful. But what happens to society when it’s tens of millions of us?"

Does AI give you hope? Or make you despair for our future? Read more from Ken's essay in Slate here.

I can't quite decide...what do you think?

Cheers,

Nidhi

 


Amazon Hosted A Secretive Robotics Conference In Florida

March 30, 2016, 11 a.m. PDT by Nidhi Chaudhary

Amazon recently hosted a secret, invite-only robotics conference called MARS in Palm Springs. MARS, or Machine-Learning (Home) Automation, Robotics and Space Exploration, brought together experts in the fields of robotics, artificial intelligence, space exploration, and home automation -- whoa. Amazon didn't officially comment, but of course social media gave us a peek into the conversations. Ron Howard and Jeff Bezos spoke, and CEOs and reps from Toyota, Rethink Robotics, iRobot, MIT, and University of California, Berkeley were in the room -- oh what it would have been to be a fly on the wall.

Sounds like there were seminars about imbuing machines with human values and experiences for attendees to make their own axes to split wood (huh?), trying virtual-reality devices, and enjoying food and drink on tables set on top of Amazon's Kiva robots.

Here's hoping that we all get an invitation for next year!

Cheers,

Nidhi

Source: http://www.fastcompany.com/3058178/fast-feed/amazon-hosted-a-secretive-robotics-conference-in-florida


Google launches new machine learning platform

March 25, 2016, 5:23 a.m. PDT by Nidhi Chaudhary

Hey guys!

Google believes that machine learning is "what's next" -- duh! So, yesterday, they announced a new machine learning platform for developers called Cloud Machine Learning. The service is available in preview and will make it easier for developers to use the machine learning smarts that Google already uses to power the "Smart Reply" feature in Gmail Inbox.

“Major Google applications use Cloud Machine Learning, including Photos (image search), the Google app (voice search), Translate and Inbox (Smart Reply),” the company says. “Our platform is now available as a cloud service to bring unmatched scale and speed to your business applications.”

The platform consists of two parts: the first allows developers to build machine learning models from their own data, and the other gives developers a pre-trained model.

“Cloud Machine Learning will take care of everything from data ingestion through to prediction,” the company says. “The result: now any application can take advantage of the same deep learning techniques that power many of Google’s services.”

Read more about the announcement here: http://techcrunch.com/2016/03/23/google-launches-new-machine-learning-platform/

Can you use Cloud Machine Learning to help solve the Cognitive Computing Challenge?!

Cheers!

Nidhi


More answers to common questions in the FAQ

March 2, 2016, 2:22 p.m. PST by Dynamic Risk

Hi everyone! We get a lot of questions and a lot of them are very similar. We have added a few new entries to the FAQ which we get asked a lot. They are:

1. Is it ok to leverage IBM Watson or Google's TensorFlow?

2. An explanation why we are asking for an exclusive license.

3. More detail on the judging criteria.

4. Can you annotate the training data?

Click this link to check it out!

https://herox.com/CognitiveComputing/edit/faq


We want your feedback!

Feb. 11, 2016, 4:58 p.m. PST by Nidhi Chaudhary

Hi Innovators!

We are a few months away from the deadline and we just wanted to touch base to ask for your feedback. We hope that you can take a few moments to tell us about your experience with the challenge.

How's it going? Are you making progress? Are you getting stuck? Would you like feedback and support on your solution?

We have put together a (very) short survey to check in and to offer mentorship -- if you would like it!

Click here to complete the: Innovator Checkin

We would appreciate your response and we look forward to supporting you as you continue to work on your solutions. And in case you haven't seen it, we hosted a webinar in January where we responded to a number of questions about the challenge. To watch it, click here: CCC Webinar

Don't hesitate to reach out with any questions and we look forward to your feedback!



Frequently Asked Questions

We encourage you to integrate these into your solution. We are not expecting the winning solution to be built from scratch.

Yes.

The terms of the challenge require you to grant us a royalty free and perpetual license to commercially use what you submit for this challenge. This also includes transfer and assignment rights in case we want to use it in a subsidiary or a JV. It is also important if our company is acquired. The acquiring company will want us to assign the license so they can continue to run the company seamlessly. Our license must also be exclusive for any use in the Energy industry.

We are not taking ownership of any part of your submission. You continue to own it and can exploit it as you wish so long as it doesn't violate our license.

In the Challenge agreement, all competitors are required to grant an exclusive license for the Energy Industry for their submission to Dynamic Risk. This does not mean that Dynamic Risk will own the solution, the competitor will still own all of the IP and the code for the solution itself. 

This does not mean that Dynamic Risk will accept all license grants either. It is unlikely that we will want a license for a submission that is not the winner, but it could be a possibility so we want to leave that open.

The intent of the exclusive license is to discover a usable solution for our business while at the same time we do not want to encourage the development of a solution that will be available to our competitors.

The HeroX team also has standard competition terms which require all competitors to submit a license as a condition of entry. From their experience with past challenges, I am told this is necessary to ensure the intent of a competitor is to win the challenge rather than joining to access data and collect feedback at no cost while intentionally coming in second. The exclusive license requirement is one extra condition that we added because we do not want to lose the competitive advantage we gain by sponsoring this Challenge.

There are some concerns stated that we could obtain a license without awarding the prize. This is not the intent. If there was a solution that met our business needs, we certainly will award the prize. However, the problem is quite difficult and it is possible that no solutions will meet our needs. In this case, we will consider extending the timeline for the challenge, or funding some of the best approaches separately. Our intent is to incent the development of an appropriate solution.

If you wish to suggest a licensing solution that meets our competitive advantage concerns we are willing to consider it. Perhaps a solution is an exclusive license only for the winning solution and the standard non-exclusive license grant for the non-winning solutions? I would love to hear your thoughts.

The fundamental research and papers presented are open source and in the public domain. However, what we are looking for is a usable system that a reasonably skilled user can operate. An exclusive license for that executable should be granted to us and should not violate open source agreements. Please contact us directly at glenn_yuen@dynamicrisk.net should you have any questions or concerns.

The judging is split into three parts:

1. Accuracy (60%)

Your trained solution will be used to process a separate set of documents of the same type as the training documents provided. The output will be compared to a database we have created that represents the correct result of the data extraction. The two databases will be compared field by field to calculate Precision and Recall statistics. A simple definition of these parameters can be found here: http://en.wikipedia.org/wiki/Precision_and_recall. These statistics will be combined into a single number with an F-measure that is weighted 2 to 1 in favor of Precision. This will serve as the value for this element of the competition.
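As a rough illustration (not the official scoring code), a field-by-field comparison of the two databases might compute these statistics as follows, with both data sets keyed by a hypothetical (record id, field name) pair:

```python
def precision_recall(extracted: dict, truth: dict):
    """Compare extracted values against ground truth, keyed by
    (record_id, field_name). Illustrative sketch, not the judges' tool."""
    correct = sum(1 for key, value in extracted.items() if truth.get(key) == value)
    precision = correct / len(extracted) if extracted else 0.0
    recall = correct / len(truth) if truth else 0.0
    return precision, recall

# Example: one of two extracted fields matches the correct database
truth = {(1, "city"): "Calgary", (1, "price"): "250000"}
extracted = {(1, "city"): "Calgary", (1, "price"): "260000"}
```

Note that under this scheme a value counts as correct only if it matches the cleaned ground-truth value exactly, which is why the data cleaning steps (units, significant digits, spelling) feed directly into Precision.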

2. Usability (15%)

We intentionally left the user interface requirements vague because we wanted competitors to focus on the data processing accuracy for the challenge rather than build an elegant UI that is consistent with a finished commercial product. The weighting of this is relatively low compared to the quality of the data extracted to reinforce that. The user interface can be anything you wish. Command line is fine. A simple graphical UI is fine as well even if it is rough around the edges. We are a dev shop so it doesn't need to be pretty. But it does need to work reliably and it should be reasonably easy to figure out by someone with programming skills. The scoring for Usability will be based on how quickly one of our engineers can get up to speed. Therefore a pretty design in the UI won't likely result in an advantage in the scoring.

3. Questionnaire responses (25%)

We are specifically grading for the extensibility of the solution to other domains, for other document types, other languages, the ability to extend to charts/drawings/graphs, the ability to derive context from the text, etc. This functionality is not required for the challenge but if the possibility is there to extend and improve it over time to cover these things then we are more interested in this solution vs one that cannot. In other words, the more generic the solution is, the higher it will grade. If it is a custom solution that will only work for the datasets in the challenge, then it will not grade well and it won't likely pass judging.

The documents for this challenge will be presented in English. However, submissions will be evaluated for their ability to process in other languages.

We have found that the following FTP clients will connect successfully to our server:

Windows 7: Firefox (v39), Chrome (v32.x), and IE 11.x

OSX: Chrome (v43.x). Note: Safari has trouble connecting; we do not recommend using it.

We chose two datasets that we thought a layperson could understand. The first dataset is real estate listings which everyone should be familiar with. The second data set is incident reports which are technical, but are not so technical that the average person couldn't understand the complete content. We did not annotate the fields on purpose because that is part of the scoring for the Precision and Recall statistics we are calculating. In some cases, the adjectives, adverbs, and interjections used in the sentences can convey some data as well. We are curious to see which solutions are able to capture those subtleties.

For judging, we prefer that you submit remote access to your working solution. If it is web based that is great too. In order to streamline the evaluation process, we would like to eliminate the need to duplicate your environment to install your deliverables.

We are targeting a week to review each submission. It will depend on the timing of the submissions by the competitors. If there is a flood all at once, it may take a little longer. We will be in contact with each team to let them know when to expect a response.