AlphaPilot – Lockheed Martin AI Drone Racing Innovation Challenge


AlphaPilot is the first large-scale open innovation challenge of its kind focused on advancing artificial intelligence (AI) and autonomy.
Overview

 

 

Calling all coders, gamers, race fans, and drone enthusiasts...

 

Lockheed Martin and The Drone Racing League (DRL) challenge you to participate in AlphaPilot, an open innovation challenge to develop artificial intelligence (AI) for high-speed racing drones.

 

Enter the AlphaPilot Innovation Challenge today for your chance to master autonomous flight and win more than $2,000,000 in cash prizes.

 

AlphaPilot will challenge teams of up to 10 participants each to design an AI framework, powered by the NVIDIA Jetson platform for AI at the edge, that is capable of flying a drone without any human intervention or navigational pre-programming. Autonomous drones will race head-to-head through complex, three-dimensional tracks in DRL’s new Artificial Intelligence Robotic Racing (AIRR) Circuit, starting in 2019.

AlphaPilot aims to unite a diverse community of practicing and emerging AI experts, researchers and students to inspire the next generation of autonomous drone technology. By participating in this challenge, your knowledge and ideas can contribute directly toward the future of autonomous transportation, delivery, disaster relief, and even space exploration!

 

 

Why Drones?

Drone racing is a futuristic sport and a big draw for millennials and K-12 students with an interest in technology — many of whom will become future STEM professionals, drone pilots and engineers. Lockheed Martin recognizes the important role it plays in helping to develop a workforce with the skills to compete in a 21st-century, high-tech economy. Lockheed Martin and DRL are targeting U.S. undergraduate and graduate students to apply for AlphaPilot; however, the competition is open to drone enthusiasts, coders and technologists of all ages from around the world.

 

Why is Lockheed Martin doing this?

For more than 100 years, Lockheed Martin has been redefining flight — from the fastest speeds, to the edge of space, to unmatched maneuverability and stealth. AI-enabled autonomy promises to fundamentally change the future of flight, and we are actively developing disruptive new AI technologies that will help our customers accomplish their most important missions – from reaching Mars to fighting wildfires.

 

WHAT CAN I DO RIGHT NOW?

  • Click ACCEPT CHALLENGE above to apply for AlphaPilot.
  • Read the Challenge Guidelines to learn about the competition.
  • Share this challenge on social media using the icons above. Share it with your friends, your family, or anyone you know who has a passion for discovery.
  • Start a conversation in our Forum to join the discussion, ask questions or connect with other innovators.

ABOUT LOCKHEED MARTIN

Headquartered in Bethesda, Maryland, Lockheed Martin is a global security and aerospace company that employs approximately 100,000 people worldwide and is principally engaged in the research, design, development, manufacture, integration and sustainment of advanced technology systems, products and services. This year, the company received three Edison awards for groundbreaking innovations in autonomy, satellite technology and directed energy. For more information, please visit www.lockheedmartin.com/alphapilot.

 

 

 

ABOUT DRONE RACING LEAGUE

DRL is the professional drone racing circuit for elite FPV pilots around the world. A technology, sports and media company, DRL combines world-class media and proprietary technology to create thrilling 3D drone racing content with mass appeal. In 2018, DRL hosted a global series of seven races, the Allianz World Championship, which aired on ESPN, Sky Sports, ProSiebenSat.1 Media SE, Groupe AB, Disney XD, OSN, and FOX Sports Asia. For more information, please visit www.drl.io.


AlphaPilot Qualifier

Test #3 – Guidance, Navigation & Control

 

Overview:

The third component of AlphaPilot qualifications focuses on a team’s ability to design algorithms for the guidance, navigation, and control (GNC) of an autonomous drone. The test utilizes a simulator framework that provides users with the tools needed to test their drone racing algorithms using realistic dynamics and exteroceptive sensors. These skills are essential for competition in AlphaPilot, and the test is considered a precursor to the work Finalist teams will conduct in preparation for each AIRR race event.

 

Updates

What’s New? What’s Different?

We want to ensure that all teams have the best chance to format their Test 3 submissions correctly under the new guidelines below. As such, we have made a final extension of the Test 3 deadline to Friday, March 22nd at 5:00 PM EST.

As a few updates were made to the simulation, we suggest all teams pull the latest version of FlightGoggles from GitHub.

Some gate perturbations in Test 3 were placing gates very close to walls, making them hard to traverse. Our intention is not to make teams do obstacle detection and avoidance in this test. As a result, the Challenge Files were regenerated, and the new Leaderboard Challenge Files can be downloaded here:

A few small updates were made to Test 3 including:

  • In FlightGoggles, the race time now starts when the drone is armed, so teams have a chance to initialize their algorithms
  • Minor bug fix in scorer.sh, which was indexing incorrectly
  • Minor bug fix in scorer.py, which could not resolve the path when run from a non-standard location

Further details about submission and testing requirements have been added below. Most importantly, teams should note the maximum upload size of 1 GB for submissions; additionally, the Test 3 testing environment will have ROS Kinetic, OpenCV 3.4, and FlightGoggles already installed for teams. Otherwise, this testing environment is as previously described: an instance of Ubuntu 16.04.5 LTS running on an AWS p3.2xlarge instance.

For more details on the evaluation, resources, algorithm and submission requirements, and testing, please read the following sections. A summary of the most frequently asked questions on the Test 3 Forum Questions page and their responses will be continually added to the new ‘FAQ’ section at the end of this tab.

What’s yet to come?

Updates will be posted as needed. Otherwise, teams should submit their algorithm source code archive and technical report by March 22nd. Good luck! Please note that Test 1 and Test 2 are due on March 8th; only Test 3 is due on March 22nd.

 

Evaluation:

Goal

Teams must develop GNC algorithms to fly a simulated drone through a structured test environment utilizing a typical ACRO/RATE flight mode for control inputs and exteroceptive sensors for feedback. Teams are tasked with:

  1. Developing GNC algorithms to pilot an autonomous drone through FlightGoggles
  2. Describing their GNC algorithms in a 2-page Technical Report

The GNC algorithms must be capable of navigating through gates (see Figure 1) in a FlightGoggles challenge course. The objective is to pass through all gates in the defined order, as quickly as possible, without crashing.

 

Figure 1: A screenshot of an example drone race course in MIT’s FlightGoggles simulator.

Test 3 Scoring

Each team will receive a Test #3 score (max 100 points) that combines an objective score from their racing performance with a qualitative review of their written submission:

  • Algorithm Score: 70% of total Test 3 score
  • Technical Report Score: 30% of total Test 3 score

 

Algorithm Score

Each team’s drone performance will receive an Algorithm Score (max 70 points) that is based on a metric evaluating the ability of their submitted algorithms to navigate through race gates. Total lap time will be the primary measure of performance, along with points awarded for each successful gate fly-through. The Reporter node in FlightGoggles tracks these metrics and outputs this score per “run”:

Run Score = (10 points per gate successfully passed) − (race completion time in seconds)

Algorithms are tested on 25 slight variations (“runs”) of the same challenge course, and the Racing Score is calculated by averaging the best 5 of those 25 runs. Because it is possible to hit obstacles and crash, simulator runs with a crash or object strike receive 0 points.

This averaged Racing Score is reported on the Leaderboard and scaled slightly for the Final Algorithm Score, so the best team receives the maximum allowable 70 pts.
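To make the arithmetic concrete, here is a short Python sketch of the scoring described above (this is not the official scorer.py, and the run results listed are hypothetical):

```python
# Illustrative sketch of the Test 3 scoring arithmetic (not the official scorer.py).
# Each run scores +10 points per gate passed minus completion time in seconds;
# runs with a crash or object strike score 0. The Racing Score averages the best 5 of 25 runs.

def run_score(gates_passed, completion_time_s, crashed):
    if crashed:
        return 0.0
    return 10.0 * gates_passed - completion_time_s

def racing_score(run_scores, top_n=5):
    best = sorted(run_scores, reverse=True)[:top_n]
    return sum(best) / len(best)

# Hypothetical results for 25 runs: (gates passed, completion time in seconds, crashed?)
runs = [(11, 45.2, False), (11, 47.9, False), (9, 60.0, False), (0, 0.0, True)] + [(10, 55.0, False)] * 21
print(racing_score([run_score(g, t, c) for (g, t, c) in runs]))
```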

 

Technical Report Score

Each team’s Technical Report Score (max 30 pts.) is based on a rubric evaluated by judges from academia, industry, and government. The judges will review each report for technical merit, applicability to AlphaPilot, and presentation and clarity of the approach.

 

Resources:

Test #3 is built on the Massachusetts Institute of Technology (MIT) FlightGoggles simulator, a first-of-its-kind virtual reality environment for drone research and development. MIT has modified FlightGoggles to include software for a drone racing simulation environment based on the Unity3D game engine. It includes Robot Operating System (ROS) bindings for integrating autonomy software and Unity assets for environments that resemble drone racing scenarios.

The FlightGoggles simulator and several training courses are now available open-source to the public and are hosted on GitHub. You can find the source code for the simulator, more information about how to get started, and some example challenge files here: http://flightgoggles.mit.edu/

 

Vehicle Model:

The simulator emulates a high-performance drone racer with high pitch rate and collective thrust command inputs. Note that this drone is not an exact model of the one used by AlphaPilot in AIRR race events. A Vehicle Dynamics node keeps track of the simulation clock and adjusts the simulation clock ratio, which can be seamlessly lowered in real time if the host computer is unable to provide real-time performance under high load, or raised if faster-than-real-time simulation is desired. More details about the drone (vehicle dynamics, sensor models, specifications, etc.) will be added to the GitHub repo.

 

Training Courses & Course Developer:

The simulator environment includes several pre-developed challenge courses and an open-map course creator for easy development and testing. Teams can use both resources for practice and may submit performance data on a Leaderboard Challenge Course to be included on a HeroX Leaderboard. The Final Challenge Course files will be developed by the AlphaPilot administration and kept on a separate instance. This course will be used for scoring submitted algorithms and determining teams’ final Test 3 Algorithm Scores.

The simulator is run with a Challenge File in YAML format that corresponds to a challenge course, and it produces a Results YAML File when the run is done. The Challenge File describes the following details (see the illustrative sketch after this list):

  • Initial position and orientation of the drone
  • Challenge Name
  • Timeout Specification
  • A list of gate names
  • Width of all gates
  • Location of all the gates, determined by four points that make a rectangle
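As a rough illustration of working with such a file, here is a minimal Python sketch; the key names used below are hypothetical, and the authoritative schema is defined by the Challenge Files shipped with FlightGoggles and by README_v2.md:

```python
# Illustrative only: the key names used here ("initial_pose", "timeout", "gates", etc.)
# are hypothetical placeholders, not the actual FlightGoggles Challenge File schema.
import yaml

with open("challenge_example.yaml") as f:   # hypothetical file name
    challenge = yaml.safe_load(f)

initial_pose = challenge["initial_pose"]    # drone starting position and orientation
timeout_s = challenge["timeout"]            # timeout specification
gate_names = challenge["gates"]             # ordered list of gate names to traverse
gate_width = challenge["gate_width"]        # width common to all gates

# Each gate location is given by four corner points forming a rectangle.
for name in gate_names:
    corners = challenge[name]["location"]
    print(name, gate_width, corners)
```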

 

Simulator Output:

The simulator is equipped with a Reporter node that tracks ground-truth information and outputs metrics on algorithm performance, both via the command line and as a Results YAML File, for each run on a Challenge YAML File. The file contains metrics on drone performance attributes useful for logging, evaluating, and visualizing algorithm performance.

Additionally, the Simulator Reporter will output a YAML file with details about the run, including one of the following results:

  • Interrupted: If the grader process is interrupted, e.g., via ctrl+c.
  • Timeout: The timeout on the challenge file is reached
  • Crashed: The drone crashed into an obstacle in the sim
  • Completed: The drone passed through the last gate in the sequence.

The Reporter will also keep track of when the drone reaches each gate and record the time. If a gate in the ordering is skipped, the grader will record “Success: False” for that gate.
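For illustration, a minimal sketch of inspecting a Results YAML file; the key names used below are hypothetical, so check an actual file produced by the Reporter node for the real schema:

```python
# Illustrative only: key names ("result", "gates", "Success", "Time") are hypothetical
# placeholders; inspect a real Results YAML file from the Reporter node for the schema.
import yaml

with open("results_run_01.yaml") as f:      # hypothetical file name
    results = yaml.safe_load(f)

print("Run outcome:", results["result"])    # e.g. Completed / Crashed / Timeout / Interrupted
for gate_name, gate_info in results["gates"].items():
    if gate_info["Success"]:
        print(gate_name, "passed at t =", gate_info["Time"], "s")
    else:
        print(gate_name, "skipped or missed")
```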

With the Leaderboard Challenge Course files, teams are provided with scripts to help score the Results YAML files and output a Scores YAML file. See the included ‘Scorer Scripts’ in the challenge files, as well as the README_v2.md file, for more details on running these scripts. Teams may submit the Scores YAML file summarizing their results for scoring and placement on the Leaderboard.

The Final Challenge Course files will not be released to teams and will be used to evaluate each team’s algorithm and determine their final FlightGoggles score. See the ‘Testing’ section below for more details on Leaderboard and Final Testing.

 

Tech Support:

Teams may file a report on the simulator GitHub page if they encounter any bugs or errors. Issues will be addressed by the MIT team as quickly as possible. Any specific questions about the AlphaPilot Test 3 challenge should be posted and addressed on the HeroX Test 3 Forum page.

 

Scorer Scripts

Challenge_Leaderboardtest.zip\ChallengeLeaderboardtest\launch – scripts to score Results YAML file and output Scores YAML file. 

README_v2.md – contains challenge installation instructions, scorer usage guide, and list of allowed and prohibited information for Test 3.

 

Submission Requirements:

Teams will submit two attachments via the Test #3 entry form:

(1) Autonomous drone racer source code in a zipped archive named ‘submission’ with maximum upload size of 1GB

(2) Technical Report of the drone racer in PDF, maximum 2 pages.

Teams will have internet access during installation, but there will not be internet access during testing. Teams will not have root access at any point. In your archive named ‘submission’, at the highest level of the directory, please include the following:

  1. scorer.launch
    Team archives must contain a `scorer.launch` file, edited to include all ROS nodes required for completing the challenges. This will be the launch file used to run your team’s algorithm.
  2. install.bash
    If desired, teams may include an install.bash file which can be run to automatically install algorithm dependencies. Your install.bash file can run commands such as catkin build, rosdep, etc. as needed. Alternatively, teams can include compiled libraries in their archive as noted below. Please note the limitation on installation time of 1 hour.
  3. catkin_ws
    Teams must include a catkin workspace with the team’s own ROS packages separate from the flightgoggles catkin workspace. It should be named ‘catkin_ws’ and include the following folders:
    1. src – the ‘src’ folder should include all source code for the team’s ROS packages
    2. devel – the ROS packages should already be compiled into the ‘devel’ folder, the development space, which should include setup.bash (usually autogenerated when you build with catkin)
  4. Compiled Libraries and External Code
    Teams can include additional code anywhere in the archive as needed for your algorithms to run. This can be compiled libraries, source code, etc. Note that the testing environment will have ROS Kinetic, OpenCV 3.4, and FlightGoggles already installed for teams, so there is no need to include those.

Test 3 Source Code and IP Concerns

As a reminder, the judges will have access to teams’ submitted source code to help them understand the technical approach taken for Test 3. The AlphaPilot team suggests that if there are any IP concerns, that IP should be compiled into an executable that is called from your source code, and that executable should be included in your archive. However, abstracting functionality away into executables makes it more difficult for the judges to verify that teams have the technical skills and capabilities to succeed during the AlphaPilot Competition. As a result, please balance your team’s need to protect IP against transparency about your approach.

Algorithm Requirements:

Teams must develop GNC algorithms to fly the simulated drone through a structured test environment utilizing a typical ACRO/RATE flight mode for control inputs and only on-board sensory feedback. As is typical in drone racing, teams know the approximate locations of their starting point and the gate locations, and they must deal with slight variations during the actual race. In Test 3, teams are challenged to create a single algorithm that can be tested on 25 slight variations of the same Challenge Course.

Teams are given the Challenge YAML files on which they will be tested, which specify the initial conditions for each exam. For Leaderboard Testing, teams are given all 25 Challenge YAML Files and self-report their scores. For Final Testing, teams are not given the variations on the Challenge YAML Files, and their algorithms will be run and scored by HeroX.

Algorithms are not allowed to use ground-truth state data; only inputs from their sensors. These vehicle and sensor feeds are modelled to include real-world errors like thrust-limited motors and rate-based control drift. Teams will have access to this noisy data, as well as some visual processing outputs. In summary, the allowable sensory inputs for algorithms include:

  • Photorealistic 60Hz RGB camera feeds – mono and stereo allowed
  • “Perfect” (not noisy) gate detection algorithm output (polygon coordinates) in the camera frame
  • “Perfect” (not noisy) IR marker locations in the camera frame
  • Noisy IMU data
  • Noisy downward-facing laser range-finder to aid altitude estimation. The laser range finder points in the negative z direction. Its measurements are defined in the drone body frame, and thus range measurements will have a negative value (see the sketch after this list).
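As an illustration of this range-finder convention, here is a minimal sketch; the helper function is hypothetical (not part of FlightGoggles) and assumes small attitude angles unless an orientation estimate is supplied:

```python
# Sketch: converting the downward laser range-finder reading to an approximate altitude.
# The measurement is expressed in the drone body frame along -z, so the raw value is
# negative. Negate it to get distance to the ground, then (roughly) project onto the
# world vertical for non-zero roll/pitch.
import math

def altitude_from_range(range_measurement_m, roll_rad=0.0, pitch_rad=0.0):
    return -range_measurement_m * math.cos(roll_rad) * math.cos(pitch_rad)

print(altitude_from_range(-2.5))  # ~2.5 m above the ground when level
```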

Using these sensory inputs, teams are permitted to conduct any processing that they desire. No visual-based obstacle avoidance is necessary (course paths are mostly clear of obstacles). That being said, it is possible to crash into things and fail the run.

  • Note: AlphaPilot is aware that it will be possible to back-out the exact global 3D locations of gates. Teams are not allowed to utilize this data in their algorithms, and any teams that do so will be flagged during judging.

Algorithm entries for the Final Challenge Course (see more details about ‘Final Testing’ below) must be submitted as source code before the final deadline. The final submission form will contain fields for attachment of a zip file of your code archive and technical report.

Entries to both the Leaderboard and Final Exam must be submitted using the Test #3 entry form provided on the AlphaPilot HeroX website. Entries for the Leaderboard Challenge Course must be a single Scores YAML file submitted as an attachment. This file contains scores from all 25 Results YAML files and is used by HeroX to calculate each team’s Leaderboard Algorithm Score (the top 5 scores are found and averaged).


See the ‘Testing’ section for more details about how algorithms will be tested.

Note: GPU and CUDA drivers will already be installed in the testing environment. The following instance will be used as the basis for the testing environment: https://aws.amazon.com/marketplace/pp/B077GCZ4GR

 

Technical Report Requirements:

In the Technical Report, teams must document how their algorithms work and how they conducted testing and analysis. Teams should detail their use of any libraries or well-known approaches. In addition to describing their approach, teams must also address:

  • How the team plans to build onto or modify their drone racing algorithms for use in the AlphaPilot competition should they qualify.
  • Any technical issues the team ran into and how they overcame them.

Reports must be in PDF format, 2 pages maximum, single-spaced. Font size should be a minimum of 11 pt. Insertion of text, equations, images, figures, plots, code, and pseudo-code is accepted but counts toward the 2-page limit.

 

Testing:

The Leaderboard and Final Challenge Course are now out! The Leaderboard represents a practice exam and will give teams a rough idea of how well they will ultimately perform. Please note HeroX AlphaPilot will only accept leaderboard submissions at this time, and there is no need to submit source code until the final deadline.

 

Overview

Challenge 3 is modeled to closely resemble a real-world FPV drone racing scenario. As such, participants know the nominal gate locations from their practice rounds. However, at race time, the actual locations of the gates may be slightly perturbed (unknown to the participants).  Similar to human FPV racers, autonomy algorithms should also be able to navigate even when the gates are slightly perturbed. For the challenge, this robustness is addressed as follows: The nominal gate locations are available for the participants to use in their algorithms. But for the evaluation of their algorithms, small perturbations will be added to these nominal gate locations. The resulting perturbed gate locations are used in the evaluation runs, but are not known a priori to the autonomy algorithms. The bounds on the gate location perturbations are known to the contestants and can be read from the ROS parameter server (see README_v2.md). 
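For illustration, a minimal rospy sketch of reading gate information from the ROS parameter server follows; the parameter names used here are hypothetical placeholders, so consult README_v2.md and the FlightGoggles ros-params wiki page for the authoritative names:

```python
# Sketch only: the parameter names below are hypothetical placeholders, not the actual
# FlightGoggles parameter names; consult README_v2.md and the ros-params wiki page.
import rospy

rospy.init_node("gate_info_example")

# Nominal location of a gate (e.g. its four corner points).
nominal_corners = rospy.get_param("/uav/Gate10/nominal_location")       # hypothetical name

# Bound on how far a perturbed gate may deviate from its nominal location.
perturbation_bound = rospy.get_param("/uav/Gate10/perturbation_bound")  # hypothetical name

rospy.loginfo("Gate10 nominal corners: %s, perturbation bound: %s",
              nominal_corners, perturbation_bound)
```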

The figure below shows a snapshot of the Test 3 Challenge Course. Contestants will start in a known starting location in the upper right corner of the figure. Contestants must traverse gates in order along the course marked in red and must finish by passing through the gate in the lower left corner of the figure (gate 6) within a known time limit. Missing a gate does not lead to disqualification; however, if contestants miss a gate in the ordering, they forgo the points for that gate and cannot traverse it (for reward) after passing a gate later in the gate ordering. Gate 6 (the finish line) is unskippable and must be traversed in order to complete the course. In the figure below, gate IDs belonging to gates along the race path are marked in yellow. Gate IDs in blue are not part of the race course and will not change in position. The ordering of the gates to traverse will not change. Other obstacles in the environment will not change. For Leaderboard score submission, teams are given all 25 gate perturbation layouts and self-report their scores to HeroX. For Final Testing, teams are not given the gate perturbations, and their algorithms will be run and scored by HeroX. The course order is shown in Table 1.

 

Course: Gate 10, 21, 2, 13, 9, 14, 1, 22, 15, 23, 6.

Table 1. The ordered list of gates to traverse to successfully complete the course.

Figure 2: A screenshot of the Leaderboard/Final Testing Challenge Course in MIT’s FlightGoggles simulator.

 

Leaderboard Testing

Teams can now download the Leaderboard Challenge Course files (challenge_leaderboardtest.zip) and submit their Scores YAML file to the leaderboard for evaluation.

The Leaderboard Challenge Course folder contains the 25 YAML files, which are slight variations on the above Challenge Course. Teams are expected to test their algorithms on each of these YAML files and output a Results YAML file per run (as done by the Reporter node). Using the scoring scripts provided, teams can generate the Scores YAML file to submit to the Leaderboard. The Scores YAML file will be read, the top 5 scores will be found and averaged, and this final score will be posted on the Leaderboard.

The first round on the leaderboard will close on Monday, Feb. 18th at 11:59 PM PST, and any submissions will appear on the leaderboard by 9 AM PST the following day. After this, the leaderboard will refresh daily until the final deadline. Teams can update their submission multiple times per day, but only their most recently uploaded YAML files will be evaluated and scored.

 

Final Testing

The Final Challenge Course is also based on the above Challenge Course, but teams will not receive the 25 Challenge YAML files, which again are slight variations on the initial conditions. Teams may assume that the stochastic changes will mirror those provided in Leaderboard Challenge Course Challenge YAML files.

By the final deadline, teams must submit source code for evaluation on these 25 sequestered Challenge YAML files. The testing environment is an instance of Ubuntu 16.04.5 LTS running in AWS type p3.2xlarge. The following instance will be used as the basis for the testing environment: https://aws.amazon.com/marketplace/pp/B077GCZ4GR

Additionally, this environment will have ROS Kinetic, OpenCV 3.4, and FlightGoggles already installed for teams.

The final algorithm testing process will be conducted as such:

  1. The FlightGoggles catkin workspace, with the addition of “scorer.py” in flightgoggles/launch, will be sourced.
  2. A new Linux user home directory is created. The team’s archive will be unzipped, and the ‘submission’ folder will be unpacked into the user’s home directory.
  3. A symbolic link “scorer.launch” will be added to flightgoggles/launch that links to “scorer.launch” in the submission root directory.
  4. Team dependencies will be added by running the following command in the submission root directory:
    >> ./install.bash
    Note: the cut-off for installation time for a team’s algorithm is 1 hour. If a team’s source code takes longer than this to install dependencies, the script will be stopped, and the team will be given an Algorithm Score of 0 points.
  5. The scorer will run the launch file 25 times with the perturbed gates in `flightgoggles/config/gate_locations_x.yaml` and accumulate the results from the reporter in a `results` folder. At the end of the evaluation, `scorer.py` is run which generates a `scores.yaml` which contains the individual scores for each run. This will be implemented in testing using the following command:
    >> source catkin_ws/devel/setup.bash && rosrun flightgoggles scorer.sh
  6. The scores.yaml file will be read, the top 5 scores will be found and averaged, and this final result will be used for the algorithm score.
    Note: there is a cut-off for the total submission evaluation time (Final Testing steps 5 and 6) and that is 2 hours. If a team’s algorithm takes longer than this to be evaluated, the testing will be stopped, and the team will be given an Algorithm Score of 0 points.

Separately, judges will read and score each team’s technical report which will be subsequently used to calculate their total final score for Test 3.

Test 3 Submission Checking

It is essential that teams follow the above submission requirements exactly. However, in the event your team makes a small mistake, we have implemented a Submission Checker for Test 3. Each team has the opportunity to try to fix their submission as many times as needed before Monday, April 1st at 12PM EST if their submission fails the algorithm submission testing described above. In the event of a submission failure, your team will receive an email with the error and will have a short window to re-submit. If we suspect teams are abusing this system, it will be investigated and scored accordingly.

 

Frequently Asked Questions (FAQs)

Can teams utilize the global 3D locations of the gates?

As is typical in drone racing, teams know approximately the initial drone and gate locations and must deal with slight variations during the actual race. Teams are challenged to create a single algorithm that can be tested on 25 slight variations of the same Challenge Course.

Teams are given Challenge YAML files that indicate the initial location of the drone and gates for each exam and on which team algorithms will be tested. For Leaderboard Testing, teams are given all 25 Challenge YAML Files and self-report their scores. For Final Testing, teams are not given the variations on the Challenge YAML Files, and their algorithms will be run and scored by HeroX.

So in general, no, teams cannot utilize the exact global 3D locations of the gates. However, they can approximate it from visual processing and other on-board feedback.

Note: AlphaPilot is aware that there is a hack that makes it possible to back-out the exact global 3D locations of gates. Teams are not allowed to utilize this data in their algorithms, and any teams that do so will be flagged during judging.

 

Will we have to do obstacle avoidance?

No. Visual-based obstacle avoidance is not needed (course paths are mostly clear of obstacles). That being said, it is possible to crash into things and fail the run.

 

Can teams utilize the stereo cameras?

Yes. You can use the photorealistic 60Hz RGB camera feeds – mono and stereo allowed.

 

Will you give teams information about the drone model?

More details about the drone (vehicle dynamics, sensor models, specifications, etc.) have been added to the GitHub repo.

 

What are some good resources to learn ROS and learn how to add our nodes to FlightGoggles?

Please refer to the official ROS tutorials: http://wiki.ros.org/ROS/Tutorials

 

I couldn't find the API documentation for FlightGoggles to send http messages over to the simulation. Can I use any ROS API to do that?

FlightGoggles uses standard ROS message types for communication. As such, please refer to the ROS message publisher/subscriber tutorial to familiarize yourself with this message-passing concept: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
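For example, here is a minimal rospy publisher in that style; the topic and message type below (/uav/input/rateThrust with mav_msgs/RateThrust) reflect our understanding of the public FlightGoggles release and should be verified against the repository and README:

```python
#!/usr/bin/env python
# Minimal sketch of publishing rate/thrust commands to FlightGoggles using standard ROS
# messages. Topic name, message type, and thrust units are assumptions to verify against
# the FlightGoggles repository and README.
import rospy
from mav_msgs.msg import RateThrust

def main():
    rospy.init_node("example_rate_thrust_publisher")
    pub = rospy.Publisher("/uav/input/rateThrust", RateThrust, queue_size=1)
    rate = rospy.Rate(60)  # publish at roughly the camera rate
    while not rospy.is_shutdown():
        cmd = RateThrust()
        cmd.header.stamp = rospy.Time.now()
        cmd.angular_rates.x = 0.0   # roll rate [rad/s]
        cmd.angular_rates.y = 0.0   # pitch rate [rad/s]
        cmd.angular_rates.z = 0.0   # yaw rate [rad/s]
        cmd.thrust.z = 9.81         # collective thrust (roughly hover, if mass-normalized)
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```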

 

Will FlightGoggles be updated to depend on the current recommended ROS version ( --rosdistro melodic v.s. --rosdistro kinetic)?

We decided to start with Kinetic for stability. MIT will update and maintain FlightGoggles in the future, and we will consider an update to Melodic; however, it will happen after Test 3 is completed.

 

Are we allowed to change the camera angle in the FlightGoggles simulator? Can you set the camera angle to something greater than 0?

No, we are not allowing teams to change the camera angle. All teams are challenged to use the same hardware setup in AlphaPilot.

 

What data inputs can be used in Test 3?

Please see the README file for Test 3 for a complete list of allowed and prohibited inputs.

 

Are the gate disturbances in the IR beacon sensor or in the real positions of the gates in the map?

The gate disturbances are from the nominal position of the gates in the map.

 

Will there be a left/right camera version of the IR beacons in FlightGoggles?

Please see the description provided by alphapilot for allowed topics and params: https://www.herox.com/alphapilot/resource/320

 

In Test 3, it's hard to know when we fly through a gate, without ground-truth data. Can we get an event for us to subscribe to?

Unfortunately, we will not provide this for teams as this is not information you would realistically have during a drone race.

 

Will all target gates be vertical, or will there be horizontal gates (flying through vertically)?

All target gates in the virtual qualifiers will be vertical for both Test #2 and Test #3.

 

For Test 3, will the gate locations (not including the perturbations) be changed for the final test?

No, the nominal positions of the gates are the same for both the Leaderboard practice and final tests.

 

How will the race be timed? Will the timing start from the beginning of code launch, or will it start when flying through the first gate or leaving a bounding box? (Technically: how will time to initialize estimators, etc. be penalized?)

The race will be timed from the moment of the drone's first takeoff movement to the time when the drone arrives at the finish gate.

 

From the latest update, it would seem that the ROS server is sending a message about the location of each gate. Is the message sent from the given ROS node providing an x,y in the camera view from the drone’s perspective?

The "/uav/camera/left/ir_beacons" ROS topic publishes all unoccluded IR beacons within the camera's field of view. This message contains IR marker detections in the image space of the left camera.

 

What are the minimum video card requirements for FlightGoggles?

  • Minimum Local Hardware Requirements
    • A Vulkan-supported GPU with >=2.1GB of VRAM.
    • Native Ubuntu 16.04 installation with ROS Kinetic on x86/x64.
    • Virtual machines do not have native access to the GPU and therefore are not supported.
    • nvidia-docker2 does not currently support the Vulkan API and therefore cannot run the FlightGoggles renderer binary (see workarounds in issue #46).
  • We have tested this project on two different setups. High-end desktop computer with:
    • Processor: Intel i9 Extreme (i9-7980XE)
    • RAM: 32 GB
    • GPU: Titan V
  • We have also tested on the following AWS instances:
    • p3.2xlarge
    • g3s.xlarge
  • Please see https://github.com/mit-fast/FlightGoggles/wiki/Prerequisites-and-Testing-Setup

 

Does the flightgoggles simulator take .yaml files as an input or will we have to convert the files?

Teams can use the Challenge YAML files as input to FlightGoggles for their own development and testing.

However, users of the FlightGoggles simulator should not have to read or modify any YAML files for correct operation of the simulation system.

All relevant information to the FlightGoggles simulator is exposed through the ROS Parameter API: https://github.com/mit-fast/FlightGoggles/wiki/ros-params

 

Can you simulate more than one drone?

We intentionally did not include support for multi-drone simulation, in order to avoid confusion and to keep the initial FlightGoggles code as simple as possible. We will not provide software for simulating multiple drones before the challenge is done. That said, you are able to run multiple instances of FlightGoggles if you would like to conduct several tests.

 

Keras is now a part of TensorFlow. Can we use Keras to develop the AI model?

Solutions need to be compatible with the algorithm and testing requirements. Otherwise, there are no restrictions on the approach and libraries teams can use to develop their solutions.

 

If you have any specific questions on Test #3, please comment in this forum thread.