Task description

Goal

Participants are asked to submit automated predictions of lesion segmentation boundaries from dermoscopic images.

Data

Lesion segmentation data includes the original image, paired with the expert manual tracing of the lesion boundaries in the form of a binary mask.

Training Data

Dermoscopy Image Data

2000 images are provided as training data. The training data file is a ZIP file containing dermoscopic lesion images in JPEG format. All images are named using the scheme ISIC_<image_id>.jpg, where <image_id> is a 7-digit unique identifier. EXIF tags in the images have been removed; any remaining EXIF tags should not be relied upon to provide accurate metadata.

Ground Truth Segmentations

The training ground truth file is a ZIP file containing 2000 binary mask images in PNG format. All masks are named using the scheme ISIC_<image_id>_Segmentation.png, where <image_id> matches the corresponding Training Data image for the mask. All mask images have exactly the same dimensions as their corresponding lesion image. Mask images are encoded as single-channel (grayscale) 8-bit PNGs (to provide lossless compression), where each pixel is either:

  • 0: representing the background of the image, or areas outside the lesion
  • 255: representing the foreground of the image, or areas inside the lesion

Masks were created by an expert clinician, using either a semi-automated process (a user-provided seed point, a user-tuned flood-fill algorithm, and morphological filtering) or a manual process (a series of user-provided polyline points).

Participants are not strictly required to use the training data in the development of their lesion segmentation algorithm and are free to train their algorithm using external data sources. Any such external sources of data must be properly cited in the supplied abstract.

Submission Instructions

This year, there are two phases for result submission:

  • An optional Validation Phase, with 150 images. Submissions to the Validation Phase are immediately evaluated and made public, allowing participants to test their submission systems and get some feedback on the performance of their submitted algorithm.
  • An official Test Phase, with 600 images. Submissions to the Test Phase are made against a blind held-out dataset and are immediately evaluated, but not made public until after the final submission date, as they constitute the final evaluation of participants' algorithms.
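As a sanity check on the data format, the 0/255 mask encoding described under Ground Truth Segmentations can be verified programmatically before training or submitting. The sketch below (the helper name `mask_to_bool` is illustrative, not part of the challenge tooling) assumes a mask has been loaded as a NumPy `uint8` array and converts it to a boolean foreground map, rejecting any unexpected pixel values:

```python
import numpy as np

def mask_to_bool(mask: np.ndarray) -> np.ndarray:
    """Validate the 0/255 encoding and return a boolean foreground map."""
    values = np.unique(mask)
    if not np.all(np.isin(values, [0, 255])):
        raise ValueError(f"Mask contains unexpected pixel values: {values}")
    return mask == 255

# Toy 3x3 array standing in for a loaded ISIC_<image_id>_Segmentation.png
mask = np.array([[0, 255, 255],
                 [0, 255,   0],
                 [0,   0,   0]], dtype=np.uint8)

fg = mask_to_bool(mask)
print(fg.sum())  # number of foreground (lesion) pixels -> 3
```

The same check is useful on your own predicted masks, since any intermediate grayscale values would make the submission ambiguous.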
Participants may make unlimited and independent submissions to each phase, but only the most recent submission to the Test Phase will be used for official judging.

Evaluation

Participants will be ranked and awards granted based only on the Jaccard index. Additionally, submitted segmentations will be compared using a variety of other metrics, for scientific completeness. Some useful resources for metrics computation include:

  • the ROC curve
  • sklearn library metric functions
  • jaccard
  • average precision
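Since ranking is based solely on the Jaccard index (intersection over union of the predicted and ground-truth foreground), it is worth computing it locally before submitting. A minimal NumPy sketch follows; the function name is illustrative, and the handling of the empty-union edge case is an assumption, not something specified by the challenge (sklearn's `jaccard_score` is an alternative once masks are flattened to label vectors):

```python
import numpy as np

def jaccard_index(gt: np.ndarray, pred: np.ndarray) -> float:
    """Jaccard index (IoU) between two masks in the 0/255 encoding."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        return 1.0  # both masks empty; convention assumed here
    return np.logical_and(gt, pred).sum() / union

# Toy masks standing in for a ground-truth / prediction pair
gt = np.array([[0, 255, 255],
               [0, 255,   0]], dtype=np.uint8)
pred = np.array([[0, 255,   0],
                 [0, 255, 255]], dtype=np.uint8)

print(jaccard_index(gt, pred))  # intersection=2, union=4 -> 0.5
```

Averaging this score over a held-out split of the 2000 training images gives a rough proxy for Test Phase performance.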
