Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1542
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Orlando, J.I. | - |
dc.contributor.author | Fu, H. | - |
dc.contributor.author | Breda, J.B. | - |
dc.contributor.author | Keer, K. | - |
dc.contributor.author | Bathula, D.R. | - |
dc.contributor.author | Diaz-Pinto, A. | - |
dc.contributor.author | Fang, R. | - |
dc.contributor.author | Heng, P. | - |
dc.contributor.author | Kim, J. | - |
dc.contributor.author | Lee, J. | - |
dc.contributor.author | Lee, J. | - |
dc.contributor.author | Li, X. | - |
dc.contributor.author | Liu, P. | - |
dc.contributor.author | Lu, S. | - |
dc.contributor.author | Murugesan, B. | - |
dc.contributor.author | Naranjo, V. | - |
dc.contributor.author | Phaye, S.S.R. | - |
dc.contributor.author | Shankaranarayana, S.M. | - |
dc.contributor.author | Son, J. | - |
dc.contributor.author | Hengel, A.V.D. | - |
dc.contributor.author | Wang, S. | - |
dc.contributor.author | Wu, J. | - |
dc.contributor.author | Wu, Z. | - |
dc.contributor.author | Xu, G. | - |
dc.contributor.author | Xu, Y. | - |
dc.contributor.author | Yin, P. | - |
dc.contributor.author | Li, F. | - |
dc.contributor.author | Zhang, X. | - |
dc.contributor.author | Xu, Y. | - |
dc.contributor.author | Bogunović, H. | - |
dc.date.accessioned | 2020-03-17T06:07:14Z | - |
dc.date.available | 2020-03-17T06:07:14Z | - |
dc.date.issued | 2020-03-17 | - |
dc.identifier.uri | http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1542 | - |
dc.description.abstract | Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmark strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Glaucoma | en_US |
dc.subject | Fundus photography | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Image segmentation | en_US |
dc.subject | Image classification | en_US |
dc.title | REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs | en_US |
dc.type | Article | en_US |
Appears in Collections: | Year-2020 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Full Text.pdf | | 2.91 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
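The abstract above mentions the vertical cup-to-disc ratio (vCDR) as the biomarker conventionally computed from fundus photographs for glaucoma screening. As a purely illustrative sketch (not part of this record or the REFUGE evaluation code), the snippet below shows one common way to derive the vCDR from hypothetical binary optic-disc and optic-cup segmentation masks; the mask shapes and helper names are assumptions for the example.

```python
# Illustrative sketch: vertical cup-to-disc ratio (vCDR) from binary masks.
# disc_mask and cup_mask are hypothetical 2-D boolean arrays (rows x cols),
# with the cup region contained inside the disc region.
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Number of image rows spanned by the foreground of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

def vertical_cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """vCDR = vertical cup diameter / vertical disc diameter."""
    disc_height = vertical_extent(disc_mask)
    cup_height = vertical_extent(cup_mask)
    return cup_height / disc_height if disc_height else float("nan")

# Toy example: a 7-row disc containing a 4-row cup gives vCDR ~ 0.57.
disc = np.zeros((10, 10), dtype=bool); disc[2:9, 2:9] = True
cup = np.zeros((10, 10), dtype=bool);  cup[4:8, 4:8] = True
print(vertical_cup_to_disc_ratio(disc, cup))
```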