Abstract
The US Department of Veterans Affairs has been acquiring store-and-forward digital retinal fundus images for diabetic retinopathy surveillance and remote reading since 2007. There are more than 900 retinal cameras at 756 acquisition sites, and the images are read manually at 134 remote reading sites. A total of 2.1 million studies have been performed in the teleretinal imaging program, and the human workload for reading the images is growing rapidly. An automated computer algorithm that detects multiple eye diseases would be ideal, as it would help standardize interpretations and improve the efficiency of the image readers. Deep learning algorithms for the detection of diabetic retinopathy in retinal fundus photographs have been developed, and additional image data are needed to validate this work. To further this research, the Atlanta VA Health Care System (VAHCS) has extracted 112,000 DICOM diabetic retinopathy surveillance images (13,000 studies) that can subsequently be used to validate automated algorithms. Extensive associated clinical information was added to the DICOM header of each exported image to facilitate correlation of the image with the patient's medical condition. The clinical information was serialized as a JSON object and stored in a single Unlimited Text (VR = UT) DICOM data element. This paper describes the methodology used for this project and the results of applying it.
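The embedding approach described above can be sketched at the byte level. The following is a minimal illustration, not the authors' implementation: it encodes a JSON payload as a DICOM data element with VR = UT in the explicit-VR little-endian transfer syntax, where UT uses the 12-byte header form (4-byte tag, 2-byte VR, 2 reserved bytes, 32-bit length). The private tag (0011,1001) and the clinical field names are hypothetical; the abstract does not specify which element the VAHCS export uses.

```python
import json
import struct

def encode_ut_element(group: int, element: int, text: str) -> bytes:
    """Encode one DICOM data element with VR = UT (Unlimited Text),
    explicit-VR little-endian: tag (4 bytes), VR (2), reserved (2),
    32-bit value length, then the value padded to an even length.
    """
    value = text.encode("utf-8")
    if len(value) % 2:
        value += b" "  # DICOM text values are space-padded to even length
    header = struct.pack("<HH2s2sI", group, element, b"UT", b"\x00\x00", len(value))
    return header + value

# Hypothetical clinical payload; field names are illustrative only.
clinical_info = {"diabetes_type": "Type 2", "hba1c": 7.8}

# (0011,1001) is an assumed private tag, not the one used by VAHCS.
raw = encode_ut_element(0x0011, 0x1001, json.dumps(clinical_info))
```

In practice a DICOM toolkit (e.g. pydicom's private-block support) would manage the private creator reservation and tag assignment; the point here is only that a single UT element can carry an arbitrarily long JSON string alongside the pixel data.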