Abstract: AIM: To assess the performance of bespoke software for the automated counting of intraocular lens (IOL) glistenings in slit-lamp images. METHODS: IOL glistenings in slit-lamp-derived digital images were counted manually and automatically by the bespoke software. Images of one randomly selected eye from each of 34 participants formed a training set used to determine the threshold setting that gave the best agreement between manual and automatic grading. A second set of 63 images, selected by randomised stratified sampling from 290 images, was used for software validation. The images were obtained using a previously described protocol. Software-derived automated glistening counts were compared with manual counts produced by three ophthalmologists. RESULTS: A threshold value of 140 minimised the total deviation in the number of glistenings for the 34 training-set images. Using this threshold, only slight agreement was found between automated software counts and manual expert counts for the 63-image validation set (κ=0.104; 95% CI, 0.040-0.168). Ten images (15.9%) had glistening counts that agreed between the software and manual counting, and in 49 images (77.8%) the software overestimated the number of glistenings. CONCLUSION: The low level of agreement between an initial release of software for automatically counting glistenings in in vivo slit-lamp images and manual counting indicates that this is a non-trivial application. Iterative improvement involving dialogue between software developers and experienced ophthalmologists is required to optimise agreement. The results suggest that software validation is necessary for studies involving semi-automatic evaluation of glistenings.
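The abstract describes tuning a single grey-level threshold (140) on a training set and then counting glistenings in binarised images. The study's software is not public, so the sketch below is an assumption about the simplest form such a pipeline could take: binarise a grayscale image at a fixed threshold, then count connected bright components with `scipy.ndimage.label`. The function name `count_glistenings` and the `min_area` parameter are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def count_glistenings(image, threshold=140, min_area=1):
    """Count bright spots in an 8-bit grayscale image by thresholding.

    A minimal sketch, assuming the software binarises at a fixed grey
    level and counts connected components; the actual algorithm used
    in the study may differ (e.g. adaptive thresholding, size filters).
    """
    binary = image > threshold          # pixels brighter than the cut-off
    labelled, n = ndimage.label(binary) # group adjacent bright pixels
    if min_area > 1:
        # Optionally discard components smaller than min_area pixels
        sizes = ndimage.sum(binary, labelled, range(1, n + 1))
        n = int(np.count_nonzero(sizes >= min_area))
    return n

# Synthetic 10x10 image with two bright regions above the threshold
img = np.zeros((10, 10), dtype=np.uint8)
img[2:4, 2:4] = 200   # a 2x2 bright blob
img[7, 7] = 180       # an isolated bright pixel
print(count_glistenings(img, threshold=140))  # 2 on this synthetic image
```

Such a fixed global threshold is sensitive to illumination and focus differences between slit-lamp images, which is one plausible reason the reported agreement with expert counts was only slight (κ=0.104) and overestimation was common.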