Indian Journal of Pathology and Microbiology
ORIGINAL ARTICLE
Year : 2020  |  Volume : 63  |  Issue : 5  |  Page : 25-29

A grading dilemma; Gleason scoring system: Are we sufficiently compatible? A multi center study


1 Department of Pathology, Faculty of Medicine, Mugla Sitki Kocman University, Izmir, Turkey
2 Department of Pathology, Tepecik Training and Research Hospital, Izmir, Turkey
3 Department of Pathology, Çiğli Region Education Hospital, Izmir, Turkey
4 Department of Pathology, Tinaztepe Special Hospital, Izmir, Turkey
5 Department of Pathology, Faculty of Medicine, 9 Eylul University, Izmir, Turkey
6 Department of Urology, Faculty of Medicine, Mugla Sitki Kocman University, Muğla, Turkey
7 Department of Pathology, Faculty of Medicine, Adnan Menderes University, Aydin, Turkey

Correspondence Address:
Yelda Dere
Department of Pathology, Mugla Sitki Kocman University, Faculty of Medicine, Mugla
Turkey

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/IJPM.IJPM_288_18


Objective: Gleason scoring is the grading system that most strongly predicts the prognosis of prostate cancer. However, although it is one of the most commonly used systems, variable interobserver agreement rates have pushed uropathologists to update the definitions of the Gleason patterns. In this study, we aimed to determine the interobserver agreement variability among 7 general pathologists and one expert uropathologist from 6 different centers. Methods: A set of 50 hematoxylin and eosin-stained slides from 41 patients diagnosed with prostate cancer was reviewed by 8 pathologists. The pathologists were also grouped according to whether they had completed residency at the same institute or worked at the same center. The Gleason scores of all pathologists and of the subgroups were then compared for interobserver variability using Fleiss' and Cohen's kappa tests in R v3.2.4. Results: Eight pathologists from 6 different centers reviewed all the slides. One was an expert uropathologist with 18 years of experience. Of the 7 general pathologists, 4 had more than 5 years of surgical pathology experience and 3 had less than 5 years. Fleiss' kappa was 0.54 for the primary Gleason pattern and 0.44 for the total Gleason score (moderate agreement). Fleiss' kappa was 0.45 for the grade grouping system. Conclusion: Assigning a Gleason score for a patient can be problematic because of varying interobserver agreement rates among pathologists, even though the patterns are considered well defined.
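The abstract reports Fleiss' kappa values computed in R v3.2.4. As an illustrative sketch only (not the authors' code), the multi-rater agreement statistic they describe can be computed from a subjects-by-categories matrix of rating counts, where each row is one slide and each column counts how many pathologists assigned it a given score:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    N = len(counts)        # number of subjects (e.g., slides)
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories (e.g., Gleason scores)

    # Mean observed per-subject agreement, P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement P_e from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)


# Two raters, three slides, perfect agreement on every slide -> kappa = 1.0
print(fleiss_kappa([[2, 0], [0, 2], [2, 0]]))  # 1.0
```

By convention (Landis and Koch), values between 0.41 and 0.60, such as the 0.44 and 0.54 reported here, are interpreted as "moderate" agreement.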


