Original articles
Published: 2022-09-22

Use of digital pathology and artificial intelligence for the diagnosis of Helicobacter pylori in gastric biopsies

Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI) Italy
Unit of Gastroenterology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI) Italy
Unit of Clinical Engineering ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI) Italy
Engineering Ingegneria Informatica S.p.A., Rome, Italy
Keywords: digital pathology, artificial intelligence, Helicobacter pylori

Abstract

Objective. A common source of concern about digital pathology (DP) is that limited resolution could be a reason for an increased risk of malpractice. A frequent question being raised about this technology is whether it can be used to reliably detect Helicobacter pylori (HP) in gastric biopsies, which can be a significant burden in routine work. The main goal of this work is to show that a reliable diagnosis of HP infection can be made by DP even at low magnification. The secondary goal is to demonstrate that artificial intelligence (AI) algorithms can diagnose HP infections on virtual slides with sufficient accuracy.
Methods. The method we propose is based on the Warthin-Starry (W-S) silver stain, which allows faster detection of HP in virtual slides. A software tool based on regular expressions performed a specific search to select 679 biopsies on which a W-S stain had been done. From this dataset, 185 virtual slides were selected to be assessed by WSI and compared with microscopy slide readings. To determine whether HP infections could be accurately diagnosed with machine learning, AI was used as a service (AIaaS) on a neural network-based web platform trained with 468 images. A test dataset of 210 images was used to assess the classifier performance.
Results. In 185 gastric biopsies read with DP we recorded only 4 false positives and 4 false negatives, with an overall agreement of 95.6%. Compared with microscopy, defined as the “gold standard” for the diagnosis of HP infections, WSI had a sensitivity and specificity of 0.95 and 0.96, respectively. The ROC curve of our AI classifier, generated on a testing dataset of 210 images, had an AUC of 0.938.
Conclusions. This study demonstrates that DP and AI can be used to reliably identify HP at 20X resolution.

Introduction

With the advent of the COVID-19 pandemic, many pathology departments are considering the benefits of remote reporting by DP 1. Although previous validation studies support the safe use of DP in primary gastrointestinal pathology 2, a common source of concern about whole slide imaging (WSI) is that limited image resolution could be a reason for an increased risk of malpractice. In a systematic analysis, 10% of discordances were attributed to the inability to find a small diagnostic/prognostic object on WSI 3. A lack of image clarity is known to be associated with difficulties in the identification of microorganisms 4, a commonly cited challenge encountered in DP reporting 5. Indeed, a frequent question raised about this technology is whether Helicobacter pylori (HP) can be reliably detected by WSI in gastric biopsies, which may constitute a considerable proportion of a pathologist’s daily workload. In a study conducted by Snead et al. 6, pathologists felt that WSI was directly responsible for a variance in reporting HP, and slides needed to be re-scanned at a higher magnification for the organisms to become visible. While our study was under review, Mayall et al. published their findings on multisite networked digital pathology reporting in England; seven of the nine significant diagnostic discrepancies concerned misrecognition of HP in gastric biopsies 7. HP is a gram-negative, spiral-shaped bacterium that infects 50% of the world’s population 8 and is known to be a major cause of chronic gastritis, peptic ulcers, gastric adenocarcinomas and lymphomas 9. In 1994, the World Health Organization (WHO) and the International Agency for Research on Cancer classified HP as a class 1 carcinogen 10.

Although several non-invasive methods for the detection of HP infection are available 11, the diagnosis is often made by conventional histology on gastric biopsies taken because of endoscopic abnormalities 12. HP can be diagnosed in hematoxylin and eosin (H&E) stained sections with a reported high specificity, but only when moderate to severe lymphocytic infiltration is present. However, H&E diagnosis of HP can be tedious and time consuming, with a sensitivity that drops to 3% when plasmacytic, lymphocytic, and neutrophilic infiltration are absent or mild 13. Furthermore, inter-observer variability in H&E diagnosis was found to be high and unacceptably subjective 14.

This can be improved by special stains such as Giemsa, Warthin-Starry silver (W-S), Genta, and immunohistochemistry. Even though immunohistochemistry is more sensitive and more specific, it is not an up-front ancillary study 15. The Giemsa stain is the most commonly used in first-line routine clinical practice. However, with this histochemical method a high-resolution (40X) digital scan is required to maintain the diagnostic performance of traditional microscopy 16.

The main goal of this work is to show that a reliable diagnosis of HP infection can be made by DP even at relatively low magnifications. A high viewing magnification (×40, 0.23 μm/pixel) 17 requires large amounts of storage space and bandwidth, affecting the speed of image updates.

The method we propose is based on the W-S stain which enhances the visualization of HP and allows its faster and more accurate detection (Fig. 1). To check for any differences, diagnoses of HP infection made with WSI were compared with those made with traditional microscopy (TM).

The secondary goal is to demonstrate that AI algorithms 18 based on deep learning can diagnose HP infections with sufficient accuracy on DP images. Large datasets generated by DP have offered additional opportunities for AI-aided diagnostics, which can now classify slides based on their specific patterns 19. This approach can be used only after it has been established, with a high degree of certainty, that the diagnostic performances of TM and WSI are comparable to the extent that one can replace the other. As is well known, HP detection in histological slides is a repetitive task that comprises a significant part of a pathologist’s workload. If an AI algorithm could reach the point of automating this part of diagnostic work, substantial benefits could be obtained.

As already done by Klein et al. 20, we used a convolutional neural network (CNN) based deep learning system, and similarly the training set was made of smaller image patches cropped from subdivided whole slide images. Unlike the cited study, however, we used 20X scanned slides for the detection of HP. In addition, in the present study AI was used as a service (AIaaS) from a third party that provides out-of-the-box AI solutions, a novel approach that benefits from the availability of deep learning (DL) web platforms without cost-prohibitive investments in massive hardware and programmers 21.

Materials and methods

The histopathology laboratory is equipped with a high resolution scanner (Nanozoomer-XR, Hamamatsu Photonics K.K., Japan) capable of rapid automatic processing of up to 320 slides. As previously described 22, a web platform for remote access to virtual slide trays (Cloud Pathology Group, Milan, Italy) allowed case searching by unique case numbers. All slides in this study were anonymized and scanned at 20X (0.46 μm/pixel) resolution to save archiving space and to speed up image download. The scanner is connected to the hospital network and its software receives slide data from the anatomic pathology laboratory information system (LIS) (Winsap, Turin, Italy).

CASE SELECTION

We retrospectively reviewed the pathology reports of gastric biopsies performed between January 2019 and September 2020 and stored in our LIS database. All biopsies were performed both in the antrum and in the corpus, with at least two samples for each region, according to European guidelines 23. Through a query, 1783 cases of gastric biopsies were retrieved and saved as a comma-separated values (csv) file containing the accession number and the complete text of the final diagnosis. Within this dataset, we selected all biopsies on which a W-S stain had been done, so that they could be searched in our virtual slide repository.

To accomplish this, a more specific search was conducted with a software tool developed in the Python programming language (Python 3.7) 24 that used regular expressions to perform pattern matching in the pathology reports 25. With this software, unstructured free text could be turned into computationally usable data and stored in tabular format (Python sources available in the Supplemental File). The regular expression tool first searched for all diagnoses in which HP had been looked for; on those cases, a second search selected only the biopsies in which the “Warthin-Starry” substring occurred at least once. On these latter cases, a third-level search was performed to separate HP positive from HP negative biopsies. Finally, in the HP positive biopsy reports only, the strings “1+”, “2+” and “3+” were searched, as HP was classified into three grades in accordance with the Updated Sydney System 26, reflecting the average density on the surface and foveolar epithelium 27. Using this search tool, 679 biopsies were identified and, to approximate actual daily working conditions, each pathologist examined 46 slides from this set. As a result, 185 virtual slides were selected to be assessed by WSI and compared with microscopy slide readings. All pathologists were allowed to view the corresponding digital slides of the H&E stained sections of the cases for which a Warthin-Starry stain was available. To maintain a well-balanced ratio between cases, 101 biopsies were HP negative and 84 were HP positive. Digital slides were read by four independent expert pathologists, each with more than 8 years of experience, blind to the TM diagnosis, which was defined as the reference standard. To establish a strong gold standard, and to determine what types of slides can lead to discrepancies, discordant cases were reviewed for a consensus among all pathologists.
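The multi-level text search described above can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration only (the authors' actual script is provided in their Supplemental File), and the patterns below are simplified assumptions about the report wording:

```python
import re

# Simplified patterns; real diagnostic free text would need broader variants.
WS_PATTERN = re.compile(r"Warthin[- ]Starry", re.IGNORECASE)
HP_POSITIVE = re.compile(r"Helicobacter\s+pylori[^.]*\b(positive|present)\b",
                         re.IGNORECASE)
GRADE = re.compile(r"([123])\+")  # Updated Sydney System density grades

def classify_report(text):
    """Return (has_ws_stain, hp_positive, grade) for one free-text diagnosis.

    Mirrors the three-level search described in the text: first the W-S
    substring, then HP positivity, then the 1+/2+/3+ grade.
    """
    if not WS_PATTERN.search(text):
        return (False, None, None)
    positive = bool(HP_POSITIVE.search(text))
    grade = None
    if positive:
        m = GRADE.search(text)
        if m:
            grade = m.group(1) + "+"
    return (True, positive, grade)
```

Rows returned by such a function can then be accumulated into a table (e.g. with Pandas) to yield the structured dataset used for slide selection.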

WARTHIN-STARRY STAINING

The W-S silver stain 28 was performed using the Artisan automatic stainer (Artisan Link Pro Special Staining System, Agilent Dako, Denmark), capable of reproducibly processing up to 48 slides in around 7 minutes. The procedure was performed following the manufacturer’s instructions, as previously published 29.

ARTIFICIAL INTELLIGENCE

To test whether HP infections could be accurately diagnosed by DL, we used an AI as a service (AIaaS) web platform optimized to analyze images of no more than 4 megapixels (Microsoft Custom Vision, Redmond, Washington, USA). A classification model was created that learned to distinguish a series of labeled histological fields of W-S stained biopsies based on their visual characteristics.

In this study, the recognition of HP by the classifier was evaluated independently from the gastric inflammatory status. For the training and testing of the predictive model we used two subsets of cases in which there was an absolute subclass concordance between the TM and WSI diagnoses.

From a set of HP positive and HP negative WSI cases, two distinct groups (Fig. 2) were randomly selected:

  1. The first group served as a labeled training set for supervised learning of the classifier. It was composed of 12 virtual slides at 20X magnification. From this group, a series of images with an average size of 2000×2000 pixels was cropped: 229 representing HP positive fields and 239 HP negative fields. This training set was used to build the initial model.
  2. The second group, used for testing, was composed of 69 HP positive and 141 HP negative cropped images. The performance of the classifier was evaluated on this testing set.

This part of the image cropping and file processing was automatically managed by a Python script (Fig. 3).
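As a sketch of this patch-generation step, the crop boxes for tiles of at most 2000×2000 pixels can be computed as below. This is an illustrative reconstruction, not the authors' script; the Pillow calls shown in the trailing comment are one possible way to perform the actual cropping:

```python
def tile_boxes(width, height, tile=2000):
    """Return (left, upper, right, lower) crop boxes covering a slide of the
    given pixel size with patches of at most tile x tile pixels, keeping each
    patch within the platform's 4-megapixel image limit."""
    boxes = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes

# With Pillow installed, each box can be cropped and saved, e.g.:
#   from PIL import Image
#   img = Image.open("slide.tiff")
#   for i, box in enumerate(tile_boxes(*img.size)):
#       img.crop(box).save(f"patch_{i}.png")
```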

STATISTICAL ANALYSIS

Most data processing, plotting and statistical analysis were performed using Python 3.7 and related packages (Pandas, NumPy, SciPy). Cramér’s Phi and Kendall’s tau were used as measures of correlation. Cohen’s kappa coefficient was used to test inter-rater agreement of categorical data when WSI and TM were considered as two distinct raters rating the same cases. A ROC curve and the corresponding area under the curve (AUC) were plotted to evaluate the diagnostic performance of the machine learning classification model at all thresholds. The performance of our supervised ML classifier was measured by computing a confusion matrix 30, accuracy, precision, recall (sensitivity) and the F1 score, the harmonic mean of precision and recall.
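As an aside, the AUC that summarizes a ROC curve can also be computed directly from classifier scores without plotting, since it equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A minimal pure-Python sketch of this rank-based equivalence:

```python
def auc_from_scores(pos_scores, neg_scores):
    """Probability that a random positive outranks a random negative;
    numerically equal to the area under the ROC curve (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

In practice a vectorized library routine would be used on large test sets; the quadratic loop above is only meant to make the definition explicit.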

This study was conducted following approval of the Institutional Review Board of the ASL BI, Nuovo Ospedale degli Infermi (study n. CE 291/20).

Results

DIGITAL PATHOLOGY

Of the 185 cases reviewed with WSI, we recorded 4 false positives and 4 false negatives, with an overall agreement of 95.6%. Table I shows the significant association between TM and WSI (p < 0.0001); both modalities detected 101 HP negative and 84 HP positive cases, with an excellent correlation (Phi = 0.913).

Looking at the HP score assigned to the false positive and false negative cases, we found that all fell into the “1+” category. Compared with TM, defined as the “gold standard” for the diagnosis of HP infections, WSI had a sensitivity, specificity and accuracy of 0.95, 0.96 and 0.95, respectively (Tab. II). However, when the semi-quantitative assessment of HP infection was rated on the three-point scale (1+ to 3+), the correlation, although significant (p < 0.001), was rather low (Kendall’s tau = 0.594). If the two reading methods were considered as two different observers and Cohen’s kappa for inter-observer agreement was computed, we obtained a value of only 0.137 for the graded assessment, while if only positive and negative were compared, the value was close to 1 (Cohen’s kappa = 0.913).
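The binary kappa value reported above can be re-derived from the counts in Table I. As a short worked example:

```python
def cohen_kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both raters negative, b = rater 1 negative / rater 2 positive,
    c = rater 1 positive / rater 2 negative, d = both raters positive."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Table I counts: 97 concordant negatives, 80 concordant positives,
# 4 discordant cases in each direction.
print(round(cohen_kappa_2x2(97, 4, 4, 80), 3))  # -> 0.913
```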

ARTIFICIAL INTELLIGENCE

The confusion matrix (Tab. III) describes the performance of our classification model on the set of 210 test images for which the HP values were known.

Among the image fields of the testing dataset, the true positives were 62, the true negatives 123, and the false positives and false negatives were 18 and 7, respectively (p < 0.0001). Table IV recapitulates the diagnostic performance of our DL algorithm. The balanced accuracy and F1 score values of 0.88 and 0.82, respectively, are metrics of the binary classifier that are particularly useful when the classes are imbalanced, as in our case, taking into account the higher frequency of HP negative cases compared to HP positive ones. To gauge the diagnostic power of the ML approach, a ROC curve was plotted (Fig. 4), which gave an AUC of 0.938. Our machine learning model was also tested against the grade of HP infection, giving, not surprisingly, only a mediocre result in the relationship between the actual and the predicted ranks (Kendall’s tau = 0.58).
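For clarity, the metrics of Table IV follow directly from the confusion-matrix counts in Table III. The sketch below re-derives them; values computed this way may differ from the published figures in the third decimal place, depending on rounding in the platform's output:

```python
def classifier_metrics(tp, fp, fn, tn):
    """Standard binary-classifier metrics derived from confusion-matrix counts."""
    precision = tp / (tp + fp)          # positive predictive value
    recall = tp / (tp + fn)             # sensitivity, true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    npv = tn / (tn + fn)                # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "npv": npv,
            "accuracy": accuracy, "f1": f1}

# Counts from Table III: TP = 62, FP = 18, FN = 7, TN = 123
metrics = classifier_metrics(62, 18, 7, 123)
```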

Discussion

In the first part of our study, we investigated whether WSI can be used to reliably diagnose HP infections in gastric biopsies. This concern is not ours alone: the same doubts have been raised at numerous meetings we have attended and in several articles on DP 31. To facilitate the diagnosis of HP infections with WSI, we adopted the W-S silver stain 32, a method that allows faster identification of HP even at a low resolution.

One of the aspects of the present work that may be stressed is methodological. All the cases needed for WSI and AI were selected using a software tool expressly designed to extract structured data from unstructured textual reports. Pathologists were allowed to see the H&E stained slides, and were therefore aware of the gastric inflammatory state. This, of course, facilitated the pathologists’ task of rendering a proper diagnosis, given the known relationship between HP and gastritis 33.

Only 8 cases (4.3%) read with WSI did not correspond to the initial diagnosis: 4 false positives and 4 false negatives were found. The latter, of course, occur in practice, while the former, relatively rare, can result from bacterial overgrowth due to proton pump inhibitors 34. All the WSI diagnoses discordant with those made under the microscope fell into the “1+” category, the one in which the degree of diagnostic agreement between pathologists is rather low.

WSI was fully comparable with TM in its ability to discriminate between HP negative and HP positive gastric biopsies, as demonstrated by the high level of accuracy. The comparison between WSI and TM was also analyzed from a different point of view, by considering the computer and the microscope as two different observers. The kappa statistic indicates an almost perfect agreement between the two modalities.

Much lower was the agreement in assessing the degree of HP infection using a three-tier scale, showing that in routine pathology absolute rigor in grading HP density is by no means a trivial exercise. However, for clinical management the most important piece of information is whether HP is present. The data presented in our study clearly demonstrate that, using the W-S stain, a diagnosis of HP infection with WSI is both possible and accurate even at 20X magnification.

These results convinced us that WSI could be used to train a DL algorithm on an AI-based image recognition web platform. Examining the DL confusion matrix and comparing the actual and predicted HP status, our classifier was able to make a correct diagnosis in over 80% of cases. The level of discordance is apparently high, but it must be stressed that, contrary to the pathologists, the ML classifier was at a disadvantage, not being informed of the degree and type of gastritis. Despite this, the ROC analysis showed an AUC of 0.938, demonstrating the accuracy of our classifier.

We are convinced that if the algorithm could directly access, as a training set, the entire digital archive of W-S stained biopsies together with the H&E virtual slides, the classifier would exceed the accuracy of a pathologist. This scenario has been the subject of interesting studies recently published by Campanella et al. 35 and by Iizuka et al. 36. As noted by the latter authors, the main challenge is the sheer size of a WSI, which can contain several billion pixels while the area of interest can be extremely limited. We show that we are at a point at which AI is no longer optional or limited to a small group of experts. We believe that our approach to AI will broaden its applicability to the diverse and challenging problems of histopathology image analysis.

Conclusions

We believe that the method used in this work accomplished the main goal of reliably diagnosing HP infections with WSI, even at a 20X magnification.

The secondary goal was also met, as we have shown that AI algorithms, based on deep learning, can diagnose HP infections with sufficient accuracy on DP images.

One of the aspects that may be stressed in the present study is methodological. All the cases needed for WSI and AI were selected using a software tool expressly designed to extract structured data from unstructured textual reports.

Another methodological aspect is the use of AI as a service (AIaaS), a ready-made AI solution delivered over a web platform, for the automatic recognition of histological images. To our knowledge, this has never been done before with WSI.

ACKNOWLEDGMENTS

We wish to thank the Fondazione Edo ed Elvo Tempia, Biella, Italy, and the Fondazione Cassa di Risparmio di Biella for their support.

We wish also to thank Prof. Mark Bernheim for his thoughtful suggestions and Prof. Umberto Dianzani for his invaluable help and guidance.

Figures and tables

Figure 1. Illustrative images of an HP positive case (3+) displayed in 20X digital images. A) H&E stain; B) organisms are clearly visible with the W-S stain. Panels C and D show an example of a false positive 1+ score: this case was in the AI testing set and shows pseudo-bacterial particles mistaken for HP by the algorithm.

Figure 2. Cropped digital images of HP positive and HP negative W-S stained slides used for training and testing the convolutional neural network (CNN). The classifier also computes precision and recall values.

Figure 3. Workflow showing the design and operational schematic of the AI as a service (AIaaS) platform used in this study, optimized to analyze virtual slides of W-S stained gastric biopsies.
1) The original WSI files, in the native ndpi format, were converted to tiff to facilitate further processing. Images, initially scanned at 20X magnification, were saved in a shared folder.
2) A Python script imported the converted digital slides from the shared folder and divided them into patches of 2000×2000 px to keep all files under the 4 MB limit required by the Microsoft Custom Vision platform.
3) After cropping, a number of images were chosen to assemble a training set in which bacteria were clearly displayed in the positive cases. This selection ensured that the training was appropriate for the classifier being developed.
4) Custom Vision triggered the CNN, which started its training. After the training step, the model was tested with a testing set to assess the accuracy of the predictions (see “Training and Testing” at Figure 1 for an insight).
5) The CNN returns the model performance to Custom Vision so that it can be displayed at any moment of the training procedure.
6) The Python script receives from Custom Vision the results generated by the CNN. These can be directly processed to calculate metrics such as the confusion matrix, performance statistics, ROC curve and AUC value.
7) All results and metrics are saved in a database.

Figure 4. ROC curve showing the true positive rate vs. the false positive rate for the detection of HP at different thresholds of our ML classifier. The diagonal red line serves as a reference, since it is the curve of a classifier that selects cases at random.

                 TM
WSI              HP negative   HP positive   Totals
HP negative      97            4             101
HP positive      4             80            84
Totals           101           84            185
Table I. The table shows the significant association between HP diagnosed with a traditional microscope (TM) and with digital pathology (WSI). The observations come from purely visual reading on WSI, without the artificial intelligence classifier.
Accuracy 0.957
Sensitivity 0.952
Specificity 0.960
Positive Predictive Value (Precision) 0.952
Negative Predictive Value 0.960
Table II. Diagnostic performance measures of WSI for the detection of HP in 185 gastric biopsies. Traditional microscopy was used as the reference standard.
                     Predicted HP negative   Predicted HP positive   Totals
Actual HP negative   123                     18                      141
Actual HP positive   7                       62                      69
Totals               130                     80                      210
Table III. The confusion matrix describes the performance of our supervised machine learning classification model on a set of 210 test images cropped from DP virtual slides for which the true HP diagnoses were known.
Accuracy 0.880
True positive rate (Recall) 0.897
True negative rate (Specificity) 0.872
Positive Predictive Value (Precision) 0.772
Negative Predictive Value 0.946
F1 score 0.829
Table IV. Measures of performance of our deep learning classifier that provide an understanding of the diagnostic accuracy of the trained model. The F1 score, the harmonic mean of precision and recall, is a measure of the classifier’s discriminating power.

References

  1. Williams BJ, Brettle D, Aslam M. Guidance for Remote Reporting of Digital Pathology Slides During Periods of Exceptional Service Pressure: An Emergency Response from the UK Royal College of Pathologists. J Pathol Inform. 2020; 11:12. DOI
  2. Loughrey MB, Kelly PJ, Houghton OP. Digital slide viewing for primary reporting in gastrointestinal pathology: a validation study. Virchows Arch. 2015; 467:137-144. DOI
  3. Williams BJ, DaCosta P, Goacher E. A Systematic Analysis of Discordant Diagnoses in Digital Pathology Compared With Light Microscopy. Arch Pathol Lab Med. 2017; 141:1712-1718. DOI
  4. Araújo ALD, Arboleda LPA, Palmier NR. The performance of digital microscopy for primary diagnosis in human pathology: a systematic review. Virchows Arch. 2019; 474:269-287. DOI
  5. Lujan GM, Savage J, Shana’ah A. Digital pathology initiatives and experience of a large academic institution during the Coronavirus disease 2019 (COVID-19) pandemic. Archives of Pathology & Laboratory Medicine. 2021. DOI
  6. Snead DRJ, Tsang Y-W, Meskiri A. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology. 2016; 68:1063-1072. DOI
  7. Mayall FG, Smethurst H-B, Semkin L. A feasibility study of multisite networked digital pathology reporting in England. Journal of Pathology Informatics. 2022; 13:4. DOI
  8. Marshall BJ, Warren JR. Unidentified curved bacilli in the stomach of patients with gastritis and peptic ulceration. Lancet. 1984; 1:1311-1315. DOI
  9. Sipponen P, Price AB. The Sydney System for classification of gastritis 20 years ago. J Gastroenterol Hepatol. 2011; 26:31-34. DOI
  10. Wu J-Y, Lee Y-C, Graham DY. Eradication of Helicobacter pylori to Prevent Gastric Cancer: a Critical Appraisal. Expert Rev Gastroenterol Hepatol. 2019; 13:17-24. DOI
  11. Wang Y-K, Kuo F-C, Liu C-J. Diagnosis of Helicobacter pylori infection: Current options and developments. World J Gastroenterol. 2015; 21:11221-11235. DOI
  12. El-Zimaity H, Serra S, Szentgyorgyi E. Gastric biopsies: The gap between evidence-based medicine and daily practice in the management of gastric Helicobacter pylori infection. Can J Gastroenterol. 2013; 27:e25-e30.
  13. Smith SB, Snow AN, Perry RL. Helicobacter pylori: to stain or not to stain?. Am J Clin Pathol. 2012; 137:733-738. DOI
  14. Lee JY, Kim N. Diagnosis of Helicobacter pylori by invasive test: histology. Ann Transl Med. 2015; 3DOI
  15. Wang XI, Zhang S, Abreo F. The role of routine immunohistochemistry for Helicobacter pylori in gastric biopsy. Ann Diagn Pathol. 2010; 14:256-259. DOI
  16. Uguen A. Detection of Helicobacter pylori in virtual slides requires high resolution digitalisation. J Clin Pathol. 2021. DOI
  17. García-Rojo M. International Clinical guidelines for the adoption of digital pathology: a review of technical aspects. Pathobiology. 2016; 83:99-109. DOI
  18. Moxley-Wyles B, Colling R, Verrill C. Artificial intelligence in pathology: an overview. Diagnostic Histopathology. 2020; 26:513-520. DOI
  19. Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital pathology: Challenges and opportunities. Journal of Pathology Informatics. 2018; 9:38. DOI
  20. Klein S, Gildenblat J, Ihle MA. Deep learning for sensitive detection of Helicobacter Pylori in gastric biopsies. BMC Gastroenterology. 2020; 20:417. DOI
  21. King T. The 16 Best Data Science and Machine Learning Platforms for 2021. Best Business Intelligence and Data Analytics Tools, Software, Solutions & Vendors. 2021. Publisher Full Text
  22. Liscia DS, Bellis D, Biletta E. Whole-slide imaging allows pathologists to work remotely in regions with severe logistical constraints due to Covid-19 pandemic. J Pathol Inform. 2020; 11:20. DOI
  23. Pimentel-Nunes P, Libânio D, Marcos-Pinto R. Management of epithelial precancerous conditions and lesions in the stomach (MAPS II): European Society of Gastrointestinal Endoscopy (ESGE), European Helicobacter and Microbiota Study Group (EHMSG), European Society of Pathology (ESP), and Sociedade Portuguesa de Endoscopia Digestiva (SPED) guideline update 2019. Endoscopy. 2019; 51:365-388. DOI
  24. van Rossum G. Python reference manual. Department of Computer Science [CS]; 1995. Publisher Full Text
  25. Thompson K. Programming Techniques: Regular expression search algorithm. Commun. ACM. 1968; 11:419-422. DOI
  26. Dixon MF, Genta RM, Yardley JH. Classification and grading of gastritis. The updated Sydney System. International Workshop on the Histopathology of Gastritis, Houston 1994. Am J Surg Pathol. 1996; 20:1161-1181. DOI
  27. Chen XY, van der Hulst RW, Bruno MJ. Interobserver variation in the histopathological scoring of Helicobacter pylori related gastritis. J Clin Pathol. 1999; 52:612-615.
  28. Warthin AS, Starry AC. The Staining of Spirochetes in Cover-Glass Smears by the Silver-Agar Method. The Journal of Infectious Diseases. 1922; 30:592-600.
  29. Farouk WI, Hassan NH, Ismail TR. Warthin-Starry Staining for the Detection of Helicobacter pylori in Gastric Biopsies. Malays J Med Sci. 2018; 25:92-99. DOI
  30. Ting KM. Encyclopedia of Machine Learning. Springer US: Boston, MA; 2010. DOI
  31. Norgan AP, Suman VJ, Brown CL. Comparison of a Medical-Grade Monitor vs Commercial Off-the-Shelf Display for Mitotic Figure Enumeration and Small Object (Helicobacter pylori) Detection. Am J Clin Pathol. 2018; 149:181-185. DOI
  32. Hood D, Learn DB. Consistent Warthin-Starry Staining Technique. Journal of Histotechnology. 1996; 19:339-340. DOI
  33. Stolte M, Meining A. The updated Sydney system: classification and grading of gastritis as the basis of diagnosis and treatment. Can J Gastroenterol. 2001; 15:591-598. DOI
  34. Williams C, McColl KEL. Review article: proton pump inhibitors and bacterial overgrowth. Alimentary Pharmacology & Therapeutics. 2006; 23:3-10. DOI
  35. Campanella G, Hanna MG, Geneslaw L. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med. 2019; 25:1301-1309. DOI
  36. Iizuka O, Kanavati F, Kato K. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep. 2020; 10:1504. DOI

Affiliations

Daniel S. Liscia (ORCID: https://orcid.org/0000-0002-3850-3952)
Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Mariangela D’Andrea
Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Elena Biletta
Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Donata Bellis
Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Kejsi Demo
Unit of Pathology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Franco Ferrero
Unit of Gastroenterology, Department of Surgery ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Alberto Petti
Unit of Clinical Engineering ASL BI, Nuovo Ospedale degli Infermi, Ponderano (BI), Italy

Roberto Butinar
Engineering Ingegneria Informatica S.p.A., Rome, Italy

Enzo D’Andrea
Engineering Ingegneria Informatica S.p.A., Rome, Italy

Giuditta Davini
Engineering Ingegneria Informatica S.p.A., Rome, Italy

Copyright

© Società Italiana di Anatomia Patologica e Citopatologia Diagnostica, Divisione Italiana della International Academy of Pathology , 2022

How to Cite

Liscia, D.S., D’Andrea, M., Biletta, E., Bellis, D., Demo, K., Ferrero, F., Petti, A., Butinar, R., D’Andrea, E. and Davini, G. 2022. Use of digital pathology and artificial intelligence for the diagnosis of Helicobacter pylori in gastric biopsies. Pathologica - Journal of the Italian Society of Anatomic Pathology and Diagnostic Cytopathology. 114, 4 (Sep. 2022), 295-303. DOI:https://doi.org/10.32074/1591-951X-751.