Advanced deep learning approaches for the automated classification of macrofungal species in biodiversity monitoring
Research Article
VOLUME: 26 ISSUE: 2
P: 203-212
October 2025


Trakya Univ J Nat Sci 2025;26(2):203-212
1. Ankara University, Faculty of Engineering, Department of Computer Engineering, Ankara, Türkiye
2. Ankara University, Faculty of Engineering, Department of Computer Sciences, Ankara, Türkiye
3. Ankara University, Graduate School of Natural and Applied Sciences, Ankara, Türkiye
4. Ankara University, Institute of Artificial Intelligence, Ankara, Türkiye
5. Ankara University, Faculty of Engineering, Department of Artificial Intelligence and Data Engineering, Ankara, Türkiye
6. Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland
7. Ankara University, Faculty of Science, Department of Biology, Ankara, Türkiye
Received Date: 29.05.2025
Accepted Date: 14.09.2025
Publish Date: 15.10.2025
E-Pub Date: 30.09.2025

Abstract

Macrofungal species attract significant attention due to their critical roles in ecosystems and widespread industrial applications. Traditional species identification methods are expertise-intensive and time-consuming. Artificial intelligence (AI) techniques, especially deep learning (DL), have been employed to accelerate these processes and improve accuracy. This article aimed to classify five macrofungal species using AI, specifically DL. The study focuses on classifying Amanita muscaria, A. phalloides, Lepista nuda, Macrolepiota procera, and Craterellus cornucopioides, utilizing various DL models, including DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101. The dataset comprised 683 images across five classes. The data were collected in a balanced manner, and the models' effectiveness was evaluated based on accuracy, precision, recall, and F1-score metrics. Additionally, Grad-CAM visualizations were utilized to analyze the regions on which the models focused. The best-performing model achieved 93% accuracy (7% error), outperforming a simple Convolutional Neural Network baseline with 70% accuracy (30% error). Overall, all transfer-learning models achieved accuracies of ≥ 90%. In particular, the DenseNet121 and Xception models achieved the greatest success by correctly identifying the relevant regions of these species. The study demonstrates that AI, particularly DL-based techniques, can be effectively applied to species identification, and expanding the dataset could further enhance performance. The novelty of this study is the combination of transfer learning and Grad-CAM explainability to provide an interpretable and biologically meaningful framework for macrofungal identification.

Keywords:
deep learning, macrofungi classification, artificial intelligence, Grad-CAM visualization, biodiversity monitoring

Introduction

Fungi play vital roles in ecosystems and have significant industrial and medicinal applications (Chugh et al., 2022; Llanaj et al., 2023). Macrofungi, primarily belonging to the phyla Basidiomycota and Ascomycota, are notable for their large fruiting bodies and comprise ~41,000 known species, with > 2,000 being edible (Priyamvada et al., 2017; Li et al., 2021; Ekinci et al., 2025). They contribute to forest health through symbiotic relationships with trees and by breaking down organic matter and recycling nutrients (de Mattos-Shipley et al., 2016; Ye et al., 2019; Ozsari et al., 2024). In addition to their ecological roles, macrofungi are valued for their bioactive compounds, which are used in medicine and explored for their potential in biodegradation and as renewable resources (Cheong et al., 2018; Hyde et al., 2019; El-Ramady et al., 2022).

Integrating machine learning (ML) and computer vision into mycological research and citizen science is revolutionizing macrofungal identification and classification (Picek et al., 2022; Ozsari et al., 2024; Korkmaz et al., 2025). Traditional identification methods, which require extensive expertise and are often time-consuming, are being superseded by artificial intelligence (AI)-based systems capable of rapidly analyzing large datasets and distinguishing minute variations in shape, color, and texture (Yan et al., 2023). AI democratizes this process through intuitive mobile applications, enabling non-experts to identify macrofungi from photographs and thereby substantially mitigating the risk of misidentification (Chathurika et al., 2023; Ekinci et al., 2025; Kumru et al., 2025). These AI models evolve by incorporating community-contributed data, ensuring their relevance to new findings (Bartlett et al., 2022). Moreover, the ability of AI to analyze environmental variables alongside visual characteristics enhances ecological research and conservation efforts. By engaging scientists and the general public in collaborative biodiversity monitoring, AI helps generate more precise and comprehensive datasets, which are crucial for informed conservation strategies (Yan et al., 2023). The application of AI in mycology markedly advances macrofungal identification by offering speed, precision, accessibility, and robust data handling, thereby benefiting both scientific research and public involvement in fungal biodiversity research (Picek et al., 2022). Although these studies demonstrate the potential of AI, they have not meaningfully integrated performance with interpretability, highlighting a gap that the present study seeks to address.

This study aims to classify five species of macrofungi (Amanita muscaria, A. phalloides, Lepista nuda, Macrolepiota procera, and Craterellus cornucopioides) using deep learning (DL). These species were chosen because they represent both toxic and edible taxa of high ecological and societal importance and of notable morphological diversity, and because their morphological similarities often lead to misidentification in the field. The objective was to develop a model that can quickly and accurately identify these species based on their visual characteristics.

The novelty of this study lies in the application of DL techniques to achieve high-accuracy classification. Its contributions may represent a significant advancement in the in silico identification of macrofungal species, supporting the conservation of natural habitats and biodiversity monitoring. The technical contributions include a systematic comparison of six transfer-learning architectures, the integration of Grad-CAM to ensure biologically meaningful interpretability, and benchmarking against a Convolutional Neural Network (CNN) baseline to demonstrate methodological advances. Although AI-based approaches have been applied in mycology, systematic research combining multiple DL architectures with interpretability analyses, such as Grad-CAM, remains limited (Raghavan et al., 2024). Addressing these challenges, this study provides not only a performance benchmark across several state-of-the-art DL architectures but also an interpretable framework based on Grad-CAM, thereby extending beyond existing DL-based biodiversity monitoring studies that mainly emphasized accuracy. This study therefore contributes to taxonomy by providing a more comprehensive evaluation that highlights the significance of both model performance and interpretability in macrofungal classification. In this context, the models DenseNet121 (Huang et al., 2017), InceptionV3 (Szegedy et al., 2016), MobileNetV2 (Sandler et al., 2018), Xception (Chollet, 2017), VGG16 (Simonyan & Zisserman, 2014), and ResNet101 (He et al., 2016), which have demonstrated considerable efficacy in classification tasks, were employed.

Materials and Methods

Dataset

For this study, some macrofungal images were sourced from publicly accessible websites, primarily the Global Biodiversity Information Facility (GBIF; www.gbif.org); we photographed the remaining specimens ourselves. Before use, the publicly available images were modified, and all sources have been cited. The number of photographs taken by the authors was lower than the number acquired from GBIF, primarily because of the difficulty of locating all species simultaneously; furthermore, obtaining the required quantity of data for each species was time-consuming and costly. Figure 1 displays a sample image for each macrofungal species. The dataset comprised 683 original images across five classes. Online data augmentation (horizontal flipping and 0.2° rotation) was applied during training; the augmented images were not stored offline, and all reported counts refer to the original images. Ensuring a balanced dataset is crucial for DL models, as an imbalance in data quantity can give a model an inherent bias toward the majority class; data were therefore collected in balanced quantities across all five species.

Application of DL Methods

Training and testing procedures employed an equal number of data samples for each species. Table 1 illustrates the distribution of data samples across the training, validation, and testing phases. The images used in each phase were entirely distinct: the training set contained no validation or test images, and the validation set contained no test images.

Metrics

The effectiveness of all the methods employed was assessed based on the parameters of accuracy, precision, recall, and F1-score.

• Accuracy is the ratio of correctly predicted instances to the total number of instances in the dataset.

• Precision is the proportion of instances with accurate positive predictions among all those predicted as positive.

• Recall is the ratio between the instances with accurate positive predictions and the total number of actual positive instances.

• F1-score is the harmonic mean of precision and recall, offering a single metric that balances both.

The formulae used for these metrics are given in Equations 1, 2, 3, and 4, respectively.

For Equations 1, 2, and 3, the terms True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) are defined per class. A TP is a sample of a given species that is correctly predicted as that species; an FP is a sample of another species that is incorrectly assigned to it; a TN is a sample of another species that is correctly not assigned to it; and an FN is a sample of the species that is mistakenly assigned to a different class (Ozsari et al., 2024).
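For reference, the standard formulas corresponding to Equations 1-4, matching the definitions above, are:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$

$$\text{Precision} = \frac{TP}{TP + FP} \tag{2}$$

$$\text{Recall} = \frac{TP}{TP + FN} \tag{3}$$

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}$$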

In addition, the Grad-CAM approach (Selvaraju et al., 2017) was employed to examine the specific regions of the images prioritized by the models during inference. Neural networks consist of interconnected layers in which numerous parameters are adjusted during training to process the input data; however, the mechanisms by which outputs are generated from inputs remain opaque, which diminishes confidence in the model. Grad-CAM is a visualization technique used to interpret and understand the decision-making processes of CNNs. It is particularly popular in image classification tasks, where it helps identify the regions of an input image that are significant for a model's prediction. The red regions in a Grad-CAM image indicate the areas on which the model focused the most, signifying that the network makes its inferences by examining these regions; the blue regions indicate the least important areas, on which the network does not concentrate.
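As an illustration, the following is a minimal Grad-CAM sketch in TensorFlow/Keras, consistent with the environment described below; the function and the example layer name are illustrative assumptions, not the exact implementation used in this study:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for a single preprocessed image.

    `last_conv_layer_name` depends on the backbone; for Keras
    DenseNet121 the final concatenation layer is typically named
    "conv5_block16_concat" (an assumption to verify per model).
    """
    # Model mapping the input to the last conv feature maps and predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        class_score = preds[:, class_index]
    # Gradient of the class score with respect to the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients: one importance weight per channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU and normalization.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample and overlay on the image to visualize
```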

Model Architectures and Training Hyperparameters

We evaluated six transfer-learning backbones: DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101, which were initialized with ImageNet (https://www.image-net.org/index.php) weights. In each model, the original classification head was replaced with a global average pooling layer followed by a fully connected softmax layer with five outputs corresponding to the target species. All layers were fine-tuned end-to-end.
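A minimal sketch of this head-replacement setup, assuming the TensorFlow/Keras applications API and shown here for DenseNet121 (the other backbones follow the same pattern):

```python
import tensorflow as tf

NUM_CLASSES = 5  # the five macrofungal species

def build_model(input_size=224):
    # ImageNet-pretrained backbone without its original classification head.
    base = tf.keras.applications.DenseNet121(
        weights="imagenet",
        include_top=False,
        input_shape=(input_size, input_size, 3),
    )
    base.trainable = True  # all layers fine-tuned end-to-end
    # Replacement head: global average pooling + five-way softmax.
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(base.input, outputs)
```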

The images were resized to 224 × 224 pixels for DenseNet121, MobileNetV2, VGG16, and ResNet101, and to 299 × 299 pixels for InceptionV3 and Xception. Preprocessing used the dedicated preprocess_input function of each model. During training, we applied horizontal flipping and a 0.2° rotation as data augmentation.
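The preprocessing and on-the-fly augmentation could be set up as follows; this sketch assumes Keras's ImageDataGenerator and a hypothetical data/train directory with one subfolder per species:

```python
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Horizontal flipping plus the small (0.2-degree) rotation described above.
train_gen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # backbone-specific scaling
    horizontal_flip=True,
    rotation_range=0.2,
)
train_ds = train_gen.flow_from_directory(
    "data/train",               # hypothetical path: one subfolder per class
    target_size=(224, 224),     # 299 x 299 for InceptionV3 and Xception
    batch_size=32,
    class_mode="categorical",
)
```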

Training employed the Adaptive Moment Estimation (Adam) optimizer with an initial learning rate of 1 × 10⁻⁴ and a categorical cross-entropy loss. We trained with a batch size of 32 for 50 epochs, selecting the best checkpoint based on validation loss. We employed the ReduceLROnPlateau scheduling technique (factor 0.1, patience 5, and minimum learning rate 1 × 10⁻⁶) and early stopping (patience 10 and restore_best_weights=True). Experiments were conducted in a GPU-enabled Google Colab environment (Python 3.x; TensorFlow/Keras), with fixed random seeds to enhance reproducibility (Ozsari et al., 2024).
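Under these settings, the training loop could look like the following sketch, reusing build_model and train_ds from the snippets above together with an analogous validation generator val_ds (an assumption):

```python
import tensorflow as tf

model = build_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
callbacks = [
    # Reduce the learning rate when validation loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.1, patience=5, min_lr=1e-6),
    # Stop early and restore the best checkpoint on validation loss.
    tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True),
]
# Batch size (32) is set on the generators.
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=50, callbacks=callbacks)
```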

Methodological Overview and Contributions

This study implements a standardized transfer-learning pipeline for macrofungi identification that (i) benchmarks six widely used backbones (DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101) under identical data splits and a unified training protocol; (ii) employs on-the-fly augmentation (horizontal flip and 0.2° rotation) to mitigate limited data without artificially inflating the sample counts; (iii) integrates Grad-CAM-based visual explanations to verify the attention on morphologically relevant regions; and (iv) establishes a simple CNN baseline for contextual comparison. All experimental settings, including input resolutions and preprocessing, fine-tuning strategy, optimizer, learning rate schedule, batch size, epochs, early stopping, and the hardware/software environments, facilitate reproducibility (Ekinci et al., 2025; Kumru et al., 2025).

Results

Due to the limited number of images, data augmentation was applied during training. This technique increases the quantity of available training data through minor transformations, such as rotation and brightness adjustment, without altering the image content. In the present study, data were augmented using horizontal flipping and a 0.2° rotation. Augmented samples were not stored offline; therefore, all reported dataset counts refer to the original images, while augmentation only increases the effective number of training instances per epoch. Figure 2 presents a sample output for A. muscaria.

With on-the-fly augmentation, a series of experiments was conducted to evaluate and confirm the performance of various transfer-learning based models, including DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101, for automatically predicting five different macrofungi species. Table 2 presents the results for the models.

A CNN model built with basic layers yields only average results (Table 2). Given the limited number of available images, this outcome is expected and underscores the rationale for using pre-trained networks. All fine-tuned models demonstrated substantial effectiveness, achieving results > 90%. Among all metrics, the MobileNetV2 network attained the maximum values, with the InceptionV3 architecture producing comparable results. The ResNet101 network produced lower accuracy values than the other models. Notably, high performance does not guarantee that the inferences were drawn from the correct regions; therefore, Grad-CAM visualizations were generated to analyze the areas on which the models focused for their predictions.
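For completeness, per-class metrics of the kind reported in Table 2 could be computed with scikit-learn as in the following sketch; test_ds is an assumed test generator built like train_ds above, but with shuffle=False so that labels align with prediction order:

```python
import numpy as np
from sklearn.metrics import classification_report

y_prob = model.predict(test_ds)      # class probabilities per test image
y_pred = np.argmax(y_prob, axis=1)   # predicted class indices
print(classification_report(
    test_ds.classes,                 # ground-truth class indices
    y_pred,
    target_names=list(test_ds.class_indices),  # species folder names
))
```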

Specifically, Figure 3 illustrates the macrofungi for which Grad-CAM visualizations were conducted. Figure 4 displays the Grad-CAM images for DenseNet121, Figure 5 for InceptionV3, Figure 6 for MobileNetV2, Figure 7 for Xception, Figure 8 for VGG16, and Figure 9 for ResNet101.

Analysis of the Grad-CAM images for the DenseNet121 architecture reveals that it derives inferences from the correct regions for all three macrofungal images; the blue areas denote regions unrelated to the macrofungi. Although its metric values were high, the InceptionV3 network evidently spreads its attention over the entire image (Figure 5), suggesting that it may base its predictions on irrelevant regions. The Grad-CAM images for the MobileNetV2 network demonstrate that it focuses on the correct regions for the central macrofungus (second image) and partially correct regions for the first and third images. The heatmaps indicate that the Xception network, like DenseNet121, concentrates on macrofungi-related areas (Figure 7). The VGG16 network made accurate predictions for the second and third images, but shifted to areas unrelated to macrofungi in the first image. The ResNet101 model, unlike InceptionV3, focused on the correct regions in the second image, attended to the stem in the third image, and shifted to the knife-related area in the first image. Thus, based on both the Grad-CAM visualizations and the metric values, DenseNet121 was the most successful model.

Discussion

Fungi represent a diverse kingdom of organisms, including yeasts, molds, and macrofungi. They are essential components of various ecosystems, serving as decomposers, symbionts, and pathogens. Recently, DL applications in mycology have gained significant popularity, particularly for image classification, species identification, and disease diagnosis. This study employed the DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101 models to assess the effectiveness of DL techniques in autonomously detecting five distinct macrofungal species. The results, with values ≥ 0.9, demonstrate that these networks are highly efficient in distinguishing between these species. However, high performance metrics do not necessarily ensure that the architectures attend to the appropriate regions; consequently, Grad-CAM visualizations were also employed. Analysis of the Grad-CAM outputs revealed that the networks generally drew inferences from the correct areas. The Grad-CAM images for the DenseNet121 model demonstrated that it drew inferences from the correct regions for all macrofungal species. This result aligns with the findings of Van Horn et al. (2018), who observed that DL models can accurately identify the regions relevant to species classification tasks. Conversely, the InceptionV3 network focused on entire images and, despite high metric values, drew inferences from irrelevant regions. This outcome is consistent with the findings of Wah et al. (2011), who noted that DL models can sometimes derive inferences from irrelevant regions. The MobileNetV2 network focused on the correct regions for the middle macrofungus and partially correct areas for A. phalloides and M. procera. The Xception network, similar to DenseNet121, focused on areas related to the macrofungi. The VGG16 network made accurate predictions for the second and third images, but shifted to regions irrelevant to macrofungi in the first image. The ResNet101 model, unlike InceptionV3, focused on the correct areas of the second image, attended to the stem in the third image, and shifted to the knife-related area in the first image.

The application of DL techniques in classifying macrofungal species, as demonstrated in this study, has shown significant promise. By leveraging advanced architectures such as DenseNet121, InceptionV3, MobileNetV2, Xception, VGG16, and ResNet101, we achieved accuracy rates > 90%, indicating the efficacy of these models in identifying and classifying species based on their visual characteristics. The use of Grad-CAM visualizations provided further insights into those regions of the images that the models focused on. This observation confirmed that DenseNet121 and Xception, in particular, were highly effective in identifying the areas relevant to the species.
The performance of these architectures was assessed based on accuracy, precision, recall, and F1-score metrics. Additionally, Grad-CAM visualizations were generated to pinpoint the regions that the models concentrated on during inference. These results indicated that the networks achieved high effectiveness, with scores ≥ 0.9. The Grad-CAM images demonstrated that the DenseNet121 and Xception architectures focused accurately on the macrofungi.

Despite these positive outcomes, several challenges persist, particularly concerning data availability and diversity. The current dataset, while effective, is limited in size, which restricts the models' ability to realize their full potential. This limitation underscores the need to expand datasets to encompass a broader range of species and more diverse image sets. Larger and more varied datasets would not only enhance model performance but also improve the generalizability of the results, rendering the models more robust across different ecological contexts. Our results are consistent with findings from previous studies on image-based fungal classification, which reported that DL architectures outperform traditional ML approaches (Picek et al., 2022; Yan et al., 2023). However, unlike prior studies that mostly evaluated single CNN models, this study systematically compared six state-of-the-art transfer-learning architectures on macrofungi. Moreover, while earlier research focused only on accuracy-related metrics, performance evaluation in this study was complemented with Grad-CAM visualizations, adding interpretability and biological relevance to the results.

The main contribution of this study lies in the systematic comparison of six transfer-learning architectures for macrofungi classification, which was combined with the Grad-CAM visualizations to validate the model’s focus on biologically relevant regions. In addition, benchmarking against a simple CNN baseline highlighted the methodological advantage of advanced DL models. These contributions together provide a reproducible and interpretable framework that can be adapted to future biodiversity monitoring studies.

Future research should focus on addressing these data limitations by developing and utilizing more comprehensive and diverse datasets. Furthermore, integrating DL models with other computational techniques could further enhance the efficiency and accuracy of species classification, particularly in identifying poisonous macrofungi and supporting mechanized harvesting processes. Such a line of research could be further strengthened by incorporating additional data types, including spore prints and relevant environmental variables. In conclusion, while this study contributes substantially to the automated classification of macrofungal species, it also underscores the necessity for continued research and development. Future studies should build upon these findings by expanding data resources, refining model architectures, and exploring new applications of AI in mycology, ultimately contributing to more effective biodiversity conservation efforts.

Conclusion

This study addresses the core challenge of reliable macrofungal identification with limited datasets, in a context where traditional methods remain inadequate and existing AI-based approaches seldom integrate performance with interpretability. Through a systematic comparison of six state-of-the-art transfer-learning architectures and the integration of Grad-CAM visualization, this study demonstrated high accuracy and biologically meaningful interpretability, providing a methodological framework that advances beyond previous work. Importantly, by highlighting the diagnostic image regions used by the models, this framework not only advances methodological development but also offers practical ease in identifying the fungal species included in the study, thereby supporting both taxonomic accuracy and applied usability. Future research should focus on scaling up this framework by employing larger and more diverse datasets and by enhancing generalizability under data-scarce conditions through advanced approaches such as transformer-based architectures and semi-supervised learning. Moreover, applying these models in field-based contexts, particularly for the reliable identification of poisonous species and for ecological monitoring, would provide significant practical contributions.

Ethics

Ethics Committee Approval: Since the article does not contain any studies with human or animal subjects, ethics committee approval was not required.
Data Sharing Statement: All data are available within the study.
Authorship Contributions: Conceptualization: Ş.Ö., E.K., F.E., and I.A.; Design/methodology: Ş.Ö., E.K., F.E., and T.A.; Execution/investigation: F.E., M.S.G., K.A., and T.A.; Resources/materials: Ş.Ö., E.K., and F.E.; Data acquisition: Ş.Ö. and E.K.; Data analysis/interpretation: Ş.Ö., F.E., and K.A.; Writing - original draft: Ş.Ö. and I.A.; Writing - review & editing/critical revision: E.K., F.E., M.S.G., K.A., and T.A.
Conflict of Interest: The authors have no conflicts of interest to declare.
Funding: The authors declared that this study has received no financial support.
Editor-in-Chief’s Note: Fatih Ekinci is a member of the editorial board of the Trakya University Journal of Natural Sciences. He was not involved in the editorial evaluation or decision-making process for this manuscript.

References

1. Bartlett, P., Eberhardt, U., Schütz, N., & Beker, H. J. (2022). Species determination using AI machine-learning algorithms: Hebeloma as a case study. IMA Fungus, 13 (1), 13. https://doi.org/10.1186/s43008-022-00099-x
2. Chathurika, K., Siriwardena, E., Bandara, H., Perera, G., & Dilshanka, K. (2023). Developing an identification system for different types of edible mushrooms in Sri Lanka using machine learning and image processing. International Journal of Engineering and Management Research, 13 (5), 54-59. https://doi.org/10.31033/ijemr.13.5.9
3. Cheong, P. C. H., Tan, C. S., & Fung, S. Y. (2018). Medicinal mushrooms: Cultivation and pharmaceutical impact. In Biology of macrofungi (pp. 287-304). Springer. https://doi.org/10.1007/978-3-030-02622-6_14
4. Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1800-1807). Honolulu, HI, USA. https://doi.org/10.1109/CVPR.2017.195
5. Chugh, R. M., Mittal, P., Mp, N., Arora, T., Bhattacharya, T., Chopra, H., Cavalu, S., & Gautam, R. K. (2022). Fungal mushrooms: A natural compound with therapeutic applications. Frontiers in Pharmacology, 13, 925387. https://doi.org/10.3389/fphar.2022.925387
6. Das, A. K., Nanda, P. K., Dandapat, P., Bandyopadhyay, S., Gullón, P., Sivaraman, G. K., McClements, D. J., Gullón, B., & Lorenzo, J. M. (2021). Edible mushrooms as functional ingredients for development of healthier and more sustainable muscle foods: A flexitarian approach. Molecules, 26 (9), 2463. https://doi.org/10.3390/molecules26092463
7. de Mattos-Shipley, K. M., Ford, K. L., Alberti, F., Banks, A., Bailey, A. M., & Foster, G. (2016). The good, the bad and the tasty: The many roles of mushrooms. Studies in Mycology, 85 (1), 125-157. https://doi.org/10.1016/j.simyco.2016.11.002
8. De, J., Nandi, S., & Acharya, K. (2022). A review on Blewit mushroom (Lepista sp.) transition from farm to pharm. Journal of Food Processing and Preservation, 46 (11), e17028. https://doi.org/10.1111/jfpp.17028
9. Ekinci, F., Ugurlu, G., Ozcan, G. S., Acici, K., Asuroglu, T., Kumru, E., Guzel, M. S., & Akata, I. (2025). Classification of Mycena and Marasmius species using deep learning models: An ecological and taxonomic approach. Sensors, 25 (6), 1642. https://doi.org/10.3390/s25061642
10. El-Ramady, H., Abdalla, N., Badgar, K., Llanaj, X., Törős, G., Hajdú, P., Eid, Y., & Prokisch, J. (2022). Edible mushrooms for sustainable and healthy human food: Nutritional and medicinal attributes. Sustainability, 14 (9), 4941. https://doi.org/10.3390/su14094941
11. GBIF Secretariat. (2023). GBIF backbone taxonomy [Checklist dataset]. GBIF.org. https://doi.org/10.15468/39omei (Accessed July 21, 2024)
12. Google Colab. (n.d.). Retrieved August 14, 2024, from https://research.google.com/colaboratory/faq.html
13. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778). Las Vegas, NV, USA. https://doi.org/10.1109/CVPR.2016.90
14. Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2261-2269). Honolulu, HI, USA. https://doi.org/10.1109/CVPR.2017.243
15. Hyde, K. D., Xu, J., Rapior, S., Jeewon, R., Lumyong, S., Niego, A. G. T., Abeywickrama, P. D., Aluthmuhandiram, J. V., Brahamanage, R. S., & Brooks, S. (2019). The amazing potential of fungi: 50 ways we can exploit fungi industrially. Fungal Diversity, 97, 1-136. https://doi.org/10.1007/s13225-019-00430-9
16. Jančo, I., Šnirc, M., Hauptvogl, M., Demková, L., Franková, H., Kunca, V., Lošák, T., & Árvay, J. (2021). Mercury in Macrolepiota procera (Scop.) Singer and its underlying substrate—Environmental and health risks assessment. Journal of Fungi, 7 (9), 772. https://doi.org/10.3390/jof7090772
17. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv Preprint, arXiv:1412.6980. https://doi.org/10.48550/arXiv.1412.6980
18. Korkmaz, A. F., Ekinci, F., Altaş, Ş., Kumru, E., Güzel, M. S., & Akata, I. (2025). A deep learning and explainable AI-based approach for the classification of Discomycetes species. Biology, 14 (6), 719. https://doi.org/10.3390/biology14060719
19. Kumru, E., Ugurlu, G., Sevindik, M., Ekinci, F., Güzel, M. S., Acici, K., & Akata, I. (2025). Hybrid deep learning framework for high-accuracy classification of morphologically similar puffball species using CNN and transformer architectures. Biology, 14 (7), 816. https://doi.org/10.3390/biology14070816
20. Li, H., Tian, Y., Menolli Jr, N., Ye, L., Karunarathna, S. C., Perez-Moreno, J., Rahman, M. M., Rashid, M. H., Phengsintham, P., & Rizal, L. (2021). Reviewing the world's edible macrofungi species: A new evidence-based classification system. Comprehensive Reviews in Food Science and Food Safety, 20 (2), 1982-2014. https://doi.org/10.1111/1541-4337.12708
21. Llanaj, X., Törős, G., Hajdú, P., Abdalla, N., El-Ramady, H., Kiss, A., Solberg, S. Ø., & Prokisch, J. (2023). Biotechnological applications of mushroom under the water-energy-food nexus: Crucial aspects and prospects from farm to pharmacy. Foods, 12 (14), 2671. https://doi.org/10.3390/foods12142671
22. Niego, A. G. T., Lambert, C., Mortimer, P., Thongklang, N., Rapior, S., Grosse, M., Schrey, H., Charria-Girón, E., Walker, A., & Hyde, K. D. (2023). The contribution of fungi to the global economy. Fungal Diversity, 121 (1), 95-137. https://doi.org/10.1007/s13225-023-00520-9
23. Niego, A. G. T., Rapior, S., Thongklang, N., Raspé, O., Hyde, K. D., & Mortimer, P. (2023). Reviewing the contributions of macrofungi to forest ecosystem processes and services. Fungal Biology Reviews, 44, 100294. https://doi.org/10.1016/j.fbr.2022.11.002
24. Niego, A. G., Rapior, S., Thongklang, N., Raspé, O., Jaidee, W., Lumyong, S., & Hyde, K. D. (2021). Macrofungi as a nutraceutical source: Promising bioactive compounds and market value. Journal of Fungi, 7 (5), 397. https://doi.org/10.3390/jof7050397
25. Ozsari, S., Kumru, E., Ekinci, F., Akata, I., Guzel, M. S., Acici, K., Ozcan, E., & Asuroglu, T. (2024). Deep learning-based classification of macrofungi: Comparative analysis of advanced models for accurate fungi identification. Sensors, 24 (22), 7189. https://doi.org/10.3390/s24227189
26. Picek, L., Šulc, M., Matas, J., Heilmann-Clausen, J., Jeppesen, T. S., & Lind, E. (2022). Automatic fungi recognition: Deep learning meets mycology. Sensors, 22 (2), 633. https://doi.org/10.3390/s22020633
27. Pilz, D., Norvell, L., Danell, E., & Molina, R. (2003). Ecology and management of commercially harvested mushrooms. United States Department of Agriculture, Forest Service, Pacific Northwest Research Station, General Technical Report. https://doi.org/10.2737/PNW-GTR-576
28. Pinto, S., Barros, L., Sousa, M. J., & Ferreira, I. C. (2013). Chemical characterization and antioxidant properties of Lepista nuda fruiting bodies and mycelia obtained by in vitro culture: Effects of collection habitat and culture media. Food Research International, 51 (2), 496-502. https://doi.org/10.1016/j.foodres.2012.12.009
29. Priyamvada, H., Akila, M., Singh, R. K., Ravikrishna, R., Verma, R., Philip, L., Marathe, R., Sahu, L., Sudheer, K., & Gunthe, S. (2017). Terrestrial macrofungal diversity from the tropical dry evergreen biome of southern India and its potential role in aerobiology. PLoS One, 12 (1), e0169333. https://doi.org/10.1371/journal.pone.0169333
30. Raghavan, K. B. S., & Veezhinathan, K. (2024). Attention guided grad-CAM: An improved explainable artificial intelligence model for infrared breast cancer detection. Multimedia Tools and Applications, 83, 57551-57578. https://doi.org/10.1007/s11042-023-17776-7
31. Roncero-Ramos, I., & Delgado-Andrade, C. (2017). The beneficial role of edible mushrooms in human health. Current Opinion in Food Science, 14, 122-128. https://doi.org/10.1016/j.cofs.2017.04.002
32. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520). Salt Lake City, UT, USA. https://doi.org/10.1109/CVPR.2018.00474
33. Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626). Venice, Italy. https://doi.org/10.1109/ICCV.2017.74
34. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv Preprint, arXiv:1409.1556. https://doi.org/10.48550/arXiv.1409.1556
35. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2818-2826). Las Vegas, NV, USA. https://doi.org/10.1109/CVPR.2016.308
36. Van Horn, G., Mac Aodha, O., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., & Belongie, S. (2018). The iNaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.48550/arXiv.1707.06642
37. Wah, C., Branson, S., Welinder, P., Perona, P., & Belongie, S. (2011). The Caltech-UCSD birds-200-2011 dataset. California Institute of Technology. https://doi.org/10.22002/D1.20092
38. Yan, Z., Liu, H., Li, J., & Wang, Y. (2023). Application of identification and evaluation techniques for edible mushrooms: A review. Critical Reviews in Analytical Chemistry, 53 (3), 634-654. https://doi.org/10.1080/10408347.2021.1969886
39. Ye, F., Yu, X.-D., Wang, Q., & Zhao, P. (2016). Identification of SNPs in a nonmodel macrofungus (Lepista nuda, Basidiomycota) through RAD sequencing. SpringerPlus, 5, 1-7. https://doi.org/10.1186/s40064-016-3459-8
40. Ye, L., Li, H., Mortimer, P. E., Xu, J., Gui, H., Karunarathna, S. C., Kumar, A., Hyde, K. D., & Shi, L. (2019). Substrate preference determines macrofungal biogeography in the Greater Mekong sub-region. Forests, 10 (10), 824. https://doi.org/10.3390/f10100824
41. Yilmaz, I., Ermis, F., Akata, I., & Kaya, E. (2015). A case study: What doses of Amanita phalloides and amatoxins are lethal to humans? Wilderness & Environmental Medicine, 26 (4), 491-496. https://doi.org/10.1016/j.wem.2015.08.003