
Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset

RiuNet: Institutional Repository of the Universitat Politècnica de València

dc.contributor.author Layana-Castro, Pablo Emmanuel es_ES
dc.contributor.author García-Garví, Antonio es_ES
dc.contributor.author Navarro Moya, Francisco es_ES
dc.contributor.author Sánchez Salmerón, Antonio José es_ES
dc.date.accessioned 2023-12-19T19:02:18Z
dc.date.available 2023-12-19T19:02:18Z
dc.date.issued 2023-09 es_ES
dc.identifier.issn 0920-5691 es_ES
dc.identifier.uri http://hdl.handle.net/10251/200935
dc.description.abstract [EN] Skeletonization algorithms are used as basic methods to solve tracking problems, estimate pose, or predict animal group behavior. Traditional skeletonization techniques, based on image processing algorithms, are very sensitive to the shapes of the connected components in the initial segmented image, especially in low-resolution images. Currently, neural networks are an alternative that provides more robust results in the presence of image-based noise. However, training a deep neural network requires a very large and balanced dataset, which is sometimes too expensive or impossible to obtain. This work proposes a new training method based on a custom dataset generated with a synthetic image simulator. This training method was applied to different U-Net neural network architectures to solve the problem of skeletonization using low-resolution images of multiple Caenorhabditis elegans contained in Petri dishes measuring 55 mm in diameter. These U-Net models were trained and validated only with synthetic images; however, they were successfully tested with a dataset of real images. All the U-Net models generalized well to the real dataset, endorsing the proposed learning method, and also gave good skeletonization results in the presence of image-based noise. The best U-Net model achieved a significant improvement of 3.32% with respect to previous work using traditional image processing techniques. es_ES
dc.description.sponsorship ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau and Mr. Ernesto-Jesus Rico-Guardioa annotated worm skeletons. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00, FPI Predoctoral contract PRE2019-088214 and by European FEDER funds. es_ES
dc.language English es_ES
dc.publisher Springer-Verlag es_ES
dc.relation.ispartof International Journal of Computer Vision es_ES
dc.rights Attribution (by) es_ES
dc.subject Synthetic dataset es_ES
dc.subject Low-resolution image es_ES
dc.subject U-net es_ES
dc.subject Skeletonizing es_ES
dc.subject End points es_ES
dc.subject Caenorhabditis elegans es_ES
dc.subject.classification INGENIERIA DE SISTEMAS Y AUTOMATICA es_ES
dc.title Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset es_ES
dc.type Article es_ES
dc.identifier.doi 10.1007/s11263-023-01818-6 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-094312-B-I00/ES/MONITORIZACION AVANZADA DE COMPORTAMIENTOS DE CAENORHABDITIS ELEGANS, BASADA EN VISION ACTIVA, PARA ANALIZAR FUNCION COGNITIVA Y ENVEJECIMIENTO/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI//PRE2019-088214//AYUDA PREDOCTORAL AEI-LAYANA CASTRO. PROYECTO: MONITORIZACION AVANZADA DE COMPORTAMIENTOS DE CAENORHABDITIS ELEGANS, BASADA EN VISION ACTIVA, PARA ANALIZAR FUNCION COGNITIVA Y ENVEJECIMIENTO/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/NIH//P40 OD010440/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros Industriales - Escola Tècnica Superior d'Enginyers Industrials es_ES
dc.description.bibliographicCitation Layana-Castro, PE.; García-Garví, A.; Navarro Moya, F.; Sánchez Salmerón, AJ. (2023). Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset. International Journal of Computer Vision. 131(9):2408-2424. https://doi.org/10.1007/s11263-023-01818-6 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion https://doi.org/10.1007/s11263-023-01818-6 es_ES
dc.description.upvformatpinicio 2408 es_ES
dc.description.upvformatpfin 2424 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 131 es_ES
dc.description.issue 9 es_ES
dc.relation.pasarela S\495284 es_ES
dc.contributor.funder Archer Daniels Midland es_ES
dc.contributor.funder AGENCIA ESTATAL DE INVESTIGACION es_ES
dc.contributor.funder European Regional Development Fund es_ES
dc.contributor.funder National Institutes of Health, USA es_ES
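
The abstract above describes the core technique: U-Net variants trained and validated purely on simulator-generated image/skeleton pairs, then tested on real low-resolution images of multiple worms. Below is a minimal PyTorch sketch of that training setup, not the authors' implementation: the MiniUNet depth and channel counts, the per-pixel binary cross-entropy loss, and the random tensors standing in for the simulator's images and skeleton masks are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Toy 2-level U-Net: encoder, bottleneck, one skip connection, 1-channel logits."""
    def __init__(self, ch=16):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1 = block(1, ch)
        self.enc2 = block(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(2 * ch, ch, kernel_size=2, stride=2)
        self.dec1 = block(2 * ch, ch)   # input = upsampled features + skip connection
        self.head = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # (B, ch, H, W)
        e2 = self.enc2(self.pool(e1))                        # (B, 2ch, H/2, W/2)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # (B, ch, H, W)
        return self.head(d1)                                 # skeleton logits

model = MiniUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-pixel skeleton vs. background

for step in range(100):
    # Random tensors stand in for simulator output: grayscale multi-worm
    # images and their ground-truth skeleton masks (both assumptions here).
    imgs = torch.rand(8, 1, 128, 128)
    masks = (torch.rand(8, 1, 128, 128) > 0.98).float()
    opt.zero_grad()
    loss = loss_fn(model(imgs), masks)
    loss.backward()
    opt.step()
```

In the paper's setting, the random stand-in batches would be replaced by samples drawn from the synthetic dataset, and the trained model would then be evaluated against the annotated real-image test set.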

