Alexandre, M. (2019). Pytorch-UNet. Code: https://github.com/milesial/Pytorch-UNet.
Baheti, B., Innani, S., Gajre, S., et al. (2020). Eff-unet: A novel architecture for semantic segmentation in unstructured environment. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, Seattle, WA, USA, pp. 1473–1481. https://doi.org/10.1109/CVPRW50498.2020.00187
Bargsten, L., & Schlaefer, A. (2020). Specklegan: a generative adversarial network with an adaptive speckle layer to augment limited training data for ultrasound image processing. International Journal of Computer Assisted Radiology and Surgery, 15(9), 1427–1436. https://doi.org/10.1007/s11548-020-02203-1
Biron, D., & Haspel, G. (Eds.). (2015). C. elegans. Springer Science+Business Media, New York. https://doi.org/10.1007/978-1-4939-2842-2
Cao, K., & Zhang, X. (2020). An improved res-unet model for tree species classification using airborne high-resolution images. Remote Sensing. https://doi.org/10.3390/rs12071128
Chen, L., Strauch, M., Daub, M., et al (2020) A cnn framework based on line annotations for detecting nematodes in microscopic images. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, Iowa City, IA, USA, pp. 508–512. https://doi.org/10.1109/ISBI45749.2020.9098465
Chen, Z., Ouyang, W., Liu, T., et al. (2021). A shape transformation-based dataset augmentation framework for pedestrian detection. International Journal of Computer Vision, 129(4), 1121–1138. https://doi.org/10.1007/s11263-020-01412-0
Conn, P. M. (Ed.). (2017). Animal models for the study of human disease. Academic Press.
Dewi, C., Chen, R. C., Liu, Y. T., et al. (2021). Yolo v4 for advanced traffic sign recognition with synthetic training data generated by various GAN. IEEE Access, 9, 97228–97242. https://doi.org/10.1109/ACCESS.2021.3094201
Di Rosa, G., Brunetti, G., Scuto, M., et al. (2020). Healthspan enhancement by olive polyphenols in C. elegans wild type and Parkinson’s models. International Journal of Molecular Sciences. https://doi.org/10.3390/ijms21113893
Doshi, K. (2019) Synthetic image augmentation for improved classification using generative adversarial networks. arXiv preprint arXiv:1907.13576.
García Garví, A., Puchalt, J. C., Layana Castro, P. E., et al. (2021). Towards lifespan automation for Caenorhabditis elegans based on deep learning: Analysing convolutional and recurrent neural networks for dead or live classification. Sensors. https://doi.org/10.3390/s21144943
Hahm, J. H., Kim, S., DiLoreto, R., et al. (2015). C. elegans maximum velocity correlates with healthspan and is maintained in worms with an insulin receptor mutation. Nature Communications, 6(1), 1–7. https://doi.org/10.1038/ncomms9919
Han, L., Tao, P., & Martin, R. R. (2019). Livestock detection in aerial images using a fully convolutional network. Computational Visual Media, 5(2), 221–228. https://doi.org/10.1007/s41095-019-0132-5
Hebert, L., Ahamed, T., Costa, A. C., et al. (2021). Wormpose: Image synthesis and convolutional networks for pose estimation in C. elegans. PLOS Computational Biology, 17(4), 1–20. https://doi.org/10.1371/journal.pcbi.1008914
Hinterstoisser, S., Pauly, O., Heibel, H., et al (2019) An annotation saved is an annotation earned: Using fully synthetic training for object detection. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, Seoul, Korea (South), pp. 2787–2796. https://doi.org/10.1109/ICCVW.2019.00340
Huang, H., Lin, L., Tong, R., et al (2020) Unet 3+: A full-scale connected unet for medical image segmentation. In: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Barcelona, Spain, pp. 1055–1059. https://doi.org/10.1109/ICASSP40776.2020.9053405
Ioffe, S., Szegedy, C. (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: F. Bach, D. Blei (eds) Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, vol 37. PMLR, Lille, France, pp. 448–456
Iqbal, H. (2018). HarisIqbal88/PlotNeuralNet v1.0.0. Code: https://github.com/HarisIqbal88/PlotNeuralNet.
Isensee, F., Jaeger, P. F., Kohl, S. A., et al. (2021). nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2), 203–211. https://doi.org/10.1038/s41592-020-01008-z
Javer, A., Currie, M., Lee, C. W., et al. (2018). An open-source platform for analyzing and sharing worm-behavior data. Nature Methods, 15(9), 645–646. https://doi.org/10.1038/s41592-018-0112-1
Javer, A., Brown, A. E., Kokkinos, I., et al. (2019). Identification of C. elegans strains using a fully convolutional neural network on behavioural dynamics. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops, vol. 11134. Springer, Cham. https://doi.org/10.1007/978-3-030-11024-6_35
Jung, S. K., Aleman-Meza, B., Riepe, C., et al. (2014). Quantworm: A comprehensive software package for Caenorhabditis elegans phenotypic assays. PLOS ONE, 9(1), 1–9. https://doi.org/10.1371/journal.pone.0084830
Koopman, M., Peter, Q., Seinstra, R. I., et al. (2020). Assessing motor-related phenotypes of Caenorhabditis elegans with the wide field-of-view nematode tracking platform. Nature Protocols, 15(6), 2071–2106. https://doi.org/10.1038/s41596-020-0321-9
Koul, A., Ganju, S., Kasam, M. (2019). Practical Deep Learning for Cloud, Mobile and Edge: Real-World AI and Computer Vision Projects Using Python, Keras and TensorFlow. O’Reilly Media, Incorporated. https://www.oreilly.com/library/view/practical-deep-learning/9781492034858/
Kumar, S., Egan, B. M., Kocsisova, Z., et al. (2019). Lifespan extension in C. elegans caused by bacterial colonization of the intestine and subsequent activation of an innate immune response. Developmental Cell, 49(1), 100-117.e6. https://doi.org/10.1016/j.devcel.2019.03.010
Layana Castro, P. E., Puchalt, J. C., & Sánchez-Salmerón, A. J. (2020). Improving skeleton algorithm for helping Caenorhabditis elegans trackers. Scientific Reports, 10(1), 22247. https://doi.org/10.1038/s41598-020-79430-8
Layana Castro, P. E., Puchalt, J. C., García Garví, A., et al. (2021). Caenorhabditis elegans multi-tracker based on a modified skeleton algorithm. Sensors. https://doi.org/10.3390/s21165622
Le, K. N., Zhan, M., Cho, Y., et al. (2020). An automated platform to monitor long-term behavior and healthspan in Caenorhabditis elegans under precise environmental control. Communications Biology, 3(1), 1–13. https://doi.org/10.1038/s42003-020-1013-2
Li, H., Fang, J., Liu, S., et al. (2020a). Cr-unet: A composite network for ovary and follicle segmentation in ultrasound images. IEEE Journal of Biomedical and Health Informatics, 24(4), 974–983. https://doi.org/10.1109/JBHI.2019.2946092
Li, S., Günel, S., Ostrek, M., et al. (2020b) Deformation-aware unpaired image translation for pose estimation on laboratory animals. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Seattle, WA, USA, pp. 13155–13165. https://doi.org/10.1109/CVPR42600.2020.01317
Liu, X., Zhou, T., Lu, M., et al. (2020). Deep learning for ultrasound localization microscopy. IEEE Transactions on Medical Imaging, 39(10), 3064–3078. https://doi.org/10.1109/TMI.2020.2986781
Mais, L., Hirsch, P., Kainmueller, D. (2020). Patchperpix for instance segmentation. In: European Conference on Computer Vision (ECCV), vol. 12370. Springer, Cham, pp. 288–304. https://doi.org/10.1007/978-3-030-58595-2_18
Mane, M. R., Deshmukh, A. A., Iliff, A. J. (2020). Head and tail localization of C. elegans. arXiv preprint arXiv:2001.03981. https://doi.org/10.48550/arXiv.2001.03981
Mayershofer, C., Ge, T., Fottner, J. (2021). Towards fully-synthetic training for industrial applications. In: LISS 2020. Springer, Singapore, pp. 765–782. https://doi.org/10.1007/978-981-33-4359-7_53
McManigle, J. E., Bartz, R. R., Carin, L. (2020). Y-net for chest x-ray preprocessing: Simultaneous classification of geometry and segmentation of annotations. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC). IEEE, Montreal, QC, Canada, pp. 1266–1269. https://doi.org/10.1109/EMBC44109.2020.9176334
Moradi, S., Oghli, M. G., Alizadehasl, A., et al. (2019). Mfp-unet: A novel deep learning based approach for left ventricle segmentation in echocardiography. Physica Medica, 67, 58–69. https://doi.org/10.1016/j.ejmp.2019.10.001
Olsen, A., & Gill, M. S. (Eds.). (2017). Ageing: Lessons from C. elegans. Springer International Publishing, Switzerland. https://doi.org/10.1007/978-3-319-44703-2
Padubidri, C., Kamilaris, A., Karatsiolis, S., et al. (2021). Counting sea lions and elephants from aerial photography using deep learning with density maps. Animal Biotelemetry, 9(1), 1–10. https://doi.org/10.1186/s40317-021-00247-x
Pashevich, A., Strudel, R., Kalevatykh, I., et al (2019) Learning to augment synthetic images for sim2real policy transfer. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Macau, China, pp. 2651–2657. https://doi.org/10.1109/IROS40897.2019.8967622.
Pitt, J. N., Strait, N. L., Vayndorf, E. M., et al. (2019). Wormbot, an open-source robotics platform for survival and behavior analysis in C. elegans. GeroScience, 41(6), 961–973. https://doi.org/10.1007/s11357-019-00124-9
Plebani, E., Biscola, N. P., Havton, L. A., et al. (2022). High-throughput segmentation of unmyelinated axons by deep learning. Scientific Reports, 12(1), 1–16. https://doi.org/10.1038/s41598-022-04854-3
Puchalt, J. C., Sánchez-Salmerón, A. J., Martorell Guerola, P., et al. (2019). Active backlight for automating visual monitoring: An analysis of a lighting control technique for Caenorhabditis elegans cultured on standard petri plates. PLOS ONE, 14(4), 1–18. https://doi.org/10.1371/journal.pone.0215548
Puchalt, J. C., Layana Castro, P. E., & Sánchez-Salmerón, A. J. (2020). Reducing results variance in lifespan machines: An analysis of the influence of vibrotaxis on wild-type Caenorhabditis elegans for the death criterion. Sensors. https://doi.org/10.3390/s20215981
Puchalt, J. C., Sánchez-Salmerón, A. J., Eugenio, I., et al. (2021). Small flexible automated system for monitoring Caenorhabditis elegans lifespan based on active vision and image processing techniques. Scientific Reports. https://doi.org/10.1038/s41598-021-91898-6
Puchalt, J. C., Gonzalez-Rojo, J. F., Gómez-Escribano, A. P., et al. (2022). Multiview motion tracking based on a cartesian robot to monitor Caenorhabditis elegans in standard petri dishes. Scientific Reports, 12(1), 1–11. https://doi.org/10.1038/s41598-022-05823-6
Qamar, S., Jin, H., Zheng, R., et al. (2020). A variant form of 3d-unet for infant brain segmentation. Future Generation Computer Systems, 108, 613–623. https://doi.org/10.1016/j.future.2019.11.021
Rizvandi, N. B., Pizurica, A., Philips, W. (2008a). Machine vision detection of isolated and overlapped nematode worms using skeleton analysis. In: 2008 15th IEEE International Conference on Image Processing. IEEE, San Diego, CA, USA, pp. 2972–2975. https://doi.org/10.1109/ICIP.2008.4712419
Rizvandi, N. B., Pižurica, A., Rooms, F. (2008b). Skeleton analysis of population images for detection of isolated and overlapped nematode C. elegans. In: 2008 16th European Signal Processing Conference. IEEE, Lausanne, Switzerland, pp. 1–5.
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), vol. 9351. Springer, Cham, pp. 234–241.
Schraml, D. (2019). Physically based synthetic image generation for machine learning: a review of pertinent literature. In: Photonics and Education in Measurement Science 2019, International Society for Optics and Photonics, Jena, Germany, p. 111440J. https://doi.org/10.1117/12.2533485
Stiernagle, T. (2006). Maintenance of C. elegans. WormBook. https://doi.org/10.1895/wormbook.1.101.1. https://www.ncbi.nlm.nih.gov/books/NBK19649/?report=classic
Tang, P., Liang, Q., Yan, X., et al. (2019). Efficient skin lesion segmentation using separable-unet with stochastic weight averaging. Computer Methods and Programs in Biomedicine, 178, 289–301. https://doi.org/10.1016/j.cmpb.2019.07.005
Trebing, K., Stanczyk, T., & Mehrkanoon, S. (2021). Smaat-unet: Precipitation nowcasting using a small attention-unet architecture. Pattern Recognition Letters, 145, 178–186. https://doi.org/10.1016/j.patrec.2021.01.036
Tschandl, P., Sinz, C., & Kittler, H. (2019). Domain-specific classification-pretrained fully convolutional network encoders for skin lesion segmentation. Computers in Biology and Medicine, 104, 111–116. https://doi.org/10.1016/j.compbiomed.2018.11.010
Tsibidis, G. D., & Tavernarakis, N. (2007). Nemo: a computational tool for analyzing nematode locomotion. BMC Neuroscience. https://doi.org/10.1186/1471-2202-8-86
Uhlmann, V., Unser, M. (2015) Tip-seeking active contours for bioimage segmentation. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE, Brooklyn, NY, USA, pp. 544–547. https://doi.org/10.1109/ISBI.2015.7163931.
Wang, D., Lu, Z., Bao, Z. (2019). Augmenting C. elegans microscopic dataset for accelerated pattern recognition. arXiv preprint arXiv:1906.00078. https://doi.org/10.48550/arXiv.1906.00078
Wang, L., Kong, S., Pincus, Z., et al. (2020). Celeganser: Automated analysis of nematode morphology and age. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, Seattle, WA, USA, pp. 4164–4173. https://doi.org/10.1109/CVPRW50498.2020.00492
Wiehman, S., de Villiers, H. (2016). Semantic segmentation of bioimages using convolutional neural networks. In: 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, Vancouver, BC, Canada, pp. 624–631, https://doi.org/10.1109/IJCNN.2016.7727258.
Wiles, O., & Zisserman, A. (2019). Learning to predict 3d surfaces of sculptures from single and multiple views. International Journal of Computer Vision, 127(11), 1780–1800. https://doi.org/10.1007/s11263-018-1124-0
Winter, P. B., Brielmann, R. M., Timkovich, N. P., et al. (2016). A network approach to discerning the identities of C. elegans in a free moving population. Scientific Reports, 6, 34859. https://doi.org/10.1038/srep34859
Wählby, C., Kamentsky, L., Liu, Z., et al. (2012). An image analysis toolbox for high-throughput C. elegans assays. Nature Methods, 9, 714–716. https://doi.org/10.1038/nmeth.1984
Yu, C. C. J., Raizen, D. M., & Fang-Yen, C. (2014). Multi-well imaging of development and behavior in Caenorhabditis elegans. Journal of Neuroscience Methods, 223, 35–39. https://doi.org/10.1016/j.jneumeth.2013.11.026
Yu, X., Creamer, M. S., Randi, F., et al. (2021). Fast deep neural correspondence for tracking and identifying neurons in C. elegans using semi-synthetic training. eLife, 10, e66410. https://doi.org/10.7554/eLife.66410
Zhao, X., Yuan, Y., Song, M., et al. (2019). Use of unmanned aerial vehicle imagery and deep learning unet to extract rice lodging. Sensors. https://doi.org/10.3390/s19183859