
A new training strategy for spatial transform networks (STN's)

RiuNet: Institutional Repository of the Universitat Politècnica de València


Navarro Moya, F.; Puchalt-Rodríguez, JC.; Layana-Castro, PE.; García-Garví, A.; Sánchez Salmerón, AJ. (2022). A new training strategy for spatial transform networks (STN's). Neural Computing and Applications. 34(12):10081-10092. https://doi.org/10.1007/s00521-022-06993-0

Please use this identifier to cite or link to this item: http://hdl.handle.net/10251/197778


Item metadata

Title: A new training strategy for spatial transform networks (STN's)
Author(s): Navarro Moya, Francisco; Puchalt-Rodríguez, Joan Carles; Layana-Castro, Pablo Emmanuel; García-Garví, Antonio; Sánchez Salmerón, Antonio José
UPV entity: Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros Industriales - Escola Tècnica Superior d'Enginyers Industrials
Issue date:
Abstract:
[EN] Spatial transform networks (STN) are widely used since they can transform images captured from different viewpoints to obtain an objective image. These networks use an image captured from any viewpoint as input and ... (an illustrative STN sketch follows this metadata block)
Keywords: Spatial transform network, Learning strategy, Caenorhabditis elegans, Correspondence application
Usage rights: Attribution (by)
Source: Neural Computing and Applications (ISSN: 0941-0643)
DOI: 10.1007/s00521-022-06993-0
Publisher: Springer-Verlag
Publisher's version: https://doi.org/10.1007/s00521-022-06993-0
Project code: info:eu-repo/grantAgreement/AEI//RTI2018-094312-B-I00-AR//MONITORIZACION AVANZADA DE COMPORTAMIENTOS DE CAENORHABDITIS ELEGANS, BASADA EN VISION ACTIVA, PARA ANALIZAR FUNCION COGNITIVA Y ENVEJECIMIENTO/
Acknowledgements: This study was also supported by Plan Nacional de I+D under the project RTI2018-094312-B-I00 and by the European FEDER funds. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Type: Article
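
The abstract refers to the generic spatial transformer architecture: a localisation network predicts the parameters of a geometric transformation, and a grid generator plus sampler then warp the input image accordingly. For orientation only, below is a minimal sketch of such a module in PyTorch, loosely following Jaderberg et al. (first entry in the references below). The layer sizes, the single-channel 64x64 input, and the class name SpatialTransformer are illustrative assumptions; this sketch does not implement the new training strategy proposed in the article, which is not reproduced in this record.

# Minimal sketch of a standard spatial transformer (STN) module in PyTorch.
# Hypothetical example for illustration; it is NOT the training strategy
# proposed in the paper, whose details are not included in this record.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTransformer(nn.Module):
    """Localisation network + affine grid sampler for 1-channel 64x64 inputs."""

    def __init__(self):
        super().__init__()
        # Localisation network: predicts the 6 parameters of a 2D affine map.
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(True),
        )
        self.fc_loc = nn.Sequential(
            nn.Linear(10 * 12 * 12, 32), nn.ReLU(True),
            nn.Linear(32, 6),
        )
        # Initialise the last layer to the identity transform for stable training.
        self.fc_loc[2].weight.data.zero_()
        self.fc_loc[2].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        xs = self.localization(x)
        xs = xs.view(xs.size(0), -1)
        theta = self.fc_loc(xs).view(-1, 2, 3)        # affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


if __name__ == "__main__":
    stn = SpatialTransformer()
    out = stn(torch.randn(4, 1, 64, 64))              # batch of 64x64 images
    print(out.shape)                                  # torch.Size([4, 1, 64, 64])

Initialising the final linear layer to the identity transform is the usual choice: before any training, the sampler passes the input through unchanged, and the localisation network only gradually learns to correct the viewpoint.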

References

Jaderberg M, Simonyan K, Zisserman A, Kavukcuoglu K (2016) Spatial transformer networks, pp. 2017–2025. arXiv:1506.02025 [cs.CV]

Li G, Xu S, Liu X, Li L, Wang C (2018) Jersey number recognition with semi-supervised spatial transformer network. In: 2018 IEEE/CVF conference on computer vision and pattern recognition workshops (CVPRW), pp. 1864–18647. https://doi.org/10.1109/CVPRW.2018.00231

Arcos-García Álvaro, Álvarez-García JA, Soria-Morillo LM (2018) Deep neural network for traffic sign recognition systems: an analysis of spatial transformers and stochastic optimisation methods. Neural Netw 99:158–165. https://doi.org/10.1016/j.neunet.2018.01.005

Aubreville M, Krappmann M, Bertram C, Klopfleisch R, Maier A (2017) A guided spatial transformer network for histology cell differentiation. In: Eurographics workshop on visual computing for biology and medicine, pp. 21–25. https://doi.org/10.2312/vcbm.20171233

Lin Y, Wang M, Gu C, Qin J, Bai D, Li J (2018) A cascaded spatial transformer network for oriented equipment detection in thermal images. In: 2018 2nd IEEE conference on energy internet and energy system integration (EI2), pp. 1–5. https://doi.org/10.1109/EI2.2018.8582248

Qian Y, Yang M, Zhao X, Wang C, Wang B (2020) Oriented spatial transformer network for pedestrian detection using fish-eye camera. IEEE Trans Multimed 22(2):421–431. https://doi.org/10.1109/TMM.2019.2929949

Zhang X, Gao T, Gao D (2018) A new deep spatial transformer convolutional neural network for image saliency detection. Des Auto Embed Syst 22:243–256. https://doi.org/10.1007/s10617-018-9209-0

Fang Y, Zhan B, Cai W, Gao S, Hu B (2019) Locality-constrained spatial transformer network for video crowd counting. In: 2019 IEEE international conference on multimedia and expo (ICME), pp. 814–819. https://doi.org/10.1109/ICME.2019.00145

Li S, Günel S, Ostrek M, Ramdya P, Fua P, Rhodin H (2020) Deformation-aware unpaired image translation for pose estimation on laboratory animals. In: 2020 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 13155–13165. https://doi.org/10.1109/CVPR42600.2020.01317

Bhagavatula C, Zhu C, Luu K, Savvides M (2017) Faster than real-time facial alignment: a 3d spatial transformer network approach in unconstrained poses. In: 2017 IEEE international conference on computer vision (ICCV), pp. 4000–4009 . https://doi.org/10.1109/ICCV.2017.429

Puchalt JC, Gonzalez-Rojo JF, Gómez-Escribano AP, Vázquez-Manrique RP, Sánchez-Salmerón AJ (2022) Multiview motion tracking based on a cartesian robot to monitor Caenorhabditis elegans in standard Petri dishes. Sci Rep 12(1):1–11
