
Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria

RiuNet: Institutional Repository of the Universitat Politècnica de València


Company, P.; Otey, J.; Agost, M.; Contero, M.; Camba, J. (2019). Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria. Universal Access in the Information Society. 18(3):675-688. https://doi.org/10.1007/s10209-019-00686-7

Please use this identifier to cite or link to this item: http://hdl.handle.net/10251/160760


Item metadata

Title: Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria
Authors: Company, P.; Otey, J.; Agost, M.J.; Contero, M.; Camba, J.D.
UPV entity: Universitat Politècnica de València. Departamento de Ingeniería Gráfica - Departament d'Enginyeria Gràfica
Issue date: 2019
Abstract:
[EN] Information and Communications Technologies (ICTs) offer teachers new roles for improving learning processes. In this regard, learning rubrics are commonplace. However, the design of these rubrics has focused mainly ...
Keywords: Formative rubrics, Adaptable and adaptive rubrics, E-rubrics, Go/no-go criteria
Rights: All rights reserved
Source:
Universal Access in the Information Society (ISSN: 1615-5289)
DOI: 10.1007/s10209-019-00686-7
Publisher:
Springer Verlag
Publisher's version: https://doi.org/10.1007/s10209-019-00686-7
Project code:
info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/DPI2017-84526-R/ES/IMPLEMENTACION Y VALIDACION DE UN MODELO TEORICO DE LA CALIDAD DE LOS MODELOS CAD EN UN CONTEXTO MBE (MODEL-BASED ENTERPRISE)/
Acknowledgements:
This work was partially supported by Grant DPI2017-84526-R (MINECO/AEI/FEDER, UE), Project "CAL-MBE, Implementation and validation of a theoretical CAD quality model in a Model-Based Enterprise (MBE) context."
Type: Article

