Show simple item record
dc.contributor.author | Company, P. | es_ES |
dc.contributor.author | Otey, J. | es_ES |
dc.contributor.author | Agost, M.J. | es_ES |
dc.contributor.author | Contero, M. | es_ES |
dc.contributor.author | Camba, J.D. | es_ES |
dc.date.accessioned | 2021-02-05T04:31:19Z | |
dc.date.available | 2021-02-05T04:31:19Z | |
dc.date.issued | 2019-08 | es_ES |
dc.identifier.issn | 1615-5289 | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/160760 | |
dc.description.abstract | [EN] Information and Communications Technologies (ICTs) offer new roles to teachers to improve learning processes. In this regard, learning rubrics are commonplace. However, the design of these rubrics has focused mainly on scoring (summative rubrics), whereas formative rubrics have received significantly less attention. ICTs make electronic rubrics (e-rubrics) possible, enabling dynamic and interactive functionalities that facilitate the adaptable and adaptive delivery of content. In this paper, we present a case study that examines three characteristics that make formative rubrics more adaptable and adaptive: criteria dichotomization, weighted evaluation criteria, and go/no-go criteria. A new approach to the design of formative rubrics is introduced, taking advantage of ICTs, where dichotomization and weighted criteria are combined with the use of go/no-go criteria. The approach is discussed as a method to better guide the learner while adjusting to the student's assimilation pace. Two types of go/no-go criteria (hard and soft) are studied and experimentally validated in a computer-aided design assessment context. Bland-Altman plots are constructed and discussed to further illuminate this topic. | es_ES |
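The scoring scheme described in the abstract (dichotomized, weighted criteria combined with hard and soft go/no-go gates) can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' implementation: the Criterion structure, the gate labels, the example criteria, and the 0.5 cap for failed soft gates are all assumptions.

# Minimal sketch of weighted rubric scoring with go/no-go gates.
# All names, weights, and the soft-gate cap are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float       # relative importance in the weighted score
    met: bool           # dichotomized outcome: satisfied or not
    gate: str = "none"  # "hard" stops grading on failure; "soft" caps the score

def score_rubric(criteria):
    total = sum(c.weight for c in criteria)
    earned = sum(c.weight for c in criteria if c.met)
    for c in criteria:
        if not c.met and c.gate == "hard":
            # Hard go/no-go: a failure blocks the assessment entirely.
            return 0.0, f"stopped at hard criterion: {c.name}"
    score = earned / total
    if any(not c.met and c.gate == "soft" for c in criteria):
        # Soft go/no-go: a failure caps the score (assumed cap of 0.5).
        score = min(score, 0.5)
    return score, "ok"

criteria = [
    Criterion("Model regenerates without errors", 3.0, True, "hard"),
    Criterion("Design intent captured in feature tree", 2.0, False, "soft"),
    Criterion("Consistent feature naming", 1.0, True),
]
print(score_rubric(criteria))  # -> (0.5, 'ok'): 4/6 earned, capped by the soft gate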
dc.description.sponsorship | This work was partially supported by Grant DPI2017-84526-R (MINECO/AEI/FEDER, UE), Project "CAL-MBE, Implementation and validation of a theoretical CAD quality model in a Model-Based Enterprise (MBE) context." | es_ES |
dc.language | English | es_ES |
dc.publisher | Springer Verlag | es_ES |
dc.relation.ispartof | Universal Access in the Information Society | es_ES |
dc.rights | All rights reserved | es_ES |
dc.subject | Formative rubrics | es_ES |
dc.subject | Adaptable and adaptive rubrics | es_ES |
dc.subject | E-rubrics | es_ES |
dc.subject | Go/no-go criteria | es_ES |
dc.subject.classification | EXPRESION GRAFICA EN LA INGENIERIA | es_ES |
dc.title | Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1007/s10209-019-00686-7 | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/DPI2017-84526-R/ES/IMPLEMENTACION Y VALIDACION DE UN MODELO TEORICO DE LA CALIDAD DE LOS MODELOS CAD EN UN CONTEXTO MBE (MODEL-BASED ENTERPRISE)/ | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Departamento de Ingeniería Gráfica - Departament d'Enginyeria Gràfica | es_ES |
dc.description.bibliographicCitation | Company, P.; Otey, J.; Agost, M.; Contero, M.; Camba, J. (2019). Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria. Universal Access in the Information Society. 18(3):675-688. https://doi.org/10.1007/s10209-019-00686-7 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | https://doi.org/10.1007/s10209-019-00686-7 | es_ES |
dc.description.upvformatpinicio | 675 | es_ES |
dc.description.upvformatpfin | 688 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 18 | es_ES |
dc.description.issue | 3 | es_ES |
dc.relation.pasarela | S\408784 | es_ES |
dc.contributor.funder | European Regional Development Fund | es_ES |
dc.contributor.funder | Agencia Estatal de Investigación | es_ES |
dc.description.references | Popham, W.J.: What’s wrong—and what’s right—with rubrics. Educ. Leadersh. 55(2), 72–75 (1997) | es_ES |
dc.description.references | Educational Research Service: Focus on: Developing and using instructional rubrics. Educational Research Service (2004) | es_ES |
dc.description.references | Panadero, E., Jonsson, A.: The use of scoring rubrics for formative assessment purposes revisited: a review. Educ. Res. Rev. 9, 129–144 (2013) | es_ES |
dc.description.references | Reddy, Y.M., Andrade, H.: A review of rubric use in higher education. Assess. Eval. High. Educ. 35(4), 435–448 (2010) | es_ES |
dc.description.references | Company, P., Contero, M., Otey, J., Plumed, R.: Approach for developing coordinated rubrics to convey quality criteria in MCAD training. Comput. Aided Des. 63, 101–117 (2015) | es_ES |
dc.description.references | Company, P., Contero, M., Otey, J., Camba, J.D., Agost, M.J., Perez-Lopez, D.: Web-Based system for adaptable rubrics: case study on CAD assessment. Educ Technol Soc 20(3), 24–41 (2017) | es_ES |
dc.description.references | Tierney, R., Simon, M.: What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Pract. Assess. Res. Eval. 9(2) (2004). http://www.pareonline.net | es_ES |
dc.description.references | Likert, R.: A technique for the measurement of attitudes. Arch. Psychol. 22(140), 55 (1932) | es_ES |
dc.description.references | Rohrmann, B.: Verbal qualifiers for rating scales: Sociolinguistic considerations and psychometric data. Project Report, University of Melbourne/Australia (2007) | es_ES |
dc.description.references | Fluckiger, J.: Single point rubric: a tool for responsible student self-assessment. Delta Kappa Gamma Bull. 76(4), 18–25 (2010) | es_ES |
dc.description.references | Estell, J. K., Sapp, H. M., Reeping, D.: Work in progress: Developing single point rubrics for formative assessment. In: ASEE’s 123rd annual conference and exposition, New Orleans, LA, USA, June 26–29. Paper ID #14595 (2016) | es_ES |
dc.description.references | Jonsson, A., Svingby, G.: The use of scoring rubrics: reliability, validity and educational consequences. Educ. Res. Rev. 2, 130–144 (2007) | es_ES |
dc.description.references | Georgiadou, E., Triantafillou, E., Economides, A.A.: Evaluation parameters for computer-adaptive testing. Br. J. Edu. Technol. 37(2), 261–278 (2006) | es_ES |
dc.description.references | Company, P., Otey, J., Contero, M., Agost, M.J., Almiñana, A.: Implementation of adaptable rubrics for CAD model quality formative assessment purposes. Int. J. Eng. Educ. 32(2A), 749–761 (2016) | es_ES |
dc.description.references | Otey, J.: A contribution to conveying quality criteria in mechanical CAD models and assemblies through rubrics and comprehensive design intent qualification. Ph.D. thesis, Doctoral School of Universitat Politècnica de València (2017) | es_ES |
dc.description.references | Watson, P.F., Petrie, A.: Method agreement analysis: a review of correct methodology. Theriogenology 73(9), 1167–1179 (2010) | es_ES |
dc.description.references | Kottner, J., Streiner, D.L.: The difference between reliability and agreement. J. Clin. Epidemiol. 64(6), 701–702 (2011) | es_ES |
dc.description.references | McLaughlin, P.: Testing agreement between a new method and the gold standard—how do we test. J. Biomech. 46, 2757–2760 (2013) | es_ES |
dc.description.references | Costa-Santos, C., Bernardes, J., Ayres-de-Campos, D., Costa, A., Costa, C.: The limits of agreement and the intraclass correlation coefficient may be inconsistent in the interpretation of agreement. J. Clin. Epidemiol. 64(3), 264–269 (2011) | es_ES |
dc.description.references | Chen, C.C., Barnhart, H.X.: Assessing agreement with intraclass correlation coefficient and concordance correlation coefficient for data with repeated measures. Comput. Stat. Data Anal. 60, 132–145 (2013) | es_ES |
dc.description.references | Bland, J.M., Altman, D.: Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet 327(8476), 307–310 (1986) | es_ES |
dc.description.references | Van Stralen, K.J., Jager, K.J., Zoccali, C., Dekker, F.W.: Agreement between methods. Kidney Int. 74(9), 1116–1120 (2008) | es_ES |
dc.description.references | Beckstead, J.W.: Agreement, reliability, and bias in measurement: commentary on Bland and Altman (1986; 2010). Int. J. Nurs. Stud. 48, 134–135 (2011) | es_ES |
dc.description.references | Bland, J.M., Altman, D.: Measuring agreement in method comparison studies. Stat. Methods Med. Res. 8, 135–160 (1999) | es_ES |
dc.description.references | Giavarina, D.: Understanding Bland–Altman analysis. Biochem. Med. 25(2), 141–151 (2015) | es_ES |
dc.description.references | GraphPad: Interpreting results: Bland–Altman. Retrieved from https://www.graphpad.com/guides/prism/7/statistics/bland-altman_results.htm?toc=0&printWindow (1995) | es_ES |
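Several of the references above (Bland and Altman 1986, 1999; Giavarina 2015; the GraphPad guide) cover the Bland-Altman agreement analysis that the abstract uses to validate the go/no-go criteria. As a minimal sketch of the bias and 95% limits of agreement behind such a plot, assuming made-up paired scores rather than the study's data:

# Minimal sketch of a Bland-Altman analysis; the paired scores are invented.
import numpy as np

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD).
    # In the plot, each pair's mean goes on the x-axis, its difference on the y-axis.
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired scores: the same student work rated by two rubric variants.
teacher = [7.5, 8.0, 6.5, 9.0, 7.0]
e_rubric = [7.0, 8.5, 6.0, 9.5, 7.5]
bias, (lo, hi) = bland_altman(teacher, e_rubric)
print(f"bias = {bias:.2f}; 95% limits of agreement = ({lo:.2f}, {hi:.2f})")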