Show simple item record
dc.contributor.author | Schacht, Sigurd | es_ES |
dc.contributor.author | Kamath Barkur, Sudarshan | es_ES |
dc.contributor.author | Lanquillon, Carsten | es_ES |
dc.date.accessioned | 2024-07-18T07:54:39Z | |
dc.date.available | 2024-07-18T07:54:39Z | |
dc.date.issued | 2024-03-12 | |
dc.identifier.isbn | 9788413961569 | |
dc.identifier.uri | http://hdl.handle.net/10251/206325 | |
dc.description.abstract | [EN] Ongoing assessments in a course are crucial for tracking student performance and progress. However, generating and evaluating tests for each lesson and student can be time-consuming. Existing models for generating and evaluating question-answer pairs have had limited success. In recent years, large language models (LLMs) have become available as a service, offering more intelligent answering and evaluation capabilities. This research aims to leverage LLMs for generating questions, model answers, and evaluations while providing valuable feedback to students and reducing the dependency on faculty. We fine-tune existing LLMs and employ prompt engineering to direct the model toward specific tasks using different generative agents: one agent generates questions, another generates model answers, and a third takes the human answers to evaluate them and ensure quality. Human evaluation is conducted through focus-group analysis, and student progress and faculty feedback are tracked. Results demonstrate the potential of LLMs to provide automatic feedback and learning-progress tracking for both students and faculty. In conclusion, this paper demonstrates the versatility of LLMs for various learning tasks, including question generation, model-answer generation, and evaluation, all while providing personalized feedback to students. By identifying and addressing knowledge gaps, LLMs can support continuous evaluation and help students improve their understanding before semester exams. Furthermore, knowledge gaps identified by the agent can be highlighted and addressed through additional classes or support materials, potentially generated by the same model, leading to a more personalized learning experience. | es_ES |
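
The abstract describes a three-agent pipeline: one agent generates a question, a second generates a model answer, and a third evaluates the student's answer against it. The sketch below illustrates how such a pipeline could be wired together; it is not the paper's implementation. The complete() stub, the Assessment container, run_pipeline(), and all prompts are hypothetical placeholders standing in for whichever fine-tuned LLM endpoint and prompt set the authors actually use.

from dataclasses import dataclass
from typing import Callable

def complete(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical stand-in for a call to a fine-tuned LLM endpoint.
    # Returns a canned string so the sketch runs without a model.
    return f"[LLM output for prompt: {user_prompt[:40]}...]"

@dataclass
class Assessment:
    question: str
    model_answer: str
    feedback: str

def run_pipeline(lesson_text: str,
                 get_student_answer: Callable[[str], str]) -> Assessment:
    # Agent 1: generate one question from the lesson material.
    question = complete(
        "You write one exam question per lesson.",
        f"Lesson:\n{lesson_text}\n\nWrite one open question.",
    )
    # Agent 2: generate a model answer to use as a grading reference.
    model_answer = complete(
        "You write concise model answers.",
        f"Question: {question}\nAnswer using only this lesson:\n{lesson_text}",
    )
    # Collect the student's answer (e.g., via a web form or LMS).
    student_answer = get_student_answer(question)
    # Agent 3: evaluate the student's answer against the model answer
    # and return feedback that highlights knowledge gaps.
    feedback = complete(
        "You grade student answers and point out knowledge gaps.",
        f"Question: {question}\nModel answer: {model_answer}\n"
        f"Student answer: {student_answer}\nGive feedback and list the gaps.",
    )
    return Assessment(question, model_answer, feedback)

if __name__ == "__main__":
    result = run_pipeline(
        "Gradient descent iteratively updates parameters against a loss.",
        lambda q: "It updates weights in the direction of the gradient.",
    )
    print(result.feedback)

Keeping generation and evaluation in separate agents, as the abstract suggests, lets each step be prompted, fine-tuned, and quality-checked independently.
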
dc.format.extent | 19 | es_ES |
dc.language | English | es_ES |
dc.publisher | Editorial Universitat Politècnica de València | es_ES |
dc.relation.ispartof | 5th International Conference. Business Meets Technology | |
dc.rights | Attribution - NonCommercial - ShareAlike (by-nc-sa) | es_ES |
dc.subject | LLM | es_ES |
dc.subject | Question Generation | es_ES |
dc.subject | Generative Agents | es_ES |
dc.subject | Personalized Assessments | es_ES |
dc.title | Generative Agents to support students learning progress | es_ES |
dc.type | Book chapter | es_ES |
dc.type | Conference paper | es_ES |
dc.identifier.doi | 10.4995/BMT2023.2023.16750 | |
dc.rights.accessRights | Open access | es_ES |
dc.description.bibliographicCitation | Schacht, S.; Kamath Barkur, S.; Lanquillon, C. (2024). Generative Agents to support students learning progress. Editorial Universitat Politècnica de València. https://doi.org/10.4995/BMT2023.2023.16750 | es_ES |
dc.description.accrualMethod | OCS | es_ES |
dc.relation.conferencename | 5th International Conference. Business Meets Technology | es_ES |
dc.relation.conferencedate | July 13-15, 2023 | es_ES |
dc.relation.conferenceplace | Valencia, Spain | es_ES |
dc.relation.publisherversion | http://ocs.editorial.upv.es/index.php/BMT/BMT2023/paper/view/16750 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.relation.pasarela | OCS\16750 | es_ES |