Publications

Explainable Feedback for Learning Based on Rubric-Based Multimodal Assessment Analytics with AI (2024)

Abstract

Providing timely formative feedback to students is very important to support self-regulated learning and deep learning strategies. Feedback has been shown to increase student engagement, satisfaction and learning outcomes, especially in generative learning tasks such as ePortfolios and other forms of multimodal composition. However, the provision of detailed formative feedback places high demands on teachers’ resources. It would therefore be highly beneficial if Large Language Models (LLMs) could be used to support the feedback process. This paper first describes a general architecture for multimodal formative assessment analysis and feedback generation. The architecture is based on assessment rubrics, which are used to build task-specific AI analysis pipelines that generate explainable assessment metrics, from which helpful feedback is then produced. An example of a feedback pipeline for student video submissions in an ePortfolio is given, along with a prompting chain for feedback generation. The paper describes further steps necessary to evaluate and optimise this process in real classroom scenarios…
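
The abstract mentions a prompting chain for feedback generation but does not reproduce it. As a minimal sketch of what such a chain could look like, the following Python snippet chains two prompts over rubric-based metrics; the rubric names, metric values and the `llm` callable are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a two-step prompting chain that turns explainable
# rubric metrics into formative feedback. The rubric names, metric values
# and the `llm` callable are illustrative assumptions, not from the paper.
from typing import Callable

def feedback_chain(metrics: dict[str, float], llm: Callable[[str], str]) -> str:
    # Step 1: have the model interpret the assessment metrics per rubric.
    analysis_prompt = (
        "You are a teaching assistant. Interpret these rubric scores (0-1) "
        "for a student video submission and list strengths and weaknesses:\n"
        + "\n".join(f"- {rubric}: {score:.2f}" for rubric, score in metrics.items())
    )
    analysis = llm(analysis_prompt)

    # Step 2: turn the interpretation into constructive formative feedback.
    feedback_prompt = (
        "Based on the following analysis, write short, encouraging formative "
        "feedback for the student with one concrete next step:\n" + analysis
    )
    return llm(feedback_prompt)

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; replace with a real client.
    stub = lambda prompt: f"[model output for: {prompt[:40]}...]"
    print(feedback_chain({"audio_quality": 0.8, "argument_structure": 0.5}, stub))
```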

Publication

Wolf, K. D., Maya, F. & Heilmann, L. (2024)

Exploratory and Confirmatory Prompt Engineering (2024)

Abstract

In software development, components utilizing large language models (LLMs) can be easily deployed for specific tasks. LLMs are particularly useful for tasks that are expensive to code manually. However, for successful integration, the generated outputs must meet specific requirements for further processing. This paper introduces an evaluation instrument for exploratory and confirmatory prompt engineering, utilizing prompt templates. A direct evaluation methodology is presented to quantitatively assess prompt outputs. Additionally, a methodology is introduced where LLMs generate ratings that are evaluated for human alignment. Based on the results, the most promising prompt templates can be identified. The evaluation instrument introduced in this paper should be considered when designing software components that utilize high-quality LLM-generated content.
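
As a rough illustration of the confirmatory setting described above, the sketch below scores candidate prompt templates by how well their LLM-generated ratings align with human ratings; the template names, the ratings and the choice of Pearson's r as the alignment metric are assumptions for this demo, not the paper's instrument.

```python
# Illustrative sketch of confirmatory prompt-template evaluation: for each
# candidate template, LLM-generated ratings are compared against human
# ratings, and the template with the highest alignment wins.
from statistics import correlation  # Pearson r, Python 3.10+

def best_template(templates: dict[str, list[float]],
                  human_ratings: list[float]) -> str:
    # templates maps template name -> ratings the LLM produced with it
    # on the same set of items that humans rated.
    scores = {name: correlation(llm_ratings, human_ratings)
              for name, llm_ratings in templates.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    humans = [1.0, 3.0, 4.0, 2.0, 5.0]
    candidates = {
        "template_a": [2.0, 3.0, 4.0, 2.0, 4.0],  # fairly aligned
        "template_b": [5.0, 1.0, 2.0, 4.0, 1.0],  # anti-aligned
    }
    print(best_template(candidates, humans))  # -> template_a
```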

Publication

Rüdian, L. S. (2024)

Wertesensible Gestaltung von formativem Feedback (2024)

Abstract

Artificial intelligence (AI) and Trusted Learning Analytics (TLA) support didactic processes in higher education, fostering both the acquisition of subject knowledge and the development of students’ self-regulation. The personalised feedback centre developed in the BMBF-funded IMPACT project has been providing students with formative, high-information feedback (HIF) using AI and Trusted Learning Analytics since the winter semester 2023/24. The AI and learning analytics service is an optional offering intended to support students in their learning process through visualisations and written feedback on their course activities. At the same time, skills for dealing with feedback are taught (feedback literacy) to enable students to derive greater benefit from the self-monitoring offered for their individual learning progress. Also relevant for a sustainable implementation process is the development of a university-wide engagement strategy, which substantially prepares the ground for integrating ethically reflected AI and LA applications across the entire institution. The contribution provides insights into how solutions for data protection, for the value-sensitive didactic design of high-information feedback, and for prospective technological application development with open-source software were implemented. Learning analytics and AI applications in higher education teaching have very specific requirements so that negative effects in the learning process do not arise in the first place. In particular, chilling effects must be avoided in applications that integrate self-monitoring. Students must also be strengthened in their autonomy and abilities by integrating feedback literacy and transparency information into the system.
The contribution shows how automated formative feedback can be designed following Value Sensitive Design to support students in using TLA- and AI-based tools for their individual learning goals. Critical and risk-based requirements are taken into account from the application development stage onwards.

Publication

Karolyi, H., Hanses, M., van Rijn, L. & de Witt, C. (2024)

The Actionable Explanations for Student Success Prediction Models: A Benchmark Study on Counterfactual Methods (2024)

Abstract

Digital transformation in higher education resulted in a surge of information technology solutions suited to the needs of academia. The massive use of digital technology in education leads to the production of vast amounts of education and learner-related data, enabling advanced data analysis methods to explore and support the learning processes. When focusing on supporting at-risk students, the dominant research focuses on predicting student success. Enabling prediction models to help at-risk students requires not only a reliable technical solution but also a transparent and explainable solution to build trust among the target learners and educators. Counterfactual explanations (aka counterfactuals) from explainable machine learning tools promise to enable trustworthy explainable models, provided the features are actionable and causal. However, determining the most suitable counterfactual generation method for student success prediction models remains unexplored. This study evaluates standard counterfactual methods: Multi-Objective Counterfactual Explanations, Nearest Instance Counterfactual Explanations, and What-If Counterfactual Explanations. The methods are evaluated using a black-box machine learning model trained on the Open University Learning Analytics dataset, demonstrating their practical usefulness and suggesting concrete steps for model prediction alteration. Our results indicate that the Nearest Instance Counterfactual Explanation method based on the sparsity metric provides the best results regarding several quality criteria. Detailed statistical analysis finds statistically significant differences between all methods except between the Nearest Instance Counterfactual Explanation and the Multi-Objective Counterfactual Explanation method, which suggests that these methods might be interchangeable in the context of the given dataset.
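
To make the idea of a sparsity-based nearest-instance method more concrete, here is a heavily simplified sketch in the spirit of such approaches (not one of the benchmarked implementations): among training instances that the model already classifies as the desired outcome, it returns the candidate that changes the fewest features, breaking ties by L1 distance. The toy model and data are invented.

```python
# Simplified sketch of a nearest-instance counterfactual search: among
# training instances that the model classifies as the desired outcome, pick
# the one closest to the query, ranking primarily by sparsity (number of
# changed features). Model, features and data are illustrative assumptions.
import numpy as np

def nearest_instance_counterfactual(x, X_train, predict, desired=1):
    # Candidates: training rows the model already maps to the desired class.
    candidates = X_train[predict(X_train) == desired]
    if len(candidates) == 0:
        return None
    sparsity = (candidates != x).sum(axis=1)          # features changed
    distance = np.abs(candidates - x).sum(axis=1)     # L1 proximity
    # Sort primarily by sparsity, break ties by distance.
    order = np.lexsort((distance, sparsity))
    return candidates[order[0]]

if __name__ == "__main__":
    # Toy model: "pass" (1) if total engagement across two features >= 10.
    predict = lambda X: (X.sum(axis=1) >= 10).astype(int)
    X_train = np.array([[2.0, 3.0], [8.0, 4.0], [1.0, 9.5], [6.0, 6.0]])
    x = np.array([1.0, 3.0])                          # predicted at risk
    print(nearest_instance_counterfactual(x, X_train, predict))
```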

Publication

Cavus, M., & Kuzilek, J. (2024) The Actionable Explanations for Student Success Prediction Models: A Benchmark Study on the Quality of Counterfactual Methods. Human-Centric eXplainable AI in Education Workshop at 17th Educational Data Mining Conference 2024.

Rule-based and prediction-based computer-generated Feedback in Online Courses (2024)

Abstract

Computer-generated feedback can be created in many ways. This paper compares two approaches for generating feedback: rule-based and prediction-based. Both approaches have several advantages and disadvantages, which are discussed in detail considering precision, recall, human effort for model creation, and explainability requirements.
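
A compact sketch of the contrast discussed in the paper: a hand-written, fully explainable rule as feedback trigger, evaluated with the precision/recall criteria the abstract mentions. The rule, thresholds, data and labels are invented for illustration; a prediction-based trigger would swap the rule for a trained classifier.

```python
# Sketch contrasting the two feedback triggers the paper discusses, with the
# evaluation criteria it mentions (precision/recall). Rule, data and labels
# are invented for illustration.
def precision_recall(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    precision = tp / max(sum(pred), 1)
    recall = tp / max(sum(truth), 1)
    return precision, recall

# Rule-based: a hand-written, fully explainable condition.
rule = lambda logins, quiz: logins < 3 or quiz < 0.5

if __name__ == "__main__":
    students = [(1, 0.4), (5, 0.9), (2, 0.8), (6, 0.3)]  # (logins, quiz score)
    needs_feedback = [True, False, False, True]          # teacher labels
    rule_pred = [rule(l, q) for l, q in students]
    print(precision_recall(rule_pred, needs_feedback))
    # A prediction-based trigger would replace `rule` with a trained
    # classifier's predict(); it can capture subtler patterns but requires
    # labelled data and extra explainability work (e.g. feature attribution).
```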

Publication

Rüdian, L. S., Schumacher, C., Hanses, M., Kuzilek, J. & Pinkwart, N. (2024). Rule-based and prediction-based computer-generated Feedback in Online Courses. 2024 IEEE International Conference on Advanced Learning Technologies (ICALT), Nicosia, Cyprus, pp. 285-286. doi: 10.1109/ICALT61570.2024.00089

An architecture for formative assessment analytics of multimodal artifacts in ePortfolios supported by artificial intelligence (2024)

Abstract

A key objective of higher education is to promote deeper learning strategies. Complex process-oriented teaching-learning methods, such as inverted classrooms, portfolios or blog writing, help students to actively engage with academic content. Learning in these settings is highly dependent on timely formative assessment and highly informative feedback to guide students’ learning efforts. A major challenge to the successful implementation of such settings is the lack of time resources for teachers to provide such feedback to a larger group of students. In the case of process portfolios, for instance, students design digital portfolios that incorporate multiple pages and a range of multimodal artefacts, such as text, concept maps, images, presentations, documents, audio recordings and videos. In this chapter, we design a high-level solution architecture using both rule-based and machine-learning modules. Our aim is to analyse the various modalities of produced multimodal content, such as ePortfolios, and to provide teachers with explainable metrics that represent human assessment rubrics in order to generate personalised feedback. To demonstrate the feasibility of the architecture, we present an example using produced ePortfolio data from a teacher training course, outlining the different steps to create quality indicators for a specific rubric and derive scores to support the final stage of feedback generation. Additionally, we explore potential refinements and implementation steps for the architecture.
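
As a rough illustration of the architecture's final aggregation step, the sketch below runs stub per-modality analysers over a portfolio's artefacts and combines their quality indicators into one explainable rubric score; the module names, weights and indicator definitions are assumptions, not the chapter's actual modules.

```python
# High-level sketch (invented module names and weights) of how per-modality
# analysers could feed explainable quality indicators into a rubric score.
# Real analysers would be rule-based or ML modules; here they are stubs.
from typing import Callable

Analyzer = Callable[[object], float]  # artefact -> indicator in [0, 1]

def rubric_score(artifacts: list[tuple[str, object]],
                 analyzers: dict[str, Analyzer],
                 weights: dict[str, float]) -> tuple[float, dict[str, float]]:
    # Run the matching analyser per modality and keep the raw indicators
    # so the final score stays explainable.
    indicators = {m: analyzers[m](a) for m, a in artifacts if m in analyzers}
    total = sum(weights.get(m, 0) * v for m, v in indicators.items())
    norm = sum(weights.get(m, 0) for m in indicators) or 1.0
    return total / norm, indicators

if __name__ == "__main__":
    analyzers = {"text": lambda t: min(len(t.split()) / 300, 1.0),  # stub
                 "video": lambda v: 0.7}                            # stub
    score, why = rubric_score([("text", "some reflection " * 100),
                               ("video", "clip.mp4")],
                              analyzers, {"text": 2.0, "video": 1.0})
    print(round(score, 2), why)
```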

Publication

Maya, F., Wolf, K.D. (2024). An Architecture for Formative Assessment Analytics of Multimodal Artefacts in ePortfolios Supported by Artificial Intelligence. In: Sahin, M., Ifenthaler, D. (eds) Assessment Analytics in Education. Advances in Analytics for Learning and Teaching. Springer, Cham. https://doi.org/10.1007/978-3-031-56365-2_15

Learning Analytics in Higher Education — Exploring Students’ and Teachers’ Expectations in Germany (2024)

Abstract

Technology-enhanced learning analytics has the potential to play a significant role in higher education in the future. Opinions and expectations towards technology and learning analytics are thus vital to consider for institutional developments in higher education institutions. The SHEILA framework offers instruments to yield exploratory knowledge about stakeholder aspirations towards technology, such as learning analytics, in higher education. The sample of the study consists of students (N = 1169) and teachers (N = 497) at a higher education institution in Germany. Using self-report questionnaires, we assessed students’ and teachers’ attitudes towards learning analytics in higher education teaching, comparing ideal and expected circumstances. We report results on the attitudes of students and teachers, as well as comparisons of the two groups and of different disciplines. We discuss the results with regard to practical implications for the implementation and further development of learning analytics in higher education.

Publication

Fritz, B., Kube, D., Scherer, S. & Drachsler, H. (2024)

Guiding students towards successful assessments: From behavioural data to formative personalized high-information feedback (2024)

Abstract

Current research shows that automated feedback positively affects students’ academic performance, satisfaction with the feedback, and self-regulated learning, independently of prior academic achievement. Concurrently, it has been shown that high-information feedback has the largest effect sizes for learning outcomes and academic performance. The following chapter provides insights into an approach to providing formative feedback supported by artificial intelligence in distance learning that does not analyze summative assessment data but rather intends to guide students in their learning process towards the graded final assessment. Approaches using methods of artificial intelligence and learning analytics have in common that they need data to derive meaningful outcomes. But how can students’ behavior in online learning courses be measured, and which concrete clickstream entries can be used to calculate these measures? This contribution looks at indicators focusing on data for supporting metacognitive learning strategies and illustrates in particular the process of extracting measures of behavioral engagement from raw log data and converting them into high-information feedback. The entire process is reflected in collaboration with lecturers to design a didactically guided, user-centered interface that supports student reflection towards improving their learning and assessment preparation. The pursued solution includes a dashboard combined with a rule-based personalized feedback text, connecting engagement measures with additional information (e.g., learning material, techniques, etc.). The chapter gives insight into the interdisciplinary elaboration process of learning dashboards and the scientifically based development of high-information feedback texts, alongside a practical insight into data transformation for learning analytics.
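
A minimal sketch of the pipeline the chapter walks through, from raw clickstream entries to a behavioural engagement measure to a rule-based feedback text; the log schema, the engagement measure (distinct active days) and the feedback wording are invented for illustration.

```python
# Sketch (invented log schema and thresholds) of the pipeline: raw clickstream
# entries -> behavioural engagement measure -> rule-based feedback text.
from datetime import date

def active_days(log: list[dict]) -> int:
    # Engagement measure: distinct days with at least one course event.
    return len({event["timestamp"].isoformat() for event in log})

def feedback_text(days: int) -> str:
    # Rule-based mapping from the measure to feedback that also points to
    # strategies and material, in the spirit of high-information feedback.
    if days >= 5:
        return ("You worked in the course on most days this week - keep it up. "
                "Consider spaced self-testing before the assessment.")
    return ("You were active on few days this week. Shorter, more regular "
            "sessions help retention; try planning fixed study slots.")

if __name__ == "__main__":
    log = [{"timestamp": date(2024, 5, d), "action": "viewed_page"}
           for d in (1, 1, 2, 3)]
    print(feedback_text(active_days(log)))
```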

Publication

Hanses, M., van Rijn, L., Karolyi, H., de Witt, C. (2024). Guiding Students Towards Successful Assessments Using Learning Analytics: From Behavioral Data to Formative Feedback. In: Sahin, M., Ifenthaler, D. (eds) Assessment Analytics in Education. Advances in Analytics for Learning and Teaching. Springer, Cham. https://doi.org/10.1007/978-3-031-56365-2_4

Rethinking how we measure learning in interdisciplinary practice: a commentary (2023)

Abstract

This commentary challenges the operationalization of the goals of the learning analytics research field (i.e., “understanding and optimising learning and the environments in which it occurs”) into the coding scheme used by the authors to analyze recent literature from the Learning Analytics and Knowledge Conference and the Journal of Learning Analytics. We will use the proposed code for learning outcome as a starting point to reflect on the concept of learning and learning outcomes from an educational science perspective. We reiterate the idea that the definition of learning outcome disregards process-oriented measurement of learning. In closing, we emphasize the need for discourse to refine what the field understands as learning, how to impact it, and how to evaluate the community’s goals.

Publication

van Rijn, L., Hanses, M. & Jivet, I. (2023). Rethinking how we measure learning in interdisciplinary practice: a commentary. Journal of Learning Analytics, 10(2), 35-36. https://doi.org/10.18608/jla.2023.8197

A Novel Ensemble Method for Automated Short Answer Grading Based on Continuous Response IRT (2023)

Abstract

Automated Short Answer Grading (ASAG) is a field concerned with evaluating short answers written by students using various machine learning techniques. With the development of machine learning, several ASAG approaches have been proposed. According to prior surveys, an ASAG approach can be divided into two parts: language representation and learning algorithm. The choice of these two components can result in different theoretical underpinnings and different use of information from short answers. To leverage the benefits of multiple approaches, ensemble methods that combine them may lead to better predictive performance than any one of the individual approaches alone. This study presents a novel ensemble method for integrating different ASAG approaches based on the Continuous Response Item Response Theory (IRT) model, which has been commonly used in psychometrics to combine the ratings of multiple human experts. Using validation with the open-access dataset ASAP-SAS, the performance of this new ensemble method will be compared to existing ensemble methods and individual approaches.
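
The Continuous Response IRT model itself estimates rater parameters statistically; as a drastically simplified stand-in, the sketch below treats each ASAG approach as a rater with an assumed bias and reliability and combines bias-corrected ratings by reliability weighting. All names and numbers are hypothetical.

```python
# Drastically simplified stand-in for a CR-IRT ensemble: each ASAG approach
# is treated as a rater with its own bias and reliability, and the latent
# score is a reliability-weighted average of bias-corrected ratings. The
# real model estimates these parameters via IRT; here they are assumed.
def ensemble_score(ratings: dict[str, float],
                   bias: dict[str, float],
                   reliability: dict[str, float]) -> float:
    corrected = {r: ratings[r] - bias[r] for r in ratings}
    total_w = sum(reliability[r] for r in ratings)
    return sum(reliability[r] * corrected[r] for r in ratings) / total_w

if __name__ == "__main__":
    # Three hypothetical ASAG approaches scoring the same short answer (0-3).
    ratings = {"tfidf_svm": 2.4, "sbert_knn": 1.9, "llm_zero_shot": 2.8}
    bias = {"tfidf_svm": 0.2, "sbert_knn": -0.1, "llm_zero_shot": 0.5}
    reliability = {"tfidf_svm": 0.6, "sbert_knn": 0.8, "llm_zero_shot": 0.7}
    print(round(ensemble_score(ratings, bias, reliability), 2))
```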

Publication

Liu, T. & Mateen, S. (2023). A Novel Ensemble Method for Automated Short Answer Grading Based on Continuous Response IRT [Abstract]. In Symposium on Big Data and Research Synthesis in Psychology, May 08-10, 2023, Frankfurt, Germany. ZPID (Leibniz Institute for Psychology Information).

Proof-of-concept: Pre-selecting Text Snippets to provide formative Feedback in Online Learning (2023)

Abstract

In this paper, a proof of concept is shown to generate formative textual feedback in an online course. The concept is designed to be suitable for teachers with low technical skill levels. As state-of-the-art technology still does not provide high-quality results, the teacher is always kept in the loop as the domain expert who is supported by a tool, not replaced. The paper presents results of our proposed approach for semi-automatic feedback generation using a real-world university seminar in which students create sample micro-learning units as online courses, for which they receive feedback. A supervised machine learning approach is trained on features of learner submissions and the feedback that teachers chose for former submissions. The results are promising.
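
As a minimal sketch of the semi-automatic idea (invented features and snippets, and scikit-learn as an assumed stack), a supervised model is trained on features of former submissions together with the snippet the teacher chose, then pre-selects a snippet for a new submission that the teacher can accept or override.

```python
# Minimal sketch: a supervised model trained on features of former
# submissions and the snippet the teacher chose then, used to pre-select a
# snippet for a new submission. Features and snippets are invented; the
# teacher stays in the loop to accept or override the suggestion.
from sklearn.tree import DecisionTreeClassifier

SNIPPETS = {
    0: "Your unit lacks interactive elements; consider adding a quiz.",
    1: "Good structure. Add learning objectives at the start.",
}

# Features per past submission: [n_pages, n_quizzes, avg_words_per_page]
X_train = [[3, 0, 120], [6, 2, 300], [2, 0, 80], [7, 3, 250]]
y_train = [0, 1, 0, 1]  # snippet the teacher selected for each submission

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

new_submission = [[4, 0, 150]]
suggestion = SNIPPETS[model.predict(new_submission)[0]]
print("Pre-selected for teacher review:", suggestion)
```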

Publication

Rüdian, L. S., Schumacher, C., Kuzilek, J. & Pinkwart, N. (2023). Proof-of-concept: Pre-selecting Text Snippets to provide formative Feedback in Online Learning. In Proceedings of the 16th International Conference on Educational Data Mining (EDM), pp. 430-433. International Educational Data Mining Society.

Computer-Generated formative Feedback using pre-selected Text Snippets (2023)

Abstract

In this paper, an approach is introduced to generate formative textual feedback using already existing, prepared text snippets that are pre-selected by a supervised machine learning model. The approach is based on existing tools that are extended to be suitable for teachers with low technical skill levels. It follows the trusted learning analytics approach. As state-of-the-art technology still does not provide high-quality results, the teacher is always kept in the loop as the domain expert who is supported by a tool, not replaced.

Publication

Rüdian, L. S., Schumacher, C., Kuzilek, J., & Pinkwart, N. (2023). Computer-Generated formative Feedback using pre-selected Text Snippets [Poster]. The 13th International Learning Analytics and Knowledge Conference (LAK).

Trusted Learning Analytics verstetigen – Mit Change Management zu didaktischen Innovationen (2022)

Abstract

Efforts to use learning analytics to better understand and optimise university learning and teaching in digital environments have so far yielded only a few approaches and examples of a systematic implementation of data-supported learning analysis at German universities. The process of bringing applications of learning analytics (LA) and artificial intelligence (AI) into broad use at universities in Germany is currently entering a new implementation stage. Hurdles at the (resource-)technical level (Ifenthaler, 2017) and in organisational (Jenert, 2020) and participatory framings (Mayrberger, 2019; 2020) become surmountable through concrete, structuring and accompanying approaches such as Trusted Learning Analytics (TLA) (Drachsler & Greller, 2016; Hansen et al., 2020) and the SHEILA framework (Tsai et al., 2018). Using IMPACT as an example, this poster contribution illustrates the initial steps of such an implementation process for formative feedback and its didactic design, based on the high-information feedback approach of Wisniewski et al. (2020).

Publication

van Rijn, L., Karolyi, H., & de Witt, C. (2022). Trusted Learning Analytics verstetigen – Mit Change Management zu didaktischen Innovationen [Poster]. 30. Jahrestagung der Gesellschaft für Medien in der Wissenschaft e.V.