Electronic Feedback: Pedagogical Considerations for the Implementation of Software

Miguel García-Yeste
Centre for Academic English, Stockholm University

https://doi.org/10.4995/eurocall.2013.9789

 

Abstract

As university lecturers, we often struggle to provide our students with good quality feedback in a consistent manner. This is usually caused by the increasing imbalance in teacher-student ratios (Hounsell et al., 2008), as well as the pressure of academic life and the lack of time (Sadler, 2010). In addition, assessment practices should be transparent enough to ensure all students are evaluated in a similar way (O’Donovan et al., 2004), especially when different instructors teach different groups of students taking the same course. This paper, which focuses on designing a feedback scheme that helps instructors to provide good quality feedback in a consistent manner, begins with a needs analysis based on the author’s experience as an instructor of academic writing in English. A literature review follows, focusing on: (a) the scholarship on feedback in higher education; and (b) the research on the use of technology for the provision of feedback. Finally, a feedback scheme is presented, and some guidelines for its implementation are provided.

Keywords: Formative feedback, literature review, electronic feedback, software, self-regulation.

 

1. Introduction

It is generally accepted that “although student numbers have risen substantially over the past quarter-century, staff to student ratios have halved” (Hounsell et al., 2008, p. 56). This imbalance in the student-teacher ratio means that instructors often have to read and comment on a considerable number of assignments. The problem is particularly acute in the case of writing courses, in which students are usually required to produce written assignments on which they receive feedback. This situation has been reported in a number of studies (e.g. Hyland, 2003) as having negative effects on the feedback students receive.

Drawing on my own experience as an instructor of academic writing, I have noticed that in each batch of papers I grade, I tend to give more feedback on the first texts I read. I have also noticed that the quality of my comments varies depending on a number of contextual factors. For example, if I am tired or if I have written the same or very similar comments several times, the level of detail in my feedback is affected. This tension between the teacher’s desire to provide effective feedback and the lack of resources (e.g. time) has been reported in the literature. For instance, Sadler (2010) points out that “the desirability of feedback cannot be separated from the practical logistics of providing it […] feedback should not only be of an appropriate type but also be provided within the available resources, especially time for academics to give feedback to individual students” (p. 536).

Besides the unfairness of the situation described above, nowadays we are expected to employ assessment methods that are “transparent and demonstrably known and trusted” (O’Donovan et al., 2004, p. 326). This is of particular relevance when more than one teacher is involved in the assessment process. A review of research findings carried out by Hounsell (2003, cited in Hounsell et al., 2008) indicated that the feedback received by students across the UK varied considerably in both quantity and helpfulness. I can trace these issues in some of the courses I teach, since different groups within the same course have different teachers and, even when we all share the same grading criteria, the feedback students get can, at times, vary significantly.

2. The problem

In this situation, the need arises to find a system that: (a) enables instructors to provide good quality feedback so as to support the development of students’ self-regulation; (b) helps teachers produce and deliver feedback in a consistent manner; and (c) facilitates replicability so the system can be used by more than one teacher. In the present paper, I aim to plan the implementation of a written text correction programme for an academic writing course in English, taught in a higher education context. This implies developing a usable protocol for the teachers and designing “good feedback” that leads to self-regulation for the students.

To address need (a), I refer to the literature describing an “effective provision of feedback” (Nicol, 2006, p. 590) in order to reflect on what the literature identifies as high quality feedback. I aim to design effective ways to provide my students with feedback that will help them learn independently and in a self-regulated manner. In relation to needs (b) and (c), consistency and replicability, I have decided to explore the implementation of written text correction software. Some scholars argue that information and communications technologies (ICT) might be able to contribute to the role of feedback in helping students achieve self-regulation (e.g. Cabero, 2001; Mooij, 2009; Nicol, 2006). Thus, I aim to identify a programme with functions that support the needs elicited in the selected context.

3. Literature review

In this section, I review the relevant literature in relation to the issues directly connected to this paper’s aim: (a) studies on the design and provision of effective feedback on written performance, and (b) studies on the use of ICT for formative assessment and feedback.

(a) Studies on the design and provision of effective feedback on written performance

Since the context for this project is a range of courses in academic writing in English in which most students have English as a second language (ESL), the research on feedback on second language writing and academic writing is of relevance.

Hedgcock and Lefkowitz (1994) address two questions with potential implications for this paper, namely: (a) How do ESL students react when they receive feedback from their teacher? and (b) How do these reactions affect the evolution of the students’ perception of text quality and their composing processes? In the same paper, the authors explore whether students of English as a second language (ESL) and students of English as a foreign language (EFL) differ systematically in terms of self-appraisal patterns and responses to feedback; however, this question falls outside the present study because, although those categories were probably relevant in the context of Hedgcock and Lefkowitz’s study, such distinctions do not seem to hold much value in my current context.

In relation to the different types of feedback the researchers analysed, and to how these were perceived by students, Hedgcock and Lefkowitz (1994) state that “many teachers act principally as evaluators rather than as collaborators or as willing recipients of the information students are expected to communicate” (p. 143). Quite often this situation results in feedback being more judgemental than constructive in relation to the student’s paper. This is especially troublesome in those cases in which the feedback is provided not during the writing process, but after the completed piece of writing has been submitted. In such cases, the space left to the students for reaction and improvement is quite limited; they are merely passive recipients of an expert’s opinion. In this sense, my intention is to design feedback that helps my students to improve their current performance. In fact, the idea is to comment on the first draft of their paper. I argue that, by framing the feedback within the process of essay writing, both teachers and students will be able to see feedback as an aid in the construction of the final version of the essay, rather than as a value judgement passed on the finished product.

Beyond the question of when feedback is provided, the previous ideas have implications for one of the basic cognitive functions of feedback, namely that of nourishing students’ problem-solving and critical-thinking abilities (Cumming, 1989). In fact, when Hedgcock and Lefkowitz (1994) consider how students internalise feedback, they suggest that students tend to identify the aspects their teachers comment on as being more important than those that did not merit a comment. This indicates that students attribute salience to the issues brought up by their teachers, which in turn implies that instructors should consider carefully which features are mentioned in their feedback, as opposed to commenting on everything. These findings become particularly significant when considered in connection with claims found in the literature suggesting that the amount of feedback students can process and react to is rather limited. For instance, Nicol and Macfarlane-Dick (2006) argue that students can only internalise three comments for each assignment. While the present project does not intend to determine exactly how many comments students can process, the students’ capacity to assimilate and respond to comments is certainly relevant.

The next logical step is then to decide what aspects are worth commenting on. In that sense, Nicol and Macfarlane-Dick (2006) remind us of Sadler’s three conditions necessary for students to benefit from feedback, namely that they need to know: (a) what good performance is; (b) how current performance relates to good performance; and (c) how to bridge the gap between good and current performance. In order to help teachers to design effective feedback, Nicol and Macfarlane-Dick (2006) suggest seven principles in light of Sadler’s three conditions. The overarching idea behind the seven principles as presented below is to highlight the relevance of providing “well-thought-out comments” as well as ways to improve performance.

  1. Helps clarify what good performance is (goals, criteria, expected standards);
  2. Facilitates the development of self-assessment (reflection) in learning;
  3. Delivers high quality information to students about their learning;
  4. Encourages teacher and peer dialogue around learning;
  5. Encourages positive motivational beliefs and self-esteem;
  6. Provides opportunities to close the gap between current and desired performance;
  7. Provides information to teachers that can be used to help shape teaching.

Table 1: Seven principles of good feedback practice (Nicol and Macfarlane-Dick, 2006).

It seems feasible, then, that the implementation of a computer programme that allows teachers to create and manage a database of comments could assist them in the provision of these so-called “well-thought-out comments”. The idea is that the comments in the database would target at least some of these seven principles on a number of levels. First, the comments would help to clarify what good performance is by providing access to examples of desired performance (principle no. 1). Second, the comments would facilitate the students’ development of a reflective attitude towards their own work; rather than indications of good/poor performance, the comments would provide explanations that would help students understand why their text was not effective (principle no. 2). In addition, because these comments would be stored in a shared database, several teachers would spend time designing them cooperatively; this would probably increase the quality of the information delivered to students (principle no. 3). Because students would have access to samples of desired performance and to information about why their performance was not effective, the opportunities to close that gap would be enhanced (principle no. 6). Finally, if the software offered the possibility of showing common problematic areas across students in a group, teachers would be able to address those specific issues in their teaching (principle no. 7); in fact, if different teachers compared which areas were problematic in different groups, measures could be taken at syllabus level.
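
To make the idea of such a database more concrete, the sketch below shows the ingredients each stored comment could carry so as to target the principles discussed above: a category, an explanation of why a passage is (in)effective, and a link to an exemplar of desired performance. The sketch is written in Python with entirely hypothetical names and an invented URL; it illustrates one possible design, not Markin’©’s internal format.

    from dataclasses import dataclass, field

    @dataclass
    class BankComment:
        category: str       # e.g. "hedging" or "subject-verb agreement"
        explanation: str    # why the flagged passage is (in)effective (principle no. 2)
        exemplar_url: str   # link to a sample of desired performance (principles no. 1 and 6)
        authors: list = field(default_factory=list)  # teachers who co-wrote it (principle no. 3)

    # An example entry, co-designed by the teaching team (all details hypothetical):
    hedging = BankComment(
        category="hedging",
        explanation=("Unqualified claims read as overgeneralisations in academic "
                     "prose; note how the exemplar softens its claim with 'may'."),
        exemplar_url="https://example.edu/exemplars/hedging",  # invented URL
        authors=["Teacher A", "Teacher B"],
    )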

Principles 4 and 5 are harder to address through an action plan such as the one presented here. However, because the feedback would be given during the writing process, time for teacher-student interaction, such as tutorials, could provide opportunities to discuss feedback. This would address principle no. 4. As for encouraging positive motivational beliefs and self-esteem, the whole experience should be designed so as to empower students and to encourage them to become independent, self-regulated learners. Ultimately, they would feel in command of their learning processes and reinforce their motivation and self-esteem. The degree to which the implementation of this electronic feedback affects individual students would, of course, depend on several psycho-affective factors that fall outside the scope of this paper. However, the provision of effective electronic feedback would provide some support in this direction.

I would like to close this section on effective feedback by referring to the benefits of using exemplars of desired performance. In that sense, Sadler (2010) problematises the view of feedback as telling and suggests that this may not always be the best way to provide feedback. In his view, showing and guiding towards discovery may be more appropriate. One of the reasons he offers in support of his argument concerns the amount of shared knowledge between teachers and students, as he highlights in the following quote:

[R]egardless of levels of motivation to learn, students cannot convert feedback statements into actions for improvement without sufficient working knowledge of some fundamental concepts. Teachers who compose feedback obviously possess and draw on a working knowledge which embraces these concepts. (Sadler, 2010, p.537)

As the reader may see, Sadler emphasises the role of tacit knowledge as defined by Perkins (2007, p. 39), and how it can sometimes represent a threat to clarity in teacher-student interaction. In fact, Polanyi (1962, cited in Sadler, 2010) points to the lack of tacit knowledge as one of the challenges experienced by students when facing teacher feedback. This is a potential issue in the feedback I provide my students with, since more often than not being a skilful writer is connected to being familiar with what McCune and Hounsell (2005) call “ways of thinking and practising”, i.e. ways of writing in a specific discipline. Therefore, in the area of academic writing, providing samples for students to look at may be particularly crucial. This may be a good strategy for “not only providing constructive and timely feedback comments, [but also] assisting students to come to hold a conception of what counts as good quality work in the subject area” (Hounsell et al., 2008, p. 55). As a consequence, one of the priorities in choosing a computer programme here will be its ability to incorporate access to exemplars.

(b) Studies on the use of ICT for formative assessment and feedback

Another relevant area of research for the purpose of the present paper is the use of ICT for formative assessment and feedback. In particular, two main issues need to be explored, namely: the ways in which ICT and assessment can be integrated, and how this affects the participants involved in the process (i.e. teachers and students); and the reviews of the programmes currently available. Thus, this section presents some of the relevant ideas reported in the literature.

In relation to integrating ICT in the assessment process, there seem to be two schools of thought. On the one hand, some systems provide automatic feedback and writing assessment, which reduces assessment time dramatically. Nevertheless, Ware and Warschauer (2006) highlight that these systems bring along the danger of presenting writing as a mass product designed to pass a quality test, rather than to communicate or to interact with a specific audience. Obviously, this approach is very problematic in all kinds of writing; furthermore, in the case of writing for academic and/or specific purposes, considerations of audience, author, context, and purpose become central to the process, since these concepts determine core aspects such as content, style, structure, etc.

As an alternative approach, several studies (e.g. DiGiovanni and Nagaswami, 2001; Tuzi, 2004) refer to the idea of electronic feedback in reference to feedback provided by a human being through technology. In fact, Tuzi (2004) explores the differences between traditional, pen-on-paper feedback and electronic feedback, and concludes that when electronic feedback is used: (a) students make more revisions on the original text; (b) they stay on task for a longer period of time; and (c) the changes they make are mostly at the macrolinguistic level (e.g. paragraphing, essay structure), which requires a deeper understanding of the concept of genre and demonstrates a more advanced command of the writing process. This last aspect is very interesting because, even though students in Tuzi’s study reported that they preferred oral feedback to written feedback, e-feedback seemed to trigger more revisions. Some of the students interviewed by Tuzi commented that their awareness of audience was greater, and that they were more willing to revise their papers when they perceived their intended message would not be conveyed effectively. This connects with the idea of using feedback as an appraisal of communicative effectiveness, rather than as judgement, as mentioned in the previous section.

Furthermore, in relation to written feedback, DiGiovanni and Nagaswami (2001, p. 268) suggest that “teachers can monitor students’ interaction much more closely than in face-to-face situations, where only bits of conversation can be heard as they circulate among peer dyads”. This idea can be connected to principle no. 7 in Nicol and Macfarlane-Dick’s model, as being able to monitor the interaction allows teachers to spot problematic areas and misconceptions as they arise, and to tailor teaching to student needs.

Interestingly enough, Case (2007) identified a somewhat problematic issue in relation to the use of ICT and the need to adapt teacher action to student needs. In his study, Case tests the use of a feedback script that was, in turn, fed from a bank of electronically stored comments. One of Case’s main goals is to save time and effort, which, in turn, is meant to alleviate labour and cognitive demands on the teacher’s side. However, a common objection he encounters to this type of procedure is that canned feedback becomes highly depersonalised. The author addresses this issue in the following manner:

Although staff can be resistant to the use of such banks for fear of lack of personalization of feedback comments, there is evidence to suggest that students themselves prefer this slightly more mechanistic approach as it provides them with a substantial amount of information on performance (in this case, directly relevant to learning outcomes and assessment criteria), which can then be supplemented with idiosyncratic comment on the script. (Case, 2007, p.289)

Thus, one of the aims of the feedback scheme reported in this paper is to find a tool that allows the creation of a feedback bank, as long as the comments can be fine-tuned to fit specific texts. In order to decide on a programme, the literature assessing the available software, its usability, and related practical issues is considered.

One of the earliest papers on this topic is Holmes’ (1996) report on how he developed a basic programme, Markin’©, to produce written feedback. In his paper, the author includes a section on the advantages of e-feedback over conventional feedback, including: (a) it is more readable than handwritten comments in the margin; (b) the system forces teachers to be more consistent in diagnosing and classifying the type of issue; and (c) the system is faster once it is in place. Despite being fairly dated, Holmes’ paper provides a general picture of the whole process of using this kind of software, and brings up some practical difficulties faced by the author himself when using the programme.

Bearing in mind Holmes’ experience, two more papers are considered, i.e. Krajka (2002) and Thomas (2004). Both authors examine the process of providing electronic feedback with Markin’© software. While Krajka (2002) compares the use of Markin’© to the use of Microsoft Word©, Thomas (2004) takes a broader approach, and evaluates word processors, Markin’©, Wincorr©, and web-based tools, such as Bonito and Just the word. After reading both reviews, I would argue that using the change tracking function in Microsoft Word actually involves changing the student’s piece of writing; this may interfere with the students’ development of their own authorial voices, which may be counterproductive.

In addition to computer programmes, the possibility of using web-based resources is contemplated. Their main appeal is that they are available anywhere, as opposed to software installed on a specific computer, which limits when and where assessment can be done. However, this option raises issues of ownership. Web-based tools seem to work in combination with online repositories, and obtaining clear information regarding privacy can be extremely difficult.

4. Planning electronic feedback

In light of the literature on the available software, and after considering the features required by the context of this paper, Markin’© (see Fig. 1) has been selected. As mentioned above, Markin’© is a computer programme developed by Martin Holmes to help teachers correct their students’ writing, while leaving room for teacher action. From the wide variety of existing programmes, the latest update of Markin’© stands out because it does not correct texts automatically, as other programmes do, but allows teachers to make decisions in the process. Thus, the software does not assess the texts automatically, but offers several tools that facilitate the process of giving feedback through pre-set annotations and databases of frequently used comments. The only automatic feature of the programme is that, based on the teacher’s criteria, it can be asked to calculate a grade for the pieces of writing. This feature may be useful in some situations, although the present study focuses on the programme’s use for the provision of formative feedback. In the following paragraphs I present some characteristics of the programme, namely its default and customisable buttons, and the teacher comments database function. For detailed descriptions of the software see Krajka (2002), Thomas (2004), Alesón et al. (2006), or the developer’s website (http://www.cict.co.uk/markin/index.php).

Figure 1. The programme’s interface.

Markin’© has an annotation tool, referred to as buttons by the programme’s developers (see Figure 2), that can be used to tag fragments of text according to predetermined categories. These categories can be adapted for each specific assignment, so that they tag aspects that are relevant for the assignment’s intended learning outcomes (ILOs). The most straightforward use of the programme’s buttons is that of classifying textual and grammatical aspects, both positive and negative. In other words, the buttons can be used to indicate an instance that is problematic in the text (e.g. an instance where subject-verb agreement is problematic, as in Mary and her sister comes every day), but also to praise an effective use of language (e.g. an effective word choice).

Figure 2. Image of the programme’s buttons.

Obviously, the use of the buttons does not provide information beyond the identification and categorisation of particular issues. It is then up to the student to find out what the problem is, and how to fix it. Pointing out problematic instances without offering a specific way to solve them can be beneficial on a number of levels. To begin with, it may support autonomous learning, since students are prompted to investigate how to improve their texts. In addition, this strategy empowers the students as authors, letting them decide how to (re)write their texts in ways they feel comfortable with; ultimately, this might help them to develop their own voice as authors.

On the other hand, the programme has a tool to insert in-text comments (see Figure 3). These comments may include detailed explanations, examples, sample answers, links to online reference material and exercises, etc. External resources may be able to provide the extra support some students need in a manner that may foster their independent and self-regulated learning. In addition, the possibility of including fragments from real texts or links to exemplars would provide students with opportunities to engage with real texts; this is a way to tackle the amount of tacit knowledge involved in the development of writing skills as presented by McCune and Hounsell (2005, p.257).

Figure 3. Comments area in the programme’s interface.

In addition, the comments can be stored in a database so they can be reused in the future; every time a comment is used, it can be edited to connect with the specific text. In this sense, Markin’© caters for feedback that is “both specific (referring, as it necessarily does, to the work just appraised) and general (identifying a broader principle that could be applied to later works)” (Sadler, 2010, p. 538). The possibility of recycling previously written comments from the database targets one of the main issues identified both in the literature (e.g. Carless et al., 2011) and in many instructors’ own practice, i.e. the lack of time. Because a comment can be reused over and over again, once it has been typed and saved, a couple of clicks will suffice to insert it into a new text. This saves considerable amounts of time for the teacher.
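
As a rough illustration of this reuse-then-tailor workflow (again in Python, with hypothetical names; Markin’©’s actual interface works through its own dialogues), a stored general comment can be fetched as-is or supplemented with a remark specific to the text at hand:

    # A shared bank of general comments, keyed by issue (contents are illustrative).
    comment_bank = {
        "paragraph-unity": ("This paragraph develops two separate ideas; consider "
                            "splitting it so that each paragraph has one controlling idea."),
    }

    def insert_comment(key: str, tailoring: str = "") -> str:
        """Fetch a stored comment and append an optional text-specific remark."""
        general = comment_bank[key]               # the 'general' half (Sadler, 2010)
        return f"{general} {tailoring}".strip()   # plus the 'specific' half

    # Reused verbatim for one student, fine-tuned for another:
    print(insert_comment("paragraph-unity"))
    print(insert_comment("paragraph-unity",
                         "Here, the move from methods to results happens mid-paragraph."))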

At the same time, canned comments can be edited every time they are used. This function addresses the discomfort some teachers feel (see Case, 2007) regarding the lack of individualisation when feedback comes from a predetermined pool of comments.

A further advantage offered by Markin’© is that both the customised buttons and the comments database can be shared by different users of the programme, which allows several teachers to use the same buttons and comments when they grade students taking the same course. This feature should increase the assessment’s consistency and reliability, as well as its transparency and demonstrability, all of which are issues of increasing relevance in higher education contexts (O’Donovan et al., 2004, p. 326). In addition, the use of the comments database in combination with the in-text comments and the general comments tool should allow teachers to give more comprehensive feedback, which might in turn provide extra scaffolding for the students who need it. Moreover, this tool may also homogenise the comments students receive for a specific assignment in terms of length and level of detail.
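
As a format-agnostic illustration of what such sharing involves, a course team could exchange a single file containing the agreed buttons and comments. Markin’© uses its own export files; JSON and all names below are assumed purely for the sketch:

    import json

    course_pack = {
        "course": "Academic Writing in English",
        "buttons": ["sv-agreement", "word-choice", "paragraph-unity", "hedging"],
        "comments": {
            "hedging": "Unqualified claims read as overgeneralisations; see the exemplar.",
        },
    }

    # One teacher exports the pack agreed at the course meeting...
    with open("course_pack.json", "w", encoding="utf-8") as f:
        json.dump(course_pack, f, indent=2)

    # ...and a colleague teaching a parallel group loads the very same pack.
    with open("course_pack.json", encoding="utf-8") as f:
        shared = json.load(f)
    assert shared["buttons"] == course_pack["buttons"]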

5. Final remarks

This paper has tried to bring together the literature on feedback practices and the design of electronic feedback schemes. An academic writing course has been used as a sample context for the implementation of one such system, and a computer programme has been identified as meeting the relevant criteria. The selection of the software is based on the fact that it offers functions that match the needs identified in the first part of the paper.

In the case described here, implementation occurs in a series of steps. First, the group of instructors agrees on an assignment to test the programme on, and holds a meeting in which buttons and comments are designed. For this phase, experience from previous terms helps to elicit problematic areas in student papers. After that, the buttons and comments are saved on each teacher’s computer and used when giving feedback to students. Afterwards, another meeting is held to evaluate the experience. The final step is to consider the final versions of the students’ papers across groups and to compare them against papers from previous terms to check whether positive effects can be observed.

References

Alesón, M. et al. (2006). “Assessment and online feedback: Using a written text correction programme”. Proceedings of the XXIV International AESLA Conference. Madrid: UNED.

Cabero, J. (2001). Tecnología educativa. Diseño y utilización de medios de enseñanza. Barcelona: Paidós.

Carless, D. et al. (2011). “Developing Sustainable Feedback Practices”. Studies in Higher Education, 36(4): 395-407.

Case, S. (2007). “Reconfiguring and realigning the assessment feedback processes for an undergraduate criminology degree”. Assessment & Evaluation in Higher Education, 32(3): 285-299.

Cumming, A. (1989). “Writing Expertise and Second-Language Proficiency”. Language Learning, 39: 81-135.

DiGiovanni, E. and Nagaswami, G. (2001). “Online peer review: an alternative to face-to-face?” ELT journal 55(3): 263-272.

Hedgcock, J. & Lefkowitz, N. (1994). “Feedback on feedback: Assessing learner receptivity to teacher response in L2 composing”. Journal of Second Language Writing 3(2): 141-163.

Holmes, M. (1996). “Marking student work on the computer.” The Internet TESL Journal 2(9).

Hounsell, D. et al. (2008). “The quality of guidance and feedback to students”. Higher Education Research & Development 27(1): 55-67.

Hyland, F. (2003). “Focusing on form: student engagement with teacher feedback”. System 31(2): 217-230.

Krajka, J. (2002) “Correcting student work with the computer - using dedicated software and a word processor”. Teaching English with Technology. A Journal for Teachers of English 2(4) [retrieved from: http://www.iatefl.org.pl/call/j_tech10.htm].

McCune, V., & Hounsell, D. (2005). “The development of students’ ways of thinking and practising in three final-year biology courses”. Higher Education, 49(3): 255-289.

Mooij, T. (2009). "Education and ICT-based self-regulation in learning: Theory, design and implementation". Education and Information Technologies, 14(1): 3-27.

Nicol, D. (2006). “Increasing success in first year courses: Assessment re-design, self-regulation and learning technologies”. Proceedings of the 23rd annual ascilite conference.

Nicol, D. J., & Macfarlane‐Dick, D. (2006). “Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice”. Studies in higher education, 31(2): 199-218.

O'Donovan, B. et al. (2004). “Know what I mean? Enhancing student understanding of assessment standards and criteria”. Teaching in Higher Education, 9(3): 325-335.

Perkins, D. (2007). “Theories of difficulty”. In Entwistle, N. (Ed.), Student Learning and University Teaching (BJEP Monograph Series II, Number 4), 1(1): 1-18.

Sadler, D. R. (2010). “Beyond feedback: Developing student capability in complex appraisal”. Assessment & Evaluation in Higher Education, 35(5): 535-550.

Thomas, J. (2004). “Using computers in correcting written work”. Teaching English with Technology, 4(3): 1-8.

Tuzi, F. (2004). “The impact of e-feedback on the revisions of L2 writers in an academic writing course”. Computers and Composition, 21(2): 217-235.

Ware, P., & Warschauer, M. (2006). “Automated writing evaluation: Defining the classroom research agenda”. Language teaching research, 10(2): 157-180.