Abstract
Classroom instruction provides a limited amount of quality speaking practice for language learners. Asynchronous multimedia-based oral communication is one way to provide learners with quality speaking practice outside of class; it helps learners develop presentational speaking skills and raise their linguistic self-awareness. Twenty-two peer-reviewed journal articles studying the use of asynchronous multimedia-based oral communication in language learning were reviewed (1) to explore how asynchronous oral communication has been used to improve learner speaking skills, and (2) to investigate what methodologies are commonly used to measure and analyze language gains from its use. In this study we present three principal findings from the literature. First, asynchronous multimedia-based oral communication has been used in conjunction with a variety of instructional methods to promote language gains in terms of fluency, accuracy, and pronunciation. Second, the methods found in this review were technical training, preparatory activities, project-based learning, and self-evaluation with revision activities. Third, the majority of previous studies demonstrating the effectiveness of these methods have relied on learner perceptions of language gains rather than on recordings of learner speech.
Keywords: Oral, online, asynchronous, video, audio, language learning.
1. Introduction
In order for foreign language learners to succeed, they need a large quantity of high-quality language practice. Although Clifford (2002) described time on task, or quantity, as “the primary determiner of language acquisition,” it has also been described as “a necessary, but not sufficient, condition for learning” (Karweit, 1984: 33). Hirotani and Lyddon (2013) argued that quality of practice, exemplified in their study by an awareness-raising activity, is an important factor in language learning.
Media-based oral communication can increase the quantity and improve the quality of language practice by providing more opportunities for speaking and more opportunities to raise learner awareness. Multimedia-based oral communication includes a variety of communication types, such as video conferencing through Skype, posting vlogs on YouTube, and turn-based video conversations using a voiceboard. Lin (2015) lauded the affordances of oral computer-mediated communication (CMC: an important type of multimedia-based oral communication) in his meta-analysis, stating that the “features of CMC seem to provide opportunities to create a social interaction context with more flexibility that cannot be afforded in a traditional face-to-face environment” (p. 262). Here it is useful to recall Clark’s (1994) criticism of many media-related studies: media itself does not influence learning; rather, it is the instructional method that influences learning. Referring to his previous studies, Clark summarized his argument, stating, “any necessary teaching method could be designed into a variety of media presentations” (p. 22). However, it is important to note that certain media and technologies provide affordances that may not be otherwise available or that are more effectively used with those media and technologies.
In his book on distance and blended (a.k.a. hybrid) learning, Graham (2006) stated that online learning environments provide learners with flexibility in communicating outside the classroom. By communicating online, learners may increase their opportunities for speaking practice. Additionally, the digital nature of online communication makes it easier for learners to record and review their speech, allowing them to develop linguistic self-awareness. Both the additional opportunities and the heightened self-awareness promote increased speaking proficiency. Figure 1 illustrates these affordances and their relationship.
Figure 1. Relationship of online and multimedia-based communication to speaking proficiency.
Lin (2015) discussed these affordances in his meta-analysis of CMC use. Although he referred specifically to text-based communication, the affordances also apply to oral communication. He stated that CMC “provides L2 learners with an environment to practice language production at a reduced rate. The relatively reduced rate of exchange and lag-time induced by the text-chat software allows L2 learners ‘more time to both process incoming messages and produce and monitor their output’ (Sauro & Smith, 2010: 557)” (Lin, 2015: 264).
Similarly, in her meta-analysis of 14 studies involving CMC, Ziegler (2016) argued that CMC use provides learners with an opportunity to “notice [the] gaps between their interlanguage and the target language” (p. 575). Because of the time lag that Lin (2015) referred to, Ziegler (2016) found that CMC may be more beneficial than face-to-face communication in the target language for developing productive language skills. So, although online oral activities may make use of the same methods that face-to-face activities use, the affordances of online activities may make them at least as effective as, and sometimes more practical than, face-to-face activities by increasing the quantity and quality of oral language practice.
Communication can be categorized as either synchronous, having little or no lag time, or asynchronous, having a long lag time, based on Graham’s (2006) description of distance learning environments (see Table 1). Although asynchronous and synchronous communication are similar in some ways, asynchronous communication provides opportunities that synchronous communication (or even classroom speaking activities) does not. First, synchronous communication is more conducive to interpersonal speaking. Ziegler (2016), in her synthesis of synchronous computer-mediated communication (SCMC) use, situated SCMC within the interaction hypothesis, arguing that it provides opportunities for interaction and negotiation of meaning. Asynchronous oral communication, on the other hand, can be considered a type of presentational speaking, a necessary skill in many occupations (see the American Council on the Teaching of Foreign Languages’ (2012) description of modes of communication for more information). However, it could be argued that even synchronous conversations consist, to a degree, of a series of mini-presentations. Although Kitade (2000) rightly argued that interlocutors need interaction skills and pragmatic competence when responding to one another in synchronous conversations, interlocutors sometimes respond by providing complete, continuous responses or by sharing anecdotes.
Table 1. Comparison of asynchronous and synchronous communication

| Asynchronous | Synchronous |
| --- | --- |
| Prepared | Spontaneous |
| Targets presentational speaking | Targets interpersonal speaking |
| Disposed to formal evaluation | Disposed to impromptu, informal evaluation |
| Revisionary | Single occurrence |
Second, asynchronous communication more naturally promotes planning before the speech act whereas synchronous communication tends to be more spontaneous. Crookes (1989) discussed the value of pre-task planning to improve non-spontaneous language output. In his study, 40 Japanese learners of English participated in two oral explanation tasks. Group 1 (n=20) was given no preparation and planning time before participating in the task. Group 2 (n=20) was given 10 minutes of preparation and planning before the tasks. Crookes found that learners who planned their output generally produced a greater variety of lexis, more complex language, and more detailed descriptions.
Third, asynchronous communication more naturally allows learners to watch or listen to their own performance and conduct self-evaluation. Instructors and learners in many domains have used video recordings of learner behavior to increase self-awareness and determine what skills they need to focus on. Examples can be found in sports (Hastie, Brock, Mowling, & Eiler, 2012) and medicine (Jamshidi, LeMasters, Eisenberg, Duh & Curet, 2009). In Jamshidi et al.’s (2009) study involving junior surgeons practicing laparoscopic suturing skills, learners benefited from reviewing video recordings of their practice attempts. The learners grew in terms of both self-awareness and skill in part because video recording “provides a matrix of information identical to what was available during the operation itself” (p. 625). This is particularly important in language learning, where the learner’s memory is so taxed by constructing a message that they may not be wholly aware of the actual language they are producing. Video provides them with the opportunity to hear exactly what they said. In fact, Jamshidi et al. (2009) argued that this type of video review can be used not only for post-performance assessment but also in pre-performance planning (p. 625).
Fourth, because of its recorded nature, asynchronous communication enables learners to revise and rerecord their performance so that they can publish their best version. Learners have long had the opportunity to improve their composition writing by creating several drafts before submitting a final version. Although learners can also practice oral presentations before a live audience (e.g., a classmate) or in front of a mirror prior to their final performance, asynchronous multimedia-based oral communication (AMOC) provides another outlet for this kind of practice that can be done in the learner’s own time. A further benefit that live practice does not afford is that AMOC allows the learner to select the best video or audio draft to submit, rather than having to submit the final performance. Additionally, in some draft-writing processes, learners are even asked to focus on revising a specific element of their writing (e.g., spelling or paragraph structure). Castañeda and Rodríguez-González (2011) incorporated this kind of process in their study of nine university-level learners of Spanish and found that learners increased in terms of speaking, analytic, and evaluation skills.
Although AMOC is generally better suited to promoting self-awareness, revision, and presentational speaking skills, synchronous communication seems to be the more popular of the two in blended language learning environments. It is easy to assume that synchronous communication is better for improving learner speaking proficiency, given its shorter lag time and better simulation of face-to-face conversation. Because of this, we risk falling into the trap of relegating AMOC to the status of a technology we only use when we lack the bandwidth and hardware to support synchronous conversation. Yet, given that AMOC provides affordances that synchronous communication does not, it can serve different purposes.
However, even though AMOC can provide learners with opportunities to develop their linguistic self-awareness and improve their speaking skills, there is no guarantee that learners will make these gains by participating in oral asynchronous activities. The purpose of this literature review, then, is to explore how AMOC has been used to improve speaking skills. Additionally, we examine the methodologies that previous research has used to measure improvements in speaking skills. Thus, in this study we will address the following research questions:
Question 1: What language traits are being promoted with AMOC?
Question 2: What are the challenges to effective use of AMOC?
Question 3: What methods and activities have been used in conjunction with AMOC?
Question 4: What methodologies are commonly used to measure and analyze language gains from using AMOC to improve learner speaking skills?
2. Methodology
Literature was located using Academic Search Premier, ERIC, JSTOR, and Scopus. The following combinations of search terms were used: asynchronous video + language, asynchronous CMC + language, asynchronous + speaking + language, video-mediated communication + language, vlog + language, Wimba + language, oral CMC, video drafts + language, and blended learning + video + language. Literature was limited to that published before early 2016.
2.1. Inclusion / exclusion criteria
The following criteria were used to determine which studies to include in this analysis: relevance, outlet type, and analysis methods (see Table 2).
Table 2. Summary of inclusion/exclusion criteria

| Criterion | Definition |
| --- | --- |
| Relevance | University-level, learner-created asynchronous oral (audio or video) productions; research focuses on language gains |
| Outlet type | Peer-reviewed journal articles |
| Analysis methods | Qualitative and quantitative methods |
2.1.1. Relevance
We used the following criteria to determine if studies were sufficiently relevant to this discussion:
Studies must address asynchronous audio or video communication.
Videos must be learner created.
Studies must discuss how learners improved language skills by producing videos.
Studies must discuss university level class implementation in order to maintain comparability between studies.
2.1.2. Outlet type
Only peer-reviewed journal articles were included in this review; book chapters and conference proceedings, although often useful, were excluded in order to maintain a higher standard for inclusion.
2.1.3. Analysis methods
Only articles reporting qualitative or quantitative studies were included. This criterion is particularly relevant for research question 1, where both empirical and qualitative information clarify how well learning is taking place. For instance, in Kormos and Dénes’ (2004) study, speaking fluency was described in terms of specific, empirical measurements, which enables us to compare fluency across studies. On the other hand, Castañeda and Rodríguez-González (2011) shared learner feedback from self-evaluations after participating in an asynchronous video intervention. While this qualitative data did not provide as clear a means of comparing learning effectiveness as did Kormos and Dénes’ (2004) study, it did provide insights into the learners’ experiences, and it provided other information that might not have been solicited or considered in an empirical study. For instance, one learner discussed the concept of anxiety in their responses (Castañeda & Rodríguez-González, 2011), which is an important aspect of the use of asynchronous video communication but would not necessarily be considered in a comparison of fluency gains. Theory and design articles were not included unless they also included either a qualitative or quantitative study showing the effect of their theory or design in practice.
2.1.4. Examples of inclusion/exclusion
Table 3 displays examples of articles found during the literature search along with an indication of whether the example article met a given criterion (“X”) or did not meet the criterion (“—”). This is meant to illustrate our decision process in choosing which articles to include for review. Of the examples shown in Table 3, only Hirotani and Lyddon (2013) met all three criteria and was, therefore, the only one included in this literature review. Tiraboschi and Iovino (2009) presented activities and a related technology but did not focus on the learning effects of implementing the activities and technology or present any data. Hirotani’s (2009) article focused on text-based CMC rather than audio or video CMC. Ono, Onishi, Ishihara, and Yamashiro (2015) presented a paper that was published in conference proceedings, which did not meet the requirement of being a peer-reviewed journal article. Lamy and Goodfellow (1999) focused on text-based CMC and, moreover, examined the language used during the CMC tasks rather than language gains from using those tasks.
Table 3. Examples and non-examples of articles found in the literature search

| Example/Non-example | Relevance | Outlet type | Analysis methods | Reason for exclusion |
| --- | --- | --- | --- | --- |
| Tiraboschi & Iovino (2009) | — | X | — | No data/design showcase |
| Hirotani (2009) | — | X | X | Text-based CMC |
| Hirotani & Lyddon (2013) | X | X | X | NA |
| Ono, Onishi, Ishihara, & Yamashiro (2015) | X | — | X | Conference proceeding in book |
| Lamy & Goodfellow (1999) | — | X | X | Text-based CMC; does not focus on language gains |
2.2. Search results
Using the aforementioned search terms and inclusion/exclusion criteria, 22 articles were located (see citations for these articles in the Appendix).
3. Using AMOC in language learning
From this pool of articles, we identified several factors that affect the effectiveness of AMOC activities in language learning contexts. This section begins with a description of the linguistic traits that AMOC activities have been used to improve, then moves to a discussion of challenges inherent in using AMOC, and concludes with a discussion of the effectiveness of various methods of using AMOC to improve those traits.
3.1. Using AMOC to develop specific language traits
In this section, we address the question of what language traits are being promoted with AMOC, focusing on accuracy, fluency, and pronunciation. Although AMOC has been used to help learners develop several different linguistic traits, we found that these three in particular need to be treated with more rigor.
3.1.1. Accuracy
By using AMOC, learners are able to increase the accuracy of their speech. In a study on the effects of using AMOC in an ESL writing course, Engin (2014) interviewed participants and analyzed questionnaires, finding that students believed their linguistic accuracy increased as a result of creating their videos. Learners were expected to create English writing explanations (tutorials) for other students in their class in video format. Because of the responsibility of teaching placed upon them and peer dependence on their creating a clear, effective explanation, learners felt compelled to produce linguistically accurate explanations and reduce the number of mistakes in their performance. Engin cited one learner’s interview response that the video activity helped their accuracy: “It is a good thing to worry about our English because we improve our English” (2014: 19). Unfortunately, it is not clear in what ways learner speech increased in accuracy, nor what basis learners had for determining whether their accuracy increased. Although Engin’s findings suggest that AMOC can be used to improve accuracy, additional data and analysis procedures would provide a more rigorous, reliable, and trustworthy basis for determining that learner speech became more accurate through producing these videos.
3.1.2. Fluency
Learners using AMOC are also able to develop fluency. In his study of Japanese EFL students, Gromik (2012) found that learners increased their speech rate by 37% over the course of a 13-week video production intervention, comparing average speech production of the first and final weeks. Although the average speech rate of the first week was significantly lower than all subsequent weeks, suggesting that some of the learners’ improvement may be attributed to familiarization with the task and the technology, Gromik demonstrated a general increase in speech rate attributable to learner production of asynchronous videos.
Despite the generally positive findings of Gromik’s (2012) study, his study leaves us with several questions. For instance, Gromik only considered the speech rate of short videos, where the task limited learners to 30-second video clips. It is unclear whether the learners in this study could sustain this speech rate. It is also unclear whether producing longer videos would offer the same advantage in helping learners develop a higher peak speech rate or a higher consistent speech rate. Gromik also considered only two closely related aspects of fluency: number of words produced and speech rate, or number of words produced per second.
While Gromik’s (2012) inclusion of two fluency measures is valuable, it does not represent the wide array of fluency measures available to researchers. In their study on the relationship between proficiency and fluency, Baker-Smemoe, Dewey, Bown, and Martinsen (2014) presented three major categories of speech fluency, each characterized by several different aspects, based on Segalowitz’s (2010) work on fluency. These categories are cognitive fluency, perceived fluency, and utterance fluency. Cognitive fluency refers to the ease with which a speaker is able to create and produce speech; perceived fluency refers to native speaker judgments of how easily the learner produces speech; and utterance fluency refers to measurable aspects of learner speech, including speech rate, hesitations and pausing.
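To make these utterance-fluency measures concrete, the following sketch shows how speech rate, pause counts, and mean length of run could be computed from a time-stamped transcript of a learner recording. It is a minimal illustration under assumed values: the word timings and the 0.25-second pause threshold are invented for the example and are not taken from any study reviewed here.

```python
# A minimal, hypothetical sketch of utterance-fluency measurement.
# Word timings and the pause threshold are invented example values.
PAUSE_THRESHOLD = 0.25  # seconds of silence counted as a pause (assumed)

# (word, start_time, end_time) in seconds -- hypothetical data
words = [("I", 0.00, 0.20), ("went", 0.20, 0.55), ("to", 0.90, 1.00),
         ("the", 1.00, 1.15), ("store", 1.15, 1.60), ("yesterday", 2.10, 2.80)]

total_time = words[-1][2] - words[0][1]
speech_rate = len(words) / total_time            # words per second

# Count silent pauses and split the speech into runs between pauses
pauses, runs, current_run = 0, [], 1
for (_, _, prev_end), (_, next_start, _) in zip(words, words[1:]):
    if next_start - prev_end >= PAUSE_THRESHOLD:
        pauses += 1
        runs.append(current_run)
        current_run = 0
    current_run += 1
runs.append(current_run)
mean_length_of_run = sum(runs) / len(runs)       # words per run

print(f"speech rate: {speech_rate:.2f} words/s, pauses: {pauses}, "
      f"mean length of run: {mean_length_of_run:.1f} words")
```

Measures of this kind can be computed in the same way for a 30-second clip or a longer presentation, which is one practical route to the questions about sustained speech rate raised above.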
Although Gromik’s (2012) study demonstrated the potential value of using AMOC to improve learner fluency, more evidence is needed in order to generalize his findings. Further research should consider the various categories of fluency and the effect of AMOC on fluency in longer videos.
3.1.3. Pronunciation
AMOC has also been shown to help learners develop their pronunciation. In a study involving 39 students of French, Lepore (2014) linked AMOC participation to the learners’ perceptions of improvement in their pronunciation. Learners in this study used VoiceThread to produce three audio recordings in response to instructor-created prompts and then commented on one another’s recordings. After submitting their recordings, learners completed self-assessments, rating their pronunciation during the recordings.
As with Engin’s (2014) findings on increased accuracy, relying solely on the perceptions of untrained learners in Lepore’s (2014) study renders the validity of the findings questionable. Although Lepore’s self-assessment form provides multiple questions to help the learners think about their pronunciation development (e.g., pronunciation compared to peers’ pronunciation, pronunciation improvements as a result of using VoiceThread, and accuracy of specific vowels and consonants in French), it provides clear guidance neither on how learners should rate their pronunciation nor on what should be rated. In this case, a rubric identifying front rounded vowels, front unrounded vowels, back vowels, and difficult French consonants (e.g., /ʁ/), along with a rating scale, a series of descriptions of performance (e.g., native-like, somewhat native-like), or a series of characteristics (e.g., vowel was not rounded but was at correct height), might guide learners to more accurately and reliably assess their own pronunciation, as well as guide them in improving it.
3.1.4. Conclusions about these traits
AMOC has been used to promote language gains in terms of accuracy, fluency, and pronunciation. However, it is not clear what aspects of accuracy were improved through AMOC. For instance, it may be that oral ACMC activities are conducive to lexical accuracy but not syntactic accuracy, or the converse. Fluency seems more clearly affected by AMOC activities, as studies have used more clear and varied measurements to determine fluency gains. Finally, although AMOC was shown to promote pronunciation gains, the evidence supporting this notion is insufficient. This may be remedied through the use of more rigorously developed self-rating systems, through native-speaker raters, or through acoustic measurements, such as comparing learner consonant production with native-speaker production using PRAAT, a popular phonetic analysis program. In summary, AMOC has been shown to have the potential to promote language gains in various linguistic aspects, but additional studies and more rigorous research methods are needed to confirm this.
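As a purely illustrative example of the kind of acoustic comparison mentioned above, the sketch below uses parselmouth, a Python interface to Praat, to extract the first two formants at the midpoint of a recording so that a learner's vowel token could be set beside a native speaker's. The file names are hypothetical, and the exact calls are an assumption that would need adapting to segmented, comparable recordings.

```python
# A minimal sketch of an acoustic comparison using parselmouth (a Python
# interface to Praat). File names are hypothetical; a real analysis would
# first segment the recordings so that comparable vowel tokens are measured.
import parselmouth

def midpoint_formants(wav_path):
    """Return (F1, F2) in Hz at the temporal midpoint of a recording."""
    sound = parselmouth.Sound(wav_path)
    formants = sound.to_formant_burg()
    t = sound.duration / 2
    return formants.get_value_at_time(1, t), formants.get_value_at_time(2, t)

learner_f1, learner_f2 = midpoint_formants("learner_vowel.wav")  # hypothetical file
native_f1, native_f2 = midpoint_formants("native_vowel.wav")     # hypothetical file
print(f"Learner F1/F2: {learner_f1:.0f}/{learner_f2:.0f} Hz; "
      f"native F1/F2: {native_f1:.0f}/{native_f2:.0f} Hz")
```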
3.2. Methods and challenges in using AMOC
Although AMOC has been shown to be a promising medium for helping learners increase their fluency, accuracy, and pronunciation, the mere inclusion of AMOC in a learning environment does not guarantee these increases. The question remains, then, of how to effectively incorporate AMOC into a course curriculum and how to deal with the challenges that inevitably arise. In this section, we address research question 2 by discussing technological challenges that have arisen in previous studies, and address research question 3 by discussing methods and activities that have contributed to the effective use of AMOC in language learning. The methods and activities discussed are training activities, preparatory activities, project-based learning, and self-evaluation combined with revision.
3.2.1. Technological challenges and training
Although many factors affect the quantity and quality of language learning experiences, whether in a classroom or online, technological challenges in particular affect the learning experience during AMOC activities. A variety of technological challenges exist. A poor internet connection is a common challenge that can be experienced in any location. In their study of Malaysian learners using both audio and video recordings, Bakar, Latiff, and Hamat (2013) reported that even learners at a university experienced connectivity problems, affecting their access to the AMOC activities and thereby their level of participation. Hung’s (2012) learners in Taiwan also experienced poor internet connectivity.
In addition to internet problems, learners may experience hardware deficiencies and malfunctions. Learners in Bakar, Latiff, and Hamat’s (2013) study experienced hardware malfunctions that made it impossible to record their voices. Gleason and Suvorov (2012) stated that their learners also had trouble saving and editing their recordings. In Gromik’s (2012) study, some learners were unable to upload video files because they were too large. As these video recordings were 30 seconds or shorter, it seems likely that either some learners were unaware of how to select different codecs and file containers for exporting their video or that the recording software they used did not allow them the option to select different codecs or containers. Hung (2012) confirmed this challenge by stating that his learners had difficulties in converting video files into different formats. This was further complicated by the fact that the vlog (video web log) system used in his study only supported a limited set of file formats. Shih (2010) clarified the problem of file format and file size, adding that internet speed is an important and related factor. Thus, with higher internet speeds, file size may not always be a problem, but with lower internet speeds it will be.
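A back-of-the-envelope calculation illustrates why file size and connection speed interact in the way Shih (2010) described. The bitrates and upload speed below are assumed example values, not figures reported in any of the studies reviewed.

```python
# Rough estimate of upload time for a short video clip.
# Bitrates and upload speed are assumed example values, not study data.
def upload_seconds(duration_s, video_bitrate_mbps, upload_speed_mbps):
    """Approximate upload time in seconds, ignoring protocol overhead."""
    file_size_megabits = duration_s * video_bitrate_mbps
    return file_size_megabits / upload_speed_mbps

# A 30-second clip at 8 Mbps (roughly 30 MB) vs. the same clip re-encoded at 1 Mbps
for bitrate in (8, 1):
    t = upload_seconds(30, bitrate, upload_speed_mbps=0.5)  # slow 0.5 Mbps uplink
    print(f"{bitrate} Mbps clip over a 0.5 Mbps uplink: about {t / 60:.0f} minute(s)")
```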
Regarding the problem of access to video recording equipment and editing software, Fukushima (2002) argued that in 2002 the cost of equipment and software licenses was, in fact, not an inhibiting factor for implementing video projects in a language class. By 2018, the affordability and availability of basic editing software and recording equipment have likely increased further, leading to better access. This is particularly true when one considers that many university students in the United States own a mobile phone capable of recording high-definition video and performing basic video editing tasks, allowing them to record and edit at any time and in any place. Advanced editing functionality is not necessary for most AMOC tasks, which only require the learner to record a simple video, review it, and then record an additional take rather than splice video segments.
However, because not all learners have mobile phones, and some phones cannot record or edit video, it is important to provide other means of recording and editing video files. One way to make recording equipment and editing software available to learners is through university media labs. Some universities offer multimedia labs that loan recording equipment and provide computer stations with editing software; some even offer training in the use of the equipment and software. One drawback to these labs, however, is that they may not provide a suitable environment for recording. As Lepore (2014) stated, a lab setting might lead some learners to reduce their recording quality by speaking softly so as not to disturb other lab users. Background noise might also interfere with recording quality. Despite these drawbacks, labs offer a possible solution to hardware and software challenges, yet both learners and instructors are frequently unaware of their existence at their university.
Compounding the technological challenges, many learners do not have sufficient experience using the hardware or software needed to participate in AMOC. Responding to this lack of experience, Bakar, Latiff, and Hamat (2013: 232) stated that their learners would benefit from technical training “so that they are familiar with the online devices and would feel less awkward when utilizing the features of the online tools.” One example of this kind of training took place in Abuseileek and Qatawneh’s (2013) study where learners were provided with basic instruction in using the AMOC software. Similarly, learners in Fukushima’s (2002) study were trained in video and audio editing.
Castañeda and Rodríguez-González (2011) conducted a study on the effects of self-evaluation and iterative video speech revisions on learners’ linguistic self-awareness and speaking skills. In this study, nine intermediate-level Spanish language learners participated in a training activity in which they submitted trial videos prior to participating in the intervention. They created a trial video, following the same procedures they would use to create the videos for the intervention. While the researchers did not mention any specific instruction in how to use the hardware or software, learners nevertheless gained experience in the recording and uploading processes that were required of them in the intervention.
The researchers analyzed the learners’ self-evaluation forms to determine if learners felt they had made improvement. In their study, Castañeda and Rodríguez-González (2011) did not report any learner dissatisfaction with AMOC caused by technological problems. This may be attributed in part to the carefully organized learning activities (learners participated in four cycles of video recordings and subsequent self-evaluation prior to final submission) but also in part to the technical training learners received.
On the other hand, some learners in Dona, Stover, and Broughton’s (2014) study who attended a software training session at the beginning of the course still reported having technological challenges. The researchers cited low learner tolerance for learning new technologies as one cause for this problem, and unclear tutorials as a second. While it is not expected that any training activity would solve all technological challenges, a clear description of the training provided would help in discovering how the training could be clearer and how to adapt the training to learners with low tolerance for new technologies.
In Goulah’s (2007) ethnographic case study of eight Japanese language learners, learners were not given any formal training on how to use the recording hardware or editing software. Rather, students with prior experience in recording and editing (whether they gained their experience prior to the course or during the first cycle of the intervention activity) became the experts in the second cycle and assisted other learners at that point. In this case, training was done informally by peers, rather than as a formal instructional session by the instructor or researcher. The value in this approach is that learners may, in fact, learn more from someone with a similar status and may learn more because they are receiving instruction while working with the hardware or software. The danger is that instructors cannot guarantee they will have learners with prior experience, and that it may take learners a much longer time to familiarize themselves with the hardware and software before being able to train their peers.
Although it appears training is valuable in alleviating some of the technological challenges that learners face, there are different ways of providing that training, and it should be carefully designed. Training may be conducted formally by the instructor or another expert (Dona, Stover, & Broughton, 2014) or informally by a more knowledgeable peer (Goulah, 2007). Knowing which learners have prior experience with hardware and software is invaluable if peer-to-peer training is to be expected. Training should also be tailored to the particular learners as much as possible. Many learners are eager to work with new technology, but others are wary of it (Dona, Stover, & Broughton, 2014). Finally, in designing AMOC learning activities, designers must consider learner access to recording hardware and software in the first place. Some may be able to use a mobile phone or personal computer, but others may need access to a lab where they can make their recordings. Yet regardless of the exact nature of the training, training should be provided, as many learners lack the skills and equipment necessary to make their recordings, and addressing these deficiencies will help learners focus on their languaging rather than on the technological aspects of the activities.
3.2.2. Preparatory activities
One of the factors that increases the effectiveness of AMOC in developing speaking proficiency is the inclusion of a preparatory activity. Crookes (1989) described planning as a type of preparatory activity in his seminal paper involving 40 Japanese learners of English. He cited “consistent, small- to medium-sized effects in favor of the planned condition” (p. 379), as compared with a control group that did not have planning time. Preparatory activities can take a variety of forms. Bakar, Latiff, and Hamat (2013) described a simple preparatory activity in which learners were given “time to construct and develop their ideas or thoughts” (p. 232) prior to making their audio and video recordings. This preparation enabled the learners to produce more complex ideas. In order to create their video tutorials, Engin’s (2014) learners conducted their own research on their tutorial subjects, finding, evaluating, selecting, and finally summarizing their sources. This task made the learners responsible for their learning and pushed them to spend time becoming very familiar with their topic, with the result that students both became experts on their topic and developed speaking proficiency.
Goulah (2007) outlined a more complex preparatory activity. Prior to recording their videos, learners in Goulah’s study watched videos related to their video topic and then created a storyboard for their video. The storyboard process involved drafting, presenting, negotiating, and finally settling on ideas as a group. Essentially, learners moved from input, to output, and finally to revision of their output, resulting in exposure to authentic language and more time on task. This kind of preparatory activity shifts the focus away from languaging for its own sake, as Knouzi, Swain, Lapkin, and Brooks (2010) use the term, and encourages learners to focus on task completion. Learners were able to experience a real need for language and a purposeful interaction in the target language.
3.2.3. Project-based learning
Incorporating AMOC tasks through project-based learning (PBL) can be an effective method of developing learner speaking skills. PBL does this by creating an authentic need to use the target language and by encouraging learners to use a variety of their target language skills and knowledge. In Goulah’s (2007) study involving eight intermediate learners of Japanese, learners followed a sequence of project-related activities in which they created commercials responding to challenging political and environmental questions. Their project participation resulted in both an increase of content knowledge and language gains.
Fukushima (2002: 353) conducted a study on the effects of PBL in which seven learners collaborated to produce a video promoting Japanese language learning. He described their participation as “self-directed,” highlighting that learners assigned their own tasks, set their own schedule, wrote their own scripts, and evaluated and revised their own performance. The result was that learners produced an authentic linguistic artefact that demonstrated and developed some of their language skills but did not encourage the level of linguistic output and development that the researcher had hoped for. Although language use was considered and reported on, Fukushima focused more attention on motivation and the development of technical skills than on proficiency and performance. A more thorough analysis of the learners’ performance in terms of linguistic dimensions, such as accuracy, fluency, and pronunciation, would allow for comparisons with similar learners and allow for a long-term study analyzing the learners’ linguistic development.
Although neither Goulah’s (2007) nor Fukushima’s (2002) study suggests that PBL is an efficient means of bringing about language gains, both demonstrated that PBL has the potential to create authentic needs for language learning by motivating learners and giving them opportunities to express themselves. Further studies building on Goulah’s (2007) and Fukushima’s (2002) work should demonstrate ways in which we can efficiently use project-based oral ACMC to create authentic linguistic needs, motivate learners, and bring about significant language gains.
3.2.4. Self-evaluation and revision
In addition to other methods and techniques of incorporating AMOC into learning environments, researchers have found that self-evaluation helps learners achieve language gains. Due to the recorded nature of asynchronous audio and video, learners are not only able to produce spoken output but can also listen to their own performance and discover areas of weakness and areas of strength. For instance, most learners (76%) in Hung’s (2011) study of Chinese learners of English agreed that participating in creating vlogs helped them reflect on their learning. One learner described the value of the AMOC project in helping them become aware of their weaknesses and make improvements by stating, “I can redo the clips again and again until they looked [sic] satisfactory” (Hung, 2011: 742). Lepore (2014) indicated that self-evaluation through AMOC was one of the factors involved in increasing learner willingness to communicate, which itself leads to an increased quantity of practice. Dixon and Hondo (2014) reported positive learner impressions of the value of AMOC in making them more aware of their speech production, enabling them to make corrections.
Castañeda and Rodríguez-González (2011) conducted a study in which nine university-level learners of Spanish produced videos of themselves responding to instructor-generated prompts. Learners in this study responded to a prompt by recording an initial video draft and then conducting a self-evaluation of that draft. They then recorded a second draft and conducted a second self-evaluation. Learners followed the same two-draft, two-self-evaluation process in responding to an altered version of the first prompt, although those drafts were labeled as the third and fourth drafts. For the self-evaluation, learners watched their recordings, noted mistakes, and then recorded an improved version.
Learners in Castañeda and Rodríguez-González’s (2011: 491) study reported increased awareness of their weaknesses as well as improvements in their grammatical accuracy, pronunciation accuracy, and fluency. Demonstrating increased awareness, one learner stated, “I also noticed my adjective endings weren’t correct.” Another learner commented on the effect of the self-evaluation and revision cycles: “as we do more recordings, the pauses are becoming less frequent.” Castañeda and Rodríguez-González attributed these gains at least in part to the self-evaluation and revision activities.
Of course, incorporating self-evaluation using AMOC does not automatically lead to language gains. Gleason and Suvorov (2012) found that learners were only partially in agreement (M = 3.78 on a 5-point scale) that their language skills increased after using AMOC and conducting a self-evaluation. In fact, some learners’ perceptions of the value of the intervention actually decreased after participating. In their study, learners each recorded three presentations to share with their peers. They then watched their recordings later to determine if they had made improvements. There is no mention, however, of asking the learners to evaluate their performance and then make changes to their original recording, or to focus on weak areas in subsequent recordings. It seems that learners did not conduct their self-evaluations until after they had completed all their recordings.
Castañeda and Rodríguez-González’s (2011) study demonstrated the potential value of combining AMOC with learner self-evaluation and revision cycles. The self-evaluations informed learners of weaknesses and mistakes that learners addressed in subsequent video drafts. Additionally, learners participated in four cycles of self-evaluation and revision. In contrast, learners in Gleason and Suvorov’s (2012) study either did not have or did not take the opportunity to improve their recordings based on their self-evaluations. The result was that many did not feel participation in the AMOC activity led to language gains. Thus, while AMOC can be used to create language gains, a structured approach involving both self-evaluation and revision across multiple cycles is more likely to lead to those gains.
3.2.5. Conclusions regarding AMOC methods and challenges
There are a number of things instructors and designers can do to increase the effectiveness of AMOC activities. First, it is important to investigate the learners’ hardware and software needs, provide equipment or a lab environment if necessary, and provide training on the creation and sharing of asynchronous audio and video files. If internet speed is a problem, audio might be a more useful option than video, as audio files tend to be much smaller. Second, preparatory activities will improve learner performance. Preparatory activities range in simplicity from brainstorming ideas before recording to viewing related input and then creating a storyboard. Third, project-based learning in AMOC creates authentic needs for learning and encourages learners to be more self-directed. Finally, cycles of structured self-evaluation followed by revisions may raise learners’ linguistic self-awareness and provide them with the opportunity to learn from their heightened awareness.
With those benefits in mind, it is important to note that these methods will not guarantee effective and efficient learning through AMOC. Designers and instructors must incorporate them appropriately, according to the curriculum and the needs of the particular learners. Furthermore, future research is needed to investigate effective methods of incorporating AMOC into a curriculum and to what degree its successful use can be generalized across university-level language learners.
4. Methodologies for measuring and analyzing language gains in AMOC
In this section, we address research question 4. The authors of the articles considered in this review used several methods to determine whether AMOC activities brought about learner language gains. In terms of data type, they analyzed surveys, journals, and reflections; learner audio and video recordings; interview transcripts; and researcher observation notes. Table 4 displays the frequency of use for each data type. In terms of data analysis type, researchers used qualitative analysis, descriptive measurements, quantitative comparison, expert evaluation, and correlation. Table 5 displays the number of studies that used each data analysis type. Each data type and analysis type used by a given study was counted individually. Thus, if a study incorporated surveys, interviews, and recordings, as in Shih (2012), the frequency for surveys, interviews, and recordings would each be increased by one. In this way, the total count for data types and analysis types exceeds the total number of studies reviewed. Appendix B displays the data and analysis type(s) considered in each study.
Table 4. Frequency of data types

| Data type | Frequency |
| --- | --- |
| Surveys, journals, and reflections | 16 |
| Audio & video recordings | 12 |
| Interview transcripts | 10 |
| Observation notes | 2 |
Table 5. Frequency of data analysis types

| Analysis type | Frequency |
| --- | --- |
| Qualitative analysis | 16 |
| Descriptive measurements | 13 |
| Quantitative comparison | 6 |
| Expert evaluation | 5 |
| Correlation | 3 |
| Unknown / unstated | 1 |
4.1. Data sources
Surveys, journals, and reflections constituted the most common category of data type for determining whether AMOC activities were effective in promoting language gains. Surveys, journals, and reflections were combined into this single category because they contained the learners’ perceptions of their language gains. Many surveys resembled the journals and reflections in that they provided learners with open-ended questions regarding their learning experience, thus increasing the similarity between survey data and journal and reflection data. For instance, Goulah (2007: 65) used surveys to discover that participants felt they learned vocabulary and grammar, referring to his surveys simply as “open-ended questionnaires.” Others, however, used surveys to collect data on learner opinions of AMOC technology and activities. One example is Hung’s (2011: 742) survey, which largely focused on learner attitudes measured on a five-point scale (e.g., “the vlog helped me reflect on my learning in this course”), though it also contained a question related to learner perceptions of language gains (“the vlog helped me organise learning in this course”).
Interview data, while the third most common of the four categories, resembled survey, journal, and reflection data, differing only in that interviewers personally elicited learner responses rather than providing learners with written questions. Like surveys, interviews focused on learner perceptions of language gains (e.g., Kirkgöz, 2011), as well as attitudes (e.g., Hung, 2011; Yaneske & Oates, 2010). In fact, survey and interview data proved so similar that many researchers did not state which themes emerged from survey data and which emerged from interview data.
Audio and video recordings were used as a source of data in roughly one half of the studies considered in this review (n = 12). Recordings were either coded for qualitative analysis (n = 6), measured and assigned descriptive statistics (n = 4), or assessed using expert evaluation (n = 4). Three studies used two different analysis types on the recordings (Kormos & Dénes, 2004; Sun, 2012; Sun & Yang, 2015).
4.2. Data analyses
Qualitative analysis was the most common data analysis type found in this study. The term qualitative analysis as used in this study refers to any type of coding and categorizing activities. Conversation analysis and discourse analysis were included in this category.
Descriptive measurement was the second most common analysis type. This term refers to frequency counts, means, and standard deviations. It was frequently used in conjunction with qualitative analysis, as in Shih (2010), where the frequency of codes found in learner reflections was counted and means were calculated for survey responses. Some studies, however, provided empirical descriptions of learner language based on their recordings. For instance, Kormos and Dénes (2004: 154) reported 13 statistics, including speech rate, number of words, and mean length of run.
Quantitative comparison refers to quantitative tests used to compare either survey data or learner performance on recordings. In one of the studies (Gromik, 2012), the researcher used a t-test to compare learner opinions of the value of using a mobile phone in AMOC activities. In the other five studies using quantitative comparison, the researchers assessed linguistic performance by analyzing recordings and language performance tests. For example, in a study of Turkish learners of English (Kirkgöz, 2011), the means of pre-tests and post-tests were compared using a t-test.
Quantitative comparison was used to study the variety of question types and question strategies used (Abuseileek & Qatawneh, 2013); opinions regarding mobile phone use (Gromik, 2012); “fluency, pronunciation, vocabulary, accuracy and task accomplishment” (Kirkgöz, 2011: 4); fluency (13 different measurements) (Kormos & Dénes, 2004); fluency, pronunciation, complexity, and accuracy (Sun, 2012); and pronunciation and grammar (Tognozzi & Truong, 2009).
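For readers unfamiliar with the pre-test/post-test comparisons described above, the following sketch shows a paired t-test of the general kind reported by Kirkgöz (2011). The scores are invented for illustration and are not data from any study in this review.

```python
# A minimal sketch of a paired (pre-test vs. post-test) comparison.
# The scores below are invented for illustration only.
from scipy import stats

pre_test  = [61, 55, 70, 64, 58, 67, 60, 72]   # hypothetical speaking scores
post_test = [68, 59, 74, 70, 63, 71, 66, 78]

t_statistic, p_value = stats.ttest_rel(pre_test, post_test)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
```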
Expert evaluation refers to either a researcher or instructor’s assessment of the learners’ performance. For example, Kirkgöz (2011: 4) created a rating scale to assess learner performance in terms of “fluency, pronunciation, vocabulary, accuracy and task completion,” which she later used for quantitative comparison. Similarly, in Kormos and Dénes’ (2004) study, three native and non-native speakers rated the learners’ performance in the AMOC task.
4.3. Conclusions on methodologies
It is puzzling that a majority of studies in this review focused on learner perceptions of language gains without considering expert evaluations or empirical measurements of learner performance. That is, although survey, journal, and reflection data constituted only a marginally larger category than the use of recordings as data, if it were combined with interview data to create the broader category of learner perceptions, that category would contain roughly twice as many instances of data collection (n = 25) as the recordings category (n = 12). It is worth noting that this is a count of the instances in which each collection method was encountered; a single article may have used both surveys and interviews. In other words, researchers relied more heavily on learner perceptions of speech production than on recorded speech production when studying AMOC in language learning, including in studies focusing on the effect of AMOC on learner language gains.
While learner perceptions of linguistic growth and of activity effectiveness are no doubt important aspects in evaluating AMOC and its associated activities, the use of learner perceptions as the sole means of determining this growth and effectiveness is fraught with validity issues. It is doubtful that learners are the best judges of language improvement. First, learners are not experts in the language and therefore frequently do not know when they are saying something correctly or incorrectly. Second, they are not trained in noticing different aspects of their own speech. Finally, they are not trained in reliably rating their linguistic performance.
Learner perceptions may still be of value when combined with other analysis methods. One method is expert evaluation. Native speakers and highly proficient non-native speakers are more familiar with the language and can more accurately determine the quality and accuracy of the learner’s performance. Objective measurements, such as words produced per second, will provide even more accurate evidence regarding some aspects of learner performance, such as fluency. Taken together, learner perceptions, expert evaluation, and objective measurements would enable researchers to more accurately evaluate learner language gains from using AMOC.
5. Conclusions
AMOC can be beneficial to learners in promoting language gains. Studies considered in this review investigated its effects on accuracy, fluency, and pronunciation, showing that it can be a useful technology in helping learners develop these aspects of their language. However, the research does not universally show that AMOC leads to language gains. Additional studies on the effectiveness of using AMOC would enable us to determine with greater reliability whether it is a viable means of promoting language gains. Additionally, the scope of studies should extend beyond grammatical accuracy, fluency, and pronunciation to include such linguistic aspects as complexity, lexical accuracy, and lexical variety (to name a few).
However, we did identify several factors that contribute to effective use of AMOC in a language-learning curriculum. In designing AMOC activities, instructors and designers should consider the learners’ access to hardware and software as well as their internet speed. Because many learners are not familiar with recording and editing software, learners will benefit from technical training. Learners will also benefit from structured self-evaluation and revision cycles, preparatory activities, and project-based learning.
Current research on the effectiveness of AMOC on speaking performance focuses heavily on learner perceptions of language gains. Although learner perceptions can give us clues about learners’ linguistic self-awareness and their experience as AMOC users, they are not an appropriate data source for inferential studies, nor are they the only factor that should be considered by instructors or programs deciding whether or how to implement AMOC activities. Triangulating with other data sources (such as recordings of learner speech) and other analysis types (such as expert evaluation and empirical measurements) would allow researchers to make more accurate claims about the effectiveness of AMOC in promoting foreign language gains. This review shows that there are several studies on the qualitative effects of AMOC but few providing empirical evidence of linguistic gains through AMOC. What is still lacking is an analysis of whether each study’s data and analysis types match the study’s claims and conclusions. Such an analysis would help us to better evaluate the trustworthiness of the various conclusions about the usefulness and effectiveness of AMOC.
In this review, audio-based and video-based AMOC were studied together. However, it is not clear whether video-based AMOC is more or less effective at promoting language gains than audio-based AMOC. It is possible that video may be detrimental for some learners in that it will likely increase anxiety compared to audio. On the other hand, video provides a higher-fidelity experience when communicating with other learners or the instructor. A purposeful comparison would help determine whether purely audio-based or purely video-based AMOC is generally more effective, or to which situations and learner types each is best suited.
A final note is that while self-evaluations and revisions promote language gains, it is unclear what systems for self-evaluating and revising are most effective. For instance, is one cycle of video drafting sufficient or must learners follow three or four cycles before they become sufficiently aware and make sufficient revisions? Furthermore, to what degree do learners even follow the specified self-evaluation and review processes? That is, we do not know the extent to which learners revise their recordings after self-evaluating.
AMOC remains an intriguing means of promoting spoken language gains, but further research is needed to determine what aspects of spoken language it is best suited to developing and how to effectively incorporate it into a curriculum. AMOC does not appear to be, as some may think, inferior to face-to-face or other synchronous forms of communication. The continued popularity of asynchronous social media, such as Twitter, Snapchat, and YouTube, suggests that it is important to study and understand the unique outcomes and situations where each method can be most useful.
References
Abuseileek, A. F., & Qatawneh, K. (2013). Effects of synchronous and asynchronous computer-mediated communication (CMC) oral conversations on English language learners’ discourse functions. Computers & Education, 62, 181–190. doi:10.1016/j.compedu.2012.10.013
American Council on the Teaching of Foreign Languages. (2012). Performance descriptors for language learners. http://www.actfl.org/publications/guidelines-and-manuals/actfl-performance-descriptors-language-learners
Bakar, N. A., Latiff, H., & Hamat, A. (2013). Enhancing ESL learners speaking skills through asynchronous online discussion forum. Asian Social Science, 9(9), 224–234. doi:10.5539/ass.v9n9p224
Baker-Smemoe, W., Dewey, D. P., Bown, J., & Martinsen, R. A. (2014). Does measuring L2 utterance fluency equal measuring overall L2 proficiency? Evidence from five languages. Foreign Language Annals, 47(4), 707–728. doi: 10.1111/flan.12110
Castañeda, M., & Rodríguez-González, E. (2011). L2 speaking self-ability perceptions through multiple video speech drafts. Hispania, 94(3), 483–501.
Clark, R. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29. doi: 10.1007/BF02299088
Clifford, R. (2002). Achievement, performance, and proficiency testing. Paper presented at the Berkeley Language Center Colloquium on the Oral Proficiency Interview, University of California at Berkeley.
Crookes, G. (1989). Planning and interlanguage variation. Studies in Second Language Acquisition, 11(4), 367–383.
Delaney, T. (2012). Quality and quantity of oral participation and English proficiency gains. Language Teaching Research, 16(4), 467–482. doi: 10.1177/1362168812455586
Dixon, E. M., & Hondo, J. (2014). Re-purposing an OER for the online language course: A case study of Deutsch Interaktiv by the Deutsche Welle. Computer Assisted Language Learning, 27(2), 109–121. doi: 10.1080/09588221.2013.818559
Dona, E., Stover, S., & Broughton, N. (2014). Modern languages and distance education: Thirteen days in the cloud. Turkish Online Journal of Distance Education, 15(3), 155–170.
Engin, M. (2014). Extending the flipped classroom model: Developing second language writing skills through student-created digital videos. Journal of the Scholarship of Teaching and Learning, 14(5), 12–26. doi:10.14434/josotlv14i5.12829
Fukushima, T. (2002). Promotional video production in a foreign language course. Foreign Language Annals, 35(3), 349–355.
Gleason, J. & Suvorov, R. (2012). Learner perceptions of asynchronous oral computer-mediated communication: Proficiency and second language selves. Canadian Journal of Applied Linguistics, 15(1), 100–121.
Goulah, J. (2007). Village voices, global visions: Digital video as a transformative foreign language learning tool. Foreign Language Annals, 40(1), 62–78. doi: 10.1111/j.1944-9720.2007.tb02854.x
Gromik, N. A. (2012). Computers & education cell phone video recording feature as a language learning tool: A case study. Computers & Education, 58(1), 223–230. doi: 10.1016/j.compedu.2011.06.013
Graham, C. (2006). Blended learning systems: Definition, current trends, and future directions. In Bonk, C. & Graham, C. (eds.), Handbook of blended learning: Global perspectives, local designs (pp. 3–21). San Francisco: Pfeiffer.
Hastie, P., Brock, S., Mowling, C. & Eiler, K. (2012). Third grade students’ self-assessment of basketball dribbling tasks. Journal of Physical Education and Sport, 12(4), 427–430. doi: 10.7752/jpes.2012.04063
Hirotani, M. (2009). Synchronous versus asynchronous CMC and transfer to Japanese oral performance. CALICO Journal, 26(2), 413–438.
Hirotani, M. & Lyddon, P. A. (2013). The development of L2 Japanese self-introductions in an asynchronous computer-mediated language exchange. Foreign Language Annals, 46(3), 469–490. doi: 10.1111/flan.12044
Hung, S. T. (2011). Pedagogical applications of Vlogs: An investigation into ESP learners’ perceptions. British Journal of Educational Technology, 42(5), 736–746. doi: 10.1111/j.1467-8535.2010.01086.x
Jamshidi, R., LaMasters, T., Eisenberg, D., Duh, Q. Y. & Curet, M. (2009). Video self-assessment augments development of videoscopic suturing skill. Journal of the American College of Surgeons, 209(5), 622–625. doi: 10.1016/j.jamcollsurg.2009.07.024
Karweit, N. (1984). Time on task reconsidered: Synthesis of research on time and learning. Educational Leadership, 41(8), 32–35.
Kirkgöz, Y. (2011). A blended learning study on implementing video recorded speaking tasks in task-based classroom instruction. Turkish Online Journal of Educational Technology, 10(4), 1–13.
Kitade, K. (2000). L2 learners’ discourse and SLA theories in CMC: Collaborative interaction in internet chat. Computer Assisted Language Learning, 13(2), 143–166. doi: 10.1076/0958-8221(200004)13
Kormos, J. & Dénes, M. (2004). Exploring measures and perceptions of fluency in the speech of second language learners. System, 32(2), 145–164. doi: 10.1016/j.system.2004.01.001
Lamy, M.-N. & Goodfellow, R. (1999). “Reflective conversation” in the virtual classroom. Language Learning & Technology, 2(2), 43–61.
Lepore, C. E. (2014). Influencing students’ pronunciation and willingness to communicate through interpersonal audio discussions. Dimension, 73–96.
Lin, H. (2015). Computer-mediated communication (CMC) in L2 oral proficiency development: A meta-analysis. ReCALL, 27(3), 261–287. doi: 10.1017/S095834401400041X
McIntosh, S., Braul, B. & Chao, T. (2003). A case study in asynchronous voice conferencing for language instruction. Educational Media International, 40(1), 63–73. doi: 10.1080/0952398032000092125
Ono, Y., Onishi, A., Ishihara, M. & Yamashiro, M. (2015). Voice-based computer mediated communication for individual practice to increase speaking proficiency: Construction and pilot study. In Zaphiris, P. & Ioannou, A. (eds.), Learning and collaboration technologies. LCT 2015. Lecture Notes in Computer Science, 9192. New York: Springer.
Pop, A., Tomuletiu, E. A. & David, D. (2011). EFL speaking communication with asynchronous voice tools for adult students. Procedia - Social and Behavioral Sciences, 15, 1199–1203. doi: 10.1016/j.sbspro.2011.03.262
Sauro, S. & Smith, B. (2010). Investigating L2 performance in text chat. Applied Linguistics, 31(4), 554–577.
Segalowitz, N. (2010). Cognitive bases of second language fluency. New York: Routledge.
Shih, R. (2010). Blended learning using video-based blogs: Public speaking for English as a second language students. Australasian Journal of Educational Technology, 26(6), 883–897.
Sun, Y. C. (2012). Examining the effectiveness of extensive speaking practice via voice blogs in a foreign language learning context. CALICO Journal, 29(3), 494–506.
Sun, Y. C. & Yang, F. Y. (2015). I help, therefore, I learn: Service learning on Web 2.0 in an EFL speaking class. Computer Assisted Language Learning, 28(3), 202–219. doi: 10.1080/09588221.2013.818555
Tiraboschi, T. & Iovino, D. (2009). Learning a foreign language through the media. Journal of E-Learning and Knowledge Society, 5(3), 133–137.
Tognozzi, E. & Truong, H. (2009). Proficiency and assessment using WIMBA voice technology. Italica, 86(1), 1–23.
Yaneske, E. & Oates, B. (2010). Using voice boards: Pedagogical design, technological implementation, evaluation and reflections. Australasian Journal of Educational Technology, 26(8), 233–250. doi: 10.3402/rlt.v18i3.10767
Ziegler, N. (2013). Synchronous computer-mediated communication and interaction: A research synthesis and meta-analysis (Doctoral dissertation). Washington, DC.
Appendix A
Articles Reviewed in this Study
Abuseileek, A. F., & Qatawneh, K. (2013). Effects of synchronous and asynchronous computer-mediated communication (CMC) oral conversations on English language learners’ discourse functions. Computers and Education, 62, 181–190. http://doi.org/10.1016/j.compedu.2012.10.013
Bakar, N. A., Latiff, H., & Hamat, A. (2013). Enhancing ESL learners speaking skills through asynchronous online discussion forum. Asian Social Science, 9(9), 224–234. http://doi.org/10.5539/ass.v9n9p224
Castañeda, M., & Rodríguez-González, E. (2011). L2 speaking self-ability perceptions through multiple video speech drafts. Hispania, 94(3), 483–501.
Dixon, E. M., & Hondo, J. (2014). Re-purposing an OER for the online language course: A case study of Deutsch Interaktiv by the Deutsche Welle. Computer Assisted Language Learning, 27(2), 109–121. http://doi.org/10.1080/09588221.2013.818559
Dona, E., Stover, S., & Broughton, N. (2014). Modern languages and distance education: Thirteen days in the cloud. Turkish Online Journal of Distance Education, 15(3), 155–170.
Engin, M. (2014). Extending the flipped classroom model: Developing second language writing skills through student-created digital videos. Journal of the Scholarship of Teaching and Learning, 14(5), 12–26. http://doi.org/10.14434/josotlv14i5.12829
Fukushima, T. (2002). Promotional video production in a foreign language course. Foreign Language Annals, 35(3), 349–355. http://doi.org/10.1111/j.1944-9720.2002.tb01860.x
Gleason, J., & Suvorov, R. (2012). Learner perceptions of asynchronous oral computer-mediated communication: Proficiency and second language selves. Canadian Journal of Applied Linguistics, 15(1), 100–121.
Goulah, J. (2007). Village voices, global visions: Digital video as a transformative foreign language learning tool. Foreign Language Annals, 40(1), 62–78. http://doi.org/10.1111/j.1944-9720.2007.tb02854.x
Gromik, N. A. (2012). Computers & education cell phone video recording feature as a language learning tool: A case study. Computers & Education, 58(1), 223–230. http://doi.org/10.1016/j.compedu.2011.06.013
Hirotani, M., & Lyddon, P. A. (2013). The development of L2 Japanese self-introductions in an asynchronous computer-mediated language exchange. Foreign Language Annals, 46(3), 469–490. http://doi.org/10.1111/flan.12044
Hung, S.-T. (2011). Pedagogical applications of Vlogs: An investigation into ESP learners’ perceptions. British Journal of Educational Technology, 42(5), 736–746. http://doi.org/10.1111/j.1467-8535.2010.01086.x
Kirkgöz, Y. (2011). A blended learning study on implementing video recorded speaking tasks in task-based classroom instruction. Turkish Online Journal of Educational Technology, 10(4), 1–13.
Kormos, J., & Dénes, M. (2004). Exploring measures and perceptions of fluency in the speech of second language learners. System, 32(2), 145–164. http://doi.org/10.1016/j.system.2004.01.001
Lepore, C. E. (2014). Influencing students’ pronunciation and willingness to communicate through interpersonal audio discussions. Dimension, 73–96.
McIntosh, S., Braul, B., & Chao, T. (2003). A case study in asynchronous voice conferencing for language instruction. Educational Media International, 40(1), 63–73. http://doi.org/10.1080/0952398032000092125
Pop, A., Tomuletiu, E. A., & David, D. (2011). EFL speaking communication with asynchronous voice tools for adult students. Procedia - Social and Behavioral Sciences, 15, 1199–1203. http://doi.org/10.1016/j.sbspro.2011.03.262
Shih, R. (2010). Blended learning using video-based blogs: Public speaking for English as a second language students. Australasian Journal of Educational Technology, 26(6), 883–897.
Sun, Y.-C. (2012). Examining the effectiveness of extensive speaking practice via voice blogs in a foreign language learning context. CALICO Journal, 29(3), 494–506.
Sun, Y.-C., & Yang, F.-Y. (2015). I help, therefore, I learn: Service learning on Web 2.0 in an EFL speaking class. Computer Assisted Language Learning, 28(3), 202–219. http://doi.org/10.1080/09588221.2013.818555
Tognozzi, E., & Truong, H. (2009). Proficiency and assessment using WIMBA voice technology. Italica, 86(1), 1–23.
Yaneske, E., & Oates, B. (2010). Using voice boards: Pedagogical design, technological implementation, evaluation and reflections. Australasian Journal of Educational Technology, 26(8), 233–250. http://doi.org/10.3402/rlt.v18i3.10767
Appendix B
Comparison of articles reviewed