Collaborative peer feedback and learning analytics: Theory-oriented design for supporting class-wide interventions

Although dialogue can augment the impact of feedback on student learning, dialogic feedback is often unaffordable for instructors teaching large classes. In this regard, peer feedback can offer a scalable and effective solution. However, existing practices optimistically rely on students' discussion of feedback and lack a systematic design approach. In this paper, we propose a theoretical framework of collaborative peer feedback which structures feedback dialogue into three distinct phases and outlines the learning processes involved in each of them. Then, we present a web-based platform, called Synergy, which is designed to facilitate collaborative peer feedback as conceptualised in the theoretical framework. To enable instructor support and facilitation during the feedback practice, we propose learning analytics support integrated into Synergy. The consolidated model of learning analytics, which identifies three critical elements for creating impactful learning analytics practice (theory, design, and data science), was employed to build the analytics support. The learning analytics support aims to guide instructors' class-wide actions toward improving students' learning experiences during the three phases of peer feedback. The actionable insights that the learning analytics support offers are discussed with examples.


Introduction
Feedback is one of the greatest influences on learning (Hattie & Timperley, 2007). The impact of feedback on learning, however, depends on (besides other factors such as timing and quality) the degree to which such feedback is perceived and used by students (Carless, Salter, Yang, & Lam, 2011). Recently, dialogue has been suggested as an integral element of feedback that can support its uptake (Yang & Carless, 2013).
Accordingly, research showed that when engaged in dialogue around feedback, students can negotiate meaning from feedback and build strategies to use the feedback for improving their learning and progressing on tasks (Nicol, 2010).
Several studies conceptualized dialogic feedback and proposed guidelines for instructors to maintain effective feedback dialogue. For example, Yang and Carless (2013) proposed the feedback triangle, which concerns the interplay among three dimensions of (teacher-centred) dialogic feedback to promote student learning: the cognitive (i.e., feedback content), social-affective (i.e., the role of social relationships and emotions), and structural dimensions (i.e., the organization of feedback provision).
The authors, based on this conceptualization, provided some suggestions for instructors to consider in their feedback practices. Moreover, Steen-Utheim and Wittek (2017) proposed an analytical model of dialogic feedback to help researchers investigate feedback dialogues between a teacher and their students. This model suggests four potentialities of teacher-centred dialogic feedback for student learning: emotional and relational support, maintenance of dialogue, opportunities for students to express themselves, and the other's contribution to individual growth. The authors discuss some implicit implications of their analytic model for the facilitation of teacher-led dialogic feedback. Although these studies offer some strategies that may also apply to dialogic peer feedback (e.g., showing sensitivity to students' emotional responses), they do not provide a comprehensive account of how to design and implement well-structured dialogic feedback among peers.
Dialogic peer feedback can be an effective and realistic solution considering the impracticality of instructor-centred feedback with growing class sizes in higher education (Nicol & Macfarlane-Dick, 2006). Thus far, the practice of dialogic peer feedback has been mostly limited to enabling students to talk with their peers about the feedback provided (Ajjawi & Boud, 2018). That is, these studies, rooted in the socioconstructivist theory of learning, rely on students as active learners who co-construct knowledge from feedback through dialogue. However, considering that maintaining a meaningful dialogue is difficult even for instructors (Steen-Utheim & Wittek, 2017), students may easily fail to build a productive dialogue with their peers, which may lead to misinterpretation and disapproval of feedback. Thus, we argue that for dialogic peer feedback to foster productive student learning, a systematic design approach is needed to help structure and organize students' collaborative interactions and efforts.
We suggest that this need can be addressed by framing dialogic peer feedback from a theoretical perspective.
Adding to this gap, although the capacity of learning analytics to improve feedback practices in higher education has been noted (Ryan, Gašević, & Henderson, 2019), its potential for enhancing peer feedback remains unexplored. Exploiting the high number of peers to implement dialogic feedback may help scale the practice; however, instructors' facilitative role remains a critical element of the feedback practice, allowing them to intervene in a timely and proper manner for a better learning experience. Therefore, there is a need for well-designed learning analytics support to enable instructors' class-wide actions during peer feedback.
Attending to these critical gaps in the literature, this paper first presents a theoretical framework of collaborative peer feedback, rooted in Hadwin and her colleagues' (2011, 2017) conceptualization of collaborative learning. This framework outlines the phases of collaboration in peer feedback and identifies the student roles (either as a provider or recipient of feedback) and the type of dialogue involved in each phase based on the feedback literature. Second, Synergy, whose design is grounded in the theoretical framework, is presented. Synergy is a fully developed open source platform aimed at online facilitation of collaborative peer feedback. Third, this paper presents the learning analytics support integrated into Synergy, which aims to enable instructors' class-wide scaffolding actions through visualizations of large amounts of (feedback-related) activity data at a manageable level, with minimal information load for instructors (van Leeuwen, 2015). The analytics component is only available in the instructor interface, and it is designed as a separate page that can be accessed at any time. The design of the learning analytics support considers the interactions among theory, design, and data science, as suggested in the consolidated model of learning analytics proposed by Gašević and colleagues (Gašević, Kovanović, & Joksimović, 2017). Learning analytics holds great potential for enhancing feedback practices in higher education (Ryan et al., 2019). This paper contributes to the work in this area with a theory-oriented design of learning analytics support to help instructors intervene in a timely manner during the practice of peer feedback. To the best of our knowledge, this paper is the first attempt at creating learning analytics support driven by theory and learning design in the context of collaborative peer feedback. This paper is structured as follows. First, background and related work in the areas of feedback and learning analytics are introduced, followed by a section outlining the theoretical framework of collaborative peer feedback. Next, informed by the framework, Synergy, a web-based tool to facilitate collaborative peer feedback in online environments, is introduced. Then, the design of the learning analytics support for instructors is presented. After the discussion section, the paper concludes with future research prospects.

Background and related work
The field of learning analytics aims to improve and innovate teaching and learning by exploiting the digital traces that students leave (Dawson, Gašević, Siemens, George, & Joksimović, 2014). One promising area where learning analytics can have an impact is feedback. Given the increasing teaching workload in higher education, where class sizes continue to grow every year (Shi, 2019), learning analytics can be particularly helpful in contexts where small instructor-to-student ratios restrict impactful feedback practices (Pardo, 2019).
Learning analytics dashboards have been widely used to provide students with feedback about their learning process and progress (Schwendimann et al., 2017).
Dashboards generally contain visualizations of learner data (e.g., visits to resources, artefacts created) as feedback to increase students' awareness of their engagement, learning activities, and progress, and therefore, to support their regulation of learning toward achieving the desired learning outcomes (Jivet, Scheffel, Drachsler, & Specht, 2017). However, such feedback provided by learning analytics dashboards rarely triggers and informs learners' future actions toward improving their learning and progress (Jivet et al., 2017). Matcha and her colleagues (2019) discuss that learning analytics dashboards lack conceptual or theoretical foundations from the feedback literature, and therefore they fail to follow the good principles of effective feedback practices (Nicol, 2010).
Another promising area of research has been the use of learning analytics to help instructors scale the provision of personalized and timely feedback. Pardo (2017) proposed a new conceptualization of feedback that considers the role of learning analytics in designing feedback practices in data-rich environments. Pardo's model, mainly grounded in the literature on self-regulated learning (Butler & Winne, 1995), suggests a feedback process in which instructors can tailor the content of the feedback based on certain conditions (e.g., answering fewer than 5 questions in a quiz) that are configured by instructors and applied by technology agents based on learners' data traces. This model was later used to provide personalized feedback messages (determined by instructors) based on students' engagement levels in three different tasks (Pardo, 2019). According to self-report data, students' perceptions of the usefulness of feedback, as well as their achievement in midterm exams, improved significantly compared with previous editions of the same course (Pardo, 2019).
Although these learning analytics approaches have brought significant practical value to the current feedback practice in higher education, their focus on feedback as a one-way transmission of messages is an important limitation. Such feedback practices disregard the dynamic nature of learning and optimistically assume that students by themselves will understand and use the feedback (Boud & Molloy, 2016; Nicol & Macfarlane-Dick, 2006). Recent feedback literature highlights the role of dialogue in helping students actively construct meaning from feedback and collectively decide on the learning actions to take to improve their learning and task performance (Yang & Carless, 2013). Empirical studies noted that dialogue can elevate the impact of feedback on learning and achievement (Gikandi & Morrow, 2015; Nicol, 2010).
Although the practice of dialogic feedback can be scaled with peers reviewing each other's work, instructors' role in facilitating feedback activities and supporting students remains essential. Learning analytics can provide instructors with practical insights into student behaviour and engagement in the various processes of dialogic feedback and guide their pedagogical decisions toward maximizing the learning gains from feedback. A learning analytics solution that is pedagogically sound should be grounded in relevant learning theories (Gašević, Kovanović, et al., 2017; Reimann, 2016). Theory provides the necessary foundations for determining the set of digital traces that indicate student engagement in a learning process considered theoretically critical to support. However, to the extent of our knowledge, no theoretical work exists to frame dialogic peer feedback. To close this gap, in the next section we first present a theoretical framework of collaborative peer feedback that offers a structured dialogue among students during the feedback activity. Later, we discuss how this framework informs the integration of learning analytics support into a web-based tool designed to support collaborative peer feedback.

Theoretical framework of collaborative peer feedback
In this paper, we consider dialogic peer feedback as a collaborative task during which students engage in dialogue with their peers for collective meaning-making from feedback (Filius et al., 2018). Hadwin et al. (2011, 2017) propose a theoretical framing of collaborative learning around regulation of learning, which considers three types of regulated learning emerging during a collaborative activity: self-regulation of learning (SRL), co-regulation of learning (Co-RL), and socially shared regulation of learning (SSRL).
Previous research showed that collaborative groups in which students engage in these three types of learning regulation are better at collectively constructing knowledge and achieving the learning goals of the group work (Malmberg, Järvelä, & Järvenoja, 2017).
Grounded in Hadwin and her colleagues' (2017) framing of collaborative learning, we suggest conceptualizing dialogic peer feedback as a collaborative learning activity that is composed of three phases mapping to different levels of learning regulation. In particular, we suggest that for a successful practice of collaborative peer feedback, the learning regulation takes place at three different stages and at different levels. The first phase is the planning and coordination of the feedback activities, where peers should socially regulate their learning (i.e., SSRL) to negotiate, plan, and coordinate the feedback activities (Hadwin et al., 2011). This phase involves SSRL to achieve a common understanding of what should be the focus of the feedback and to set collective goals, which is critical to the success of both feedback quality and consistency (Ogrenci, 2013) and the overall collaboration process (Malmberg, Järvelä, Järvenoja, & Panadero, 2015). The second phase is the discussion of feedback to support its uptake, during which peers should guide and support the target student's regulation of learning (i.e., Co-RL) through feedback and its discussion (Hadwin, Oshige, Gress, & Winne, 2010). Co-RL during feedback discussion with peers is necessary in this phase to help students make meaning from feedback (Bloxham & Campbell, 2010). In the last phase, the translation of feedback into action, students should regulate their learning (i.e., SRL) based on the feedback received for strategic task engagement (Winne & Hadwin, 1998). In this phase, SRL helps students perform the learning actions derived from feedback and progress on their work as planned (Winne & Hadwin, 1998).
This framework aims to create peer feedback practices where students help each other learn and improve their work by engaging in a structured collaboration involving a continuous dialogue. At the same time, this framework aims to promote students' feedback literacy skills, defined as "the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies" (Carless & Boud, 2018, p. 2). In this regard, the framework is closely aligned with the features of student feedback literacy. For example, in the first phase, students need to assess their own work, compare their scores with their peers', and discuss to resolve any conflicts, which may enhance students' capacity to make judgements. In the second phase, students need to determine concrete actions based on the feedback, and in the third phase they perform these actions, thus supporting the taking of action.
The specific activities suggested to take place in each phase are explained as follows.

Planning and coordination of feedback activities
The first phase involves socially shared regulation of learning, where the peers providing feedback work together to construct a shared understanding, and to plan and coordinate the feedback provision. The goal is to ensure that peers generate coherent feedback based on a shared task understanding and later consistently engage in feedback provision according to the shared plan and goals. In this phase, peers should assess the work using a rubric provided by the instructor and should be encouraged to compare their scores and discuss any discrepancies. The goal is to establish a shared focus on the quality of the student work (Jackson & Larkin, 2016). In this phase, students' assessment of their own work can play a critical role, since their consensus with peers on the strengths and weaknesses of their work can later lead to a more productive discussion of the feedback and enhance their internalization and use of it (Taras, 2003). After negotiating the aspects of the work, peers should plan their activities by identifying the feedback to provide and the peer responsible for providing it.

Discussion of feedback to support its uptake
In the second phase, according to the plan created in the previous phase, peers provide feedback and engage in a discussion with the target student to support the uptake of the feedback. Students' reflection on the received feedback is of critical importance for its uptake (Filius et al., 2018). Following the reflection, the dialogue between the students may continue until everyone agrees on what the feedback is meant to say. The subsequent dialogue should focus on how to move the feedback forward (Orsmond, Maw, Park, & Crook, 2013) by determining concrete learning actions to take (Quinton & Smallbone, 2010). In this step, peer support plays a critical role since, as the feedback providers, peers can guide the student on how to use the feedback (e.g., identifying the learning actions and performing them). Students should also determine the day by which they plan to complete each action, which is necessary to help them monitor and evaluate their progress and see whether they manage to keep up with their plans.

Translation of feedback into action
The last phase is the translation of the feedback into action. Guided by the learning actions determined in the previous phase, students are expected to have a higher awareness of the areas for improvement and of the learning strategies to use to close the existing gaps. Self-regulation plays a key role in this phase for the effective use of feedback to advance their learning as well as their work as planned (Orsmond et al., 2013). For self-monitoring, students should be encouraged to track their progress (on each action separately), which can then be compared with the standards they set in the previous phase (i.e., the expected completion day). Each action belongs to an individual student, and therefore this monitoring is based on their subjective standards.
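To make this per-action self-monitoring concrete, the following sketch illustrates how a student's self-reported progress could be checked against their self-set completion day. The data structures and the status labels are hypothetical illustrations, not Synergy's actual implementation:

```python
from datetime import date

def action_status(progress: int, planned_day: date, today: date) -> str:
    """Classify a learning action by its self-reported progress (0-100)
    against the student's own planned completion day."""
    if progress >= 100:
        return "completed"
    if today > planned_day:
        return "overdue"  # behind the student's own standard
    return "on track"

# Hypothetical learning actions derived from feedback
actions = [
    {"desc": "Restructure the argument in section 2", "progress": 100,
     "planned": date(2020, 3, 10)},
    {"desc": "Add citations for key claims", "progress": 40,
     "planned": date(2020, 3, 8)},
]

today = date(2020, 3, 9)
statuses = [action_status(a["progress"], a["planned"], today) for a in actions]
```

Because each action belongs to an individual student, the comparison is made against that student's subjective standard (the planned day) rather than a class-wide deadline.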
By monitoring their engagement with learning tasks, students can generate internal feedback (Butler & Winne, 1995), which can support their self-regulation to progress towards the learning goals.

Synergy

Synergy is an open source web-based platform designed and developed to facilitate collaborative peer feedback online. The design of Synergy is grounded in the theoretical framework presented in Section 3. Although Synergy could be used for formative assessment of student products in a variety of contexts, we suggest that it offers more value for assignments or learning tasks with constructivist approaches where students generate distinct learning artefacts, such as writing an essay on the history of civil society, as opposed to a computer science assessment where everyone is supposed to write (almost) the same piece of code (Diep, Zhu, & Vo, 2019).
In Synergy, reviewing peers are supposed to complete two tasks (see Figure 2): assessing the peer's work using the rubric (see Figure 3) and providing feedback on the work (see Figure 6). When the work is assessed, all students involved (i.e., students reviewing and being reviewed) receive a notification to view the assessment scores and discuss together to resolve any conflicts (see Figure 4). A conflict may emerge when peers provide different scores for the same criterion as a result of distinct perspectives on the quality of the work. The reviewing peers can use the Feedback Planner (see Figure 5) to plan the feedback ahead of time by creating feedback tasks (based on the identified weaknesses of the work). To facilitate the feedback provision, Synergy uses Google Documents (GD). GD provides a collaborative environment where students can easily post (feedback) comments on specific parts of the work and reply to these comments to discuss the feedback. GD also allows fetching the activity data on the documents to feed the learning analytics component. The integration of GD into Synergy for feedback provision and discussion is shown in Figure 6. Students then use the Planner (see Figure 9) to determine the concrete actions that they will take for each piece of feedback. Students revise their work by incorporating the desired changes in their work opened in Google Documents (see Figure 10). On the same page, students can list the learning actions, and check and update the current progress of each action.

Synergy was piloted with a class of students completing a WebQuest activity. The activity lasted a week and students used Synergy outside the classroom. At the end of the week, a short online survey was distributed to ask students about their experiences (i.e., whether they liked using Synergy or not, and why). The survey consisted of an open-ended question inquiring about students' overall experiences, and two Likert-type questions (with a scale of 1-5) about the degree to which students agree/disagree that the feedback received/provided was useful. The participation in the
survey was voluntary. The participants (10 female and 4 male) were Primary Education majors and mostly in their first year (n=13). According to the results, all participants (n=14) indicated that they had a positive experience with using Synergy for peer review.
The most positive aspects noted by students were that Synergy provided a practical environment for helping them improve their work (n=6) and learn more by reviewing others' work (n=5). In line with these experiences, students agreed that the feedback they received from peers (6 strongly agree, 8 agree) and the feedback they provided for their peers (4 strongly agree, 9 agree) was useful. Moreover, students also reported several technical problems (e.g., not being able to delete a comment), which were handled quickly during the activity. This pilot study showed that Synergy is a tool with great potential for facilitating formative peer reviews.

Learning analytics support
Informed by the theoretical framework, Synergy provides a structured online environment to guide student activities during collaborative peer feedback. However, it lacks the capacity to offer timely and appropriate actionable insights for instructors to make informed decisions on enhancing students' learning experiences during the feedback activity. Such support for instructors may play a key role in particular when the Synergy platform is used to facilitate peer feedback at scale. Note that data literacy skills are necessary for instructors to understand and make productive use of learning analytics support to inform their decision making (McCoy & Shih, 2016).
Without such skills, instructors may not utilize the learning analytics support effectively, resulting in limited support for students facing problems during the feedback activity.
Using the consolidated model of learning analytics proposed by Gašević and colleagues (Gašević, Kovanović, & Joksimović, 2017), we propose learning analytics support for instructors to assist them in supporting students during the collaborative peer feedback practice within the Synergy platform. This model identifies three mutually connected key dimensions, namely theory, design, and data science, to be considered in learning analytics research and practice (Gašević, Kovanović, et al., 2017). Theory is necessary to guide the design and integration of learning analytics into existing tools such as Synergy (Marbouti & Wise, 2015) and to make informed use of trace data (Siadaty, Gašević, & Hatala, 2016).
Design may refer to (a) interaction and visualisation design, which aims to help stakeholders make informed decisions, (b) learning design, which determines the integration of learning analytics and its effects on learning (Er et al., 2019), and/or (c) study design, which concerns rigor in empirical research studies and evaluations of learning analytics-based interventions (Gašević, Kovanović, et al., 2017). Learning design and visualization design are the focus of this paper, since the goal is to inform (through visualizations) instructors' interventions in a peer feedback practice that follows a strict learning design driven by a particular theoretical framework (Section 3).
Following the principles within these dimensions of the consolidated model, we formulate the learning analytics support for instructors to intervene in collaborative peer feedback within the Synergy platform. This section is organized by the phases conceptualized in the theoretical framework of dialogic peer feedback (Figure 1). This framework enforces a specific design of feedback practice (through Synergy), where the tasks that students need to complete are determined. This learning design is likely to shape students' behaviour and interactions in a certain manner. Structuring this section around these phases helps clearly explain the alignment of the learning analytics with the learning design embodied in Synergy for each phase distinctly.
The design of the learning analytics in all phases aims to support instructor action through whole-class interventions (i.e., design decisions addressing the whole class) (Wise & Jung, 2019). The rationale behind this decision is that learning analytics is likely to create more impact in practice when it is targeted at instructors (Shum, Ferguson, & Martinez-Maldonado, 2019). The literature has raised many issues with student-facing analytics support that produced little or no impact on students' learning processes and actions (Matcha et al., 2019). To support class-wide interventions in each phase, an instructor dashboard design is presented, intended to offer actionable insights to instructors about learner behaviour and interactions.

Planning and coordination of feedback activities
According to the theoretical framework, the two critical processes of this phase are building consensus on the quality of the work reviewed and planning the feedback activities. An instructor dashboard (see Figure 11) was developed to offer practical insights into these two processes.
First, this dashboard provides an overview of participation in assessments, which includes the number of students who assessed the assigned work, the average score obtained (across all submissions), the number of discussions about the assessments, and the number of feedback tasks created (along with the average scores). This overview aims to inform instructors about the current activity level in assessments and feedback planning, as well as the overall quality of the student work submitted. The next component is the highlights, which list the highest- and lowest-ranking rubric items in terms of average assessment score, number of discussions, number of feedback tasks, and number of submissions with conflicting scores (e.g., student A gives a score of 2 whereas student B gives a score of 5 for the same submission). These indicators can be viewed for any rubric item by selecting it from the dropdown list. This information can be actionable in several ways. Class-wide low scores across all assessment criteria may imply overall low progress on the submitted work and may call for major pedagogical changes. For example, to provide students with more opportunities to improve their work, instructors may decide to run several rounds of peer reviews (Cheng, Liang, & Tsai, 2015). Multiple iterations of peer review that build on previous rounds can effectively support student progress (Tseng & Tsai, 2007). On the other hand, a class-wide low score on only one assessment criterion may indicate limited knowledge of a particular topic and may involve relatively minor actions (such as sharing additional resources to enhance students' understanding of the topics corresponding to that criterion). Moreover, if the low scores are accompanied by few or no feedback tasks, this may imply that peers cannot manage to formulate feedback due to a lack of knowledge (Nilson, 2003). In such a case, the instructor may provide explicit guidance, with examples, on what the focus of the feedback should be for that assessment criterion (Jonsson, 2013).
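As an illustration (not part of Synergy's actual code base), the highlight computations described above could be sketched as follows; the assessment records, the feedback-task counts, and the score threshold of 3 are all hypothetical assumptions:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assessment records: (submission, rubric_item, peer, score)
assessments = [
    ("s1", "clarity", "p1", 2), ("s1", "clarity", "p2", 3),
    ("s1", "evidence", "p1", 4), ("s1", "evidence", "p2", 5),
    ("s2", "clarity", "p3", 2), ("s2", "evidence", "p3", 4),
]
feedback_tasks = {"clarity": 0, "evidence": 3}  # tasks created per rubric item

scores = defaultdict(list)
for _, item, _, score in assessments:
    scores[item].append(score)

# Average score per rubric item, and the lowest-scoring item (a "highlight")
averages = {item: mean(vals) for item, vals in scores.items()}
lowest = min(averages, key=averages.get)

# A low-scoring criterion with few or no feedback tasks may signal that
# peers cannot formulate feedback for it (see the discussion above);
# the cut-off of 3 is an illustrative choice
needs_guidance = averages[lowest] < 3 and feedback_tasks[lowest] == 0
```

With this toy data, "clarity" emerges as the lowest-scoring criterion with no feedback tasks, the situation in which the instructor might provide explicit guidance with examples.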
The number of conflicting scores per rubric item helps instructors identify anomalies and take relevant actions. For example, a high number of conflicts may imply that the description of the rubric item (or assessment criterion) is ambiguous, resulting in different student interpretations. Students' misunderstanding of assessment criteria is common (Price et al., 2010). Based on this actionable information, as a class-wide scaffold, instructors can refine the explanation of the rubric item and provide an illustration. A lack of assessment discussions despite conflicting scores may indicate that instructors should motivate students to use the discussions to resolve discrepancies in their perspectives, which is critical to productive feedback provision.
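A per-item conflict count of this kind could be computed roughly as follows. The score records are hypothetical, and the difference threshold is an assumption (the paper does not fix the exact definition of a conflicting score beyond peers giving different scores):

```python
from itertools import combinations

# Hypothetical scores: {submission: {rubric_item: {peer: score}}}
scores = {
    "s1": {"clarity": {"A": 2, "B": 5}, "evidence": {"A": 4, "B": 4}},
    "s2": {"clarity": {"A": 3, "B": 3}, "evidence": {"A": 1, "B": 5}},
}

def conflicts_per_item(scores, threshold=1):
    """Count submissions where two peers' scores on the same rubric
    item differ by at least `threshold` points."""
    counts = {}
    for submission in scores.values():
        for item, by_peer in submission.items():
            for a, b in combinations(by_peer.values(), 2):
                if abs(a - b) >= threshold:
                    counts[item] = counts.get(item, 0) + 1
                    break  # count each submission at most once per item
    return counts
```

Raising the threshold narrows the indicator to severe disagreements (such as the 2-versus-5 example above), which may be the cases most worth an instructor's attention.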
Moreover, the dashboard includes a line graph that visualizes the trend of these class-wide statistics over time. With this chart, instructors can compare the changes in the number of conflicts, the overall assessment scores, the number of feedback tasks, and the discussion posts, and identify relationships among these variables (e.g., whether the number of conflicts declines in parallel with increasing discussion about the assessment scores). For example, the chart in Figure 11 shows an increasing number of conflicts as students continue assessing the assigned works, whereas there are almost no discussions about the discrepancies in the scores.
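The pattern shown in Figure 11 (conflicts accumulating while discussions stay near zero) could also be detected automatically to cue instructor action. The daily counts and the "near zero" rule below are illustrative assumptions, not part of Synergy:

```python
# Hypothetical daily class-wide counts, as plotted in the trend chart
days = ["Mon", "Tue", "Wed", "Thu"]
conflicts = [1, 3, 5, 8]
assessment_discussions = [0, 0, 1, 0]

def flag_unresolved_conflicts(conflicts, discussions):
    """Flag when conflicts keep accumulating while discussion activity
    stays near zero: a cue for the instructor to prompt students to
    resolve score discrepancies."""
    rising = conflicts[-1] > conflicts[0]
    silent = sum(discussions) <= len(discussions) // 2
    return rising and silent

flag = flag_unresolved_conflicts(conflicts, assessment_discussions)
```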

Discussion of feedback to support its uptake
The theoretical framework suggests that students engage in two different processes in the second phase of dialogic peer feedback: negotiating shared meanings from feedback and identifying learning actions based on feedback. Given the importance of student interactions about the feedback in this phase, a dashboard was implemented (see Figure 12) to enhance instructors' awareness of the ongoing dialogic interactions around the feedback provided. Among its indicators is the number of feedback comments posted on the reviewed works. This indicator should be interpreted in connection with the overall assessment scores (from the first phase), since low scores (implying poor progress in the submitted works) will expectedly result in more feedback, whereas high scores, indicating good progress, are unlikely to yield much feedback. Despite low scores, the number of feedback comments produced might still be low due to a lack of motivation, competence, or expertise to complete the reviews (Panadero, 2016). Instructors can take several actions to improve student engagement, such as providing additional training about how to provide feedback (Barker & Pinard, 2014), offering some incentives (Neubaum, Wichmann, Eimler, & Krämer, 2014), or extending the deadline for peer reviews.
Next, the number of replies to feedback comments can indicate the degree to which students attempt to understand and learn from the feedback. This information can be of practical value to instructors for encouraging students to reflect on and make sense of feedback. The literature notes that the learning value of feedback increases with the dialogue it triggers (Filius et al., 2018). However, students might be reluctant to engage in dialogue after providing or receiving feedback (Carless, 2016). When the dashboard indicates low feedback interactions (as judged by instructors), instructors can take several actions to enhance participation in the feedback dialogue.
One could be to provide exemplars of feedback discussions (Carless & Boud, 2018).
Exemplars can guide students with tangible information and illustrate concrete strategies for discussing feedback. Instructors can also send out a message to the whole class, reminding and emphasizing that discussion of the feedback is part of the whole learning experience (Filius et al., 2018). This may help create a classroom climate that encourages students to engage in feedback dialogue (Carless, 2016).
Figure 12: Instructor dashboard targeting the feedback provision and discussion

Furthermore, the ability to derive actions from feedback is a key indicator that students understand the feedback and know how to use it (Carless & Boud, 2018). In this regard, the indicators about the learning actions can provide instructors with an overall understanding of how effectively students could translate the feedback into concrete actions. There might be various reasons for a low number of actions.
One common reason could be that feedback lacks concrete suggestions about what learning strategies to follow and how to revise the work at hand (van der Pol, van den Berg, Admiraal, & Simons, 2008). Instructors can explicitly ask students to provide concrete suggestions for revision (van der Pol et al., 2008). Another reason could be that discussions of feedback are taking longer than instructors initially planned, and students may need more time to determine what actions to take. In that case, instructors may revise the time schedule depending on the needs of the context. Moreover, similar to the first phase, to provide instructors with further insights into the temporal change in student behaviour and activities, the dashboard includes a line graph that visualizes the number of feedback comments, replies to feedback, and learning actions each day over a timeline. With this plot, besides observing class-wide engagement in the feedback processes, instructors can also explore the relationships among these variables to better understand student behaviour. For example, few learning actions may tell a different story if the number of feedback comments and replies does not change after a certain date (indicating maturity in the dialogue but a problem with deriving actions) versus if students are still discussing feedback (indicating the need for more time to discuss and understand feedback). As described previously, these insights may result in different pedagogical actions. For example, the chart in Figure 12 suggests that students were able to derive learning actions as they began and continued to post replies discussing the feedback received.
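The two contrasting readings of "few learning actions" described above can be captured by a simple heuristic over the daily indicator series. This is an illustrative sketch, not a Synergy feature; the function name, return labels, and window size are assumptions.

```python
def interpret_action_gap(daily_replies, daily_actions, window=3):
    """Classify the recent class-wide state from daily counts.

    Hypothetical heuristic: if actions appear in the last `window` days,
    students are deriving actions; if replies continue without actions,
    the dialogue likely needs more time; if both have stopped, the
    dialogue matured but actions were not derived (an "action gap").
    """
    recent_replies = sum(daily_replies[-window:])
    recent_actions = sum(daily_actions[-window:])
    if recent_actions > 0:
        return "actions_emerging"
    if recent_replies > 0:
        return "dialogue_ongoing"
    return "action_gap"
```

For example, a reply series that has gone quiet with no actions recorded, such as `interpret_action_gap([5, 3, 0, 0, 0], [0, 0, 0, 0, 0])`, signals the action-gap scenario, which maps to the corresponding pedagogical responses discussed above.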

Translation of feedback into action
The last phase of dialogic feedback is the translation of feedback into action, where students intend to progress on the target task by taking the planned learning actions. The dashboard presented in Figure 13 was developed to increase instructors' awareness of class-wide progress on the learning actions as well as the effort put into revising the target works. The dashboard first provides a class overview of several engagement indicators in this phase of the feedback activity: a) the number of actions (as well as the average number per submission), b) the number of progress updates along with the average progress across all actions, and c) the number of revisions made along with the average number per submission. This overview of student engagement can guide several instructor actions. For example, if low progress is noted class-wide, instructors may assess whether the assignment was too difficult for students to complete (Molenaar & Campen, 2018) and look for remedies such as decreasing the requirements for the current round and preparing students for a second review round (Tseng & Tsai, 2007).
If instructors think otherwise about the difficulty of the assignment, they may follow other strategies to motivate learners, such as sending out a message to the whole class (Molenaar & Campen, 2018) or applying reward schemes (Kulkarni, Bernstein, Klemmer, & Diego, 2015).
Moreover, the information about the revisions made can help instructors identify the extent to which students put effort into improving their work (Mcnely, Gestwicki, Hill, Parli-horne, & Johnson, 2012), which can offer several actionable insights to improve the feedback practice. A high number of revisions may indicate that students are highly motivated to improve their work based on the feedback received. However, class-wide low progress in the learning actions despite high engagement in revising the work may signal that students are struggling to make the desired progress on their work, and instructors may consider providing additional support to enhance students' conceptual understanding of the topic directly linked to the assignment (Shahbodin & Zaman, 2008). A low number of revisions may suggest several problems, including those related to the feedback activity (e.g., lack of understanding of the feedback, inability to perform the actions, running late in feedback discussion) or the assignment itself (e.g., lack of conceptual understanding). Instructors may take other actions such as changing the schedule of the activity (e.g., providing more time for feedback provision and discussion), incorporating several incremental iterations of peer reviews, or introducing supplementary learning materials. Moreover, the changes over time in three indicators (the average progress, the number of progress updates, and the number of revisions) are visualized through a line graph to provide instructors with a better understanding of the evolution of student engagement over time and the relationships between the indicators. This chart can help instructors identify when students actually began to revise their work and when they started to record progress on actions. Instructors can also observe the temporal relationship between the effort students put into improving their work and the corresponding progress on learning actions. For example, the line graph in Figure 13 shows that, in parallel with the increasing number of revisions over time, there is meaningful progress recorded on the learning actions.
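As an illustration of the average-progress indicator discussed in this phase, the sketch below aggregates per-action progress updates into a single class-level figure. The data structure (a mapping from action identifiers to chronological lists of percentage updates) is an assumption made for illustration.

```python
def average_progress(progress_updates):
    """Average the latest progress value (0-100) across all actions.

    `progress_updates` maps a hypothetical action id to the chronological
    list of progress percentages students recorded for that action; only
    the most recent value per action counts toward the class average.
    """
    latest = [updates[-1] for updates in progress_updates.values() if updates]
    return sum(latest) / len(latest) if latest else 0.0

# Two actions: one updated twice (latest 60%), one updated once (40%).
overall = average_progress({"action-1": [20, 60], "action-2": [40]})  # 50.0
```

Computing this value once per day over the activity timeline produces the average-progress series shown alongside the update and revision counts.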

Discussion
The impact of learning analytics solutions is likely to be limited when their designs lack a theoretical grounding (Gašević, Kovanović, et al., 2017; Joksimovic, Kovanovic, & Dawson, 2019). For example, most dashboards fail to provide impactful feedback for students because they are designed without considering the principles of effective feedback that are well established in the literature (Matcha et al., 2019). Given the key role of theory in learning analytics, this paper presented a theory-oriented approach to the design of learning analytics for supporting instructors' class-wide interventions during peer feedback practice.
In this work, the theory piece was mainly the theoretical framework of collaborative peer feedback presented in Section 3. This framework distinguishes the distinct phases of collaborative peer feedback and outlines the learning processes involved in each phase, along with the roles that learners play. This framework played a dominant role in the design of the learning analytics support. First, driven by the framework, Synergy was designed to facilitate dialogic peer feedback online. Synergy enforces a very particular learning design of the feedback activity, strictly aligned with the presented framework. This learning design outlines the specific tasks that students (as feedback providers and receivers) need to complete. Moreover, the integration of learning analytics into Synergy involved strong alignment with the grounding framework and the learning design of the feedback activity. The design decisions on learning analytics support concerned (a) the way the feedback activity is structured in Synergy, (b) the critical processes and interactions outlined by the theoretical framework, and (c) the relevant learner data stored by Synergy. These design decisions were also supported by relevant research from the literature. This strong alignment with a specific theoretical framework, although it lays foundations for making informed decisions in the design of Synergy and the learning analytics support, may pose challenges to the adoption of the proposed feedback practice in real-world contexts. First, Synergy provides a highly structured environment, which may offer limited flexibility to fit into different learning designs. Educational contexts come with various practical constraints and distinct pedagogical intentions, which are likely to impose changes in the way the feedback activity should be designed as well as in the design and integration of learning analytics support. Given the influence of learning design on student behaviour (Er et al., 2019), the emerging feedback activity data may vary and need to be interpreted differently depending on the way the feedback activity itself is structured and positioned in the curriculum. We suggest that the theory-oriented design approach should offer high flexibility for its adoption in various contexts with different needs and constraints. This flexibility can be determined based on the opinions of a large group of practitioners, which can then be implemented in Synergy.
Second, the grounding theoretical framework places a considerable workload on students during the feedback activity. Although this, as an active learning strategy, might offer certain learning gains, the success of the proposed design highly depends on students' considerable efforts in performing all outlined tasks successfully. Achieving high student engagement is a well-known challenge, and several strategies might be applied to promote student engagement in the proposed collaborative feedback activity.
One strategy could be to grade the performance of reviewing peers. Grading has been effective in promoting student engagement (Widiastuti, 2017; Young, 2011). Given the numerous tasks performed during the feedback activity, grading could become a rather tedious task for instructors to perform rigorously. Therefore, we recommend students' assessment of peers' performance based on a detailed grading form (preferably prepared by instructors) that lists all reviewing tasks performed by peers. An example assessment form is shared in Table 1.

<Table 1 goes here>

Another strategy could be the training of students about the feedback activity.
This training could be delivered in various formats (face-to-face instructor-led sessions and/or online tutorials) with the goal of informing students not only about the steps they need to follow during the activity but also about the learning value of completing the feedback tasks at each step. The training can also provide students with specific instructions about feedback provision (e.g., how to provide constructive feedback) and discussion (e.g., how to react to feedback), along with several examples. Students can also receive training by participating in a system-guided example activity in Synergy. Effective training can lead to more purposeful and active participation in feedback activities (Filius et al., 2018).

The presented learning analytics support harnesses all available learning data that are automatically captured as students perform the required learning tasks and interact with each other in Synergy. The completeness of the data collected may depend on the context where Synergy is used; therefore, the presented learning analytics dashboards should be interpreted cautiously depending on the context. For example, in a blended course where students may carry out some discussions about assessment scores face to face, the learning analytics representation of the discussion activities might be incomplete, as opposed to an online course where all discussions are conducted through Synergy. Moreover, the use of these data owned by students raises ethical and privacy issues, which may harm student engagement and learning if not addressed properly (e.g., being monitored might be perceived as threatening by students) (Wong, 2016). These issues are addressed by applying the principles of learning analytics deployment proposed by Pardo and Siemens (2014): transparency, student control over data, right of access, and accountability and assessment. Synergy requires students to read and provide their consent to the Terms of Service and Privacy Policy to be able to register and use the tool. These documents inform students about the data collection, manipulation, and storage processes in Synergy (i.e., transparency), the data to be collected along with some examples of visualizations built from the data (i.e., student control over data), the user groups and the type of data they can access (i.e., right of access), and the entities responsible for data security (i.e., accountability).

Conclusion and future research
Following a theory-oriented approach, this paper explored the use of learning analytics to support instructors' class-wide interventions for enhancing the practice of dialogic peer feedback at scale. Dialogic feedback is a growing research area that can increase the impact of feedback in higher education. In this direction, this paper presented initial research toward implementing scalable practices of dialogic peer feedback. This research has some limitations that offer several promising avenues for future research. First, the Synergy platform (and the grounding theoretical framework) lacks a comprehensive evaluation. Its full evaluation, preferably in several real-world contexts, is necessary to identify its effectiveness in facilitating peer feedback and to obtain students' and instructors' opinions about the feedback activity itself (including all subtasks involved) and its implementation in Synergy. In this regard, we plan to conduct several evaluation studies across different disciplines to explore the impact of Synergy by discipline. Second, the learning analytics support is intended for class-wide interventions. Although the presented learning analytics support may enable class-wide interventions to address issues involving most learners, instructors may often need to identify specific reviewer groups facing problems (e.g., failure to reach a consensus on the quality of the work). That is, the current learning analytics support should be extended to assist instructors in taking actions for targeted scaffolding (Wise & Jung, 2019). Targeted scaffolding refers to instructor actions addressed to specific students or student groups.
Furthermore, the data science piece in the design of learning analytics focused on several indicators of students' engagement levels. Although these indicators have the potential to offer actionable insights for instructors, future research should evaluate the effects of instructors' use of the learning analytics support on their pedagogical decisions, and the impact of these decisions on students' learning processes. Moreover, more advanced approaches can be incorporated into the design to provide instructors with a deeper understanding of feedback use and provision strategies. For example, students can be clustered according to the different feedback strategies they use (Gašević, Jovanović, Pardo, & Dawson, 2017), which instructors can then use to provide more personalized support. Text analytics could be applied to identify different qualities of feedback messages (Cavalcanti et al., 2019). This information can help instructors provide specific guidance to support certain aspects of peer feedback.
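As a sketch of the clustering idea mentioned above, students could be represented by feedback-behaviour features and grouped with a simple k-means procedure. The feature choice (comments given, replies posted, actions derived) and the minimal k-means below are illustrative assumptions, not the method used in the cited work.

```python
import math

def kmeans(points, k=2, iters=10):
    """Minimal k-means over equal-length numeric tuples (illustrative only)."""
    centroids = list(points[:k])  # naive seeding: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical per-student features: (comments_given, replies_posted, actions_derived).
students = [(1, 1, 1), (1, 2, 1), (10, 10, 10), (11, 10, 9)]
centroids, clusters = kmeans(students, k=2)
```

In practice, a library implementation (e.g., scikit-learn's KMeans) with normalized features would be preferable; the resulting groups could then be labelled (e.g., active vs. passive reviewers) to inform targeted support.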
These future enhancements are planned to increase the power of the learning analytics support for the better facilitation of collaborative peer feedback.

Figure 1: Theoretical framework of collaborative peer feedback. (p) denotes peers providing feedback; (s) denotes students receiving feedback

Figure 2: Review tasks page for students who are reviewing a peer work

Figure 4: Assessment Results page

Figure 6: Providing Feedback page

Figure 7: Review tasks page for students whose work is being reviewed

Figure 11: Instructor dashboard targeting the assessments and feedback planning

Figure 13: Instructor dashboard targeting the action progress and revisions