Evaluating Assessment in MOOCs*

Magis. Revista Internacional de Investigación en Educación, vol. 14, 2021

Pontificia Universidad Javeriana

Yolanda Agudo-Arroyo *

Universidad Nacional de Educación a Distancia, España


Javier Callejo-Gallego **

Universidad Nacional de Educación a Distancia, España


Received: December 01, 2019

Accepted: June 10, 2020

Published: July 01, 2021

Abstract: This article analyzes the student peer-review method and presents the results of online surveys designed to evaluate MOOC courses. It focuses on determinants of the perception of peer review, such as the concrete experience during the course or aspects external to it, applying multivariate binary logistic regression analysis. 1037 participants were surveyed. The results highlight participants' doubts about this evaluation system, traced through the relationship between expectations and experiences in the course taken, the relationship with the principles of the learning model, specific aspects of the contents and the design of the courses, and students' characteristics.

Keywords: Courses, evaluation, fairness, learning, perception, summer schools.

Introduction

This article analyzes MOOC1 students' peer review and contributes to the reflection on the evaluation design of new proposals for the development of these courses. In particular, it inquires into the acceptance of peer-review processes in which, initially, there is no subject who judges and no subjects who are judged on the basis of a knowledge anchored in a framework of procedures. The MOOC courses included in the international ECO2 project, offered by different European universities, are taken as a reference, and three evaluation surveys are analyzed.

MOOCs3 have been developed in recent years at the university level as a new version of the e-learning modality (Torres & Gago, 2014), with an extensive and innovative offer in multidisciplinary topics, diverse in content and quality,4 and with the potential for expanding knowledge (Mackness et al., 2010). The democratization of knowledge promoted by MOOCs contributes to learning through virtual platforms at low cost (Mengual-Andrés et al., 2015; Zapata, 2013), to universal education (Vázquez et al., 2013) and to continuing education (Bates, 2014). The expansion and impact of these courses make it necessary to assess the experience of participants in order to ground the reflection on new designs, even when their future development is unpredictable (Lewin, 2012; Osuna-Acedo & Gil-Quintana, 2017). An important defining dimension of this educational specialty is the evaluation given by the participants to the evaluation received, specifically, the peer review, which has a leading character in the final assessment, to which some type of certificate or recognition may be linked. This article analyzes the determining factors of students' doubts, that is, of the perception of the degree of fairness of their peers' judgment, taking as a reference those who are not convinced by this evaluation system, in order to complement the diagnosis of the experience in these courses with empirical information.5 We focus on one of the multiple dimensions, a specific type of evaluation, based on the sociodemographic profile of students, their opinion regarding interaction and technical aspects, their evaluation of the dimensions of the course content, and their experience and expectations when taking this type of education. We delve into a specific aspect, the perception of unfairness of the peer review, contextualized in the analysis of the experience developed in more descriptive studies (Callejo & Agudo, 2018; Osuna-Acedo & Gil-Quintana, 2017).

In the emerging educational context of MOOCs, methodological and evaluation changes find a special echo, and the benefits and limitations associated with peer assessment, as well as students' satisfaction with this method, come to the fore. Our results offer information on the perception of this methodology, to be considered when adapting the design of new proposals, and on the possible influence of this evaluation method on participation, permanence and dropout in MOOC courses. The objective is, therefore, to analyze the determinants of the perception of unfairness/injustice in the peer assessment among the students less convinced by this evaluation system, indicating whether the determinant is the concrete experience or aspects external to the course itself.

Theoretical Framework

MOOC courses represent a new aspect of distance learning (García-Aretio, 2015). Seen as the future of online knowledge acquisition (González de la Fuente & Carabantes, 2017), they offer learning scenarios with pedagogical formats different from the traditional ones (Ramírez-Fernández, 2015) and a potential change in learning contexts, based on connectivism and its learning model (Osuna-Acedo & Gil-Quintana, 2017). MOOC courses focus on the active role of students in their learning process (Bartolomé & Steffens, 2015). They are oriented towards an interactive, participatory pedagogy, with the use of new technologies in higher education, within the framework of constructivist theories and collaborative learning, which activate new demands in the development of lifelong learning strategies (Vera-Cazorla, 2014). MOOC courses have contributed to the expansion and evolution of distance, open and online learning. Consequently, research on this course format implies reconceptualizing the variables of education (Admiraal et al., 2014), since these courses demand new approaches for interpreting methodological and evaluation changes that move away from the traditional classroom and are oriented towards diverse participants (De Boer et al., 2014), with aspects that significantly alter the context of education.

Much research has been carried out on the MOOC learning modality, although participants' doubts regarding its evaluation system remain unresolved (Sánchez & Escribano, 2014); this has become one of its greatest limitations, given its massive nature. Assessment is an emerging topic in the MOOC literature (Admiraal et al., 2014).

Evaluation in the teaching and learning process receives renewed attention in the international educational agenda, in a context that explores evaluation models that promote student learning (Vu & Dall'Alba, 2007). Assessment systems condition learning processes. They are based on traditional concepts aimed at checking knowledge acquisition, as well as on more innovative concepts that establish evaluation as a strategy for the self-regulation of learning, in which the active participation of the student acquires more relevance, especially in interaction with peers (Gallego et al., 2017). When aligned with learning, they contribute to the development of knowledge (Biggs & Tang, 2011; Boud & Falchikov, 2006; Ibarra et al., 2012).

In a context of reflection and innovation that promotes learning-oriented evaluation, the incorporation of peer review, which favors self-regulated6 learning (Ibarra et al., 2012), gave higher education room to change its evaluation procedures and grant greater relevance to students —in opposition to teacher-centered practice— through new pedagogical methods aimed at educating autonomous, adaptable individuals with communication skills and meta-cognitive, social and affective competencies, who self-regulate their learning, as society currently demands.

Accordingly, evaluation is a fundamental part of the process, contributing not merely to the reproduction of knowledge but to self-regulated learning and to the development of constructivist learning behaviors (Dochy et al., 2006; Vera-Cazorla, 2014). Evaluation is a fundamental piece in the teaching and learning process (Vera-Cazorla, 2014). In MOOCs, learning depends on their design and evaluation (Admiraal et al., 2014). Peer interaction7 acquires special relevance and presents new challenges (Aguaded, 2013).8 Peer review is increasingly used in higher education, linked to active learning and student-centered approaches (Carvalho, 2013; Li et al., 2010), and it is attracting renewed interest (Topping, 2009, 2017).

Numerous studies have focused the debate on its use in higher education, taking into account different contexts and knowledge areas (Chambers et al., 2014; Gatfield, 2006; Ibarra et al., 2012). As an evaluation methodology, it has demonstrated its effectiveness in a variety of contexts, with students of different ages and abilities (Topping, 2009), and its validity and reliability have been accepted (Gatfield, 2006). It draws the attention of researchers such as Suen (2014), who perceives peer assessment as a valuable tool, regardless of the precision of its results. Ibarra et al. (2012) cite works showing a positive correlation between the scores of students, teachers and final grades (Chambers et al., 2014; Cho et al., 2006; Kane & Lawler, 1978; Stefani, 1994). Most studies on peer assessment in universities and colleges have found adequate reliability and validity (Topping, 2017). When students give their peers fair grades, consistent with what is expected, this evaluation is presented as reliable (Strang, 2015).

Topping (2009) argues that this is an effective approach and encourages its use. Building on this guidance, some researchers have used Calibrated Peer Review (CPR) as a guide for peer assessment, reporting progress in learning outcomes after its application and a strong instructor-peer correlation in overall grades (Furman & Robinson, 2003; Schneider, 2015; Saterbak et al., 2018; Suen, 2014). It is presented as a mechanism for learning evaluation (Saterbak et al., 2018) and as an approach easily applicable to MOOCs (Suen, 2014). One study found the feedback it produces to be of low or moderate quality, and recommends it as “assessment for learning” rather than “assessment of learning” (Admiraal et al., 2014). Sometimes it is defended not so much as another type of assessment but as a change in the educational model (Vera-Cazorla, 2014): from a summative, quantitative evaluation tool at the end of the process to a learning-oriented, qualitative assessment of students' understanding (Admiraal et al., 2014; Planas et al., 2013; Strang, 2015; Topping, 2009, 2017).

In the MOOC context, peer interaction takes on special relevance and the limitations of evaluation constitute new challenges. Teacher-student interaction is diluted in MOOCs, which represent a logical continuation of the trend in education driven by developments in communication technology and mass education (Suen, 2014). For Suen (2014), assessment is essential to guarantee learning, and in mass, open and distance education some feedback is needed for a complete teaching-learning experience. The particular challenges of these courses require assessment methodologies different from those of traditional online courses, usually solved with multiple-choice questionnaires. Furthermore, the inability of teachers to provide feedback to a large number of students would not represent a weakness, as long as peer assessment enhances learning (Ashton & Davies, 2015). Peer review, as one more task within the MOOC, could be an alternative to avoid a regression in didactic perspective, as well as a solution to the huge number of students that may be involved in evaluation in massive courses (Admiraal et al., 2014; Sánchez-Vera & Prendes-Espinosa, 2015). Peer review is required for MOOCs to be a complete, autonomous educational tool —and not programs for the unidirectional transmission of information, or multimedia and interactive textbooks— in order to contribute to peer learning and to complete the teaching-learning-assessment cycle (Suen, 2014).

Depending on the underlying pedagogical model and the MOOC evaluation system, we distinguish, on the one hand, connectivist-oriented courses focused on emerging knowledge, more participatory and open, which rely on social interaction as the basis of learning and use peer review or peer assessment; and, on the other hand, courses based exclusively on content and the transmission of information, which follow a behaviorist model, through automated evaluation and self-directed learning (Ashton & Davies, 2015; Sánchez & Escribano, 2014; Suen, 2014; Osuna-Acedo & Gil-Quintana, 2017).9 The former are known as cMOOCs and the latter as xMOOCs, although they are sometimes difficult to distinguish. Notably, peer assessment is also being incorporated into xMOOCs (Sánchez-Vera & Prendes-Espinosa, 2015).

Peer review is therefore a challenge for MOOCs. There is abundant literature on the benefits and limitations of this system, taking as reference small learning environments (Carnell, 2015), higher education (Nulty, 2010), the classroom (Hou et al., 2007) or online courses (Chen & Tsai, 2009), among others. Regardless of the context of application, this literature discusses its multiple benefits (Planas et al., 2013) and highlights that it encourages self-regulation and strategic learning (Gallego et al., 2017), achieving greater depth in the understanding of learning itself, which improves critical and reflective practices (Carnell, 2015; Chambers et al., 2014; Chen & Tsai, 2009; Hou et al., 2007; Sánchez-Vera & Prendes-Espinosa, 2015; Topping, 2009).

It has been pointed out that peer assessment is appropriate for autonomous learning and increases the chances of learning from peers and from the evaluation process itself. It also allows the development of academic and professional skills, as well as strategies that promote lifelong learning (Ibarra et al., 2012; Topping, 2009; Vu & Dall'Alba, 2007). Likewise, temporal aspects are pointed out as benefits (a more immediate response than a teacher could give), as well as constant involvement and motivation (Carvalho, 2013; Ibarra et al., 2012; Kang'ethe, 2017; McMahon, 2009; Vera-Cazorla, 2014). Regarding limitations, a greater workload for students is indicated (Vu & Dall'Alba, 2007). Another difficulty is of a conceptual nature: the imbalance between the teacher's assessment and that of the student, despite what has been presented regarding validity and reliability (Ibarra et al., 2012).

The literature also highlights the difficulty caused by a lack of trust in peers' evaluation capacities, doubts about their different levels of involvement, understanding and responsibility, as well as the effects of personal bias in the assessment (Brindley & Scoffield, 2006; Carvalho, 2013; Furman & Robinson, 2003; Gallego et al., 2017; Planas et al., 2013; Suen, 2014). Lack of familiarity with the procedure can also lead to a biased assessment. Considering the uncertainty it may cause, it is necessary to provide students with adequate preparation for peer assessment (Topping, 2017; Vu & Dall'Alba, 2007), as well as rubrics that establish clear assessment criteria (Ashton & Davies, 2015; Chambers et al., 2014; Gallego et al., 2017).

A different question is students' satisfaction with this evaluation system. Evidence on how peer assessment is perceived is less abundant than evidence on its efficiency, benefits or limitations (Li et al., 2010). Some specific studies find high levels of satisfaction and a positive assessment of this method by students (Brindley & Scoffield, 2006; Carvalho, 2013; Gallego et al., 2017; Gatfield, 2006; Luo et al., 2014; Planas et al., 2013). Topping (2009) observed that students' perception of peer review is independent of their knowledge of its reliability and validity. Others emphasize that participants' satisfaction and perception of the benefits of peer assessment increase after taking part in it (Moore & Teather, 2013; Topping, 2017). Chambers et al. (2014) pointed out that students showed a positive attitude towards peer assessment, although not a preference for this method over other, traditional forms. Other results point to a more negative perception, even resistance or skepticism towards an evaluation method that also presents implementation problems (Vickerman, 2009). Resistance can be motivated by a lack of structure, guidance and support that produces uncertainty, as well as by a lack of confidence in the results (Chambers et al., 2014; Li et al., 2010; Vu & Dall'Alba, 2007). Suen (2014) warned that there is still skepticism regarding the reliability of the results of the peer review. Furman & Robinson (2003) highlighted the negative reaction of students to the use of Calibrated Peer Review, which they attributed to the work overload it implies for students. Sánchez & Escribano (2014) highlighted the discomfort with this evaluation system, to which they attributed part of students' dropout.

Peer review in MOOCs takes place in settings that differ, at times, from those of the studies cited: these courses are massive, they offer little or no orientation, and peers have international backgrounds. Suen (2014) indicates that MOOCs that use peer review have a lower completion rate, although it is not clear whether this is an effect of using this evaluation system or the result of asking students to submit open assignments, instead of limiting them to the simple and fast click of multiple-choice answers.

The intrinsic massiveness of MOOCs may help overcome some of the problems observed in peer assessment in other types of courses: for instance, the mediation of personal relationships between students, which can raise or lower the evaluation (Planas et al., 2013; Topping, 2009), threatening its reliability and validity (Topping, 2017), or, from another perspective, generating feelings of pain or betrayal as a reaction to negative evaluations (Ibarra et al., 2012; Vu & Dall'Alba, 2007). Massiveness and, therefore, the absence of previous personal ties between students and evaluators help to overcome these problems.

This literature review shows that research on peer assessment addresses the benefits and limitations derived from its use, as well as students' doubts about it. However, we have detected a larger theoretical and empirical gap around the perception of injustice/unfairness regarding this evaluation system. Some research highlights students' perception of unfairness in peer evaluation, due to its lack of objectivity (Brindley & Scoffield, 2006); others, on the contrary, highlight the acceptance of this assessment model, as well as perceptions of fairness based on concrete experiences (Carvalho, 2013; Gatfield, 2006). In this research we want to deepen the analysis of the perception of unfairness/injustice, essential to mitigate the limitations of MOOC evaluation systems.

Methodology and Hypotheses

This work analyzes the results of online surveys aimed at evaluating the MOOC courses included in the international ECO project, in its last three (3rd, 4th and 5th) editions. The self-administered questionnaire was available in six languages —English, French, German, Italian, Portuguese and Spanish— regardless of the language of the course in which the student registered. People from 37 different countries participated. We used the open-source application LimeSurvey to build the database. Waves 1 and 2 served as pretests of the courses and of the evaluation process, with the “students” preferably being teachers of future MOOCs. Answering the questionnaires analyzed was mandatory to obtain the certificate of course completion. The 25 courses, with the number of their respective students across the three waves, are listed in table 1; the sample of 1037 people, accumulated over the three editions to achieve greater statistical accuracy, is the reference base of this article.

Table 1
Distribution of students who answered the questionnaire by course


Source: own elaboration

Even though it was possible to obtain the certificate without completing the entire course, it seems legitimate to argue that those who had the highest expectations of such completion were the most inclined to respond and, therefore, take part in the evaluation.

Those who were on the way to a positive experience may be overrepresented, in contrast to those who left the course prematurely.10 This led, in the development of the ECO project itself, to involving the largest possible number of students registered in the courses, in order to have sufficient sample bases, especially for courses with fewer students, and to being cautious in any positive and absolute interpretation of the results, placing the dominant analysis strategy in the comparison between evaluations.

The online questionnaire was available to students from the middle of the course onwards. Students were notified and reminders were sent, including a statement of the obligation to complete it in order to obtain a certificate. The deadline to complete it extended to three weeks after the course finished. The resulting response file was anonymized, in keeping with the prior commitment to respondents, although identification keys were established to prevent the completion of several questionnaires per student/course. The questionnaire consisted of 30 questions, six of which aimed to establish the sociodemographic profile of the student (age, gender, country, level of studies, activity and occupation). Four questions collected the assessment of: content (question 9, 5 items), design (question 12, 11 items), course development (question 24, 5 items) and participation (question 17, 4 items). The average time spent completing the questionnaire was 13 minutes and 54 seconds, during April-August 2016 (3rd edition: 218 questionnaires), October-November 2016 (4th edition: 420) and December 2016-January 2017 (5th edition: 399). The sociodemographic profile of those who responded to the questionnaire was: mean age of 40.77 years (minimum 16, maximum 77); 45.2% men and 53.4% women; 85% had completed university studies; 73.3% were employed and 16.1% unemployed; 64.6% had previous experience with MOOCs. The items on peer assessment were found in question 16, formulated as: 16.1. “In the evaluation of the work done by the peers (other students), to what extent (very much, to a certain extent, a little, not at all) do you consider that it is an interesting evaluation method?”; 16.2. “In the assessment of the work done by the peers (other students), to what extent (very much, to a certain extent, a little, not at all) do you consider that it is a fair method of evaluation?”. The distribution of responses to these two questions is shown in table 2:

Table 2
Assessment of the evaluation made by the peers (vertical percentages)


Source: own elaboration

31.5% of the students who answered the questionnaire expressed concerns about the fairness of this form of assessment, selecting the categories little or not at all. It should be noted that 9.5% did not know; moreover, 8.3% who considered it a very interesting method, and 29.7% who considered it interesting to some extent, nevertheless considered it a little or not at all fair. Interest in the peer-review method is thus not equivalent to perceiving it as fair.

In order to address our research question on the determinants that lead to questioning this method —its assessment as a fair or unfair method— the variable on the fairness of the method was transformed from five categories into a binary variable, grouping, on the one hand, the two original categories indicating that the peer-review method is considered a little or not at all fair and, on the other hand, the rest of the original categories (very much, to a certain extent and does not know). This transformation allowed us to use this variable as a dummy dependent variable in the application of logistic regression analysis. The factors considered were established based on the following hypotheses:

H1: The perception of the peer-review method as unfair is related to the results obtained in the course. Here we considered factors such as the objectives for taking the course —whether it was to obtain a certificate, where a low peer mark could have been an obstacle, or the wish to learn new things—, the assessment of what was learned, the perception of the degree of fulfillment of the expectations projected, and the evaluation of the experience. It seems reasonable to assume that the assessment of one of the most significant aspects of the courses was related to the results achieved, especially when these were related to the expectations and motivations expressed.

H2: The perception of the peer assessment method as unfair is related to principles about how learning should be. These include showing interest in new proposals, learning methods and procedures in general and evaluation in particular, as well as respondents' assessment of compliance with the principles on which the ECO project was defined (encouraging discussion and reflection, promoting student involvement in the course, communication between students, and creativity), with special attention to students' interaction (support, comments received, documents and shared work, or feedback).

H3: Doubts about the fairness of the method are related to the assessment of internal dimensions of the course, such as its design or content. Here, those positions of doubt are related to the assessment of aspects such as: the degree of coverage of the proposed topic, as well as the interest, relevance, timeliness and adaptation of the course content; the temporal distribution of assignments; the design of individual and collective assignments; recommended video readings; video subtitles; the documentation provided; audiovisual materials; professors' responses in the different channels (forums, chats, etc.); technical support; the usability of the platform; and the adequacy of the games (quizzes).

H4: The perception of the peer assessment method as unfair is related to the sociodemographic profiles of students. Younger students would accept this innovation to a greater extent (H4A), on the assumption of greater openness to novelty and acceptance of new methods, as well as less experience in this type of course (H4B), in line with the results of the studies by Chambers et al. (2014), Carvalho (2013), and Searby & Ewers (1997). This fourth hypothesis, in the line drawn by the work of Ibarra et al. (2012), points out that the assessment of this method would have more to do with prejudices than with one's own experience of the specific aspects of the course —collected in H3— or with the assessment received. In addition to age and previous experience with MOOCs, other sociodemographic factors were also considered: sex, highest level of studies and occupation.

Excluding sociodemographic profiles, the most influential factor in assessing the fairness of the peer review could be the course taken. For this reason, in the proposed logistic regression analyses it is taken as a control variable, present in all models.11 Table 3 shows the great differences, among the courses with more than ten students, in the percentage of students who expressed doubts about the peer evaluation; this supports the decision to take this aspect as a control variable when delving into the degree of determination of the other aspects.

Table 3
Percentage of the courses that have stated that the peer evaluation method is little or not at all fair

Unit: Percentage of followers of each MOOC



Source: own elaboration

For better management of the outputs of the logistic regression analyses, a good part of the original variables were transformed in order to obtain a relatively balanced distribution of responses across the different categories.
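As an illustration of the data preparation and modeling described in this section, the following minimal sketch shows the recoding of the fairness item (question 16.2) into a binary dependent variable and the fit of one binary logistic regression with the course as a control. All column and factor names are hypothetical, and the original analysis was not necessarily carried out in Python.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per respondent, with the course taken,
# question 16.2 and the candidate explanatory factors.
df = pd.read_csv("eco_survey.csv")

# Question 16.2 has five response categories; code 1 for "a little" or
# "not at all" (the perception of unfairness) and 0 for the rest
# (very much, to a certain extent, does not know), as described above.
df["unfair"] = df["q16_2"].isin(["a little", "not at all"]).astype(int)

# One binary logistic regression, with the course as control variable
# and two illustrative H1 factors (names are ours, not the paper's).
model = smf.logit("unfair ~ C(course) + wants_certificate + expectations_met",
                  data=df)
result = model.fit()
print(result.summary())
```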

Results

The analysis is based on multivariate binary logistic regression, transforming the variable on the perception of fairness of the peer assessment into a dummy variable, in which the category coded 1 groups the responses that considered this evaluation method a little or not at all fair. Table 4 shows the resulting models, each based exclusively on the factors most related to one hypothesis or area of determination of the conception of the method as unfair, as well as, in the last columns, a model in which all the variables we successively worked with were controlled:

Table 4
Results of the binary logistic regression analysis on the variable perception of injustice in the peer evaluation

***, ** and * indicate statistical significance at the 1%, 5% and 10% levels, respectively.


Source: own elaboration
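Continuing the sketch above (and reusing its df and unfair variables), the loop below suggests how the hypothesis-by-hypothesis models and a final pooled model could be fitted, always keeping the course as a control, and how the odds ratios interpreted in the text could be extracted. The factor groupings and names are again illustrative, not the paper's exact specification.

```python
import numpy as np
import statsmodels.formula.api as smf

# Illustrative factor blocks for two of the four hypotheses; the paper
# works with many more factors (F1-F43) than are listed here.
blocks = {
    "H1_results": ["wants_certificate", "wants_to_learn",
                   "expectations_met", "experience_very_good"],
    "H4_profile": ["age", "sex", "university_degree",
                   "employed", "prior_mooc"],
}
# Final model pooling all the factors considered so far.
blocks["all"] = [f for factors in blocks.values() for f in factors]

for name, factors in blocks.items():
    formula = "unfair ~ C(course) + " + " + ".join(factors)
    res = smf.logit(formula, data=df).fit(disp=False)
    # exp(coefficient) gives the odds ratio reported in the text,
    # e.g. "1.7 times higher".
    print(name)
    print(np.exp(res.params).round(2))
```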


Relationship with the Objectives and Results of the Experience

Wanting to learn new things (F2) is positively and significantly related to students' doubts about the fairness of the peer review. This evaluation method (model 1) would not be accepted as another learning instrument by those who have this objective when enrolling in these courses, which leads us to the dimension of principles, or of what is understood by learning; or, at least in this experience, this evaluation method was considered to have contributed little to students' learning.

To a lesser degree, except when controlled by the other variables (model 5), this relationship occurs when the objective of the experience is to obtain a certificate (F1). When objectives and expectations are related to the experience of the MOOC course, the relationship with the perception of unfairness of the peer evaluation is negative: if it is maintained that the expectations were fully or largely met (F4), the feeling of unfairness is reduced. Satisfaction with the overall experience of having taken the course, which includes the recognition that a lot was learned (F3) or a general assessment of the experience as very good (F5), distances students from rating the peer review as a little or not at all fair. However, when the set of variables observed in all the models is introduced, the relationship between a very good evaluation of the experience and the perception of fairness is inverted: the probability that someone who rates the experience as very good perceives the peer evaluation as a little or not at all fair is 1.7 times higher than for those without such a general evaluation of the experience (model 5). This reveals the relative and subordinate determination of satisfaction with the course experience on the perception of the degree of fairness of the peer evaluation. The influence of some outcome factors is appreciable —especially not having fully or largely met expectations (F4)— but it seems to operate through other mediations as well.
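A note on the interpretation of these magnitudes: statements of the form “1.7 times higher” correspond to the exponentiated coefficients (odds ratios) of the logistic model and, strictly speaking, the ratio applies to the odds rather than to the probability. In the usual notation:

```latex
\[
\log\frac{P(\text{unfair})}{1 - P(\text{unfair})}
  = \beta_0 + \beta_1 F_1 + \dots + \beta_k F_k,
\qquad
\mathrm{OR}_j = e^{\beta_j}.
\]
% For example, a coefficient of beta_j = 0.53 gives e^{0.53} ~ 1.7:
% the odds of perceiving the peer review as a little or not at all
% fair are 1.7 times higher when F_j holds, other factors being equal.
```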

Perception of Compliance with the Principles of the Courses

The MOOCs of the ECO project were configured to comply with a series of principles, most of them under the reference of the so-called cMOOCs, such as the greater protagonism of students, based on discussion, reflection and, above all, interaction and communication between them. Given students' greater role, the peer assessment of assignments and tests makes sense. What is examined here is the relationship between the perception of compliance with such principles, specified through a wide variety of items, and the assessment of the peer review as unfair. The verification that discussion was fully promoted (F7) and that the objectives were met (F6) —and, therefore, the principles that inspired the course— has a negative relationship with the perception of unfairness of the peer review. These results helped us understand that perceiving, to a higher degree, compliance with the principles that inspired the design of the courses is related to accepting the peer assessment as fair. Thus, even if indirectly, this approach indicates that more democratic and egalitarian principles —at least, with a very dampened hierarchical relationship— entail the acceptance of peer assessment. This is what happens with factors such as the assessment that communication between students was fully or largely promoted (F12 and F13), which have a negative relationship with the perception of unfairness (model 2). There is also a strong relationship with the perception of fairness of the peer review when the experience in the course has led students to rate the comments received or the feedback from other students as excellent (F16 and F20) or good (F17 and F21). When there has been a positive experience in direct relationships with other students, the perception of the peer review's unfairness evaporates. The probability of considering the peer-review method fair among those who rated the feedback of other students as good (F21) is more than four times higher than among those who considered it not good. Something very different happens in the relationship between students' perception of fairness and the assessment of the degree to which the course promoted the sharing of documents, such as projects, references, texts or videos (F18 and F19): the course is perceived to promote this type of student protagonism and, at the same time, the peer-review method is perceived as unfair. The fact that such principles were developed and promoted by those responsible for the courses does not mean that students agree with them. This is what seems to emerge from the relationship between variables F18 and F19, consolidated in model 5, and our dependent variable.

A distance between compliance with the principles of the course and students' acceptance of them, at least when such acceptance is specified in the perception of fairness of the method, can also be read in the behavior of the variables related to the promotion of online interaction (F14 and F15). Both variables have low statistical significance in the models (2 and 5). What is relevant, however, is how these models indicate the relationship between the accomplishment of some of the objectives that motivated students to enroll in the course and the sense of unfairness of the peer review. Thus, the fact that the course fully promoted students' involvement (F8), students' interaction (F9) and creativity (F10) does not seem to lead to the acceptance of the peer-review method as fair.

The acceptance of the peer assessment method as fair derives, in turn, from its acceptance as a principle: a certain exercise of equality and recognition of knowledge —knowing how to value and mark an assignment in a field in which the judge is still learning— by peers, beyond its concretion in good or bad comments on the test, assignment or exercises. The acceptance of the principle, projected in the perception of the fairness of the peer assessment method, is manifested in the behavior of the variable reflecting the consideration (strongly agree) of this method as interesting (F11).

It has already been mentioned that not all students who considered this method very or to some extent interesting also perceived it as fair. Still, the probability of considering it fair is more than seven times higher among those who strongly agreed that it is an interesting method than among those without such an opinion, presenting one of the highest coefficients in the (negative) determination of the unfairness of the method.

Specific Features of the Course

Content and design: there is a positive and statistically significant relationship between the perception of unfairness of the peer-review method and rating as excellent the set of audiovisual materials offered during the course (F33) or the usability of the platform (F36). The good evaluation of these aspects of the course did not seem to counter the perception of a lack of fairness of the peer review, suggesting that such perception is intrinsic to the evaluation method. The relationship with the perception of unfairness was also positive, although with lower coefficients and little statistical significance (model 3), for considering that the contents were completely up to date (F25) and completely adapted to different kinds of learning (F26), and for rating as excellent the temporal distribution of assignments throughout the course (F27), the design of individual tasks (F28), the responses obtained from the teaching team (F34) or the technical support received (F35). These are high ratings of important aspects of the development of the course that, however, are no obstacle to considering the peer review unfair. What does reduce the probability of perceiving the peer review as unfair, with a high coefficient and high statistical significance, is an excellent rating of the design of the collective assignments (F29) and of the games, quizzes and online tests (F37). The relationship established with this last factor invites the interpretation that, when the evaluation instruments —games, tests— are well designed, the perception of a lack of fairness remains distant. Perhaps this is because in these instruments the evaluation —at least in terms of scoring, of summative evaluation— is derived directly, without the relevant mediation of evaluators other than those who designed the instruments.

The assessment of the course contents as completely accurate (F24) also distances students from perceiving the peer evaluation as unfair. Highly defined contents help in the evaluation of the assignment, leaving less room for mediations that can be considered biased or poorly formed. Therefore, not only the types of tests but also their origin and contents can help to increase students' acceptance of the peer assessment.

The relationship is also negative, but with lower coefficients, in the outputs for models 3 and 5: considering that the contents completely cover the subject of the course (F22) and that the contents are completely interesting (F23); an excellent design of individual assignments (F28), recommended video readings (F30), the subtitling of videos (F31) and the set of documents (articles, book chapters, texts) provided to students (F32); or, with a very weak relationship, the support received from professors (F38).

Student Profiles

We analyzed the extent to which the perception of fairness of the peer assessment is conditioned by the characteristics of the students themselves, rather than by the objectives of the course and its results, the achievement of the learning principles that inspired the project, or the specific characteristics of the courses. Looking at the results of model 4 and addressing one of the cores of our hypothesis (H4A), age (F39) seems to have little relevance in determining the perception of unfairness. It is, in any case, a negative relationship: the older the student, the lower the probability of considering it an unfair method. The other factor pointing to one of the hypotheses drawn (H4B) is previous experience with MOOCs: a positive —and statistically powerful— relationship is observed between previous experience (F43) and the perception of unfairness. Experience with this type of education did not clear up students' doubts about an evaluation method that, in many ways, is considered appropriate given the characteristic of massiveness.

A positive relationship with the perception of unfairness of the peer review is also found for the variable sex (F40) —women are more likely than men to have such a perception— as well as for having university studies (F41) and being employed (F42). However, the results obtained for these last two factors —level of studies and occupation— do not allow definitive conclusions.

Conclusions and Discussion

The perception of fairness, according to the evaluation by students of massive courses who are assessed by their peers, is relevant to the design of the evaluation of new proposals in the development of this type of course. In this article we have focused on those who are not convinced by such an evaluation system, to analyze the determining factors of students' doubts. Of those who completed the questionnaire after following a MOOC course, 16.1% and 42.9% stated, respectively, that they considered such an assessment method fair to a large extent or to some extent. This majority has not prevented us from focusing on the 31.5% who considered it a little or not at all fair. Unlike Chambers et al. (2014), we have not found a dominant unfavorable attitude; but it is relevant to address what determines such an attitude.

These authors concluded that participants are very ambivalent about their experience with the peer-review process. Our analysis allows us to speak more of complexity than of ambivalence or, within that ambivalence, of the need to establish its different characteristics, which go beyond questioning the validity and reliability of students' experiences with peer assessment, an aspect that has been widely studied (Topping, 2009, 2017; Vu & Dall'Alba, 2007). Students' doubts about peer assessment, of which Suen (2014) and Furman & Robinson (2003) warned, have been evidenced in our research: in the expectations and the experience along the course taken, in the relationship with the principles of the learning model, in the more concrete aspects of the contents and the design of the courses, and, finally, in students' characteristics.

In each of these aspects, factors were found to be consistently related to students' perception of the unfairness of the method, or to distance from this perception. However, students' doubts seem rooted in the factors related to principles, which allows us to argue that, even though the specific experience in the course is relevant, the divergence regarding the assessment seems to derive from the lack of acceptance of the “other” —an equal as judge of oneself—, of someone who is not supposed to have greater knowledge, since that other is in the same learning situation as the one who submits the assignment to be marked.

Results showed that this evaluation method would not be accepted as another learning instrument by those who have learning new things among their expectations. This brings us to the dimension of principles, or of what students understand by learning. Likewise, the verification that democratic principles were developed during the course, and promoted by its design, did not translate into their acceptance, at least when such acceptance is specified in the perception of fairness of the peer-review method.

Seeing this method as fair results from its acceptance as a principle, as a certain exercise of equality and recognition of knowledge among equals. The perception of unfairness appears intrinsic to the evaluation method itself, since it shows a positive and significant relationship even with good assessments of specific features of the course, such as those related to content and design.

Finally, with regard to experience with this type of course, authors such as Topping (2017) and Moore & Teather (2013) highlighted that participants' satisfaction with, and perception of the benefits of, peer review increase after taking part in it, in comparison with the uncertainty felt at the beginning. Yet in our research we observed that experience with this type of education did not clear up the doubts about an evaluation method that could be considered appropriate, given the massiveness and the pedagogical model in which it was developed.

In this context, which pays special attention to the active role of students in their learning process, as indicated by Bartolomé & Steffens (2015), and which is oriented towards an interactive, participatory pedagogy within the framework of constructivist theories of learning and collaborative learning, as described by Vera-Cazorla (2014), we did not find that students' interest in the peer-review method is assimilable to perceiving it as fair. Our respondents showed satisfaction with a learning model that favored self-regulated learning (Ibarra et al., 2012). This satisfaction was expressed when referring to the fact that the courses promoted students' involvement, interaction and creativity, but it did not lead to the acceptance of this evaluation method as fair. Thus, the reconceptualization of the variables of education in research on this course format, to which Admiraal et al. (2014) referred, will have to be operationalized by also considering these aspects.

It is true that MOOCs demand new approaches capable of interpreting methodological and evaluation changes, given their massive nature and the diversity of their participants (De Boer et al., 2014), and, as Carvalho (2013) notes, it will be necessary to know more about students' perceptions, without underestimating, as found here, that not only the formal specifics of the tests but also the design of their instruments and the precision of the contents can help students towards a greater acceptance of the peer assessment. When analyzing such perceptions, special attention must therefore be given to future lines of research that take as reference the configuration of the tMOOC (Transfer Massive Open Online Courses) within the framework of the different types of MOOCs (Osuna-Acedo et al., 2018).

References

Admiraal, W., Huisman, B. & Van de Ven, M. (2014). Self- and peer assessment in massive open online courses. International Journal of Higher Education, 3(3), 119–128. http://www.sciedu.ca/journal/index.php/ijhe/article/view/5149

Aguaded, J. I. (2013). La revolución MOOCs, ¿una nueva educación desde el paradigma tecnológico? Comunicar, 41, 7–8. https://doi.org/10.3916/C41-2013-a1

Aguaded, J. I. & Medina, R. (2015). Criterios de calidad para la valoración y gestión de MOOC. Revista Iberoamericana de Educación a Distancia, 18(2), 119–143. https://doi.org/10.5944/ried.18.2.13579

Ashton, S. & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course. Distance Education, 36(3), 312–334. https://doi.org/10.1080/01587919.2015.1081733

Bartolomé, A. & Steffens, K. (2015). ¿Son los MOOC una alternativa de aprendizaje? Comunicar, XXII(44), 91–99. https://doi.org/10.3916/C44-2015-10

Bates, T. (2014, October 19). The strengths and weaknesses of MOCCs: Part I. Learning and Distance Education Resources. http://www.tonybates.ca/2014/10/19/the-strengths-and-weaknesses-of-moocs-part-i/

Biggs, J. & Tang, C. (2011). Teaching for Quality Learning at University: What the Student Does. Higher Education (4th ed.). The Society for Research into Higher Education & Open University Press. https://cetl.ppu.edu/sites/default/files/publications/-John_Biggs_and_Catherine_Tang-_Teaching_for_QualiBookFiorg-.pdf

Boud, D. & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment and Evaluation in Higher Education, 31(4), 399–413. https://doi.org/10.1080/02602930600679050

Brindley, C. & Scoffield, S. (2006). Peer assessment in undergraduate programmes. Teaching in Higher Education, 3(1), 79–90. https://doi.org/10.1080/1356215980030106

Callejo, J. & Agudo, Y. (2018). MOOC: valoración de un futuro. RIED. Revista Iberoamericana de Educación a Distancia, 21(2), 219–241. https://doi.org/10.5944/ried.21.2.20930

Carnell, B. (2015). Aiming for autonomy: Formative peer assessment in a final-year undergraduate course. Assessment & Evaluation in Higher Education, 41(8), 1269–1283. https://doi.org/10.1080/02602938.2015.1077196

Carvalho, A. (2013). Students' perceptions of fairness in peer assessment: Evidence from a problem-based learning course. Teaching in Higher Education, 18(5), 491–505, https://doi.org/10.1080/13562517.2012.753051

Chambers, K., Whannell, R. & Whannell, P. (2014). The use of peer assessment in a regional Australian university tertiary bridging course. Australian Journal of Adult Learning, 54(1), 69–88. https://files.eric.ed.gov/fulltext/EJ1031009.pdf

Chen, Y. C & Tsai, Ch. Ch. (2009). An educational research course facilitated by online peer assessment. Innovations in Education and Teaching International, 46(1), 105–117. https://doi.org/10.1080/14703290802646297

Cho, K., Schunn, C. D. & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891–901. https://doi.org/10.1037/0022-0663.98.4.891

De Boer, J., Ho, A. D., Stump, G. S. & Breslow, L. (2014). Changing “course”: Reconceptualizing educational variables for massive open online courses. Educational Researcher, 43(2), 74–84. http://dx.doi.org/10.3102/0013189X14523038

Dochy, F., Segers, M. & Sluijsmans, D. (2006). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331–350. https://doi.org/10.1080/03075079912331379935

Furman, B. & Robinson, W. (2003). Improving engineering report writing with Calibrated Peer Review™. In D. Budny (Ed.), Proceedings of the 33rd Annual Frontiers in Education Conference. IEEE Digital Library.

Gallego, B., Quesada, V., Gómez, M. A. & Cubero, J. (2017). La evaluación y retroalimentación electrónica entre iguales para la autorregulación y el aprendizaje estratégico en la universidad: la percepción del alumnado. REDU. Revista de Docencia Universitaria, 15(1), 127–146. https://doi.org/10.4995/redu.2017.5991

García-Aretio, L. (2015). ¿… Y antes de los MOOC? Revista Española de Educación Comparada, 26, 97–115. https://doi.org/10.5944/reec.26.2015.14483

Gatfield, T. (2006). Examining student satisfaction with group projects and peer assessment. Assessment and Evaluation in Higher Education, 24(4), 365–377. https://doi.org/10.1080/0260293990240401

González de la Fuente, A. & Carabantes, D. (2017). MOOC: medición de satisfacción, fidelización, éxito y certificación de la educación digital. RIED. Revista Iberoamericana de Educación a Distancia, 20(1), 105–123. https://doi.org/10.5944/ried.20.1.16820

Hou, H-T., Chang, K. & Sung, Y-T. (2007). An analysis of peer assessment online discussions within a course that uses project-based learning. Interactive Learning Environments, 15(3), 237–251. https://doi.org/10.1080/10494820701206974

Ibarra, M., Rodríguez, G. & Gómez, M. A. (2012). La evaluación entre iguales: beneficios y estrategias para su práctica en la universidad. Revista de Educación, 359, 206–231. http://www.educacionyfp.gob.es/dam/jcr:7e92776f027a-4f76-a4f4-2fe971b2e0c6/re35911.pdf

Kane, J. S. & Lawler, E. E. (1978). Methods of peer assessment. Psychological Bulletin, 85(3), 555–586. https://doi.org/10.1037/0033-2909.85.3.555

Kang'ethe, S. M. (2017). Peer assessment as a tool of raising students' morale and motivation: The perceptions of the University of Fort Hare Social Work students. International Journal of Educational Sciences, 6(3), 407–413. https://doi.org/10.1080/09751122.2014.11890152

Lewin, T. (2012). Education site expands slate of universities and courses. New York Times. http://www.nytimes.com/2012/09/19/education/coursera-addsmore-ivy-league-partner-universities.html?_r=0

Li, L., Liu, X. & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x

Luo, H., Robinson, A. C. & Park, J.-Y. (2014). Peer grading in a MOOC: Reliability, validity, and perceived effects. Online Learning Consortium, 18(2). http://dx.doi.org/10.24059/olj.v18i2.429

Mackness, J., Mak, S. & Williams, R. (2010). The ideals and reality of participating in a MOOC. In V. Dirckinck-Holmfeld, V. Hodgson, C. Jones, M. de Laat, D. McConnell & T. Ryberg (Eds.), Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–274). Lancaster University.

McMahon, T. (2009). Combining peer-assessment with negotiated learning activities on a day-release undergraduate-level certificate course (ECTS level 3). Assessment & Evaluation in Higher Education, 35(2), 223–239. https://doi.org/10.1080/02602930902795919

Mengual-Andrés, S., Lloret, C. & Roig, R. (2015). Validación del cuestionario de evaluación de la calidad de cursos virtuales adaptados a MOOC. RIED. Revista Iberoamericana de Educación a Distancia, 18(2), 145–169. https://doi.org/10.5944/ried.18.2.13664

Moore, C. & Teather, S. (2013). Engaging students in peer review: Feedback as learning. Issues in Educational Research, 23(2), 196–211. http://www.iier.org.au/iier23/moore.pdf

Nulty, D. D. (2010). Peer and self-assessment in the first year of university. Assessment & Evaluation in Higher Education, 36(5), 493–507. https://doi.org/10.1080/02602930903540983

Osuna-Acedo, S. & Gil-Quintana, J. (2017). El proyecto europeo ECO. Rompiendo barreras en el acceso al conocimiento. Educación XX1, 20(2), 189–213. https://doi.org/10.5944/educxx1.19037

Osuna-Acedo, S., Marta-Lazo, C. & Frau-Meigs, D. (2018). De sMOOC a tMOOC, el aprendizaje hacia la transferencia profesional: el proyecto europeo ECO. Comunicar, 55, 105–114. https://doi.org/10.3916/C55-2018-10

Pappano, L. (2012). The Year of the MOOC. The New York Times. http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-coursesare-multiplying-at-a-rapid-pace.html

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–408. http://dx.doi.org/10.1007/s10648-004-0006-x

Planas, A., Feliu, L., Fraguell, R. M., Arbat, G., Pujol, J., Roura, N., Suñol, J. J. & Montoro, L. (2013). Student perceptions of peer assessment: An interdisciplinary study. Assessment & Evaluation in Higher Education, 39(5), 592–610. https://doi.org/10.1080/02602938.2013.860077

Ramírez-Fernández, M. (2015). La valoración de MOOC: una perspectiva de calidad. RIED. Revista Iberoamericana de Educación a Distancia, 18(2), 171–195. https://doi.org/10.5944/ried.18.2.13777

Sánchez, E. & Escribano, J. J. (2014). Clasificación de medios de evaluación en los MOOC. EDUTEC, Revista Electrónica de Tecnología Educativa, (48), a279. http://dx.doi.org/10.21556/edutec.2014.48.137

Sánchez-Vera, M. M. & Prendes-Espinosa, M. P. (2015). Más allá de las pruebas objetivas y la evaluación por pares: alternativas de evaluación en los MOOC. RUSC. Universities and Knowledge Society Journal, 12(1), 119–131. http://dx.doi.org/10.7238/rusc.v12i1.2262

Saterbak, A., Moturu, A. & Volz, T. (2018). Using a teaching intervention and Calibrated Peer Review™ diagnostics to improve visual communication skills. Annals of Biomedical Engineering, 46, 513–524. https://link.springer.com/article/10.1007/s10439-017-1946-x

Schneider, S. C. (2015). Work in progress: Use of Calibrated Peer Review to improve report quality in an electrical engineering laboratory [Conference presentation]. 2015 American Society for Engineering Education Zone III Conference, Springfield, MO. https://www.asee.org/documents/zones/zone3/2015/Work-in-Progress-Use-of-Calibrated-Peer-Review-to-ImproveReport-Quality-in-an-Electrical-Engineering-Laboratory.pdf

Searby, M. & Ewers, T. (1997). An evaluation of the use of peer assessment in higher education: A case study in the School of Music, Kingston University. Assessment & Evaluation in Higher Education, 22(4), 371–383. https://doi.org/10.1080/0260293970220402

Stefani, L. (1994). Peer, self and tutor assessment: Relative reliabilities. Studies in Higher Education, 19(1), 69–75. https://doi.org/10.1080/03075079412331382153

Strang, K. D. (2015). Effectiveness of peer assessment in a professionalism course using an online workshop. Journal of Information Technology Education: Innovations in Practice, 14, 1–16. https://doi.org/10.28945/2100

Suen, H. (2014). Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3), 312–327. http://dx.doi.org/10.19173/irrodl.v15i3.1680

Topping, K. J. (2009). Peer assessment. Theory into Practice, 48(1), 20–27. https://doi.org/10.1080/00405840802577569

Topping, K. J. (2017). Peer assessment: Learning by judging and discussing the work of other learners. Interdisciplinary Education and Psychology, 1(1). https://doi.org/10.31532/InterdiscipEducPsychol.1.1.007

Torres, D. & Gago, D. (2014). Los MOOC y su papel en la creación de comunidades de aprendizaje y participación. RIED. Revista Iberoamericana de Educación a Distancia, 17(1), 13–34. http://revistas.uned.es/index.php/ried/article/view/11570

Vázquez, E., López, E. & Sarasola, J. L. (2013). La expansión del conocimiento en abierto: los MOOC. Octaedro ICE.

Vera-Cazorla, M. J. (2014). La evaluación formativa por pares en línea como apoyo para la enseñanza de la expresión escrita persuasiva. RED. Revista de Educación a Distancia, (43), 2–17. http://revistas.um.es/red/article/view/236941

Vickerman, P. (2009). Student perspectives on formative peer assessment: An attempt to deepen learning? Assessment & Evaluation in Higher Education, 34(2), 221–230. https://doi.org/10.1080/02602930801955986

Vu, T. T. & Dall'Alba, G. (2007). Students' experience of peer assessment in a professional course. Assessment & Evaluation in Higher Education, 32(5), 541–556. https://doi.org/10.1080/02602930601116896

Watson, S. (2013). Tentatively exploring the learning potentialities of postgraduate distance learners' interactions with other people in their life contexts. Distance Education, 34(2), 175–188. https://doi.org/10.1080/01587919.2013.795126

Zapata, M. (2013). MOOCs, una visión crítica. El valor no está en el ejemplar. http://eprints.rclis.org/18452/1/MOOC_critica_Elis.pdf

Zimmerman, B. J. (2002). Becoming a self-regulated learner. Theory into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Notes

* Article description: This research article is derived from the E-learning, Communication, Open-Data (ECO) project. This three-year project aimed to design 21st-century MOOCs, adapted to the needs of a modern European citizenry that demands digital and mobile learning, in order to be able to study at any time and in any place, not only with the supports available at home.

1 Massive Open Online Course.

2 E-learning, Communication, Open-Data (ECO) project funded by the European Union, with participants from twelve university institutions, developing in six languages, coordinated by S. Osuna (UNED).

3 MOOCs appeared around 2008 in the United States, gaining momentum from 2012 onwards (Pappano, 2012).

4 In the debate on the criteria for evaluating the quality of MOOCs, we find, among others, Aguaded & Medina (2015), as well as quality agencies such as the European Foundation for Quality in e-Learning (EFQUEL) or the Quality Assurance Agency for Higher Education (QAA).

5 Specifically, of the courses analyzed. The philosophy of the ECO project gives special relevance to the evaluation of its different proposals, proposing and observing the operation of different methodologies; the project “seeks to develop a horizontal and bidirectional educational model, from the new reality of MOOCs” (Osuna-Acedo & Gil-Quintana, 2017).

6 On the theory of Self-Regulated Learning, see Zimmerman (2002) and Pintrich (2004).

7 On the interaction in distance education, see: Watson (2013).

8 Peer review is increasingly used in higher education, linked to active learning and student-centered approaches (Carvalho, 2013; Li et al., 2010).

9 Osuna-Acedo & Gil-Quintana (2017) present another type of MOOC, the sMOOC, which, as they describe it, encompasses many approaches and contexts; sMOOCs are developed to stimulate connectivism and socio-constructivist learning, appropriating social media channels and enriching the social layer with shared knowledge.

10 Regarding dropout —as reported in the document titled “Eco: Elearning, Communication and Open-data: Massive Mobile, Ubiquitous and Open-Learning” (appendices, drawn from learning analytics)— 31% of those who registered started the courses, but only 4% finished them; that is, on average, only 1 in 25 students enrolled in a course completed it.

11 In all models, the selected course was included as a control variable. In all of them it appears with p < 0.010.

Author notes

* Yolanda Agudo-Arroyo is a sociologist. She combines her teaching in Sociology of Education, Research Methods and Techniques, and Gender and Media with research on the links between higher education and employment from a gender perspective, the sociology of youth, the methodology of distance learning, and research methodology.

** Javier Callejo-Gallego is a sociologist. After working as a social market researcher in several companies, he currently teaches Social Research Techniques and Sociology of Communication at UNED, Spain, where he develops his research lines, together with the sociology of time and the society of uncertainty.

Additional information

To cite this article: Agudo-Arroyo, Y. & Callejo-Gallego, J. (2021). Evaluating assessment in MOOCs. Magis, Revista Internacional de Investigación en Educación, 14, 1–27. https://doi.org/10.11144/Javeriana.m14.eain
