10 student vs expert ground rules

On the basis of autonomy, we let students set their own ground rules for group functioning. By ground rules, we mean behaviorally anchored criteria that favor effective group functioning (section “peer rating” in Kane & Lawler, 1978; see also Ohland et al., 2012). Students set their rules after a brief introduction to group work by the instructor (for more detail, see “setting the ground rules”). To initiate the discussion, we give one example on the ground-rule sheet: “listening to each other”. Many groups, but not all, adopt this rule. We accept that these self-created rules do not always cover a broad spectrum of behaviors for effective group functioning. What matters to us is that the group feels comfortable with the rules (relevant to group and task), that the rules allow for self-evaluation, and that they give members a sense of agency (control over the group’s behavior).

From the hundreds of propositions collected from students (810 propositions) and experts (117 propositions), we extracted a list of nine criteria that recur, plus a “miscellaneous” criterion. This list should not be considered “the” list of criteria that groups must meet. We gave each criterion a number (code) and coded the ground rules of students and experts accordingly; the numbers do not represent an order of importance. This approach, although subjective, allows us to see to what extent the occurrence of expert and student rules is similar or dissimilar (see figure 1 below). Do they cover the same ground? Expert criteria were retrieved from the references marked with an asterisk (*) in the reference list below.
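For readers who want to reproduce this kind of tally, the sketch below shows one way to compute and plot relative occurrences once every proposition has been assigned a code. It is a minimal sketch in Python, assuming matplotlib is available; the two code lists are hypothetical placeholders, not our data.

```python
# Tallying coded ground-rule propositions and comparing their relative
# occurrence, as in figure 1. The two code lists below are hypothetical
# placeholders, NOT our actual data (810 student and 117 expert propositions).
from collections import Counter

import matplotlib.pyplot as plt

CODES = list(range(1, 11))  # nine recurring criteria plus code 10, "miscellaneous"

student_codes = [1, 1, 2, 3, 4, 7, 7, 3, 10, 1]  # illustrative only
expert_codes = [3, 3, 7, 7, 7, 4, 2, 10]         # illustrative only

def relative_occurrence(codes):
    """Return the fraction of propositions assigned to each code, in code order."""
    counts = Counter(codes)
    total = len(codes)
    return [counts.get(c, 0) / total for c in CODES]

students = relative_occurrence(student_codes)
experts = relative_occurrence(expert_codes)

# Side-by-side bars, one pair per code, mirroring the histogram of figure 1.
width = 0.4
plt.bar([c - width / 2 for c in CODES], students, width=width, label="students")
plt.bar([c + width / 2 for c in CODES], experts, width=width, label="experts")
plt.xticks(CODES)
plt.xlabel("ground-rule code (theme)")
plt.ylabel("relative occurrence")
plt.legend()
plt.show()
```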

Figure 1. Relative occurrence of ground-rule themes, represented by codes, proposed by experts and students. The themes and their corresponding codes are shown in table format underneath the histogram.

The histogram shows similarities and differences in the relative occurrence of ground rules. Both experts and students place most emphasis on the work ethos of the group, which can be summed up as being diligent: “high quality work”, “set high standards”, “good argumentation”, “personal (original) work”, and “rigorous participation in all activities” (be it personal research or group meetings). For the students, “listening to each other” and “being open to feedback” (code 1) is the second most frequently written rule; its high frequency is almost certainly the result of the example we give to start the group discussion. A “fair division of tasks” (code 4) is also often reported, which students describe as “contributing an equal share of the project tasks”. The experts place great emphasis on “attending and actively participating in group meetings” (code 7), a criterion that students also often express as “actively participating in group sessions”, “sharing (personal) opinions”, “sharing your findings” or “communicating with the group”. A final important theme is “trying to understand each other” (code 2). Students find mutual respect very important: members should “make an effort to seek compromise” and “endeavor to create a non-hostile environment”. Under miscellaneous (code 10), we find rules such as “enjoy the learning task”, “kindness and goodwill are paramount”, “be honest”, “be in a good mood” or “switch off your smartphone in work sessions”. You will find all propositions in an Excel file attached at the end of this article.

Although we give groups a great deal of freedom, we sometimes intervene in setting the rules, for example when a group proposes few behaviorally anchored criteria. If groups only write holistic rules, such as “make collaboration work,” “make the group work,” “be enthusiastic,” or “contribute to the success of the project,” we ask them to be more precise: what behavior “makes a group work” or “makes collaboration work,” and how, concretely, can you “contribute to the success of the project”? One or two holistic rules are not a problem. It is also common for groups to write rules about how the group as a whole should function, such as “the group must work well together”. You cannot turn such a rule into an individual evaluation question, so we help to reformulate it. Often the group comes up with a new proposal itself, for example, “completing the work assigned by the group in a timely manner”. The evaluation question then becomes, “Does she/he (do I) complete the work assigned by the group in a timely manner?” We also intervene when the focus is only on social criteria and not on quality, or vice versa. We then make suggestions along the lines of: what happens if someone is at meetings on time and listens to others, but repeatedly works on an unassigned topic? Or what if someone works very hard but systematically fails to communicate the results in time? In these last examples, where the rules are appropriate for individual evaluation but their scope may be very limited, we leave it up to the group to change them or not. We do not enforce changes.
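To make the reformulation step concrete, here is a minimal sketch of the mechanics: a ground rule only yields an individual evaluation question if it names an individual behavior. The helper functions and the second example rule are our own illustrations, not part of any evaluation instrument.

```python
# Reformulating a behaviorally anchored ground rule into matched self- and
# peer-evaluation questions. The helper names are ours, for illustration;
# rules must describe individual behavior (base verb form) for this to work.

def self_item(rule: str) -> str:
    """Phrase a rule as a self-evaluation question."""
    return f"Do I {rule}?"

def peer_item(rule: str) -> str:
    """Phrase the same rule as a peer-evaluation question."""
    return f"Does she/he {rule}?"

# A holistic rule like "the group must work well together" has no individual
# form, which is exactly why we ask groups to reformulate such rules.
rules = [
    "complete the work assigned by the group in a timely manner",
    "listen to the other group members",
]

for rule in rules:
    print(self_item(rule))
    print(peer_item(rule))
```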

Wouldn’t students be better evaluated according to prescribed rules that cover all the essential criteria of effective group work? Wouldn’t that be better for learning to work in groups? Isn’t it important for leaders to define what is good? Isn’t it better to develop a comprehensive standardized assessment against which everyone is measured, as proposed, for instance, by Ohland et al. (2012)?

This is a possibility, but we have four arguments for not choosing this approach. Our first and most important argument is that the groups should make the rules themselves: it is their collaborative task and their responsibility (Blatchford et al., 2003). The role of the instructor is to help them in that task through convincing instruction, not to enforce rules of conduct. Regardless, if group members collectively decide to ignore prescribed rules, then they will also fill in a standardized assessment to their own liking and tick off behaviors that do not match reality. If we consider that both cognitive and social traits (social sensitivity) determine collective intelligence (Woolley et al., 2010), then the group’s poor behavioral choices will be paid for by a poor final product, provided the task merits a collaborative approach, that is, an authentic and sufficiently complex task demanding a high level of interdependency (Kramer & Kusurkar, 2017, and citations therein). Here is where instructors have their say. If students are not bothered by a bad product, then any rule (or instruction) is lost anyway. A second argument is that tasks and groups are often different, and self-made rules allow for emphasis on specific aspects of the task and the group (Salas et al., 2005, p. 570). A third argument is that, as with standardized tests in school disciplines, group members may work more for the test (performance-oriented) and less for the task (mastery-oriented) (Deci et al., 1981; Smither et al., 2005). Finally, as far as learning to collaborate is concerned, when students participate in multiple collaborative projects and in different groups, the sum of the rules they encounter will approximate the list of behaviorally anchored criteria established by experts. Our quantitative analysis of ground rules (figure 1) indicates that students and experts formulate approximately the same range of criteria. Thus, over time, students will be exposed to all of the behavioral norms that make for effective collaboration; nothing is lost!

References

Baker, D.F. (2008). Peer assessment in small groups: A comparison of methods. Journal of Management Education, 32(2), 183-209. https://doi.org/10.1177/1052562907310489

Blatchford, P., Kutnick, P., Baines, E., & Galton, M. (2003). Toward a social pedagogy of classroom group work. International Journal of Educational Research, 39(1-2), 153-172. https://doi.org/10.1016/S0883-0355(03)00078-8

*Cornell University, website: https://teaching.cornell.edu/resource/how-evaluate-group-work

Deci, E.L., Schwartz, A., Sheinman, L., & Ryan, R.M. (1981). An instrument to assess adults’ orientations toward control versus autonomy with children: Reflections on intrinsic motivation and perceived competence. Journal of Educational Psychology, 73(5), 642-650. https://doi.org/10.1037/0022-0663.73.5.642

*Gillies, R.M. (2004). Structuring cooperative group work in classrooms. International Journal of Educational Research, 39(1-2), 35-49. https://doi.org/10.1016/S0883-0355(03)00072-7

*Freeman, M., & McKenzie, J. (2002). SPARK, a confidential web-based template for self and peer assessment of student teamwork: Benefits of evaluating across different subjects. British Journal of Educational Technology, 33(5), 551-569. https://doi.org/10.1111/1467-8535.00291

*Freeman, M. SPARK website: https://help.online.uts.edu.au/information-for-staff/introduction-spark-gcm/#spark-10

*Goldfinch, J. (1994). Further developments in peer-assessment of group projects. Assessment and Evaluation in Higher Education, 19(1), 29-35. https://doi.org/10.1080/0260293940190103

*Friedman, B.A., Cox, P.L., & Maher, L.E. (2008). An expectancy theory motivation approach to peer assessment. Journal of Management Education, 32(5), 580-612. https://doi.org/10.1177/1052562907310641

Kane, J.S., & Lawler, E.E. (1978). Methods of peer assessment. Psychological Bulletin, 85(3), 555-586. https://doi.org/10.1037/0033-2909.85.3.555

*Kench, P.L., Field, N., Agudera, M., & Gill, M. (2009). Peer assessment of individual contributions to a group project: Student perceptions. Radiography, 15(2), 158-165. https://doi.org/10.1016/j.radi.2008.04.004

Kramer, I.M., & Kusurkar, R.A. (2017). Science-writing in the blogosphere as a tool to promote autonomous motivation in education. The Internet and Higher Education, 35, 48-62. https://doi.org/10.1016/j.iheduc.2017.08.001

*Lejk, M., & Wyvill, M. (2001). Peer assessment of contributions to a group project: A comparison of holistic and category-based approaches. Assessment & Evaluation in Higher Education, 26(1), 61-72. https://doi.org/10.1080/02602930020022291

*Loughry, M.L., Ohland, M.W., & Moore, D.D. (2007). Development of a theory-based assessment of team member effectiveness. Educational and Psychological Measurement, 67(3), 505-524. https://doi.org/10.1177/0013164406292085

*Ohland, M.W., Loughry, M.L., Woehr, D.J., Bullard, L.G., Felder, R.M., Finelli, C.J., Layton, R.A., Pomeranz, H.R., & Schmucker, D.G. (2012). The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self- and peer evaluation. Academy of Management Learning & Education, 11(4), 609-630. https://doi.org/10.5465/amle.2010.0177

Salas, E., Sims, D.E., & Burke, C.S. (2005). Is there a “big five” in teamwork? Small Group Research, 36(5), 555-599. https://doi.org/10.1177/1046496405277134

Smither, J.W., London, M., & Reilly, R.R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58(1), 33-66. https://doi.org/10.1111/j.1744-6570.2005.514_1.x

Woolley, A.W., Chabris, C.F., Pentland, A., Hashmi, N., & Malone, T.W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688. https://doi.org/10.1126/science.1193147