
Distributed collaborative homework activities in a problem-based usability engineering course


John M. Carroll & Hao Jiang & Marcela Borge
Published online: 15 January 2014
© Springer Science+Business Media New York 2014
Educ Inf Technol (2015) 20:589–617; DOI 10.1007/s10639-013-9304-6

J. M. Carroll (*) · H. Jiang · M. Borge
Center for Human-Computer Interaction and College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA
e-mail: jmcarroll@psu.edu
Abstract Teams of students in an upper-division undergraduate Usability Engineering
course used a collaborative environment to carry out a series of three distributed
collaborative homework assignments. Assignments were case-based analyses structured
using a jigsaw design; students were provided a collaborative software environment
and introduced to a simple model of collaboration. We found that students were
able to use the collaboration model, though the quality of their collaboration was poor
both before and after training. We found that students were able to carry out the
distributed collaborative homework activities using our collaborative software environment,
though they often used, and sometimes relied on, face-to-face interactions. The
use of chat to maintain team awareness, and coordinate the development of shared
documents, was particularly notable as a practice of our most successful teams.
Students reported a great variety of benefits and challenges in carrying out the
distributed collaborative homework activities. We speculate on future directions for
teaching collaboration skills, and for better supporting team awareness and workflows
in distributed collaborative homework activities.
Keywords Problem-based learning · Collaborative learning · Distributed collaborative homework · Awareness
1 Introduction
In the social constructivist view, learning is a process of enculturation, constituted by
the appropriation of the artifacts and practices of a community (Dewey 1966; Vygotsky
1978). Constructivist learning involves collaboration, social norms, tool manipulation,
domain-specific goals and heuristics, problem solving, and reflection-in-action
(Hutchins 1995; Scardamalia and Bereiter 1993; Schön 1991). It is common to refer
to this as authentic learning (Lave and Wenger 1991; Savery and Duffy 1996) since the
learning activities, as well as the concepts and skills that are learned, closely model the
activities, concepts and skills of domains in the real world.
In contrast, the learning activities encouraged, developed, and valued in traditional
classrooms—such as presenting, listening to, and being tested on lectures, working sets
of purely symbolic problems, and so forth—are often quite inauthentic with respect to
the artifacts and practices of communities and domains beyond the school itself.
Indeed, traditional school-learning often cannot be productively engaged and applied
outside of school (Brown et al. 1989; Resnick 1987). One major thrust in contemporary
educational reform is anchoring classroom learning in authentic situations and issues,
for example, workplace skills (Bottge and Osterman 1998).
Our research addresses the development of authentic instruction in usability engineering
at the advanced undergraduate level. In our prior work since 1996, we
developed a library of usability case studies describing real system development
projects, a set of case-based inquiry activities for students to carry out using these
cases, and a project-based course curriculum integrated through a semester-long project
in which students created an extensive case study report (Carroll and Rosson 2005;
Rosson and Carroll 2002; Rosson et al. 2004).
As is typical in authentic learning designs, we have always incorporated collaborative
learning into this course. Specifically, the semester project and in-class activities
have always been group-based. Our initial rationale for this was that working together
on meaningful activities naturally evokes describing, explaining, listening, and
interpreting, and thereby should help students develop language skills, collaboration
skills, and self-monitoring or meta-cognitive skills. Shared knowledge-building allows
learners to integrate creation and reception, to negotiate meaning and purpose, to divide
and manage collective work, and to come to regard themselves as persons who solve
problems and develop conclusions. A considerable body of work in classroom and
online learning communities encourages this view (Barab et al. 2004; Brown and
Campione 1996; Levine and Shapiro 2004; Resta and Laferrière 2007), though some
of that work involved much younger students than our undergraduates.
In our prior work, we did not present articulate guidance to the students about what
collaboration is or how to do it. Rather, we expected to benefit from our students' experience in their earlier courses: For most of our students, usability engineering is
a final-semester course, and during the early 2000s our college implemented a pervasive
commitment to collaborative learning. Thus, most of our usability engineering
students had been learning collaboratively in teams for at least 3 years. We also
expected to benefit from internship experiences: Each student in our college must
complete two internships as a graduation requirement. Thus, most of our usability
engineering students had completed two internship experiences, working with others in
the real world, before entering our class.
Nevertheless, we have consistently observed a wide range of collaborative outcomes
among our student groups, including many problems (Carroll and Rosson 2005).
Collaborative learning is a complex social skill. For example, Chan et al. (1997)
showed how the expression of differing views on strategies and issues can entrain
effective learning, though it can also entail conflict that causes groups to function
poorly. Students can develop their own understandings through articulating and demonstrating
knowledge to less advanced peers, as well as benefiting themselves from the
assistance of more advanced peers (Brown and Campione 1996; Hertz-Lazarowitz and
Miller 1992). Damon (1984) proposed that collaboration with peers is particularly
effective for learning that requires giving up a current understanding in favor of a more
sophisticated concept.
In this paper, we report on efforts to support and to analyze project-based collaborative
learning in an upper division undergraduate usability engineering course. We
investigated distributed collaborative homework (DCHW) activities in which teams
worked for 2–3 weeks, largely out of class, on a significant design document. We think that this extension of collaborative learning is particularly important to enhance the authenticity of collaboration, which is now often distributed collaboration in the working
world (Malone 2004). We also developed a model and training intervention to enhance
collaborative skills of the students.
The paper is organized as follows. First, we describe the context for this research,
the usability engineering course, and briefly describe our early efforts to incorporate
distributed collaborative homeworks into the course. Next, we describe our new course
design, which incorporated elements to better prepare students to carry out distributed
collaborative homework assignments, and to scaffold collaboration within the assignments
themselves. We provided students with a simple but explicit model of collaboration,
and direct practice with this model. We encouraged students to apply the model
to a series of distributed collaborative homework activities, both explicitly and implicitly,
by using a jigsaw design for the homework activities, such that every team
member’s separate contribution was independently recognizable and essential to the
team being able to synthesize its collective outcome (Slavin 1980). Finally, we turn to
outcomes, including our experiences with introducing a lightweight model of collaboration,
student self-perceptions of their teams’ collaborative interactions, and direct
analysis of activity of the student teams in carrying out the distributed collaborative
homeworks, and teams’ subsequent success in the semester project to which the
homeworks contributed. We close the paper with discussion of implications for enhancing
student engagement in engineering courses through distributed collaborative
assignments, and for designing future collaborative software environments.
2 Research context: Undergraduate usability engineering
The usability engineering course that is the context for our study was originally
developed for Virginia Tech’s undergraduate program in computer science in 1995.
Since 2004 we have offered the course as part of Pennsylvania State University’s
undergraduate program in Information Sciences and Technology. The overall course
design has not changed dramatically through the years, and follows a textbook (Rosson
and Carroll 2002). The course presents the “scenario-based usability lifecycle,” a view
of software development that emphasizes designing for the user experience by describing
requirements, designs, and evaluation contexts as scenarios—narrative descriptions
of how people interact with a system, including what they think and experience as they
do so. We used scenarios as an integrating representational technique, since they
emphasize the user’s point of view and the importance of describing and supporting
work activities as the user sees and experiences them (Carroll 2000). Thus, the course
begins with a module of requirements development, including participatory and ethnographic
methods of gaining insights into user concepts and practices. For example,
requirements scenarios are descriptions of things users currently do and things they would
like to do. A key technique we emphasize is identifying potential tradeoffs in scenarios:
It often happens that features that are desirable to users entail features not desirable to
them, and conversely.
Our course design concept was to sequence major topics following the information
and activity flow of a scenario-based system development process. The course presents
successive life cycle phases: requirements development, activity design, information
design, interaction design, documentation design, formative evaluation, and summative
evaluation. Each phase has characteristic scenarios and scenario-based techniques, and
connects in various ways with other phases. Integrated into the various phases are
discussions of a variety of key usability engineering concepts. Thus the activity design
phase introduces metaphor as an approach for conceptualizing new activities; the
information design phase presents principles for designing graphical layouts, such as
white space and symmetries; the formative evaluation phase emphasizes thinking aloud
protocol methods; the summative evaluation phase presents the logic of experimental
design and elementary statistics. In the balance of this description, we focus on the
instance of the course that was offered in Spring 2009; the webpage for this version of
the course is at ist413.ist.psu.edu/sp2009/.
The course was strongly problem-based. Students individually worked on a series of nine homework assignments throughout the semester, took eight short quizzes on reading assignments, and took two long quizzes comprehensively addressing the first and
second halves of the course. Examples of homework assignments included the following:
elaborating the requirements analysis in one of the case studies by developing a
new user scenario and identifying in that scenario a new design tradeoff, identifying
metaphors used in one of the case studies and analyzing the role those metaphors
played in activity design for that case, critically analyzing a web page with respect to
Gestalt psychology and other principles of graphical design, and contrasting why and
how developers used prototypes in two of the case studies.
The students worked in teams during most class meetings, carrying out seventeen in-class activities, three distributed collaborative homework assignments, to which we return in the next section, and a project that spanned the entire 15-week semester and involved four document submissions spaced throughout the semester. Examples of in-class activities included the following: debating and evaluating four alternative views of usability engineering, brainstorming and critiquing alternative user interface design ideas for the team's semester project, carrying out simple experiments with
digital watches to contrast model-based evaluation with thinking aloud evaluation,
discussing future challenge areas for usability engineering in creativity, mobility,
ubiquitous sensors, and elderly users. For each activity, the teams submitted at the
end of class a short summary of their discussions. For the semester project, teams submitted a requirements analysis (including reports of input from potential users), a design document (covering activity, information, and interaction design), a usability evaluation report (including results of evaluating their prototype), and a comprehensive case study. Slightly more than half of final course grades were determined by
collaborative learning activities.
A central resource for the course is our usability case study library, a collection of 6
cases of system development drawn from a variety of domains—mobile banking,
speech systems (PhoneWriter telephone-to-text), online communities (TappedIn
professional community of educators), online retail (garden.com), community nonprofit
(Promotion of Animal Welfare and Safety, PAWS, animal rescue and rehoming) as well
as an online version of the virtual science fair case which is also described in the
students’ textbook (Rosson and Carroll 2002). Each case study is organized hierarchically
into phases of the usability lifecycle: Requirements Analysis, Activity Design,
Information Design, Interaction Design, Documentation, Usability Testing. Each phase
is further decomposed into subcategories of activity within that phase. Usability Testing
and Requirements Analysis are decomposed into Planning, Methods & Materials, Data
Collection, Interpretation, and Synthesis. The remaining four phases are decomposed
into Exploration, Envisionment, and Rationale.
Within each subcategory we grouped project documents relevant to that activity. For
example, in the TappedIn case, under Information Gathering within Requirements
Analysis, we included documents reporting the survey studies that the designers carried
out to understand member needs. Under Exploration within Activity Design we
included documents reporting the brainstorming that the developers carried out, and a document
describing their “conference” metaphor. Under Envisionment within Information
Design, we included sketches and screen mock-ups describing how TappedIn might
present various kinds of rooms to users. Under Rationale within Interaction Design, we
presented before/after screen shots describing specific user interactions and a document
presenting design rationale. To the extent possible, we directly included documents and
information from developers; however, we also interviewed developers and reconstructed
documents, since actual development projects are sometimes not comprehensively
documented. For more details, see Merkel et al. (2006).
The case study library has proven to be versatile and effective in supporting
a variety of authentic inquiry activities (Carroll and Rosson 2005; Rosson et al.
2004). Case materials help students engage with domain content, reconstruct a
project’s narrative, and vicariously experience domain practices and situated
concepts (Carroll and Rosson 2006). Case materials are good for evoking
critical thinking; for example, students were asked to analyze the underlying
interaction design issues in the mobile banking case study, and to explain how
each issue was resolved, or not resolved, in the actual design. Case materials
also afford meta-cognitive “what-if?” reasoning; students can study a detailed
and vivid report of a system development project, and then be asked to imagine
how and why it could have been done differently; for example, in one class
activity we gave each group an extra requirement not present in the garden.com
case study, and asked them to work out the consequences of this new requirement
for the overall analysis.
Initially we developed a read-only browser for the case studies (Carroll and Rosson
2005; Rosson et al. 2004). This version is openly accessible at ucs.ist.psu.edu.
However, we wanted students to be able to interact with cases, not just read them.
We wanted them to be able to annotate case study objects, that is, to make notes and
highlight for their own future reference, and to share annotations with their team
members. We wanted them to be able to bookmark locations in case documents to facilitate subsequently returning to and working with a case. We also wanted student
teams to be able to create their own cases as a capstone course activity, integrating
across the phases of the usability lifecycle. For several years, we jury-rigged student
case creation, by creating multiple “student” instances of the read-only browser, and
allowing student teams to edit a case in their own browser instance. In 2008, we created
an editable case library (Jiang et al. 2010a, b). Our new browser allows authenticated
users to create and edit case materials in a web-browser.
3 Course design: Collaboration model and distributed collaborative homework
activities
Collaborative learning is a core feature of authentic learning: People always construct
and apply knowledge in interaction with other people. Engineering is especially
collaborative, because complex projects typically require multi-disciplinary expertise.
Accordingly, we have always regarded collaborative learning as essential to our course
(Carroll and Rosson 2005; Rosson and Carroll 2002; Rosson et al. 2004). However, we
have not always explicitly analyzed and supported collaboration per se.
In early versions of the course, starting in 1995, we provided general coaching to student groups as to how they should try to collaborate. We suggested they should share
all of their ideas, listen to others’ views, criticize constructively, try to be sure that
everyone did his/her part, and make their group work greater than the sum of the
individual parts. In general, the student teams did behave cooperatively, and appeared
to enjoy working together both in class and outside class. They frequently became
passionate about their projects, and clearly spent much time and effort on this part of
the course.
However, there also were recurring problems. Some students contributed little or
nothing, and allowed their team members to do all the “joint” work. We attempted to
manage this by weighting individual grades according to a student’s team contribution,
assessed by querying each team member about the other members’ contributions.
Indeed, such approaches are now quite pervasive and supported by standard tools such
as the Comprehensive Assessment for Team-Member Effectiveness (CATME; Loughry
et al. 2007). Our students are uneasy with being asked to assess their peers, but we
found that discussing the rationale for the assessment mitigates this. We also have used
the technique of having the instructor observe group work sessions and noting who is
actively engaged in the joint activity. Indeed, even noting attendance at group work
sessions can be a meaningful proxy for assessing contribution.
A more subtle problem was that many teams used a management strategy of “divide
and conquer”. The key joint work they did was to divide the project up into nearly
independent subprojects, each of which could be carried out by a single student (i.e.,
non-collaboratively). At the conclusion of their work they more or less stapled the
pieces together and handed the ensemble in as their “collaborative” project.
Starting with the 2006 version of the usability engineering course we included
distributed collaborative homework activities (DCHWs; Carroll et al. 2010). Our
objective was to provide students with an opportunity to practice collaboration per se. We conjectured that carrying out a mediated collaborative activity, one that created a
persistent and therefore reviewable record of team interactions, might help the students
to see and reflect on what they were doing, and to better understand and perhaps
improve their collaborative practices. In addition, these DCHWs introduced the students
to collaborative workspaces incorporating discussion forums, chats, and shared
documents; such tools are now part of the design palette for interactive systems, and
regularly used by practicing usability engineers. We used an open-source system that
was particularly strong in supporting integrated synchronous/asynchronous interactions
(BRIDGE (Basic Resources for Integrated Distributed Group Environments), Ganoe
et al. 2003).
BRIDGE supports persistent text chat, threaded discussion forums, and various sorts
of document types. BRIDGE tools and content objects can be accessed and edited
synchronously or asynchronously; edits are pushed to all subscribed clients character
by character. BRIDGE also incorporates a buddy list awareness tool, displaying the
status of fellow team members, and a timeline file viewer, showing when various
versions of shared documents were created, and providing direct access to all versions
(Fig. 1). We demonstrated BRIDGE functions for the class, emphasizing that team
members could exchange ideas, raise issues, present their contributions, and comment
on work that was planned or completed either synchronously, if members happened to
be in the workspace at the same time, or asynchronously, if members happened not to
be working at the same time. Most students had some experience with collaborative
tools, but did not use such tools routinely (for example, Google Docs was in beta until July 2009).
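To make the mediation described above concrete, the following is a minimal, hypothetical sketch of a shared document that accepts single-character edits and pushes each edit to every subscribed client, while the accumulated text also remains available to members who read it later, asynchronously. It is our illustration only, not the actual BRIDGE implementation, and the names (SharedDocument, insert_char, and so on) are invented for this sketch.

```python
# Hypothetical sketch of character-by-character edit propagation; NOT the real BRIDGE code.
from dataclasses import dataclass, field
from typing import Callable, List

EditListener = Callable[[int, str], None]  # callback receiving (position, inserted character)

@dataclass
class SharedDocument:
    text: str = ""
    listeners: List[EditListener] = field(default_factory=list)

    def subscribe(self, listener: EditListener) -> None:
        # In a real groupware system this would register a network session for a client.
        self.listeners.append(listener)

    def insert_char(self, pos: int, ch: str) -> None:
        # Apply a single-character edit, then push it to all subscribed clients.
        self.text = self.text[:pos] + ch + self.text[pos:]
        for notify in self.listeners:
            notify(pos, ch)

# Two "clients" subscribed to the same workspace document.
doc = SharedDocument()
doc.subscribe(lambda pos, ch: print(f"client A sees insert {ch!r} at {pos}"))
doc.subscribe(lambda pos, ch: print(f"client B sees insert {ch!r} at {pos}"))
for i, ch in enumerate("hi"):
    doc.insert_char(i, ch)
print(doc.text)  # "hi"; the persistent state also serves asynchronous readers
```

A production system such as BRIDGE would additionally handle networking, concurrent edits, and version history, all of which this sketch omits.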
For the 2009 offering of the usability engineering course, we designed three
DCHWs. These homework activities were closely integrated with the semester
project-planning that the students were doing, specifically with the user interface
design, documentation design, and usability evaluation phases of the semester projects,
but they were also separately graded course components. In each assignment, students
reviewed corresponding system development across different cases, identifying ideas
and techniques in the case studies that could be applied in their team’s semester project.
Prior to this series of DCHWs, we also included explicit training and practice on a
simple model of collaboration.
Fig. 1 BRIDGE workspace for one of the teams during HW1, showing the persistent chat log (lower left), buddy list (upper left), two shared documents (lower right), and a timeline file viewer (top right) presenting versions of various team objects
Our model analyzes effective collaboration as consisting of effective communication (team members build on each other's ideas and work to develop a joint understanding), planning (the group's activity is directed by an agenda of goals), productivity (the team stays on track with respect to task goals and ensures work quality), and evaluation-negotiation (different perspectives among members are made visible and addressed, and the group's results are critically assessed). These four facets of collaboration can be operationalized as four distinct roles that team members adopt and perform (Borge 2007); in our usability engineering course we have treated them more abstractly, as four key qualities or responsibilities of an effective collaborative interaction (Borge and Carroll 2010; Carroll et al. 2008).
In week 2 of the course, student teams videotaped their own interaction as they
worked on an in-class activity on requirements change (they were presented with one
further requirement for the garden.com design, and asked to analyze the impact of this
requirement on the upstream design). The next week, the class was presented with a
brief lecture on the four facets of collaboration and participated in video-based training
on the facets (they viewed examples of student team interactions from a prior version of
the usability engineering course and learned to classify behaviors as illustrating the
facets). In week 4 of the course, teams were asked to review their collaborative
interaction videos with respect to how well they enacted the four facets. Our rationale
for this training approach was that direct self-confrontation would help the students to
recognize how their own collaborative skills could indeed be improved. This collaboration
thread of instruction during weeks 2–4 of the course was actually fairly lightweight.
The lecture we provided was only a few minutes long, and the other activities
were interleaved with other class activities.
In week 7 of the course the student teams started the first of the three DCHWs. The first
homework, hereafter HW1, occurred over 3 weeks, spanning Spring Break week. This
scheduling helped to motivate a distributed-collaborative activity: Students were told they
could and should work on this assignment during Spring Break, and their results were due
immediately after Spring Break. HW2 started immediately after HW1 and ran for 2 weeks; HW3 started immediately after HW2 and ran for another 2 weeks. On the day that one of the DCHWs
started, the teams were given about 30 min to plan and organize. On the day a homework
activity ended, teams were given 30 min to discuss how things had gone.
HW1 addressed the user interface design for the semester project. In each team, one
student was to analyze one of the case studies with respect to user interface metaphors
developed for activity, information and interaction design phases, and the rationales and
experiences with those metaphors that were reported in the case studies. Based on this
analysis, and on the team’s requirements analysis for their own project, each student
was to make specific design recommendations for the whole team to consider. Then,
the team was to pool these recommendations, evaluate and integrate them, and develop
an interface design envisionment illustrated with at least two scenarios. Finally, the
team was to create a document describing their design process and outcome, including
design rationale. The case studies were of course developed to address stakeholders and
requirements different from those of the students’ projects, so we emphasized that the
homework activity was an example of creative use of analogical reasoning.
A collaborative activity in which the separate contribution of each learner must be integrated with, and not merely aggregated with, the contributions of other team members is called a jigsaw design (Slavin 1980). Thus, each student's individual
analysis of the user interface design ideas and argumentation from a different case study
is a distinct but essential investigatory contribution to the team effort. The results of the
individual analyses had to be pooled and integrated to reach the specified result. We
employed a jigsaw design to more clearly motivate collaboration beyond “divide and
conquer” as well as to better ensure that every member actively participated in the
collaborative learning activity. In addition to their jigsaw design, these collaborative
homework activities involved far more complex analysis of the case studies than the
DCHWs we had used in 2006. Specifically, they were far too complex to be carried out
in class. The students were told that we would assess evidence of their collaborative
process in the document versions and chat logs in their BRIDGE workspaces, as well as
their team products, that is, the user interface design and rationale.
HW2 addressed the usability evaluation phase of the semester project. In each team,
one student was to analyze one of the case studies with respect to the way usability
evaluation was implemented in that project. (Team members were to work on different
case studies in each of the three DCHWs.) Based on this analysis, each team member
was to make recommendations with rationales to the whole team as to what usability
evaluation work should be carried out for the team’s semester project. Then, as in HW1,
the team was to pool the various recommendations, evaluate and integrate them, and develop a usability evaluation plan for their semester project. They were asked to detail
this plan with respect to the case study subcategories of Planning, Methods and
Materials, Data Collection, Interpretation, and Synthesis, modeling on the best practices
from the case study library.
HW3 addressed the documentation design phase of the semester project. In each
team, one student was to review one of the case studies with respect to its documentation
design. Based on this analysis, each student was to make specific design
recommendations for the whole team to consider. Then, the team was to pool these
recommendations, evaluate and integrate them, and develop a documentation design
envisionment. Finally, the team was to create a document describing their design
process and outcome, including design rationale.
For each of the DCHWs, the teams received these instructions:
You can do some face-to-face coordination in class, but most of your work should
be carried out and documented in BRIDGE. Please bear in mind that distributed
collaborative work requires extra coordination effort. Collaborative effort can be
very powerful, but can also be quite chaotic and wasteful. You should use the
collaborative roles to increase the effectiveness of each member’s efforts: use of
the roles should be evident in the collaborative documents (e.g., the text chat) you
create for this assignment.
We will grade this assignment by reviewing the collaborative documents, including
text chat, that you create in the course of this activity. You do not have to turn
in any other product. Please make it obvious in your group’s BRIDGE workspace
what documents belong to this assignment.
In sum, each of the three DCHWs took place over a 2–3 week period. Students had
to conduct a creative jigsaw analysis of materials in the case studies library. Students
reviewed different phases of design across different cases identifying ideas and techniques
in the case studies that could be applied to their team’s semester project, and
needed to integrate separate analyses of team members. Each team member was to
recommend user interface design ideas and design rationales to the team based on their
individual analysis. They were told to first gather and consider all the different ideas
that members had identified, and then prioritize, select and adapt ideas and techniques
to their own project. The final result of this particular homework was to envision a user
interface design with at least two user interaction scenarios. In the syllabus and in class
discussions, it was emphasized that grades would be based not only on outcomes and products handed in, but also on collaborative process.
4 Data analysis and results
Our investigation is a case study of a single class. We did not sample students or course
topics; we studied the students who enrolled in the class. There were 45 students enrolled
in the course; their project-based work, such as the DCHWs and the course semester
project, were carried out in teams of 5–6 members.We did not run a control class.We did
try to gather a variety of data including coding of videotaped student interactions, selfreports—
both Likert items and open response items—gathered in surveys after each
DCHW, logged interactions with the BRIDGE system, and graded assessments of the
homeworks that were submitted. Since we do not have a basis for statistical inference, we
present a multi-method descriptive analysis of data from our case study.
Our data analysis is divided into three parts. First, we report on the training and
reflection we provided for the four facets of collaboration model. Next, we summarize
self-report data from surveys administered after each of the three DCHWs. Finally, we
present log data describing how students actually used the BRIDGE tools in carrying
out the distributed activities.
4.1 Video-based training and analysis of collaboration skills
As mentioned earlier, in week 2 of the course we videotaped the student teams working
on an in-class exercise. In this exercise, each team was presented with one additional
design requirement for one of the software development projects documented in the
case study library, garden.com. The new requirements included extending the business
to address international markets, focusing on business-to-business supply chains, and
developing for kiosk access as well as web. Teams were asked to identify new
stakeholders in the design, possible new fieldwork that would have to be done,
potential impacts on design rationale, and so forth.
We analyzed the videos for evidence that the team interaction addressed the four
facets of our collaboration model, using a scoring rubric that decomposed each facet
into subskills (Borge 2007; Borge and Carroll 2010): Communication was decomposed
into equity in participation across the team, listening to other members, building on
ideas, creating common ground, and being clear, concise and relevant to the task;
planning was decomposed into setting an agenda, providing and using feedback,
identifying project and interaction goals, and distributing work and responsibilities;
productivity was decomposed into keeping track of progress, focusing on task-oriented
activity, checking and improving work products, and updating status among members;
evaluation-negotiation was decomposed into considering different points of view,
thinking critically and managing criticism, identifying and evaluating tradeoffs, sharing
in decision making, and keeping track of ideas considered. (Note that communication
and evaluation/negotiation were assessed for 5 subskills, but planning and productivity
were assessed for only 4.)
Seven teams were scored for each facet using a three-point scale (0-1-2). (There was
a problem with video collection for the eighth team; we videotaped another activity for
this team to give them analysis and feedback, but we do not consider them in this data
analysis). Two coders analyzed 20 % of the dataset with a Kappa of .69; differences
were discussed and resolved for coding the balance of the data. Performance is
summarized in Table 1.
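For reference (our gloss; the paper does not give the formula), the agreement statistic here is presumably Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement between the coders and p_e is the agreement expected by chance; a value of .69 is conventionally read as substantial agreement beyond chance.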
Overall, teams scored 37 % of possible points. Normalizing scores (for total points
possible), five of the seven teams followed the pattern Communication ≥ Evaluation/
Negotiation ≥ Productivity ≥ Planning. We used the video recordings of these team
interactions for self-confrontation training (Fukkink et al. 2011) as an in-class activity.
There was anecdotal evidence that team members found this experience useful in
gaining insight into their own collaboration practices. One student commented,
“Wow, I never knew how much I dominated conversations. (Pointing to video) Look,
I totally cut Ally off and didn’t even let her finish her sentence”. Another student
reported, “I have worked with these guys on other projects [referring to team members],
so I guess we have gotten comfortable and we just agree with everything we say.
We don’t ever explore any alternative ideas.”
In week 12 of the course, we again videotaped the teams as they worked on a class
activity, and scored their collaborative process using the same rubric used in week 2.
We, of course, did not control the learning activity; in week 2, the students were
learning about system requirements and performed a requirements change activity,
whereas in week 12 they were learning about documentation design, and worked on
documentation for their semester project. In the second assessment, teams scored better in each of the four facets of our collaboration model, earning 53 % of possible points, an overall improvement across facets of 41 %. Normalized scores for four of the seven teams
followed the ordering cited above. Between the two assessments, teams improved least
on communication and most on planning, which is at least in part due to ceiling/floor
effects. In Evaluation/Negotiation, teams improved most notably in the subskill of
thinking critically and managing criticism. In Productivity, they improved most in the
subskills of keeping track of progress, and checking and improving work products.
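As an arithmetic check on the percentage-of-possible-points figures (our reconstruction from Table 1, not the authors' analysis script), summing the mean facet scores and dividing by the total points possible reproduces the reported values:

```python
# Reconstruction from Table 1 (hypothetical script, not from the paper).
week2 = [6.7, 0.14, 2.14, 4.29]    # mean team scores: Communication, Planning, Productivity, Eval/Negotiation
week12 = [7.0, 3.29, 3.57, 5.14]
possible = [10, 8, 8, 10]          # (5 or 4 subskills) x 2 points each

print(round(sum(week2) / sum(possible), 2))   # 0.37 -> "37 % of possible points"
print(round(sum(week12) / sum(possible), 2))  # 0.53 -> "53 % of possible points"
```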
Table 1 Analysis of four facets of collaboration for seven student teams (assessed on a 3-point scale)

                      Communication   Planning   Productivity   Evaluation/negotiation
Week 2 assessment     6.7/10          0.14/8     2.14/8         4.29/10
Week 12 assessment    7.0/10          3.29/8     3.57/8         5.14/10

4.2 Student self-reports

After each of the three DCHWs, students had the opportunity to take a survey probing their self-perception of their team's collaborative activities. For our study, this is confounded with their self-perception of the collaborative facets and their use of the BRIDGE environment.
The survey for HW1 consisted of 6 open-response items; 29 students completed the
survey. Questions 1 and 2 asked how the students used BRIDGE, and what salient
benefits or problems they experienced in distributed collaboration. Questions 3, 4, and
5 asked them to identify strengths and weaknesses of BRIDGE with respect to
information pooling, discussion/evaluation of information, and negotiation/decision
making. Question 6 asked them how they used the four collaborative facets and asked
them to identify strengths and weaknesses of the four facets. To simplify, and because use
of BRIDGE and distributed collaboration are confounded in our study design, we
collapse across questions 2–5; we noticed that students often made similar points across
the questions, treating them as general probes for articulating mediated collaborative
practices and issues. Also, these were open-response items asking for both benefits and
problems/strengths and weaknesses; there was great variation in how many issues
a given student identified. Therefore, it seemed most useful to focus on the issues
themselves, that is, formatively (Scriven 1967).
In their responses to question 1, eight of 29 students reported that their teams relied mostly on face-to-face interaction (including two who said their teams worked only face-to-face); five students stated that their teams worked only in BRIDGE. Most
students reported that their teams combined face-to-face and remote interaction: Five
responses emphasized early or initial face-to-face coordination and planning followed
by execution mostly via BRIDGE. Other responses emphasized using BRIDGE to collect documents, to edit document collections, to chat, to coordinate status, and to interact
during Spring Break.
In responding to questions 2–5, students raised a wide variety of benefits and
problems of distributed collaboration. In enumerating strengths of BRIDGE, 16 students
emphasized chat, 13 emphasized real-time collaborative document editing (including
collaborative drawing), and 7 emphasized posting ideas and information in the
forum. More specifically, several students emphasized the utility of persistent chat as
creating a reviewable record of past team discussions, and of being able to interleave
chat and document editing. Students observed that the discussion forum made it
possible to contribute in parallel, to consider a range of ideas and information at once,
to see at a glance who had posted what, and to consider everyone's ideas in making a
decision. One student stressed the benefit of having a single group document to edit, as
opposed to having a designated scribe in the group. Eight students emphasized the
benefits of having a team document repository, a central place to share their work. One
student made the point that displaying the team’s documents in a timeline view
supported awareness of dates and sequence in the teamwork.
In enumerating problems of distributed collaboration in BRIDGE, 8 students also
emphasized the chat, noting that online discussions could be awkward when more than
one person is trying to “talk,” in having to wait for someone else to finish typing, and in
having to manage window resizing and other user interface issues. Eleven students
emphasized problems of managing many document windows, and finding information
distributed among many files and discussion posts. Five students emphasized problems
of limited awareness of what teammates were doing, of being able to convey what they
really meant in text only, and in general, the challenge of learning to work with other
people in a new way. Six students raised an assortment of issues regarding missing or
weak functionality for alternate text and table formats, graphics editing, voting support
for decision making, etc.
In responding to question 6, fourteen students said their teams had employed some of the facets; one thought that their team had employed the ideas implicitly. Eleven students said that their teams did not use the collaborative facets, though a couple seemed to blame this on other members, and two clearly regretted that their teams had not used the facets more systematically. Four students said their teams used different collaboration models; three emphasized the utility of a "team leader," and the fourth criticized the facets for not dividing work equally and stated that his/her team had followed such an equal division model.
The survey for HW2 consisted of 8 items: four item pairs in which the first was a 7-point Likert item asking how much a student's team focused on a given collaborative
facet, and the second was an open-response item asking for an example of how the
team focused on that facet; 20 students completed the survey. For all four Likert items,
the modal response was “frequently” on a scale of constantly, very frequently, frequently,
occasionally, rarely, very rarely, and never. The strongest mean item response
was for planning (with a mean of 3.2; that is, slightly less than “frequently”); the
weakest response was for productivity (with a mean of 4.1; slightly less than “occasionally”).
Communication and evaluation/negotiation had means of 3.6 and 3.7,
respectively.
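A small reading aid for these means (our assumption, not stated in the paper: the seven labels are coded 1 through 7 in the order listed, so lower means indicate more frequent focus on a facet):

```python
# Assumed coding of the 7-point scale; means taken from the text above.
scale = ["constantly", "very frequently", "frequently", "occasionally", "rarely", "very rarely", "never"]
means = {"planning": 3.2, "communication": 3.6, "evaluation/negotiation": 3.7, "productivity": 4.1}
for facet, m in means.items():
    print(f"{facet}: mean {m}, nearest label '{scale[round(m) - 1]}'")
```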
Half of the examples students provided were relevant and appropriate (40/80),
though few were specific episodes illustrating how a collaborative facet was practiced:
for communication, “We encouraged discussion of everything posted in the BRIDGE
space.”; for planning, “We set goal dates and times and set standards for our individual
parts (length, detail).”; for productivity, “We double checked that everything was done
and done to the best of our ability.”; for evaluation-negotiation, “When we were
completing the assignment, we came together and rapidly went through all aspects of
the assignment in a collaborative fashion, so that everyone has a say in, and was aware
of, all goals.”
Unfortunately, many of the examples provided by the students were disappointing.
Many were generalities about collaboration that revealed little about teamwork and the
collaborative facets (18/80), for example, “everyone knew what was required of them”.
Some responses were not attributed to the correct collaborative facet (12/80), for
example, “splitting up work” is part of planning, not communication; indeed, most of
these cases involved confusing planning with the other facets. Some responses for communication seemed to confuse the technology used to communicate with addressing the
collaborative facet of communication (e.g., “Making sure that everyone knew what
time to be logged on to the system”). A few students (6; 3 for productivity and 3 for
evaluation-negotiation) said their group did not address the facet!
The survey for HW 3 consisted of 6 open-response items; 26 students completed the
survey. Question 1 asked how collaborative practices and use of BRIDGE had changed through the semester. Question 2 asked how BRIDGE supported or failed to support team awareness, and how it might be improved. Questions 3, 4, and 5 (as in the HW1 survey) asked students to identify strengths and weaknesses of BRIDGE with respect to information pooling, discussion/evaluation of information, and negotiation/decision making, and Question 6 asked them how they used the four collaborative facets, and to identify strengths and weaknesses of the four facets.
More than a third of the respondents (10/26) said their team’s use of BRIDGE did
not change through the semester; it seemed that, in these cases, chat and/or discussion were used a little, and beyond that, assignments and components of assignments were posted to and assembled in BRIDGE. Nine students described trajectories of learning to
collaborate online: Two said they had replaced some face-to-face interactions with
online interactions (shared documents and chat, respectively). Two others said they
began to post work to get ideas from fellow team members. Two others said they
moved from chat to asynchronous collaboration; another said that he/she came to find
real-time coordination difficult, and began to avoid it. However, another said he/she
moved from merely sharing documents to more interactive uses of BRIDGE.
More than one third of the students (10/26) emphasized chat as a primary awareness
tool for BRIDGE-based collaboration. Three other students emphasized the shared wiki
environment, in which every user action is logged. Three students said they used email
(i.e., outside BRIDGE) to maintain team awareness. In suggesting possible enhancements for awareness support, students mentioned easier threading in the discussion forum, a discussion board tool, voice chat, streaming video, feed syndication (e.g., RSS), and blogging. Two students said they had had versioning problems that led them to redo work; they suggested better versioning awareness. This is unfortunate since BRIDGE is wiki-based and had actually stored all of their versions, but apparently did not make that sufficiently clear.
Questions 3–5 were repeated from the survey for HW1 asking, respectively, how
students used BRIDGE to pool, discuss and evaluate, and negotiate and make decisions
about information gathered individually by team members. Relative to HW1, it seemed
that there was more differentiation among the BRIDGE tools with respect to these three
information tasks. Eighteen students emphasized use of chat for discussion/evaluation and
for negotiation and decision-making. Unlike in HW1, not one student mentioned chat as
a key tool for information pooling. Conversely, thirteen students mentioned creating
shared text documents to pool information, but only two mentioned shared documents as
a tool for discussion/evaluation, and none mentioned this for negotiation/decision making.
With respect to shared documents, eight students described a strategy of individual
team members working on separate documents, which were subsequently integrated
through cut-and-paste. Five students described a strategy of creating a single shared
document, in some cases with an outline or labeling structure, to pool team information.
One described a strategy of compiling ideas in a shared document, and then commenting
on the ideas in an associated forum. Eleven students emphasized that negotiation and
decision making typically occurred outside BRIDGE, generally face-to-face.
Students mentioned a variety of specific affordances of collaboration in BRIDGE:
The utility of having information available in a single place, of posting information or
ideas whenever they came to mind, of getting feedback from other team members, of
monitoring the team’s activity, of keeping track of who had suggested a particular idea,
and of interleaving activities like idea generation and search. Students also mentioned a
variety of potential improvements to BRIDGE: enabling editing of forum posts,
supporting markup and tagging in text documents, making version support clearer,
providing better font support for displaying chat, and providing notification messages
to alert members when edits are made to shared objects in BRIDGE. Several students
emphasized that coordination and collaboration mediated by BRIDGE remained difficult
for them relative to face-to-face interactions.
In responding to question 6, thirteen students said their teams had employed the
facets, though several indicated specific facets they thought could still have been
improved upon. One student emphasized that his/her team had explicitly assigned
and rotated facets as team roles through the various homework assignments. Another
said the collaborative facets were obvious for senior-level students, and ought to be
taught at a lower level. One student said that his/her team had not used the facets
effectively. Seven students said that their teams did not use the collaborative facets; five of these said the teams did not use the facets because they were not needed (that is,
because the team was effective without explicitly using the facets). One student blamed
non-use of the facets on other members.
In summary, students identified a wide range of issues in distributed collaborative
learning. Chat was identified as a key tool for both awareness and collaborative work.
Real-time collaborative document editing and the discussion forum were also seen as
useful. Students liked the flexibility of contributing in parallel, of interleaving various collaborative activities, and of collaborating asynchronously on their own schedule, and they appreciated that the system kept track of who had done what in the collaborative space.
They wanted notification support for activity in the shared space, better formatting
support for text and tables, and clearer versioning support. Through the semester
(comparing the first and third surveys), some students expanded and articulated their
online collaborative practices. Several students described versions of a strategy in which
individual members organized separate documents, or parts of documents, prior to
creating an integrated team product. However, some key collaborative activity, including
negotiation and decision making, was often carried out face-to-face. Students
reported using our collaborative facets to some extent, but some criticized our model,
and others reported difficulties in using it. However, only half of the examples of the facets that students were able to provide were relevant. Interestingly, the facet students reported as most used, planning, was the one found least used in our video coding (section 4.1).
4.3 Student performance data
To complement self-report data, we examined performance data of two sorts. First, we
examined actual use of BRIDGE through session logs. We counted the number of chat
messages, discussion forum posts, and discussion forum threads produced by each
team. Second, we graded document deliverables from the three DCHWs on a scale of
0–10 (the grading rubric used for the three DCHWs appears as Appendix 1). Table 2
summarizes this performance data.
There was a substantial range in group performance outcomes; scores on individual
DCHWs ranged from 1 to 8; total scores across the three DCHWs ranged from 8 to 23;
chat messages ranged from 10 to 646; forum posts ranged from 0 to 75.
There seemed to be a positive overall relationship between activity in BRIDGE and
team outcomes: Total homework score correlates strongly with number of chat messages
posted (r=.66). However, total homework score did not correlate highly with
number of discussion forum posts (r=.17). It seemed that chat and the forum had a
somewhat complementary usage distribution; number of chat messages posted correlated
negatively with number of forum posts (r=−.47). Total homework score correlated
strongly with the sum of chats and forum posts (r=.71).
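The reported correlations can be reproduced from the Table 2 values (a reconstruction sketch for the reader, not the authors' analysis code):

```python
# Pearson correlations recomputed from Table 2 (our sketch; team order: Orange, Blue,
# Brown, Green, Purple, Silver, Yellow, Red).
from statistics import correlation  # Python 3.10+

total_hw = [10, 23, 8, 19, 17, 11, 19, 20]
chats    = [39, 646, 130, 10, 81, 52, 591, 347]
posts    = [0, 0, 7, 57, 75, 2, 0, 0]

print(round(correlation(total_hw, chats), 2))   # reported as r = .66
print(round(correlation(total_hw, posts), 2))   # reported as r = .17
print(round(correlation(chats, posts), 2))      # reported as r = -.47
print(round(correlation(total_hw, [c + p for c, p in zip(chats, posts)]), 2))  # reported as r = .71
```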
We examined the contents of chat and forum posts, focusing on the two highest
performing teams (Blue and Red) and the two lowest-performing teams (Brown and
Orange) with the goal of better understanding how teams used BRIDGE to carry out
their course activities. Blue Team had the highest homework score for all three
DCHWs, and posted the largest number of chat messages—more than a third of the
chats posted by the entire class, though they did not make a single forum post. The
team’s strategy was well aligned with the jigsaw learning activities (Slavin 1980). Each
member created a separate BRIDGE document for his/her individual contribution, and
the team used chat to manage this activity (organize the work, provide status updates on
individual contributions; excerpt 1 in Table 3). Then, the team used chat to discuss and
integrate various ideas that had been individually identified and synthesize a work
product (excerpt 2 in Table 3). Members of Blue Team described this strategy in their self-report data, for example, "BRIDGE featured a live chat element which was
useful for quick conversation as opposed to email,” and “We ‘chatted’ easily about our
ideas and plans.”
The excerpts in Table 3 also illustrate cognitive aspects of the successful team
activity. In the first excerpt, the team sets specific goals for the activity; members
acknowledge and coordinate, one explicitly stating which case study she will work on.
The members named their individual BRIDGE documents with the name of their case,
making it easy for fellow members to find and read their work. The second excerpt
shows how Blue Team later collaboratively pooled and analyzed their case study
summaries to extract and jointly develop ideas. Thus, Tom and Sue compare, contrast
and synthesize ideas about calendars and chat from their respective case study
summaries.
Red Team received the second-highest total homework score. They produced 347
chat messages, but like Blue Team, did not use the discussion forum. The team
organized their initial collaborative work around a single shared document for each
of the three DCHWs. The team’s strategy was for each member to contribute his/her
individual analysis directly to the shared document. The team used chat to coordinate
this initial aggregation, and then subsequently to synthesize a team-level analysis,
edited from the individual contributions. Even though they used BRIDGE documents quite differently, their discussion interactions were similar to the examples in Table 3 with respect to explicitly coordinating activity, anticipating partners' needs, and collaboratively synthesizing results.

Table 2 Performance data: graded scores for DCHWs and number of chat messages, forum posts, and forum threads for the eight student teams

Team      HW1 score   HW2 score   HW3 score   Total HW score   Chat messages   Forum threads (posts)
Orange    3           3           4           10               39              0 (0)
Blue      8           8           7           23               646             0 (0)
Brown     4           3           1           8                130             4 (7)
Green     7           7           5           19               10              39 (57)
Purple    5           6           6           17               81              28 (75)
Silver    4           3           4           11               52              2 (2)
Yellow    7           6           6           19               591             0 (0)
Red       7           7           6           20               347             0 (0)
Brown Team had the lowest total homework score of the eight teams, 8/30. They
made moderate use of chat and some use of the threaded discussion tool. Their use of
chat was primarily for management of team process, as opposed to planning, creating
and evaluating project content (recall Table 3): “I’d say write your notes in the text
document, so we can get some ideas together and end up making a quick UI design
later”, and “alright. Added information about the virtual science fair application.” The
team did not coordinate as explicitly or reliably as the Red and Blue teams. There were
a few cases in which the team appeared to be carrying out technical work through chat,
but these postings were of relatively large texts, such as scenario descriptions of
user interaction. This is an unfortunate appropriation of chat in that the small chat
window would have made reading these messages rather cumbersome. This team
created four discussion forum threads, with a total of seven forum posts. Only one
thread, which had three posts, involved more than a single team member.
Orange Team had the second worst overall homework score, 10/30. The team
communicated less online than any of the other teams, a total of 39 chat messages.
Table 3 Examples of Blue Team using chat for team management (in excerpt 1) and for technical discussion
of user interface design ideas (in excerpt 2)
(Excerpt 1)
Tom 1:12 PM: Ok so everyone write a summary of their case study for those three parts and then we can
discuss it
Zack 1:19 PM: woohoo
Zack 1:19 PM: yeehaw
Sue 1:23 PM: I got TAPPED IN
Tom 2:00 PM: Make sure to rename the page for the page that you summarized
(Excerpt 2)
Tom 2:05 PM: One idea would be instead of … use the calendar book idea that was in the phone writer
analysis
Tom 2:06 PM: or use that in addition to the refrigerator idea
Sue 2:06 PM: I thought the log-in idea was just for the buddy chat function though
Sue 2:07 PM: not the calendar idea
Tom 2:07 PM: it would be … you could have that for the chat idea instead of the calendar
Sue 2:07 PM: more so of an “Log in to access your page and chat with buddies”
Tom 2:07 PM: use kind of like email but through chat
Sue 2:07 PM: i think i get it
Tom 2:08 PM: so for example if the kid wanted to tell you something like right away but you are not logged on
… you will get that message once you log on
Tom 2:08 PM: so it is like an on going chat that never stops
Tom 2:08 PM: but doesn’t require an immediate reaction
Sue 2:09 PM: oh that makes sense … but that’s not like a real calendar for best buddies, it’s more like a
personal calendar between buddies:
Sue 2:09 PM:?
The content of these few messages was also quite meager. We could find no chat
messages directed to idea generation, to critical evaluation of ideas, or to negotiation.
Their chat was devoted to team management and coordination, for the most part
dividing work, and reminding members to do their individual parts of the work and
post their individual contributions to a shared document. Members did not interact
much, and often seemed unaware of one another: “hey you cut my part while i was
redoing it/i didn’t cut out anything hahaha i might have closed it didnt realize anyone
else was on.”
Green Team and Purple Team made the most extensive use of the discussion forum,
and relatively modest use of chat (especially Green Team, which posted only 10 chat
messages in total). These two teams received middling performance scores for the DCHWs. We
examined their interactions to try to get some insight into the overall negative correlation
between use of chat and use of the forum. These two teams seemed to specialize
the two communication channels, using the forum more for discussing and producing
team-level work products, and using chat more for managing collaboration. In the self-report,
Purple Team members described using chat to exchange status information and
solve coordination problems: “we typed what we were doing in the BRIDGE chat so
other could read about what we were doing or going to do.” An example of this from
the Purple Team’s chat log is: “For tomorrow, it seems like we may have time to work
in groups, so I would say try to have at least your personal portion of the project done
and be ready to complete the diagram and begin the documentation.”
This coordination/management use of chat contrasted with their characterization, in
the self-report data, of using the threaded discussion to “post/discuss their ideas in order
to pool all of the information in one place” and to “comment on each others work”. The
Purple Team created an elaborate structure of 28 threads, including separate discussion
forums for each of the three DCHWs, discussion forums for individual analyses, and a
discussion forum for the team-level analysis. Table 4 presents discussion forum excerpts
from the Purple Team, illustrating the discussion of design ideas in this channel.
To summarize the performance data, number of chat messages exchanged among
members of a group was an indicator of successful performance on the DCHWs. Our
two most successful teams (Blue and Red) produced a high number of chats, and made
no use of the forum. Our more detailed analysis of these teams indicated that they used
chat to coordinate work (ideas and plans), to be aware of one another’s progress, and
finally to synthesize their joint outcome. The two lowest performing teams (Brown and
Orange) made far less use of chat, and when they did use it, tended to do so for group
management, not for discussion or doing the work itself. In one case where the chat was
used for work, it was really misused—a large chunk of text was shared through the
small chat pane. The teams that made most extensive use of the discussion forum
(Green and Purple) also seemed to use chat primarily for managing the process, as opposed
to the content, of the teamwork. Both developed the content of their teamwork in the
discussion forum. We stress that these patterns were not determinate; thus, the Green
and Yellow teams scored just below the Red team overall, but where Yellow’s
interactions looked quite similar to those of Red (and Blue), Green’s were entirely
different (e.g., they had the greatest amount of forum activity, and the least amount of
chat activity of the eight teams). This suggests there is more than one effective pattern
of appropriation for BRIDGE and/or team strategy for the DCHWs. We must leave
these questions open for further research.
Table 4 Examples of the Purple Team using the discussion forum for technical collaboration

Amy (3/11/09 13:53): I was just reading the PAWS scenario and they had agreements for things. This made me think of copyrights for members of the photoclub. Viewers of the site may need to follow some policies and agreements

Amy (3/11/09 13:55): Another metaphor may be an application process for potential members. We could somehow integrate this to the site

Mike (3/11/09 16:15): An agreement for use may definitely be something we want to integrate into the site to protect the work of the members. This could most likely just be done through a link off of the page that displays the members’ work which explains how the photographs displayed there can and should be used. The new member application process does seem like an interesting idea, but would members actually need to apply? The whole process may discourage members from actually joining the club. What could we do is have an information request area of the site for new members to ask questions pertaining to the club.

Eddy (3/15/09 0:30): Yeah maybe an agreement right on the homepage that says by viewing the site, you agree to the following terms. According to the club president, students just need to send him an email saying they want to join.

Thomas (3/15/09 16:13): Yea, I couldn’t imagine that the members would have to fill out all these details to join a university club

Amy (3/31/09 14:42): A lot of the case studies seem to delve into making the users do a think-aloud and then the researchers/developers recording them. Furthermore, at the end they obtain feedback through surveys. How do you guys want to run our usability evaluation?

Mike (3/31/09 20:09): Both the think aloud testing and survey based evaluation will most likely be the methods that work best for our project. The only issue that may arise is the inability to get enough club members/outside participants to do the think aloud testing since we would need to coordinate schedules and get enough people that were actually interested in the project to participate. To counteract this, we could split the testing into two different evaluation methods:
Think aloud usability testing: As stated, in the post above, a combination of recording the thoughts of the users while testing followed up with a survey on their opinion system.
Individual beta testing: Giving outside users access to the site, allowing them to test the features at their own convenience. This would be allowed up by filling out a survey like the one in the think aloud testing.
Getting a good mix of different levels and backgrounds in our participants should definitely be a goal, but may not be possible due to lack of interested participants. Getting people should be the first priority.

Thomas (4/1/09 1:41): I think individual testing would be the best approach because it’s also about how easy the site is to use to newcomers. But, I agree with Mike and that we should get a good mix of different experience levels and backgrounds. It’ll help us get better data.
5 Discussion
In this research, we investigated distributed collaborative homework activities
(DCHWs) as an approach to strengthening collaborative learning in a problem-based
course in Usability Engineering. Teams of students worked on a series of three DCHWs, each a
case-based design analysis structured as a jigsaw learning activity, addressing user
interface design, usability evaluation design, and documentation design, respectively,
and spanning 2–3 weeks in a semester class. Students were provided with a collaborative
software environment (BRIDGE) to carry out the DCHWs, and were introduced
to a simple model of collaboration, articulating communication, planning, productivity,
and evaluation-negotiation as four facets of effective collaboration.
We found relatively poor mastery of collaboration, as operationalized by our model,
especially early in the semester. Teams were relatively stronger at communication, and
relatively weaker at planning. However, we are somewhat encouraged by the fact that
student collaborative performance improved during those 10 weeks, 41 % overall, and
especially in the Evaluation/Negotiation subskill of thinking critically and managing
criticism, and in the Productivity subskills of keeping track of progress, and checking
and improving work products.
In their survey self-reports, most students indicated that their teams had made some
use of the collaborative model, but quite a few were skeptical, admitted their teams had
not taken it very seriously, or stated that their team had followed alternative conceptions,
generally emphasizing the central authority of a team leader. Interestingly,
students indicated that planning was the facet their teams had focused on most
frequently, though planning was the facet we assessed as weakest, based on analysis
of the videotaped face-to-face team interactions. Overall, students were not very
successful at providing specific examples of how their teams had focused on each of
the four collaborative facets, when prompted to do this in the survey.
In their self-reports of DCHW practices and experiences, students enumerated a
variety of benefits for mediated collaboration including managing individual contributions
to the collective product. Students noted the general utility of mediated collaboration,
such as generating and keeping track of a wider range of individual ideas and
contributions, improved awareness of fellow team members, and coordinating work
through a synchronized shared document repository. Students also seemed sensitive to
specific collaborative technology affordances, such as BRIDGE’s persistent chat with
respect to supporting coordination and awareness, and BRIDGE’s version timeline with
respect to awareness of events in a schedule.
About one third of the students indicated that their understanding and utilization of
online collaboration had developed through the semester. Students reported that, as the
semester progressed, they used chat more for discussion/evaluation and for negotiation
and decision making, but stopped relying on chat to pool information, in favor of using
shared documents. About a third of the students described fairly articulated collaboration
strategies for pooling and organizing individual contributions through shared
documents.
Students also enumerated many challenges and problems of mediated collaboration,
such as finding particular bits of information in a collection of files, maintaining
awareness of physically remote collaborators, and the limited expressivity of purely
text-based interaction. Several students noted that they used email (that is, outside
BRIDGE) to enhance team awareness. More specifically with respect to BRIDGE,
students noted limitations in support for fonts and tables, markup and tagging, graphics
editing, threading in the discussion forum, voice chat, streaming video, feed syndication,
blogging, voting for decisions, and notifications to alert members when edits are
made to shared objects in BRIDGE.
Our log analysis of student activity in BRIDGE showed that, while all the teams in
fact did make use of the collaborative tools, the amount and the nature of their use
varied substantially. The log data emphasized the importance of chat: There were far
more chats than discussion forum posts. Moreover, use of chat correlated strongly with
total scores for the DCHWs. We saw some differentiation between management/
coordination activity and problem solving analysis and development of the collaborative
work product: For teams that used both chat and the discussion forum, there was
some specialization toward using chat for management and the forum for technical
work.
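As a simple illustration (not the analysis we actually conducted), this association can be checked directly against the Table 2 values; the short Python sketch below computes plain Pearson correlations between chat volume, forum posts, and total homework scores.

# Illustrative only: Pearson correlations over the Table 2 values
# (team order: Orange, Blue, Brown, Green, Purple, Silver, Yellow, Red).
from math import sqrt

chat_msgs   = [39, 646, 130, 10, 81, 52, 591, 347]
forum_posts = [0, 0, 7, 57, 75, 2, 0, 0]
hw_totals   = [10, 23, 8, 19, 17, 11, 19, 20]

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient, no external dependencies.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print("chat messages vs. total HW score:", round(pearson_r(chat_msgs, hw_totals), 2))
print("chat messages vs. forum posts:   ", round(pearson_r(chat_msgs, forum_posts), 2))

Any such coefficient computed over only eight teams is, of course, suggestive rather than conclusive.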
Our study is a classroom case study, an exploration of one Usability Engineering
class. We did not sample students or subjects, and we do not have a control condition.
We have been investigating roughly the “same” course for 15 years, but this was only
the second time we had introduced DCHWs, and it was the first time we had tried
jigsaw learning or the three homework assignments described above, or gathered data
that was more than anecdotal. Thus, the implications of this study are mostly formative
and programmatic directions for further investigation.
5.1 Teaching and guiding collaborative skills
We tried to create a lightweight and action-oriented instructional intervention for the
collaboration model. We gave a short in-class presentation, accompanied by brief
printed materials, and we designed a self-confrontation activity in which the students
practiced the concepts by discussing and critiquing the performance of their own team.
We saw evidence of adoption and improvement in our analyses of videotaped team
interactions. And a (slight) majority of students reported in surveys that they used the
model.
However, there was also considerable non-adoption, criticisms of the model, adoption
of alternative models, and also complaints that teaching collaboration in a senior-level,
upper-division course was not appropriate. Students tended to put this in terms of
already knowing how to collaborate, but that was certainly not our impression. Possibly
they were just embarrassed to have to deal explicitly with something they recognized
ought to have been an already-mastered skill. One implication of this is to address
collaboration explicitly but in lower-division courses. In our program (the College of
Information Sciences and Technology), this has in fact been adopted; a collaboration/
teamwork module was inserted into our introductory course, teaching the same model
we employed here.
Another direction is to try to make the instructional intervention even more lightweight.
For example, we could stress only planning. Our analysis of the week 2 team
interactions suggested that planning was, by far, the least enacted collaborative facet.
Another possibility is to introduce just one or two facets, say planning and evaluation,
and then ask the students to elaborate the model as they feel necessary. Being asked to
develop a model of collaboration would surely be seen as more appropriate for upper-division
students, and would be a more engaging way to encounter critical thinking
about collaboration.
Part of our collaboration intervention was to design jigsaw learning activities,
activities in which each participant’s contribution is independently essential to collective
performance. This seemed to work well, or at least much better than the non-jigsaw
tasks we had used in a prior offering of the course. Free riding is a pervasive and
abiding challenge for any collaborative endeavor, and a fortiori for distributed collaboration.
The jigsaw makes each team member publicly accountable for unique contributions,
and makes it impossible for the team to proceed to synthesize an outcome if
anyone has failed to provide their input. Of course, this does not mean free riding
cannot occur, just that it will be visible if it does occur.
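To make the accountability property concrete, the gating logic can be sketched in a few lines of code; this is a hypothetical illustration (the names JigsawAssignment and can_synthesize are invented), not a feature of BRIDGE or of our grading workflow.

# Hypothetical sketch of the jigsaw accountability gate: each member owns a
# unique piece, and team-level synthesis is blocked until every piece exists.
class JigsawAssignment:
    def __init__(self, pieces_by_member):
        # e.g., {"Tom": "case study A", "Sue": "case study B", ...}
        self.pieces_by_member = pieces_by_member
        self.submitted = {}  # member -> submitted individual analysis

    def submit(self, member, analysis_text):
        self.submitted[member] = analysis_text

    def missing_members(self):
        return [m for m in self.pieces_by_member if m not in self.submitted]

    def can_synthesize(self):
        # The team cannot proceed to a joint synthesis until everyone has
        # contributed; free riding is not prevented, but it is visible.
        return not self.missing_members()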
5.2 Tool support for distributed collaborative homework activities
When BRIDGE was initially developed, in the early 2000s, integrating wikis, discussion
forums, persistent chat, shared whiteboards, and character-by-character shared data
streams into a single shared workspace that allowed multiple objects to be open and
edited simultaneously was a novel combination of functionality and interaction. Our students understood,
utilized, and appreciated many of the collaborative affordances of BRIDGE. In
their self-reports they described a wide range of adoptions and innovative appropriations
of BRIDGE, and their performance suggested that use of the tools, chat in
particular, was beneficial. However, the students also made many legitimate critical
comments, given the state of the art, and had a long wish list of further features.
Most basically, this is good news with respect to the goal of enhancing project-based
learning through distributed collaborative homework activities. General computational
infrastructures are enabling a growing variety of collaborative interactions. Students are
more ready to appropriate and use these capabilities, and also more able to think
critically about them. Collaborative tools and infrastructures, and distributed collaborative
homework activities, are now more clearly part of the standard instructional
design palette.
This study helped us to articulate our software strategy in two directions: For general
collaboration support, it makes sense to appropriate and configure general tools,
notably Google Docs, Yammer, and open source discussion forum platforms. The
capabilities of these freely available services are highly refined, and many students
are already experienced in using them. For more specific collaborative activities,
custom software may still make sense, but it also is highly desirable to leverage implicit
standards, like the PDF document format and Google formats and services, and to support
import and export of content objects in standard file formats.
Merely adopting standard formats and services may not be the best we can do,
however. Our students appropriated chat, the discussion forum, and shared documents in
specific ways to manage complex jigsaw DCHW activities. For example, they used
chat to brainstorm, plan, and coordinate, but used shared documents and forums to
develop content. In other words, they made activity-specific structured appropriations
of the general tools incorporated in BRIDGE. Students also made a wide variety of
specific suggestions about the need to better integrate various BRIDGE tools. Although
more refined and widely adopted general tools exist now, these would still need to be
specialized for particular instruction activities, and they still do not provide much
integration across tools. We see a valid and continuing role for technology innovation
on the part of instructors in the specialization and integration of general collaborative
infrastructures and tools.
In our Usability Engineering course, we are pursuing both of these directions. First,
we have focused more on leveraging free and open source platforms and standard data
formats to provide general collaborative tools. For example, we developed a lightweight
classroom back channel and an anchored discussion forum using standard
components (AJAX, PHP, MySQL) that can be utilized in pretty much any classroom
context. The classroom back channel allows students to contribute publicly to in-class
activities, including even the most inherently passive classroom activities, like lecture
(Du et al. 2009, 2012). The anchored discussion forum allows students to publish
pieces of their work for other class members’ comments and suggestions; the anchor
documents and the comments are displayed in an integrated browser view.
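To suggest how anchored discussion hangs together, the sketch below gives a minimal, hypothetical data model in Python: each published piece of work acts as an anchor that carries its own comment thread, and the two are rendered together. The deployed tool is implemented with AJAX, PHP, and MySQL, so the class and method names here are illustrative assumptions only.

# Minimal, hypothetical sketch of an anchored-discussion data model:
# comments are keyed to the document (anchor) they respond to, so the
# anchor text and its comment thread can be rendered in one view.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class AnchorDocument:
    title: str
    body: str
    comments: List[Comment] = field(default_factory=list)

    def add_comment(self, author, text):
        self.comments.append(Comment(author, text))

    def render(self):
        # Integrated view: the anchor first, then its comments.
        lines = [self.title, self.body, "--- comments ---"]
        lines += [f"{c.author}: {c.text}" for c in self.comments]
        return "\n".join(lines)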
We are also pursuing the direction of investigating issues in specialization and
integration of classroom technology. Based on the results of the classroom trial, we
developed a web-based collaborative workspace with specific tool support for the
activities students carried out in the DCHWs, such as pooling and organizing individual
work in requirements and design analysis, evaluation planning, and critical thinking
about design and evaluation through structured tradeoff analysis. Our workspace
provides a hierarchical visualization of shared objects, similar to a desktop file system,
that makes it easier for teams to organize shared information, and a structured collaborative
editor to develop design tradeoff analysis (Fig. 2). We also now support
document import/export in standard formats.
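The hierarchical organization of shared objects can be pictured as a simple tree, much like a desktop file system. The sketch below is purely illustrative (the Workspace itself is web-based, and WorkspaceNode and its methods are invented names), but it conveys how a resource-explorer listing can be generated from nested folders and documents.

# Hypothetical sketch of a hierarchical workspace: shared objects (documents,
# images, agenda items, ...) live in folder-like nodes, as in a file system.
class WorkspaceNode:
    def __init__(self, name, kind="folder"):
        self.name = name
        self.kind = kind          # e.g., "folder", "document", "image"
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def outline(self, depth=0):
        # Produce the indented listing a resource-explorer pane might show.
        lines = ["  " * depth + f"{self.name} ({self.kind})"]
        for child in self.children:
            lines += child.outline(depth + 1)
        return lines

root = WorkspaceNode("Team project")
hw1 = root.add(WorkspaceNode("DCHW 1"))
hw1.add(WorkspaceNode("Individual case summaries"))
hw1.add(WorkspaceNode("Team design analysis", kind="document"))
print("\n".join(root.outline()))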
A key integration issue for our students was awareness. They felt it was difficult to
keep apprised of what fellow team members had recently done in the various BRIDGE
tools, especially when working asynchronously, for example, editing a shared document
or posting/commenting in a forum. However, the busy lifestyle of contemporary
university students requires effective asynchronous collaboration. This emphasizes
the need for awareness tools that provide easy access to an integrated record of who
on the team did what. BRIDGE included a project timeline indicating when team
documents were created or modified; we included a version of this tool in our
redesigned Workspace (center bottom pane in Fig. 2).
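Conceptually, such a timeline is an ordered log of edit events that can be replayed to answer who did what, and when. The following sketch illustrates the idea with invented names; it is not the Workspace implementation.

# Hypothetical sketch of an awareness timeline: each action on a shared object
# is logged, and the most recent events answer "who did what, and when".
from datetime import datetime

events = []  # each entry: (timestamp, member, action, object_name)

def log_event(member, action, object_name, when=None):
    events.append((when or datetime.now(), member, action, object_name))

def recent_activity(limit=5):
    # Newest first, as an awareness pane would display it.
    for when, member, action, obj in sorted(events, reverse=True)[:limit]:
        print(f"{when:%m/%d %H:%M}  {member} {action} '{obj}'")

log_event("Sue", "edited", "Usability evaluation plan")
log_event("Tom", "created", "Scenario: buddy chat")
recent_activity()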
We also developed team agenda and “to do” tools, integrated with a persistent chat
tool. These tools allow teammates to select chat posts and designate them as agenda or
to-do items. Agenda items can be viewed and edited to guide future chat
sessions; to-do items are assigned dates and placed on the calendar. We also added a
critical evaluation tool to help students be more aware of how design decisions should
be made and to support the process. Each tool’s pane is organized to present most
Fig. 2 Design tradeoff analysis in collaborative web-based workspace; a resource explorer (upper left)
displaying the hierarchy of objects (documents, images, agenda items, etc.) created in the workspace; a
bookmark list (upper center); a meeting agenda (upper right); a group chat (bottom right); a timeline view of
team activity and a buddy list (bottom middle); a document list (bottom left); a team to-do list (middle left); an
object viewer opened on an Activity Design document (center of window); and a widget to support design
tradeoff analysis (on top of window panes)
recently added or modified objects at the top by default, emphasizing changed items to
team members. The various tools differentiate and codify distinct and important
functional appropriations of chat and the discussion forum that we observed in our
study. The Workspace design both articulates and integrates observed patterns of
problem solving and team coordination/management.
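As a rough sketch of the promotion step described above, a selected chat post becomes a structured team item, and to-do items additionally carry a due date for the calendar; the Python below is hypothetical (ChatPost, TeamItem, and promote are invented names), not the Workspace code.

# Hypothetical sketch of promoting chat posts to agenda or to-do items:
# a selected chat message becomes a structured item; to-do items get a due
# date so they can be placed on the team calendar.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChatPost:
    author: str
    text: str

@dataclass
class TeamItem:
    kind: str                  # "agenda" or "todo"
    text: str
    source_author: str
    due: Optional[date] = None

def promote(post: ChatPost, kind: str, due: Optional[date] = None) -> TeamItem:
    # Carry the chat text and its author into the structured item.
    return TeamItem(kind=kind, text=post.text, source_author=post.author, due=due)

post = ChatPost("Tom", "Make sure to rename the page for the page that you summarized")
todo = promote(post, "todo", due=date(2009, 3, 13))   # example due date only
agenda = promote(post, "agenda")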
5.3 Conclusion
We are investigating possibilities for supporting sustained, authentic, and effective
collaborative learning in an undergraduate course in Usability Engineering. Our approach
involves case-based learning in which teams of students critically analyze and
learn from documented usability engineering projects (Carroll and Rosson 2005). In
this paper, we reported a study involving explicit training in collaboration skills, use of
three jigsaw distributed collaborative homework (DCHW) activities through an eight-week
period of the course, and experiences with collaborative software support for
those activities.
Our study provided measured encouragement for our approach to teaching collaboration,
and for the DCHWs and the collaborative software we employed. It also
provided specific critical insights and further directions for this work.
Acknowledgments We are grateful to Mary Beth Rosson, who led the original development of the project-based
usability engineering course and the textbook for that course (Rosson and Carroll 2002), and who
provided advice throughout this project. We also thank Craig Ganoe and Philip Isenhour, the primary
developers of BRIDGE. We also thank our usability engineering students at Penn State who allowed us to
study their learning activities. This work was supported by the US National Science Foundation (0736440).
Appendix
Table 5 Grading rubrics for the three DCHWs (numbers in parentheses indicate the possible points teams could earn for each criterion by displaying the corresponding behaviors)

HW1

Case study coverage (2 points)
(2) Covers all case analyses required in the assignment; identifies key aspects related to information design, interaction design, and activity design; can make connections to the team project
(1.5) Covers most case analyses required in the assignment; identifies key aspects related to information design, interaction design, and activity design
(1) Covers fewer than three case analyses; only lists concepts enumerated in the case studies and cannot make connections to the team project

Design coverage (2 points)
(2) Covers all phases required in the homework, clearly describes design features, and explains rationale and motivation well
(1.5) Covers all phases required and clearly describes design features with moderate explanation
(1) Designs are not well explained and/or do not fully cover all required phases

Applying to projects (2 points)
(2) Links case studies well to the team project; can inform team project design; design addresses requirements and problems in the team project; can use knowledge and tools taught and presented in the case studies to support design decisions
(1.5) Can make connections between case studies and the team project; can use methods and knowledge taught and presented in the case studies to inform design, but these are not well explained
(1) Cannot or can hardly raise rationale to support the design; only comes up with a list of features without evidence or theoretical support

Scenarios (2 points)
(2) Provides vivid scenarios that capture real-world problems; stakeholders, problems, activities, and constraints are clearly presented in the scenarios; scenarios from different phases build on the ones from the previous phase
(1.5) Scenarios are representative of real-world problems but lack clarity of stakeholders, problems, activities, or constraints; analysis can still be drawn from the scenarios
(1) Scenarios are disconnected from the analysis and lack context and activities; connections between scenarios are vague or totally irrelevant

Format and organization (2 points)
(2) Deliverables are well organized; readers can find different information quickly and easily; headings and captions are consistent; text is organized in paragraphs
(1.5) Information is not so well organized; the narrative is not very consistent throughout the document
(1) Documents are just a pile of individual work; the narrative is not consistent at all; documents are stuffed with bullet points and unfinished sentences
HW2

Case study coverage (2 points)
(2) Covers all case analyses required in the assignment; identifies the key evaluation methods introduced; can use these methods to inform the evaluation design for the team project
(1.5) Covers most case analyses required in the assignment; identifies evaluation methods and informs the evaluation design for the team project
(1) Covers fewer than three case analyses; only lists concepts enumerated in the case studies and cannot make connections to the team project

Applying to projects (2 points)
(2) Links case studies well to the team project; can inform the evaluation plan and instrument design; the evaluation plan is well oriented; shows knowledge of the advantages and disadvantages of different evaluation methodologies and suits them well to the team project
(1.5) Can make connections between case studies and the team project; can use methods and knowledge taught and presented in the case studies to inform the evaluation design, but these are not well explained
(1) Cannot apply evaluation methods to evaluate the team project; does not show awareness of using different methods

Scenarios (2 points)
(2) Provides vivid scenarios that capture real-world problems; stakeholders, problems, activities, and constraints are clearly presented in the scenarios; scenarios from different phases build on the ones from the previous phase
(1.5) Scenarios are representative of real-world problems but lack clarity of stakeholders, problems, activities, or constraints; analysis can still be drawn from the scenarios
(1) Scenarios are disconnected from the analysis and lack context and activities; connections between scenarios are vague or totally irrelevant

Instruments (2 points)
(2) Instruments are well designed for evaluating the targeted design; show relatively comprehensive consideration in choosing different methods; multiple angles are considered in the evaluation plan (performance, experience, emotional, learning, novices and experts, etc.)
(1.5) Instruments are designed for the targeted evaluation; evaluation conditions are not fully explored; methodological strengths and weaknesses are explained and more than one method was chosen
(1) Instruments are not well designed; various evaluation conditions are not considered

Formatting and organization (2 points)
(2) Deliverables are well organized; readers can find different information quickly and easily; headings and captions are consistent; text is organized in paragraphs
(1.5) Information is not so well organized; the narrative is not very consistent throughout the document
(1) Documents are just a pile of individual work; the narrative is not consistent at all; documents are stuffed with bullet points and unfinished sentences
HW3

Case study coverage (2 points)
(2) Covers all case analyses required in the assignment; identifies key documentation design methods; can use these methods to inform the team project documentation design for different purposes
(1.5) Covers most case analyses required in the assignment; identifies key documentation design methods and applies them to inform the team project
(1) Covers fewer than three case analyses; only lists documentation methods and forms

Applying to projects (2 points)
(2) Links case studies well to the team project; can inform documentation design; design addresses requirements and problems in the team project; can use knowledge and tools taught and presented in the case studies to support design decisions
(1.5) Can make connections between case studies and the team project; can use methods and knowledge taught and presented in the case studies to inform design, but these are not well explained
(1) Cannot or can hardly raise rationale to support the design; only comes up with a list of features without evidence or theoretical support

Scenarios (2 points)
(2) Provides vivid scenarios that capture real-world problems; stakeholders, problems, activities, and constraints are clearly presented in the scenarios; scenarios from different phases build on the ones from the previous phase
(1.5) Scenarios are representative of real-world problems but lack clarity of stakeholders, problems, activities, or constraints; analysis can still be drawn from the scenarios
(1) Scenarios are disconnected from the analysis and lack context and activities; connections between scenarios are vague or totally irrelevant

Documentation design (2 points)
(2) Applies documentation design in different situations for the team project and provides clear rationale, connecting with the scenarios

Formatting and organization (2 points)
(2) Deliverables are well organized; readers can find different information quickly and easily; headings and captions are consistent; text is organized in paragraphs
(1.5) Information is not so well organized; the narrative is not very consistent throughout the document
(1) Documents are just a pile of individual work; the narrative is not consistent at all; documents are stuffed with bullet points and unfinished sentences
References
Barab, S., Kling, R., & Gray, J. H. (Eds.). (2004). Designing for virtual communities in the service of learning.
Cambridge University Press.
Borge, M. (2007). Regulating social interactions: developing a functional theory of collaboration. Dissertation
Abstracts International 241.
Borge, M., & Carroll, J. M. (2010). Using collaborative activity as a means to explore student performance and
understanding. In K. Gomez, L. Lyons, & J. Radinsky (Eds.), Learning in the disciplines: Proceedings of
ICLS 2010: 9th international conference of the learning sciences (Chicago, June 29–July 2) volume 1, full
papers. Chicago: International Society of the Learning Sciences.
Bottge, B. A., & Osterman, L. (1998). Bringing the workplace to the classroom. Educational Leadership,
55(8), 76–77.
Brown, A. L., & Campione, J. C. (1996). Psychological theory and the design of innovative learning
environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations
in learning: New environments for education (pp. 289–325). Mahwah: Erlbaum.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational
Researcher, 18(1), 32–42.
Carroll, J. M. (2000). Making use: Scenario-based design of human-computer interactions. Cambridge: MIT
Press.
Carroll, J. M., & Rosson, M. B. (2005). A case library for teaching usability engineering: design rationale,
development, and classroom experience. ACM Journal of Educational Resources in Computing (now
ACM Transactions on Computing Education), 5(1), Article 3, 1–22.
Carroll, J. M., & Rosson, M. B. (2006). Cases as minimalist information. IEEE Transactions on Professional
Communication, 49–4, 297–310.
Carroll, J.M., Borge, M., Xiao, L., & Ganoe, C.H. (2008). Realistic learning activity is not enough. In Diaz, P.,
Kinshuk, I.A, and Mora E. (Eds.), Proceedings of the 8th IEEE International Conference on Advanced
Learning Technologies: iCALT 2008 (Santander, Spain, July 1–5). Los Alamitos, CA: IEEE Computer
Society, pp. 3–7.
Carroll, J.M., Borge, M., Ganoe, C.H., & Jiang, H. (2010). Distributed collaborative homeworks: Learning
activity management and technology support. Proceedings of IEEE EDUCON 2010: Engineering
Education Conference. (Madrid, April 14–16). Pp. 1585–1594. Published in IEEE Digital Library.
Chan, C., Burtis, J., & Bereiter, C. (1997). Knowledge building as a mediator of conflict in conceptual change.
Cognition and Instruction, 15, 1–40.
Damon, W. (1984). Peer education: the untapped potential. Journal of Applied Behavioral Psychology, 5, 331–
343.
Dewey, J. (1966). Democracy and education. New York: Macmillan/Free Press. First published 1916.
Du, H., Rosson, M.B., Carroll, J.M., & Ganoe, C.H. (2009). I felt like a contributing member of the class:
Increasing class participation with class commons. Proceedings of the 2009 International ACM
SIGGROUP Conference on Supporting Group Work, GROUP 2009. (Sanibel Island, Florida, May 10–
13). New York: ACM, pp. 233–242.
Du, H., Rosson, M.B., & Carroll, J.M. (2012). Communication patterns for a classroom public digital
backchannel. Proceedings of SIGDOC 2012: 30th ACM International Conference on Design of
Communication (October 3–5, Seattle, WA). New York: ACM
Fukkink, R. G., Trienekens, N., & Kramer, L. J. (2011). Video feedback in education and training: putting
learning in the picture. Educational Psychology Review, 23(1), 45–63.
Ganoe, C.H., Somervell, J.P., Neale, D.C., Isenhour, P.L., Carroll, J.M., Rosson, M.B., & McCrickard, D.S.
(2003). Classroom BRIDGE: using collaborative public and desktop timelines to support activity
awareness. ACM UIST 2003: Conference On User Interface Software and Tools. New York: ACM,
pages 21–30.
Hertz-Lazarowitz, R., & Miller, N. (Eds.). (1992). Interaction in cooperative groups. New York: Cambridge
University Press.
Hutchins, E. (1995). Cognition in the wild. Cambridge: MIT Press.
Jiang, H., Carroll, J.M., Borge, M., & Ganoe, C. (2010). Supporting partially distributed, case-based learning in
an advanced undergraduate course in usability engineering. Proceedings of iCALT 2010: 10th IEEE
International Conference on Advanced Learning Technologies. (Sousse, Tunisia, July 5–7). Proceedings
in IEEE Digital Library.
Jiang, H., Ganoe, C. H., & Carroll, J. M. (2010b). Four requirements for digital case study libraries. Education
and Information Technologies, 15(3), 219–236.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge
University Press.
Levine, L. J., & Shapiro, N. (2004). Sustaining and improving learning communities. San Francisco: Jossey-
Bass.
Loughry, M. L., Ohland, M. W., & Moore, D. D. (2007). Development of a theory-based assessment of team
member effectiveness. Educational and Psychological Measurement, 67(3), 505–525.
Malone, T. W. (2004). The future of work: How the new order of business will shape your organization, your
management style, and your life. Cambridge: Harvard Business School Press.
Merkel, C. B., Rosson, M. B., & Carroll, J. M. (2006). Using cases to teach usability engineering: Designing
the tapped in case study. In W. Aung, R. V. Espinosa, J. Moscinski, S.-H. Ou, & L. M. Sanchez Ruiz
(Eds.), International network for engineering education and research (iNEER) Special Volume:
INNOVATIONS 2006 - World Innovations in Engineering Education and Research (pp. 351–363).
Arlington: iNEER.
Resnick, L. B. (1987). Learning in school and out. Educational Researcher, 16(9), 13–20.
Resta, P., & Laferrière, T. (2007). Technology in support of collaborative learning. Educational Psychology
Review, 19(1), 65–83.
Rosson, M. B., & Carroll, J. M. (2002). Usability engineering: Scenario-based development of human-computer
interaction. San Francisco: Morgan-Kaufmann.
Rosson, M.B., Carroll, J.M. & Rodi, C. (2004). Teaching computer scientists to make use. In Alexander, I.F.,
and Maiden, N. (ed.), Putting scenarios into practice: The state of the art in scenarios and use
cases (pp. 445–463). John Wiley & Sons.
Savery, J. R., & Duffy, T. M. (1996). Problem-based learning: An Instructional Model and its Constructivist
Framework. In B. G. Wilson (Ed.), Constructivist learning environments: Case studies in instructional
design (pp. 135–148). Englewood Cliffs: Educational Technology.
Scardamalia, M., & Bereiter, C. (1993). Technologies for knowledge-building discourse. Communications of
the ACM, 36, 37–41.
Schön, D. A. (1991). The reflective turn: Case studies in and on educational practice. New York: Teachers
College Press.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne, & M. Scriven (Eds.), Perspectives
of curriculum evaluation (pp. 39–83). Chicago: Rand McNally.
Slavin, R. E. (1980). Cooperative learning. Review of Educational Research, 50(2), 315–342.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge:
Harvard University Press.