Paper submitted to ASCILITE 2013
It is an unusual Australian university that is not currently expending time and resources in an attempt to harness learning analytics. This rush, like prior management fads, is likely to face significant challenges when it comes to adoption, let alone the more difficult challenge of translating possible insights from learning analytics into action that improves learning and teaching. This paper draws on a range of prior research to develop four questions – the IRAC framework – that can be used to improve the analysis and design of learning analytics tools and interventions. Use of the IRAC framework is illustrated through the analysis of three learning analytics tools currently under development. This analysis highlights how learning analytics projects tend to focus on limited understandings of only some aspects of the IRAC framework and suggests that this will limit possible impacts.
Keywords: learning analytics; IRAC; e-learning; EPSS; educational data mining; complex adaptive systems
The adoption of learning analytics within Australian universities is trending towards a management fashion or fad. Given the wide array of challenges facing Australian higher education, the lure of evidence-based decision making has made the quest to implement some form of learning analytics “stunningly obvious” (Siemens & Long, 2011, p. 31). After all, learning analytics is increasingly being seen as “essential for penetrating the fog that has settled over much of higher education” (Siemens & Long, 2011, p. 40). The rush toward learning analytics is illustrated by its transition from not even a glimmer on the Australian and New Zealand higher education five-year technology horizon in 2010 (Johnson, Smith, Levine, & Haywood, 2010) to predictions of its adoption in one year or less in 2012 (Johnson, Adams, & Cummins, 2012) and again in 2013 (Johnson et al., 2013). It is in situations like this – where an innovation has achieved a sufficiently high public profile – that the urgency to join the bandwagon can reach such heights as to swamp deliberative, mindful behaviour (Swanson & Ramiller, 2004). If institutions are going to successfully harness learning analytics to address the challenges facing the higher education sector, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.
This paper describes the formulation and use of the IRAC framework as a tool to aid in the mindful implementation of learning analytics. The IRAC framework consists of four broad categories of questions – Information, Representation, Affordances and Change – that can be used to scaffold analysis of the complex array of, often competing, considerations associated with the institutional implementation of learning analytics. The design of the IRAC framework draws upon bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005). In turn, considerations within each of the four questions are further informed by a broad array of research from fields including learning analytics, educational data mining, complex adaptive systems, ethics and many more. It is suggested that the use of the IRAC framework to analyse applications of learning analytics in a particular context for a specific task will result in designs that are more likely to be integrated into, and improve, learning and teaching practice.
Learning from the past
The IRAC framework is based on the assumption that the real value and impact of learning analytics arises when it is integrated into the “tools and processes of teaching and learning” (Elias, 2011, p. 5). It is from this perspective that the notion of Electronic Performance Support Systems (EPSS) was seen as likely to provide useful insights, as EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63). This captures the notion of the performance zone defined by Gery (1991) as the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure successful task completion. For Villachica, Stone and Endicott (2006), the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 540). This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artefacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artefacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artefacts that “more often interferes and confuses than aids and clarifies” (p. 9). This is precisely the danger faced in the rush toward learning analytics.
The notions of EPSS, the Performance Zone and Norman’s (1993) insights into the design of cognitive artefacts – along with insights from other literature – provide the four questions that form the IRAC framework. The IRAC framework is intended for use with a particular task in mind – Olmos and Corrin (2012), amongst others, reinforce the importance for learning analytics of starting with “a clear understanding of the questions to be answered” (p. 47) – and context – a nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). When used this way, it is suggested that the IRAC framework will help focus attention on factors that will improve the implementation and impact of a learning analytics intervention. The following lists the four questions at the core of the IRAC framework and briefly describes some of the associated factors:
- Is all the relevant Information and only the relevant information available?
While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations here include: Is the required information technically and ethically available for use? How is the information cleaned, analysed and manipulated during use? Is the information sufficient to fulfil the needs of the task? In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?
- Does the Representation of the information aid the task being undertaken?
A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner, March, Park, & Ram, 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs” (Villachica et al., 2006, p. 540). Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis. Considerations here include how easily people can understand the implications and limitations of the findings provided by learning analytics.
- Are there appropriate Affordances for action?
A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance, it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artefact, but is instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000). Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62). Tutty, Sheard and Avram (2008) suggest there is evidence that institutional quality measures not only inhibit change, “they may actually encourage inferior teaching approaches” (p. 182). The consideration here is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.
- How will the information, representation and the affordances be Changed?
The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the systems themselves, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated. Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Universities are complex systems (Beer, Jones, & Clark, 2012), and such systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010). Potential considerations here include: Who is able to implement change? Which of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?
It is proposed that the lens provided by the IRAC framework can help increase the mindfulness of innovation arising from learning analytics. In particular, it can move consideration beyond the apparent over-emphasis on the first two questions and raise awareness of the last two. This shift in emphasis appears necessary to increase the use and effectiveness of learning analytics. The IRAC framework can also provide suggestions for future directions. In the following section, the paper seeks to illustrate the value of the IRAC framework by using it to compare and contrast three nascent learning analytics tools against each other and against contemporary practice.
Looking to the future
The Student Support Indexing system (SSI) mirrors many other contemporary learning analytics tools with a focus on improving retention through intervention. Like other similar systems, it draws upon LMS clickstream information and data from other student information systems to continuously index potential student risk, using a formula to combine various data points into a single indicator. Only a few such systems, such as S3 (Essa & Ayad, 2012), provide the ability to change the formula in response to a particular context. SSI also represents the information separately from the learning context, in tabular form. SSI does provide common affordances for intervention and tracking, which appear to assist in developing a shared understanding of student support needs across teaching and student support staff. Initial findings are positive, with teaching staff appreciating the aggregation of data from various institutional systems alongside basic intervention facilitation and tracking. In its current pilot form, the SSI provides little in terms of change; however, it is hoped that the underlying processes for indexing student risk, tracking interventions and monitoring students can be represented in more contextually appropriate ways.
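To make the indexing idea concrete, the following is a minimal sketch of how several data points might be combined into a single risk indicator. The field names, weights and flagging threshold are hypothetical illustrations, not the actual SSI formula.

```python
# A minimal, hypothetical sketch of an SSI-style risk index: several
# normalised data points are combined, via fixed weights, into a single
# indicator. Field names, weights and the threshold are illustrative
# assumptions, not the actual SSI formula.

RISK_WEIGHTS = {
    "lms_logins": 0.3,             # recent LMS activity, normalised 0..1
    "assessments_submitted": 0.4,  # proportion of assessment items submitted
    "forum_posts": 0.1,            # discussion forum participation, 0..1
    "prior_gpa": 0.2,              # prior academic performance, normalised 0..1
}

def risk_index(student):
    """Return a 0..1 score where higher values suggest greater risk."""
    engagement = sum(weight * student.get(field, 0.0)
                     for field, weight in RISK_WEIGHTS.items())
    return round(1.0 - engagement, 2)

student = {"lms_logins": 0.2, "assessments_submitted": 0.5,
           "forum_posts": 0.0, "prior_gpa": 0.6}
score = risk_index(student)                      # 0.62 for this example
print("follow up" if score > 0.6 else "monitor", score)
```

Exposing such weights so they can be adjusted to a particular context, as S3 allows with its formula, is one concrete instance of the Change question in practice.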
The Moodle Activity Viewer (MAV) currently serves a similar task to traditional LMS reporting functionality and draws on much the same LMS clickstream information to represent student usage of course website content. MAV’s representational distinction is that it visualises student activity as a heat map overlaid directly onto the course website. MAV, like many contemporary learning analytics applications, offers little in the way of affordances. Perhaps the key distinction with MAV is that it is implemented as a browser extension that depends on an LMS-independent server. This architectural design offers greater capacity for change because it avoids the administrative and technical complexity of LMS module development (Leony, Pardo, Valentín, Quiñones, & Kloos, 2012) and the associated governance constraints. It is this capability for change that is seen as the great strength of MAV, offering the potential to overcome its limited affordances and providing a foundation for future research.
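The heat-map representation can be illustrated with a small sketch that maps per-link click counts onto background colours of the kind a browser extension could apply to a live course page. The colour ramp and example URLs are assumptions for illustration, not MAV's actual implementation.

```python
# Illustrative sketch of the mapping a MAV-style heat map performs:
# per-link click counts are normalised and turned into a colour that a
# browser extension could set as each link's background. The colour
# ramp and example URLs are assumptions, not MAV's actual code.

def heat_colour(clicks, max_clicks):
    """Map a click count onto a yellow-to-red background colour."""
    intensity = 0.0 if max_clicks == 0 else clicks / max_clicks
    green = int(255 * (1 - intensity))  # less green => "hotter" link
    return f"rgb(255,{green},0)"

usage = {
    "/mod/forum/view.php?id=12": 340,    # heavily used activity
    "/mod/resource/view.php?id=7": 55,
    "/mod/quiz/view.php?id=3": 10,       # rarely visited activity
}

peak = max(usage.values())
for link, clicks in usage.items():
    # In the extension, this colour would be applied to the matching
    # link element on the course page rather than printed.
    print(link, heat_colour(clicks, peak))
```

Because the colouring is applied within the course site itself, the representation stays in the context of the teaching task rather than in a separate report.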
BIM is a Moodle plugin that manages student use of their choice of any blog as a reflective journal. The posts written by students form the information used by BIM, moving beyond the limitations (see Lodge & Lewis, 2012) associated with an over-reliance on clickstream data. BIM also focuses on information specific to a particular learning design or context, and explores what process analytics (Lockyer, Heathcote, & Dawson, 2013) can be identified and leveraged to support affordances such as automated assessment, scaffolding of student reflective writing, and encouraging connections between students and staff. Like MAV, the work on BIM is also exploring approaches to avoid the constraints on change imposed by existing LMS and organisational approaches. A minimal sketch of the kind of process analytics BIM might compute over aggregated student posts is given below.
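The post structure and the particular indicators in this sketch (post count, average length, days since the last post) are assumptions chosen to illustrate analytics tied to a reflective-journal learning design rather than raw clickstreams; they are not BIM's actual implementation.

```python
from datetime import datetime, timezone

# Minimal sketch of process analytics over students' reflective-journal
# posts, of the kind a BIM-style tool might compute. The post structure
# and chosen indicators are illustrative assumptions, not BIM's code.

def journal_indicators(posts, now=None):
    """Summarise posting behaviour from a list of posts, each a dict
    with a 'published' datetime and the post 'text'."""
    now = now or datetime.now(timezone.utc)
    if not posts:
        return {"posts": 0, "avg_words": 0, "days_since_last": None}
    words = [len(post["text"].split()) for post in posts]
    latest = max(post["published"] for post in posts)
    return {
        "posts": len(posts),
        "avg_words": sum(words) / len(words),
        "days_since_last": (now - latest).days,
    }

posts = [
    {"published": datetime(2013, 8, 1, tzinfo=timezone.utc),
     "text": "This week I struggled with the reading on affordances..."},
    {"published": datetime(2013, 8, 9, tzinfo=timezone.utc),
     "text": "Revisiting last week's entry, my understanding has shifted..."},
]
print(journal_indicators(posts))
```

Because such indicators are tied to the journal-based learning design, they could more plausibly feed affordances such as scaffolding prompts or assessment support than generic clickstream counts.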
These three brief examples illustrate how the IRAC framework can be used to identify the relative strengths and weaknesses of different learning analytics interventions. They also hint at the deficiencies of learning analytics when it is understood as simply another set of passive reports, without corresponding affordances for action or a capacity for continuous change. The IRAC framework is timely: attempts to innovate with learning analytics are proliferating at a time when higher education is increasingly expected to do more with less and can ill afford effort that does not result in interventions that actually enhance student learning.
Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87.
Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity: Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.
Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland & F. Collopy (Eds.), Managing as designing (pp. 106–112). Stanford, CA: Stanford University Press.
Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.
Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions in Aging, 5, 141–148.
Buckingham Shum, S. (2012). Learning analytics (UNESCO IITE Policy Brief). Moscow: UNESCO Institute for Information Technologies in Education.
Dickelman, G. (1995). Things that help us perform: Commentary on ideas from Donald A. Norman. Performance Improvement Quarterly, 8(1), 23–30.
Elias, T. (2011). Learning analytics: Definitions, processes and potential.
Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12 (pp. 2–5). Vancouver: ACM Press.
Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.
Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.
Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.
Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663).
Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.
Johnson, L., Adams Becker, S., Cummins, M., Freeman, A., Ifenthaler, D., & Vardaxis, N. (2013). Technology Outlook for Australian Tertiary Education 2013-2018: An NMC Horizon Project Regional Analysis. Austin, Texas.
Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis. Austin, Texas: The New Media Consortium.
Johnson, L., Smith, R., Levine, A., & Haywood, K. (2010). The horizon report: 2010 Australia-New Zealand Edition. Austin, Texas.
Leony, D., Pardo, A., Valentín, L. de la F., Quiñones, I., & Kloos, C. D. (2012). Learning analytics in the LMS: Using browser extensions to embed visualizations into a Learning Management System. In R. Vatrapu, W. Halb, & S. Bull (Eds.), TaPTA. Saarbrücken: CEUR-WS.org.
Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, XX(X), 1–21.
Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.
Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.
Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49.
Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.
Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5).
Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.
Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.
Villachica, S., Stone, D., & Endicott, J. (2006). Performance support systems. In J. Pershing (Ed.), Handbook of human performance technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.
Young, M. F., Barab, S. A., & Garrett, S. (2000). Agent as detector: An ecological psychology perspective on learning by perceiving-acting systems. In D. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments (pp. 143–173). Mahwah, New Jersey: Lawrence Erlbaum Associates.