Blog

Avoiding the learning analytics fad: Forgotten insights and example applications

Below you will find the presentation slides and related resources for a talk given at the “Inaugural University Analytics Forum” on September 27, 2013.

Abstract

The presentation attempts to develop insights, guidelines and frameworks to guide the analysis and design of learning analytics applications, with the aim of increasing the likelihood that these applications will be integrated into the process of learning and teaching and consequently improve learning and teaching.

Related resources include:

  • A short paper introducing the IRAC framework.

    The presentation below uses the IRAC framework to analyse existing learning analytics applications and derive some guidelines for their analysis.

  • Moving beyond a fashion – another presentation (slides only) that covers the process perspective of this question.

    There is also a blog post that expands on this earlier presentation.

Note: The big blue square on the first slide in the presentation is for a PollEverywhere app which doesn’t come across when uploading to Slideshare.

Presentation

References

Arnold, K. E. (2010). Signals: Applying Academic Analytics. Educause Quarterly, 33(1).

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning, adoption, activity, grades and external factors. In Same places, different spaces. Proceedings ascilite Auckland 2009 (pp. 60–70). Auckland, New Zealand.

Carroll, J. M., Kellog, W. A., & Rosson, M. B. (1991). The Task-Artifact Cycle. In John Millar Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface (pp. 74–102). Cambridge, UK: Cambridge University Press.

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12, 134–138. doi:10.1145/2330601.2330636

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, (August), 1–13. doi:10.1080/13562517.2013.827653

Dawson, S., Bakharia, A., Lockyer, L., & Heathcote, E. (2011). “Seeing” networks: Visualising and evaluating student learning networks. Final Report 2011. Canberra.

Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 220–229). New York, New York, USA: ACM Press. doi:10.1145/2460296.2460340

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. Learning.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12 (pp. 2–5). Vancouver: ACM Press.

Leony, D., Pardo, A., Valentín, L. de la F., Quinones, I., & Kloos, C. D. (2012). Learning analytics in the LMS: Using browser extensions to embed visualizations into a Learning Management System. In R. Vatrapu, W. Halb, & S. Bull (Eds.), TaPTA. Saarbrücken: CEUR-WS.org.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, XX(X), 1–21. doi:10.1177/0002764213479367

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Ramamurthy, K., Sen, A., & Sinha, A. P. (2008). Data warehousing infusion and organizational effectiveness. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 38(4), 976–994. doi:10.1109/TSMCA.2008.923032

Shum, S. B., & Ferguson, R. (2012). Social Learning Analytics. Educational Technology & Society, 15(3), 3–26.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, (August). doi:10.1177/0002764213498851

Suthers, D., & Verbert, K. (2013). Learning analytics as a middle space. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 2–5).

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

The IRAC framework: Locating the performance zone for learning analytics

Paper submitted to ASCILITE 2013

Abstract

It is an unusual Australian University that is not currently expending time and resources in an attempt to harness learning analytics. This rush, like prior management fads, is likely to face significant challenges when it comes to adoption, let alone the more difficult challenge of translating possible insights from learning analytics into action that improves learning and teaching. This paper draws on a range of prior research to develop four questions – the IRAC framework – that can be used to improve the analysis and design of learning analytics tools and interventions. Use of the IRAC framework is illustrated through the analysis of three learning analytics tools currently under development. This analysis highlights how learning analytics projects tend to focus on limited understandings of only some aspects of the IRAC framework and suggests that this will limit possible impacts.

Keywords: learning analytics; IRAC; e-learning; EPSS; educational data mining; complex adaptive systems

Introduction

The adoption of learning analytics within Australian universities is trending towards a management fashion or fad. Given the wide array of challenges facing Australian higher education the lure of evidence-based decision making has made the quest to implement some form of learning analytics “stunningly obvious” (Siemens & Long, 2011, p. 31). After all, learning analytics is increasingly being seen as “essential for penetrating the fog that has settled over much of higher education” (Siemens & Long, 2011, p. 40). The rush toward Learning Analytics is illustrated by its transition from not even a glimmer on the Australian and New Zealand higher education five year technology horizon in 2010 (Johnson, Smith, Levine, & Haywood, 2010) to predictions of its adoption in one year or less in 2012 (Johnson, Adams, & Cummins, 2012) and again in 2013 (Johnson et al., 2013). It is in situations like this – where an innovation has achieved a sufficiently high public profile – that the urgency to join the bandwagon can reach such heights as to swamp deliberative, mindful behaviour (Swanson & Ramiller, 2004). If institutions are going to successfully harness learning analytics to address the challenges facing the higher education sector, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

This paper describes the formulation and use of the IRAC framework as a tool to aid in the mindful implementation of learning analytics. The IRAC framework consists of four broad categories of questions – Information, Representation, Affordances and Change – that can be used to scaffold analysis of the complex array of, often competing, considerations associated with the institutional implementation of learning analytics. The design of the IRAC framework draws upon bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005). In turn, considerations within each of the four questions are further informed by a broad array of research from fields including learning analytics, educational data mining, complex adaptive systems, ethics and many more. It is suggested that the use of the IRAC framework to analyse applications of learning analytics in a particular context for a specific task will result in designs that are more likely to be integrated into and improve learning and teaching practice.

Learning from the past

The IRAC framework is based on the assumption that the real value and impact of learning analytics arises when it is integrated into the “tools and processes of teaching and learning” (Elias, 2011, p. 5). It is from this perspective that the notion of Electronic Performance Support Systems (EPSS) was seen as likely to provide useful insights, as EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63). This captures the notion of the performance zone defined by Gery (1991) as the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure successful task completion. For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 540). This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artefacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artefacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artefacts that “more often interferes and confuses than aids and clarifies” (p. 9). This is a danger faced in the rush toward learning analytics.

The notions of EPSS, the Performance Zone and Norman’s (1993) insights into the design of cognitive artefacts – along with insights from other literature – provide the four questions that form the IRAC framework. The IRAC framework is intended for use with a particular task in mind – Olmos & Corrin (2012) amongst others reinforce the importance for learning analytics of starting with “a clear understanding of the questions to be answered” (p. 47) – and context – a nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). When used this way, it is suggested that the IRAC framework will help focus attention on factors that will improve the implementation and impact of a learning analytics intervention. The following lists the four questions at the core of the IRAC framework and briefly describes some of the associated factors (a brief illustrative sketch of applying the framework follows the list):

  1. Is all the relevant Information and only the relevant information available?

    While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations include: Is the required information technically and ethically available for use? How is the information cleaned, analysed and manipulated during use? Is the information sufficient to fulfil the needs of the task? In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?

  2. Does the Representation of the information aid the task being undertaken?

    A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner, March, Park, & Ram, 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs” (Villachica et al., 2006, p. 540). Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis. Considerations here include how easily people can understand the implications and limitations of the findings provided by learning analytics.

  3. Are there appropriate Affordances for action?

    A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance, it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artefact, but is instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000). Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62). Tutty, Sheard and Avram (2008) suggest there is evidence that institutional quality measures not only inhibit change, “they may actually encourage inferior teaching approaches” (p. 182). The consideration here is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.

  4. How will the information, representation and the affordances be Changed?

    The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the systems, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated. Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Universities are complex systems (Beer, Jones, & Clark, 2012), and such systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010). Potential considerations here include: Who is able to implement change? Which of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?
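
To make the use of the framework a little more concrete, the following is a minimal, hypothetical sketch (in Python) of treating the four IRAC questions as a checklist to be filled in for a specific task and context. The class, field names and example entries are illustrative assumptions only; they are not part of the framework as published or of any existing implementation.

```python
# A minimal, hypothetical sketch of using the IRAC framework as a checklist.
# Names and sample entries are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class IRACAnalysis:
    task: str        # the specific learning/teaching task the tool supports
    context: str     # the institutional and course context
    information: List[str] = field(default_factory=list)    # what data is (ethically) available?
    representation: List[str] = field(default_factory=list)  # how is it shown, and where?
    affordances: List[str] = field(default_factory=list)     # what actions does the tool support?
    change: List[str] = field(default_factory=list)          # who can change what, and how easily?


# Example: a rough analysis of a retention-focused "risk index" style tool.
ssi = IRACAnalysis(
    task="identify students who may need additional support",
    context="multi-campus university; LMS plus student records",
    information=["LMS clickstream", "student information system data"],
    representation=["single risk score in a tabular report, outside the LMS"],
    affordances=["record an intervention", "track follow-up"],
    change=["risk formula fixed during the pilot", "reports changed centrally"],
)
print(ssi.affordances)
```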

It is proposed that the lens provided by the IRAC framework can help increase the mindfulness of innovation arising from learning analytics. In particular, it can move consideration beyond the apparent over-emphasis on the first two questions and raise awareness of the last two questions. This shift in emphasis appears necessary to increase the use and effectiveness of learning analytics. The IRAC framework can also provide suggestions for future directions. In the final section, the paper seeks to illustrate the value of the IRAC framework by using it to compare and contrast three nascent learning analytics tools against each other and contemporary practice.

Looking to the future

The Student Support Indexing system (SSI) mirrors many other contemporary learning analytics tools with a focus on improving retention through intervention. Like other similar systems, it draws upon LMS clickstream information and data from other student information systems to continuously index potential student risk, using a formula to combine various data points into a single indicator. Only a few such systems, such as S3 (Essa & Ayad, 2012), provide the ability to change the formula in response to a particular context. SSI also represents the information separately from the learning context, in a tabular form. SSI does provide common affordances for intervention and tracking, which appear to assist in developing shared understanding of student support needs across teaching and student support staff. Initial findings are positive, with teaching staff appreciating the aggregation of data from various institutional systems with basic intervention facilitation and tracking. In its current pilot form, the SSI provides little in terms of change; however, it is hoped that the underlying process for indexing student risk and for tracking and monitoring student interventions can be represented in more contextually appropriate ways.
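
As an illustration of the kind of indexing described above, the following is a minimal sketch of combining several data points into a single risk indicator. The signals, weights and thresholds are hypothetical assumptions for illustration; they are not the actual SSI formula.

```python
# Hypothetical sketch of combining several data points into a single student
# risk indicator, in the spirit of the approach described above.

def risk_index(logins_last_week: int, forum_posts: int,
               assessments_submitted: int, assessments_due: int,
               gpa: float) -> float:
    """Return a 0-1 risk score; higher suggests the student may need support."""
    # Normalise each signal so that 1.0 indicates higher risk.
    login_risk = max(0.0, 1.0 - logins_last_week / 5.0)   # fewer than 5 logins/week raises risk
    forum_risk = max(0.0, 1.0 - forum_posts / 3.0)        # fewer than 3 posts raises risk
    submit_risk = 1.0 - (assessments_submitted / assessments_due if assessments_due else 1.0)
    gpa_risk = max(0.0, (4.0 - gpa) / 4.0)                # assumes a 7-point GPA scale, 4 = pass level
    # Hypothetical weights; in a real system these would need to be adjustable
    # per course and context (cf. S3, Essa & Ayad, 2012).
    weights = {"login": 0.3, "forum": 0.2, "submit": 0.35, "gpa": 0.15}
    return (weights["login"] * login_risk + weights["forum"] * forum_risk +
            weights["submit"] * submit_risk + weights["gpa"] * gpa_risk)


print(round(risk_index(1, 0, 1, 3, 3.5), 2))  # a fairly high-risk profile -> ~0.69
```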

The Moodle Activity Viewer (MAV) currently serves a similar task to traditional LMS reporting functionality and draws on much the same LMS clickstream information to represent student usage of course website content. MAV’s representational distinction is that it visualises student activity as a heat map that is overlaid directly onto the course website. MAV, like many contemporary learning analytics applications, offers little in the way of affordances. Perhaps the key distinction with MAV is that it is implemented as a browser-based extension that depends on an LMS-independent server. This architectural design offers greater ability for change because it avoids the administrative and technical complexity of LMS module development (Leony, Pardo, Valentín, Quinones, & Kloos, 2012) and the associated governance constraints. It is this capability for change that is seen as the great strength of MAV, offering the potential to overcome its limited affordances, and a foundation for future research.
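
The heat-map idea can be illustrated with a minimal sketch that maps click counts on course-site links to colours. The counts and the simple green-to-red colour ramp below are illustrative assumptions; MAV itself is a browser extension backed by an LMS-independent server, so the actual implementation differs.

```python
# Minimal sketch of the heat-map idea behind MAV: map the number of clicks on
# each course-site link to a background colour, green (low) through red (high).
# The counts and the colour ramp are illustrative assumptions only.

def heat_colour(clicks: int, max_clicks: int) -> str:
    """Return an RGB hex colour on a simple green-to-red ramp."""
    intensity = clicks / max_clicks if max_clicks else 0.0
    red = int(255 * intensity)
    green = int(255 * (1.0 - intensity))
    return f"#{red:02x}{green:02x}00"


link_clicks = {"Week 1 readings": 420, "Assignment 1 spec": 310, "Week 5 quiz": 35}
top = max(link_clicks.values())
for link, clicks in link_clicks.items():
    print(f"{link:20s} {clicks:4d} clicks -> {heat_colour(clicks, top)}")
```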

BIM is a Moodle plugin that manages student use of their choice of blog as a reflective journal. The information used by BIM is formed by the posts students write, moving beyond the limitations (see Lodge & Lewis, 2012) associated with an over-reliance on clickstream data. BIM also focuses on information specific to a particular learning design or context, and explores what process analytics (Lockyer, Heathcote, & Dawson, 2013) can be identified and leveraged to support the implementation of affordances such as automated assessment, scaffolding of student reflective writing, and encouraging connections between students and staff. Like MAV, the work on BIM is also exploring approaches to avoid the constraints on change imposed by existing LMS and organisational approaches.
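
The following is a minimal sketch of the kind of process analytics that might be derived from the blog posts BIM aggregates, such as posting frequency and post length per student. The data structure and sample data are illustrative assumptions, not BIM’s actual schema.

```python
# Hypothetical sketch of simple process analytics over aggregated blog posts:
# posting frequency and average post length per student. Sample data is made up.
from collections import defaultdict
from datetime import date

posts = [  # (student, date posted, word count)
    ("alice", date(2013, 8, 5), 320),
    ("alice", date(2013, 8, 19), 410),
    ("bob",   date(2013, 8, 7), 90),
]

per_student = defaultdict(list)
for student, posted, words in posts:
    per_student[student].append((posted, words))

for student, items in per_student.items():
    items.sort()  # order posts by date
    n = len(items)
    avg_words = sum(w for _, w in items) / n
    span_days = (items[-1][0] - items[0][0]).days or 1
    print(f"{student}: {n} posts, avg {avg_words:.0f} words, "
          f"~{n / (span_days / 7):.1f} posts/week")
```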

These three brief examples illustrate how the IRAC framework can be used to identify the relative strengths and weaknesses of different learning analytics interventions. They also hint at the deficiencies of learning analytics when it is understood as simply another set of passive reports, without corresponding affordances for action or a capacity for continuous change. The IRAC framework is timely, as attempts to innovate with learning analytics are proliferating at a time when higher education is increasingly expected to do more with less and can ill afford wasted effort that does not result in interventions that actually enhance student learning.

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87.

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity : Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future Challenges, Sustainable Futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland & F. Collopy (Eds.), Managing as designing (pp. 106–112). Stanford, CA: Stanford University Press.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions in Aging, 5, 141–148.

Buckingham Shum, S. (2012). Learning Analytics. Moscow.

Dickelman, G. (1995). Things That Help Us Perform: Commentary on Ideas from Donald A. Norman. Performance Improvement Quarterly, 8(1), 23–30.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. Learning.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12 (pp. 2–5). Vancouver: ACM Press.

Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213.

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663).

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Johnson, L., Adams Becker, S., Cummins, M., Freeman, A., Ifenthaler, D., & Vardaxis, N. (2013). Technology Outlook for Australian Tertiary Education 2013-2018: An NMC Horizon Project Regional Analysis. Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis. New Media Consortium. Austin, Texas.

Johnson, L., Smith, R., Levine, A., & Haywood, K. (2010). The horizon report: 2010 Australia-New Zealand Edition. Austin, Texas.

Leony, D., Pardo, A., Valentín, L. de la F., Quinones, I., & Kloos, C. D. (2012). Learning analytics in the LMS: Using browser extensions to embed visualizations into a Learning Management System. In R. Vatrapu, W. Halb, & S. Bull (Eds.), TaPTA. Saarbrücken: CEUR-WS.org.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, XX(X), 1–21.

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49.

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5).

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

Young, M. F., Barab, S. A., & Garrett, S. (2000). Agent as detector: An ecological psychology perspective on learning by perceiving-acting systems. In D. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments (pp. 143–173). Mahwah, New Jersey: Lawrence Erlbaum Associates.

Moving beyond a fashion – Blended Learning 2013

In September 2013, Col and I gave a presentation and ran a workshop around Learning Analytics at Blended Learning 2013. The slides for the presentation and the workshop follow.

Presentation

A blog post attempting to capture what was said in the presentation is now available.

This draft short paper describes in a bit more detail the IRAC framework that is mentioned in the presentation and formed the structure for the workshop.

Workshop

The following collection of PowerPoint slides provided the skeleton for our first workshop around Learning Analytics. The four questions arise from the IRAC framework and are to do with Information, Representation, Affordances and Change. In reality there are six questions:

  1. What’s your context?
  2. What’s the task?
  3. What Information do you need, how do you access it and how do you analyse it?
  4. How will you Represent your findings?
  5. What Affordances for actions can you provide?
  6. How are you going to Change all of the above?

Introduction and context and task

Covers the introduction to the workshop and the questions of context and task. In these slides “task” is covered by a “cook’s tour” of learning analytics applications as a concrete way to provide ideas about what is possible with learning analytics.

Information

The aim here is to give an introduction to the idea of big data, how it looks at information, and the tools and approaches available for analysing it. This section also examines questions around getting access to information, including legal, ethical and more technical issues.

Representation

Looks at why representation is important, some of the options that are available, and some that don’t work so well (e.g. the start of a critique of dashboards).

Affordances

Examines the reasons behind the argument that learning analytics tools, and the organisations employing them, need to seriously examine and aim to improve the affordances that exist for action arising from the insights gained from the representation.

Change

The last step is to look back and consider how, by whom and what can be changed about this whole process. The assumption is that there are a large number of reasons why everything about a learning analytics intervention needs to be open to ongoing change.

References

Most of the sources used in the workshop are included below.

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Baker, R., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1), 3–17.

Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

Campbell, J., DeBlois, P., & Oblinger, D. (2007). Academic analytics: A new tool for a new era. EDCAUSE Review, 42(4), 40–42.

Carroll, J. M., Kellog, W. A., & Rosson, M. B. (1991). The Task-Artifact Cycle. In John Millar Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface (pp. 74–102). Cambridge, UK: Cambridge University Press.

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12, 134–138. doi:10.1145/2330601.2330636

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, (August), 1–13. doi:10.1080/13562517.2013.827653

Dawson, S., Bakharia, A., Lockyer, L., & Heathcote, E. (2011). “Seeing” networks: Visualising and evaluating student learning networks. Final Report 2011. Canberra.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Melbourne: Australian Learning and Teaching Council.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. Learning.

Greenberg, B., Medlock, L., & Stephens, D. (2011). Blend my learning: Lessons learned from a blended learning pilot.

Hardman, J., Paucar-Caceres, A., & Fielding, A. (2013). Predicting Students’ Progression in Higher Education by Using the Random Forest Algorithm. Systems Research and Behavioral Science, 30(2), 194–203. doi:10.1002/sres

Johnson, E., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.

Kay, D., Korn, N., & Oppenheim, C. (2012). Legal, Risk and Ethical Aspects of Analytics in Higher Education. CETIS Analytics Series (Vol. 1, pp. 1–30). Bolton.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, XX(X), 1–21. doi:10.1177/0002764213479367

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49.

Prinsloo, P., & Slade, S. (2013). An evaluation of policy frameworks for addressing ethical considerations in learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 240–244).

Reiser, R. (2001). A History of Instructional Design and Technology: Part 1: A History of Instructional Media. Educational Technology Research and Development, 49(1).

Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, (August). doi:10.1177/0002764213498851

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5).

Simon, H. (1996). The sciences of the artificial (3rd ed.). MIT Press.

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

The Indicators Project – ASCILITE’09 presentation

On Monday, 7 December 2009 we’ll be presenting at the ASCILITE’09 conference. This page is where we will store all of the resources associated with that presentation. Resources we plan to place here include:

  • perhaps some comments from audience members;
  • audio from the presentation; and
  • maybe the twitter stream.

Slides

The slides are available for download from Slideshare, or can be viewed here.

Introducing the Indicators Project: Identifying effective use of an LMS

This page holds the details and resources associated with the presentation(s) that will be given based on the first Indicators publication. The presentation was given on the 30th of November. It wasn’t streamed via Ustream due to the absence of the required technical support. A modified version will be given at the ASCILITE’09 conference.

Resources available from the presentation include:

Background

The presentation is based on the first publication of the Indicators project. The CQU presentation will consist of a 25-minute presentation followed by a discussion session in which various patterns from the Indicators project will be used as a focus for discussing possible areas of interest and further research.

Abstract

Learning management systems have become almost ubiquitous as a technical solution to e-learning within universities. Extant literature illustrates that LMS system logs, along with other IT systems data, can be used to inform decision-making. It also suggests that very few institutions are using this data to inform their decisions. The indicators project aims to build on and extend previous work in this area to provide services that can inform the decision-making of teaching staff, management, support staff and students. Through an initial set of three questions the paper offers support for some existing critical success factors, identifies potential limitations of others, generates some new insights from a longitudinal comparison of feature adoption of two different LMS within the one institution, and identifies a number of insights and ideas for future work.

Slides

Why do international students “break” the link between LMS activity and student grades? What does it mean?

One of the established patterns or trends in LMS evaluation can be coarsely summarised as

The more activity within an LMS the student has, the better the grade.

We have some support for that view, but we also have found a difference. Discovering the reasons for that difference might prove to be an interesting research project.

At CQUNi there are three types of students:

  1. Rocky and regional – these are on-campus students attending CQUni’s Central Queensland campuses.
  2. FLEX – these are distance education students, spread throughout the world; most don’t attend a campus.
  3. AIC – international students studying on-campus at one of CQUni’s metropolitan campuses.

Does the trend hold for FLEX students?

The following graph shows the average number of hits (visits) made by FLEX students to the course site (blue line) and the discussion forum (red line). The students are grouped by grade (HD the highest, F the lowest). In this graph the trend holds.

Average hits on course site and discussion forum for FLEX students

The following graph shows the average number of posts (red line) and replies (blue line) by FLEX students, again grouped by grade. A post starts a thread; a reply responds to an existing thread. Again, the trend holds for FLEX students.

Average posts & replies for FLEX students
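
For readers wondering how per-grade averages like those graphed above might be produced, the following is a minimal sketch that computes average hits per grade from a toy set of records. The column layout and the sample numbers are hypothetical; they are not the actual Indicators project data.

```python
# Minimal sketch of computing per-grade averages from LMS hit logs joined with
# final grades. The tiny sample and column layout are hypothetical.
from collections import defaultdict

# (student id, grade, hits on course site, hits on discussion forum)
records = [
    ("s1", "HD", 420, 180),
    ("s2", "D",  350, 120),
    ("s3", "C",  260,  80),
    ("s4", "F",   90,  15),
]

totals = defaultdict(lambda: [0, 0, 0])  # grade -> [students, site hits, forum hits]
for _, grade, site_hits, forum_hits in records:
    totals[grade][0] += 1
    totals[grade][1] += site_hits
    totals[grade][2] += forum_hits

for grade in ("HD", "D", "C", "P", "F"):
    if grade in totals:
        n, site, forum = totals[grade]
        print(f"{grade:2s}: avg site hits {site / n:.1f}, avg forum hits {forum / n:.1f}")
```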

Does the trend hold for AIC students?

First, let’s look at the average hits on the course site and discussion forum for the AIC students. The trend isn’t there for hits on the course site, and is only weakly present for hits on the discussion forum, with a much flatter line.

Average hits on course site and discussion forum for AIC students

What about posts and replies to the discussion forum?

Average posts & replies for AIC students

The trend is certainly not there for posts, and similarly not for replies.

Why?

The question now is what these things mean. We’ll be doing some more statistical analysis on this first. But assuming that there is a significant relationship here, why? Some possibilities:

  • AIC students are primarily on-campus. Is this why they don’t rely on the discussion forums as much?
  • The trouble with the observation is that the average posts/replies for FLEX and AIC students are around the same. If AIC students were relying less on the forums, wouldn’t posts/replies be much less?
  • Interestingly, the average number of hits on the course sites/discussion forums is much less for AIC students than FLEX. Are AIC students using the forums more to post, and the site less for information distribution?

After answering some of these questions, the obvious next question is what impact this has on their learning experience. Is this a good thing? A bad thing? Indifferent?

What, if anything, can or should be done to improve it?