The indicators project identifying effective learning: adoption, activity, grades and external factors

Colin Beer, Ken Clark and David Jones.

The following paper is also available via the online Proceedings of ASCILITE’09

Abstract

Learning management systems have become almost ubiquitous as a technical solution to e-learning within universities. Extant literature illustrates that LMS system logs, along with other IT systems data, can be used to inform decision-making. It also suggests that very few institutions are using this data to inform their decisions. The indicators project aims to build on and extend previous work in this area to provide services that can inform the decision-making of teaching staff, management, support staff and students. Through an initial set of three questions the paper offers support for some existing critical success factors, identifies potential limitations of others, generates some new insights from a longitudinal comparison of feature adoption of two different LMS within the one institution, and identifies a number of insights and ideas for future work.

Keywords: LMS, e-learning, academic analytics, benchmarking, evaluation

Introduction

When it comes to Learning Management Systems (LMS) within higher education it appears to be a question of everyone having one, but not really knowing what is going on. This paper reports on the initial steps in a project – the Indicators Project (https://indicatorsproject.wordpress.com) – designed to increase awareness of what is being done with institutional LMS and consequently help address questions such as what can and does influence the quantity and quality of LMS usage by students and staff. The project hopes to eventually provide data that can help improve the decisions made by organizations, management, academic staff, support staff, students and researchers around LMS, e-learning and learning and teaching. In particular, the project aims to enable the examination of LMS usage across institutions, platforms and time.

The almost universal approach to the adoption of e-learning at universities has been the implementation of an LMS such as Blackboard, WebCT, Moodle or Sakai (Jones & Muldoon, 2007). LMS have become perhaps the most widely used educational technologies within universities, behind only the Internet and common office software (West, Waddoups, & Graham, 2006). Harrington, Gordon et al (2004) suggest that higher education has seen no other innovation result in such rapid and widespread use as the LMS. And yet, the quantity and quality of learning occurring within these systems remain limited. Experience from one Australian university shows that as late as the second half of 2006, after more than six years of institutional use of an LMS, only just over half of all courses offered had course websites (Jones & Muldoon, 2007). Malikowski et al (2006) found that LMS are primarily used to transmit information to students. Universities are using the LMS for administrative purposes with only limited impact on pedagogy (OECD, 2005). The challenge is not to promote uptake but to encourage, enable and facilitate effective implementation that is likely to have a significant impact on student learning (Sharpe, Benfield, & Francis, 2006).

It has been suggested that academic analytics has the potential to improve learning, teaching and student success through an awareness of patterns in the data and the application of predictive modelling techniques (Campbell, DeBlois, & Oblinger, 2007). Academic analytics involves the harvesting and analysis of institutional data to inform decision making (Dawson, McWilliam, & Tan, 2008) and its application within higher education has been enhanced due to the integration inherent in LMS and the resulting ability to capture extensive amounts of data about individual user and designer behaviour (Heathcoate & Dawson, 2005). While there is a growing interest, there is minimal research into how this information can be harnessed in the design, delivery and evaluation of learning and teaching practices. However, it has been shown that such analysis is directly relevant to student engagement, evaluating learning activities and can usefully answer important questions (Dawson et al., 2008).

This paper reports on the initial work and early findings of a project intended to extend prior work and investigate how insights from this data can be identified, distributed and used to improve learning and teaching by students, support staff, academic staff, management and organizations. The paper starts by providing a brief background on previous work in this area. Following this, a short description of the context, evolution and purpose of the project is given. This includes an illustration of how it builds on and extends existing work and a description of the three initial questions examined in this paper. Each of these three questions, associated findings, implications and suggestions for future work are then examined in a separate section. Finally, a summary and some conclusions are offered.

LMS usage, academic analytics and effective learning

The focus of this paper is to identify how LMS are being used and what, if any, indications of effective learning the examination and analysis of this use can reveal. Given the rise of e-learning and the predominance of the LMS, it is no surprise to find that there has been prior research in this area. This section seeks to briefly summarise this existing work and illustrate how the work discussed here is somewhat different. It uses the attributes of method, number of institutions, number of LMS, and time frame to compare and contrast the literature.

There have been three main methods used to examine LMS usage: asking students and staff through surveys or interviews (Ansorge & Bendus, 2003; Byrnes & Ellis, 2006; Woods, Baker, & Hopper, 2004), manually reviewing course sites (Malikowski, 2008; Malikowski et al., 2006), and mining the data in system logs (Dawson et al., 2008). In some cases a mixture of these methods has been employed (Dutton, Cheong, & Park, 2004; Morgan, 2003). Each method has strengths and limitations. Surveys are open to bias, faulty recollection, misinterpretation of terms – especially when comparing across different LMS – and low response rates. Interviews can suffer some of these problems and are time consuming. Manually checking course sites is a time-intensive process that can overlook some ephemeral data (Malikowski, Thompson, & Theis, 2007).

There are significant limitations in a purely quantitative analysis of data, and this is especially true in a complex educational setting. Data mining can help reveal patterns and relationships, but it does not tell the user the value or significance of these patterns (Seifert, 2004). A systems scan of designer and user behaviour within an LMS can never describe in full how they are engaging in the use of online environments for teaching and learning (Heathcoate & Dawson, 2005). Captured LMS data does not indicate the nature of the activity the student is engaged with, or the technical experience of the user accessing the system, both of which may influence the quantity of clicks they make within the system (Black, Dawson, & Priem, 2008) and will therefore influence the resulting analysis of captured data.

Surveys generally capture the perspective of staff and/or students at one particular point in time, although some survey work has sought to generate a longitudinal perspective through annual surveys. The manual checking of course sites, perhaps because of its resource-intensive nature, also appears to be a one-off exercise. Data mining reports also tend to have a limited time frame: Dawson et al (2008) report on one term, while Morgan (2003) – as part of a mixture of methods – analyses system logs from three semesters. We have not yet come across published research accounts seeking to analyse LMS usage data over a number of years.

There appears to be a similar absence of published research applying data mining to comparisons between different LMS. Malikowski et al (2007) have developed a model that abstracts LMS features into a system-independent form in order to enable comparisons between different LMS. They illustrate its value by examining usage data from various published reports from 2004 and earlier. The differences in LMS terminology and data models, the increasing likelihood of each institution having only one LMS, the difficulties associated with sharing this information across institutions, and limitations in LMS databases are all likely contributing factors to the lack of cross-LMS comparisons using data mining. The same factors may also explain the absence of cross-institutional comparisons.

Due to contextual factors the indicators project is in a position to analyse and compare LMS usage data between two different systems that have been running concurrently from 2004 through 2009. This type of longitudinal analysis of usage data from two different LMS within the same institutional context is apparently unique. This paper draws on it to address only one of the three questions examined below. To enable this longitudinal analysis, the project has used the model developed by Malikowski et al (2007), which combines technical features with learning research to enable a synthesis of research across different LMS. This model (Figure 1) categorises LMS features into one of five categories, grouped into three levels based on observed levels of usage. The levels reflect current, broadly observed usage patterns arising from the literature examined by Malikowski and his co-authors.


Figure 1: Flowchart of LMS research categories (adapted from Malikowski, Thompson et al. 2007)

The indicators project

The indicators project commenced out of discussions between two of the authors (Jones and Beer) during 2008, when they were both responsible for providing user support to staff and students for the institution’s installation of Blackboard. It had been obvious for some time that greater levels of more effective support were required; however, due to organisational factors the number of people providing e-learning support had recently been reduced. At its simplest, there were a large number of support calls at the start of term because course sites had been released with simple, fundamental problems (e.g. a pointer to old course information). The authors had been aware for some time of academic analytics and the associated literature and had long wanted to develop systems to generate lead indicators of potential problems. Such systems would allow pro-active, rather than reactive, support. At around this time there was also increased interest from Faculty management in specifying and enforcing minimum standards. In response a simple web-based system was developed that showed a list of all Blackboard course sites with a collection of “traffic lights”. At a glance, problems could be identified by the presence of red or yellow “lights” and remedial action taken. This approach was significantly less resource intensive than the manual checks used at some other institutions (Weaver, Spratt, & Nair, 2008). Before the system could be fully completed and integrated with organisational practice, an organisational restructure of the teaching and learning support services was instigated. As a result of that restructure, responsibility for user support for e-learning was transferred to the IT division and was no longer a responsibility of the authors.
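As a rough illustration of the kind of check behind the traffic-light system, the sketch below flags course sites with simple, fundamental problems such as a link to old course information. It is a minimal reconstruction under assumptions: the CourseSite fields and the specific checks are hypothetical, not the system’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class CourseSite:
    course_code: str
    term: str                  # term this offering belongs to
    profile_link_term: str     # term of the course profile the site links to
    has_discussion_forum: bool

def traffic_light(site: CourseSite) -> str:
    """Return a red/yellow/green status for a course site before release."""
    if site.profile_link_term != site.term:
        return "red"       # fundamental problem: points at old course information
    if not site.has_discussion_forum:
        return "yellow"    # worth a look, but not necessarily a problem
    return "green"

# Example: list sites needing attention at the start of term (codes are made up).
sites = [
    CourseSite("COIT11111", "2009T1", "2008T2", True),
    CourseSite("MRKT22222", "2009T1", "2009T1", False),
]
for s in sites:
    print(s.course_code, traffic_light(s))
```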

The authors, however, retained an interest in the indicators project for a number of different reasons.
One author’s (Clark) interest arose from a discrepancy between his pedagogical approach (student focused, emphasising the social aspect of learning) and his online usage (content focused). Through a Master’s project aimed at improving his online behaviour using Gonzalez’s (2009) two broad approaches to teaching – what he classed as “informative/individual learning focused” and “communicative/networked focused” – the author hopes to improve understanding of the way that academics use LMS and what this can indicate about teacher/student contact. Another author (Beer), aside from undertaking a related Master’s project, is involved due to his position within a curriculum design and development unit that provides e-learning advice and design support to the teaching community. The last author (Jones) is completing a PhD based on the design of Webfuse (Jones & Buchanan, 1996), a home-grown “LMS”. Consequently, a comparison between the usage of Webfuse and Blackboard is of particular interest.

The indicators project is possible because system usage data for both systems has been kept, two of the authors have access to this data for the entire life-span of these systems, and as a group the three authors have a mixture of technical, local and educational knowledge. Webfuse has been used at CQUni from 1997 through 2010, and Blackboard from 2004 through 2010. Blackboard usage data for 2004 is patchy, unreliable and incomplete; consequently, the focus of this work is on the period from 2005 through the first term of 2009. All of the work done within the indicators project shares a common approach: usage data from different LMS is combined with more specific institutional data (e.g. student results, modes of delivery, teaching responsibilities) and then transformed into categories and statistics that are common and independent of the specific details of each LMS.
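A minimal sketch of the transformation just described, assuming hypothetical raw log exports and a hand-built mapping from LMS-specific tool names onto LMS-independent categories (loosely following the Malikowski groupings); none of the table or tool names are the project’s actual schema.

```python
import pandas as pd

# Hypothetical raw events exported from the two systems.
events = pd.DataFrame([
    {"lms": "blackboard", "course": "CRS-A", "tool": "content_item"},
    {"lms": "blackboard", "course": "CRS-A", "tool": "discussion_board"},
    {"lms": "webfuse",    "course": "CRS-B", "tool": "StudyGuide"},
    {"lms": "webfuse",    "course": "CRS-B", "tool": "Barometer"},
])

# Map system-specific tool names onto common, system-independent categories.
CATEGORY_MAP = {
    ("blackboard", "content_item"):     "transmit_content",
    ("blackboard", "discussion_board"): "class_interaction",
    ("webfuse",    "StudyGuide"):       "transmit_content",
    ("webfuse",    "Barometer"):        "evaluate_course",
}
events["category"] = [CATEGORY_MAP.get(key, "other")
                      for key in zip(events["lms"], events["tool"])]

# A common statistic: events per category per course, independent of the LMS.
print(events.groupby(["course", "category"]).size())
```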

Given the differences between the systems, the assumptions required to link Blackboard and organisational data, and other known problems associated with applying data mining to LMS usage data, the work has not been straightforward. Rather than being a rational and linear process, the indicators project has been exploratory in nature and guided by specific questions that might be answered, or at least clarified, by analysis of the usage data. The next three sections of this paper describe three of these questions, what our initial analysis has revealed and the additional questions that analysis has raised. The three questions are:

  1. Does LMS feature adoption differ over time and between LMS?
    Draws on Malikowski et al’s (2007) model of LMS features to compare and contrast adoption of the different LMS features between the Blackboard and Webfuse systems between 2005 and 2009.
  2. Is there a link between LMS activity and student grades?
    Draws on student visits to the course website and participation in discussion forums to investigate (and confirm) the existence of a link between LMS activity and student grades.
  3. Is there a link between LMS activity and external factors?
    Draws on a range of data to determine if the quantity and quality of LMS activity is in some way linked with a variety of external factors including: discipline, formal qualifications in learning and teaching, course design influenced by curriculum designer and the student’s mode of study.

Due to the exploratory nature of the indicators project, and the purpose of this paper to explore and share initial results, the following examination of these questions raises more questions than it answers. In part, this is because no statistical analysis has been done to clearly identify significance or relationships. The focus has been on generating and sharing a collection of indicators or patterns that are worthy of future analysis and work.

Does LMS feature adoption change over time and between systems?

Identifying whether LMS features are used, why they are used and what impact they have is a key aim of this project. This knowledge is important because it is not the provision of features but their uptake and use that really determines their educational value (Coates, James, & Baldwin, 2005). Uptake is unlikely to be uniform, as the uses and consequences of information technology emerge unpredictably from complex interactions between the social system and the nature of the information system (Markus & Robey, 1988). This suggests that, given a different social system or a different LMS, you are likely to find different levels of, and reasons for, use of different LMS features. This section combines Malikowski et al’s (2007) model of LMS feature use, shown in Figure 1 and discussed above, with the longitudinal usage data for both Blackboard and Webfuse at CQUni from 2005-2009.

Figures 2 and 3 illustrate the differences and the evolution over time of feature adoption between these two systems. The dashed lines indicate the percentage of courses within Webfuse that have adopted a feature; the dark continuous line indicates the percentage of Blackboard courses adopting features. The two straight lines in each graph specify the minimum and maximum levels of adoption of these features found in Malikowski et al (2007). The fifth category of the Malikowski framework, computer-based instruction, is not shown here as very few Blackboard courses use this functionality and Webfuse does not provide it.
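In essence, the adoption rates plotted in Figures 2 and 3 are the share of course sites in a given system and year that used at least one feature from a category. A sketch of that calculation, with an assumed (hypothetical) per-course layout, is:

```python
import pandas as pd

# Hypothetical per-course flags: did the course site use any feature in the category?
courses = pd.DataFrame([
    {"lms": "webfuse",    "year": 2005, "transmit_content": False, "class_interaction": True},
    {"lms": "webfuse",    "year": 2005, "transmit_content": True,  "class_interaction": True},
    {"lms": "blackboard", "year": 2005, "transmit_content": True,  "class_interaction": False},
    {"lms": "blackboard", "year": 2005, "transmit_content": True,  "class_interaction": True},
])

# Percentage of course sites per system per year adopting each category.
adoption = (courses.groupby(["lms", "year"])[["transmit_content", "class_interaction"]]
                   .mean() * 100)
print(adoption.round(1))
```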

Figure 2 suggests that adoption of content transmission features in Blackboard during this time averages 91%, while Webfuse averages almost 68%. However, rather than indicating a reduced emphasis on content transmission within Webfuse, these figures reflect a difference in how Webfuse operates. Webfuse automatically creates a default site for all courses that includes a range of information including the course synopsis, textbook details, a link to the course profile and so on. The Webfuse content transmission percentage shown in Figure 2 therefore indicates the percentage of staff who use content transmission features above and beyond this default. This suggests that staff using Webfuse can spend less time on content transmission features, potentially freeing up time for other tasks. Offering some additional support for that observation, Figure 2 also shows that Webfuse course sites have a much larger adoption rate for class interaction features than both Blackboard course sites and the percentages found by Malikowski et al (2007). Identifying the reasons behind this much greater level of adoption and what, if any, impacts it has on the student learning experience would appear to be an interesting area for further research.

Figure 2 – Longitudinal comparison of Blackboard and Webfuse course site adoption of content transmission and class interaction features (2005-2009)
[Figure 2 panels: feature adoption – Transmit Content, Webfuse vs Blackboard; feature adoption – Class Interaction, Webfuse vs Blackboard]

From 2007 onwards, even the Blackboard course sites at CQUni start to creep above the maximum percentage reported by Malikowski et al (2007). While this perhaps shows CQUni moving beyond the reported literature, it should be noted that the Malikowski figures are drawn from research published up to 2004, and a similar gradual increase may also have occurred at other institutions. The absence of more recent cross-institutional data with which to test this is one of the gaps the Indicators Project is attempting to fill.

Malikowski et al (2007) were unable to find in the literature specific percentages for the adoption of course evaluation features, apparently because levels of adoption were so low they were not reported. Recent and more widespread knowledge of feature adoption across institutions would help identify whether adoption of course evaluation has grown. Figure 3 suggests that Blackboard course sites at CQUni fit this pattern of low usage, with an average adoption rate over four years of 3%. During the same period Webfuse course sites average almost 77% adoption of course evaluation features, with almost 100% for a number of years before a recent drop to below 40%. This significant difference is due entirely to the introduction and use of a feature called the course barometer (Jones, 2002). Initially implemented in Webfuse for use by a single academic, by 2001 the barometer had become part of an institutional push to generate lead indicators of the student experience and consequently became a part of the default Webfuse course site. That is, the course barometer was automatically added to all course sites, regardless of the desires of the teaching staff. In 2008, the barometer became an optional part of the default course sites, hence the drop in adoption. Around this same time, use of the barometer was being encouraged by other parts of the university and it was being used by some Blackboard courses. The addition of such specific and unique features is less likely with a more traditional commercial or open source LMS.

The disparity between the adoption of student evaluation features in Webfuse (an average of almost 52%) and Blackboard (an average of 25%) also arises from a unique aspect of Webfuse. In this case, significant effort was expended on an online assignment submission system within Webfuse (Jones, Cranston, Behrens, & Jamieson, 2005) to support CQUni’s multi-campus operations. Designed for the CQUni context, the system provided advantages over the Blackboard feature, and by 2008 a number of Blackboard courses were using the Webfuse assignment submission system.

Figure 3 – Longitudinal comparison of Blackboard and Webfuse course site adoption of student evaluation and course evaluation features (2005-2009)
[Figure 3 panels: feature adoption – Evaluating Students, Blackboard vs Webfuse; feature adoption – Evaluating Courses, Blackboard vs Webfuse]

The above simple comparison of feature adoption between Blackboard and Webfuse has identified a number of areas for further work. A key limitation of the Malikowski et al (2007) model is that it does not provide a good, LMS-independent definition of adoption that distinguishes between a feature being present in a course but not used, used in only very limited ways, or used heavily and effectively. For example, based on previous findings (Jones, 2002), it is expected that most of the adoption of course evaluation in Webfuse consisted of very limited usage. Given the different concepts of usage between different types of features, developing such a metric is unlikely to be simple; however, it would enable better comparisons between LMS and institutions. The suggestion by Katz (2003) that the adoption of a new LMS is likely to be followed by a drop in performance as users grapple with the new system is of particular interest to CQUni as it moves from Blackboard/Webfuse to Moodle in 2010.

While the trend data in Figure 2 identifies some interesting patterns, identifying the reasons behind these patterns requires additional research methods including surveys and interviews. These methods, combined with additional analysis of system usage could also be used to investigate if there is a trend in the sequence in which different features are adopted. Similarly, examination to see if external factors such as discipline influence the sequence and level of feature adoption may be interesting. Extending this research to further investigate the complex and unpredictable emergence of LMS use from the combination of system characteristics and social context, especially between different institutions, is an area of obvious interest. This may help identify important lessons about what works or doesn’t in terms of encouraging greater and more effective adoption of LMS features. It may also bring into question the presence of some LMS features that are rarely used and also highlight the lack of flexibility inherent in the integrated system architecture used by most LMS.

Is there a link between LMS activity and student grades?

Perhaps the most important indicator of effective learning, at least from the perspective of pragmatic students, is their final grade. Did they achieve the grade they desired? Dawson et al (2008) found significant differences between low and high performing students in the quantity of online sessions, total time online and the amount of active participation in discussion forums. The question we ask here is whether this relationship exists at CQUni. CQUni has three types of students: AIC, CQ and FLEX. CQ students are on-campus students studying at one of CQUni’s traditional, Australian campuses based in Central Queensland. AIC students are generally international students studying at one of CQUni’s campuses in Brisbane, Gold Coast, Sydney or Melbourne. FLEX students study by distance education and rarely, if ever, attend a campus.

The following analysis groups students according to the final grade they achieved in a course. At CQUni grades range from the top grade – high distinction (HD) – through to the lowest – fail (F). For each grade group, the average number of hits on the course website, hits on the course discussion forum, discussion forum posts (a post that creates a new thread) and discussion forum replies (a post that continues an existing thread) were calculated. Lastly, students were divided by type of student (AIC, CQ or FLEX). This analysis was only done for Blackboard courses, not Webfuse courses, for two main reasons. First, many Webfuse courses use mailing lists rather than discussion forums, and analysis of mailing list posts and replies is currently not possible. Second, by default most areas of a Webfuse course site are openly available to anyone on the web, which means it is not possible to identify all hits on a course website by students.
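A sketch of the grade-group aggregation described above, assuming a hypothetical table with one row per student per course offering; the column names are illustrative and the sample rows simply echo the kinds of values reported below.

```python
import pandas as pd

# Hypothetical per-student-per-course usage joined with final grade and student type.
usage = pd.DataFrame([
    {"student_type": "FLEX", "grade": "HD", "site_hits": 730, "forum_hits": 393, "posts": 5, "replies": 13},
    {"student_type": "FLEX", "grade": "F",  "site_hits": 219, "forum_hits": 138, "posts": 3, "replies": 5},
    {"student_type": "AIC",  "grade": "HD", "site_hits": 131, "forum_hits": 60,  "posts": 1, "replies": 2},
])

grade_order = ["HD", "D", "C", "P", "F"]
usage["grade"] = pd.Categorical(usage["grade"], categories=grade_order, ordered=True)

# Average usage per grade group within each student type (cf. Figures 4 and 5).
summary = (usage.groupby(["student_type", "grade"], observed=True)
                [["site_hits", "forum_hits", "posts", "replies"]]
                .mean())
print(summary)
```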

Figure 4 shows the results of this analysis for FLEX students. The decreasing pattern of usage is quite clear, with FLEX students on the top grade averaging 730 hits during term on the course website and 393 hits on the course discussion forum. FLEX students who failed the course average 219 hits on the site and 138 on the forum (there are always fewer hits on the forum than on the site). A similar pattern emerges with discussion forum posts (HD=5, F=3) and replies (HD=13, F=5). There are always fewer posts than replies, as students are more likely to respond to an existing thread than start a new one. This appears to support the premise that better students will use the LMS more. The question of which comes first (high grade or greater LMS use) is an area for further research.

Figure 4 – Average usage of course website by FLEX students by grade.
[Figure 4 panels: average hits on course site and discussion forum for FLEX students; average posts and replies for FLEX students]

Figure 5 shows the same analysis for AIC students and appears to suggest that the same relationship (more LMS use, the better the grade) does not exist for these students. Except for average hits on the course discussion forum, HD students from the AICs average less usage than the students receiving Ds, Cs and sometimes Ps. The comparison between Figures 4 and 5 also shows the significantly lower use of the course websites made by AIC students: an HD FLEX student averages 730 hits on a course website, while an HD AIC student averages 131 hits. The greater face-to-face support provided to, and required of, AIC students may offer an explanation for this. CQ students lie somewhere between these two groups, with greater use than AIC students but not to the same level as FLEX students. The same applies to the relationship between usage and grade: CQ students with an HD average fewer discussion forum hits (134) than CQ students with a D (139).

Figure 5 – Average usage of course website by AIC students by grade.
[Figure 5 panels: average hits on course site and discussion forum for AIC students; average posts and replies for AIC students]

This result somewhat contradicts existing findings. It requires further analysis to determine the significance of the relationship, along with additional methods to identify why this might be the case. Such methods could include the type of network visualisation developed by Dawson et al (2008), qualitative evaluation of the quality and topics of these forum discussions, and an investigation of the impact of the level of staff participation (discussed briefly in the next section).

Is there a link between LMS activity and external factors?

The initial interest in the indicators project arose out of a need to generate lead indicators that enable support staff to take pro-active steps to address, and hopefully prevent, potential problems. A key component of that task is to identify patterns in system usage that may indicate future problems or positive outcomes. For simple checks this is quite straightforward: there is little difficulty in identifying whether a course site has a discussion forum or the right link to the course synopsis. For more complex patterns, especially of the sort useful in identifying effective or problematic learning situations, the difficulty is much higher. Fresen (2007) draws on a comparative analysis of the literature to present a taxonomy of critical success factors for quality web-supported learning. For the purposes of this paper we have drawn on a small number of these factors to guide our initial search for patterns. The factors chosen are: student communication, a number of instructional design factors, reliability of the technology, interaction/facilitation on the part of teaching staff, and the academic background of teaching staff.

An initial investigation into the value of student communication was discussed in the previous section. The finding there was that while there appeared to be an obvious link for FLEX students, this relationship may not be there for on-campus students, especially those at the AICs. This is somewhat understandable given that the LMS is likely to be the main communication means for distance education students. A potential implication is that in a blended learning situation where there are other effective means of communication, using the LMS for communication is less important.

Over recent years four courses at CQUni have benefited from the involvement of an instructional designer. The largest of these courses, and arguably the most successful in terms of outcomes, used a technology-enhanced learning environment combined with insights from situated and authentic learning to increase levels of student engagement and active learning (Muldoon & Kofoed, 2009). The two main 2008 offerings of this course had an average hit count on the course site, across all students, between six and seven times the rate for all other courses. The same two offerings had an average hit count on the discussion forum, across all students, between 50% and 60% lower than that experienced in all other courses. Taken at face value, this would seem to indicate a course design focused on content transmission.

Such a conclusion would be false, and this illustrates the limitations of relying solely on LMS usage data for drawing conclusions. This type of analysis can provide indicators of what might be interesting to look into; however, any final conclusions need to be supplemented with knowledge from other sources. For example, while the average hit rate on the discussion forum was lower for this course, the average number of posts and replies by students was at least double the average in other courses, suggesting that students in this course are more likely to contribute to the discussion forum when they visit it, rather than lurk or find nothing of interest. This indicates that some form of ratio between average forum hits and average posts/replies might be a useful indicator of discussion forum effectiveness (a simple formulation is sketched below). In addition, the design of this course (Muldoon & Kofoed, 2009) and its focus on situated and authentic learning led to the development of a complex and realistic setting within the course site, consisting of machinima and a company intranet. These approaches are positive indicators of a number of success factors identified by Fresen (2007) under the category of instructional design. Increased use of content transmission can actually indicate good instructional design.
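One way to express the suggested indicator is as a contribution ratio: the share of forum visits that result in a post or reply. A minimal sketch, using hypothetical averages:

```python
def forum_contribution_ratio(avg_posts: float, avg_replies: float, avg_forum_hits: float) -> float:
    """Proportion of forum visits that produce a contribution (post or reply).

    Higher values suggest students contribute when they visit rather than lurk.
    """
    if avg_forum_hits == 0:
        return 0.0
    return (avg_posts + avg_replies) / avg_forum_hits

# Hypothetical comparison: a course with fewer forum hits but more contributions
# scores higher than one with heavy lurking.
print(forum_contribution_ratio(avg_posts=10, avg_replies=26, avg_forum_hits=180))  # 0.20
print(forum_contribution_ratio(avg_posts=5,  avg_replies=13, avg_forum_hits=390))  # ~0.05
```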

Fresen (2007) identifies the level of interaction or facilitation by teaching staff as a critical success factor for web-supported learning. To test this factor we divided courses into four groups based on the number of hits on the course site by all teaching staff: high (greater than 3000 hits), medium (1000 to 3000 hits), low (100 to 1000 hits) and super low (less than 100 hits). For each grouping we then repeated the analysis done in Figures 4 and 5. Given space limitations we show the results for only two groups: high (Figure 6) and super low (Figure 7). Figure 6 shows that the connection between LMS usage and grade exists for students where there is a high level of staff involvement. The average hits and forum participation for students in this group tend to slightly exceed the averages for FLEX students shown in Figure 4.
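The grouping just described amounts to binning courses by the total number of staff hits on the course site. A sketch, with hypothetical per-course totals (the handling of the 100/1000/3000 boundaries is an assumption):

```python
import pandas as pd

# Hypothetical totals of hits by all teaching staff per course offering.
staff_hits = pd.Series({"CRS-01": 4500, "CRS-02": 1800, "CRS-03": 250, "CRS-04": 40})

bins = [0, 100, 1000, 3000, float("inf")]
labels = ["super low", "low", "medium", "high"]
staff_group = pd.cut(staff_hits, bins=bins, labels=labels, right=False)
print(staff_group)

# Each course's group can then be joined back onto the student usage table and
# the per-grade averages recomputed within each group (cf. Figures 6 and 7).
```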

Figure 6 – Average hits and forum participation for students in courses with high staff involvement (greater than 3000 hits by staff, n=678 courses).
[Figure 6 panels: average student hits on course site/discussion forum for high staff participation courses; average student posts/replies on discussion forums for high staff participation courses]

However, as shown in Figure 7, the relationship between LMS usage and grade does not appear to exist in courses with super low staff participation. This pattern is reminiscent of, but much more pronounced than, that found with AIC students in Figure 5. Of particular interest is that the average number of replies on the course discussion forum for students in the super low group is significantly higher than in the other groups, suggesting that the students are talking more about something. Examination of the topics of discussion within these forums might reveal something interesting, as would further analysis of whether being in a super low or low group has an impact on levels of achievement or satisfaction.

Figure 7 – Average hits and forum participation for students in courses with super low staff involvement (less than 100 hits by staff, n=849 courses).
[Figure 7 panels: average student hits on course site/discussion forum for super low staff participation courses; average student posts/replies on discussion forums for super low staff participation courses]

It has been argued that formal teaching qualifications will improve the quality of teaching. To investigate whether teaching qualifications affect student usage of course sites, we broke courses up into three groups: all courses from the education discipline (education), all courses taught by staff who had completed CQUni’s one-year higher education teaching qualification (gradcert), and all remaining courses (all others). A process similar to that used for Figures 6 and 7 was then followed. It was found that courses in the gradcert group had almost twice as many average hits on the course site and discussion forum as the all others group, but about the same level of student posts and replies on the discussion forums. Courses in the education group had essentially the same level of student posts and replies as the all others group, and essentially the same number of average hits on the course site. However, the average hits on the forum for courses from the education group were somewhat higher, especially for students receiving the top two grades. This might suggest a higher level of checking or lurking on the discussion forum by these stronger students. There is no immediately obvious connection between teaching qualifications and LMS activity. However, this is only an initial investigation, and the example of the instructional design course above reinforces the observation that LMS usage logs do not tell the whole story.

Future work and conclusions

This paper has used three questions to frame an initial exploration of the use of LMS usage data to identify potential indicators of effective learning. It has illustrated the value of comparing usage data between different LMS within a single university and suggested that objective comparisons of LMS usage data between universities would also be of value. Of particular interest would be comparing LMS usage before and after the adoption of a new LMS. The paper has also identified the value of an existing model (Malikowski et al., 2007) for comparing LMS feature adoption between different systems, and a need for that model to be extended to include a platform-independent, feature-specific measure of adoption and usage. It has given an early indication that a different LMS or a different social system can influence the level of feature adoption. The paper has identified a number of patterns suggesting that the relationship between LMS activity and final student grade may be moderated by a number of factors, including the type of student, the level of staff interaction on a course site, and instructional design input. Finally, the paper has reinforced the point that the analysis of LMS usage data is only useful for identifying potentially interesting patterns of effective or ineffective learning and needs to be supplemented with other methods, data and knowledge.

The purpose of this paper has been exploratory, to identify potentially interesting patterns that might indicate areas of future useful and fruitful analysis and research. The most obvious are the application of statistical methods to truly establish some level of relationship and significance. This is part of the broader challenge to move beyond generating and making this information available towards being able to accurately interpret this data and apply findings to practice (Dawson et al., 2008). This is the next major challenge for the indicators project.

References

Ansorge, C. and O. Bendus (2003). The pedagogical impact of course management systems on faculty, students, and institution. Web-based learning: What do we know? Where do we go? R. Benning, C. Horn and L. PytlikZillig. Greenwich, CT, Information Age Publishing: 169-190.

AUQA (2006). Report of an Audit of Central Queensland University. Melbourne, Vic, Australian Universities Quality Agency: 72.

Black, E. W., K. Dawson, et al. (2008). Data for free: Using LMS activity logs to measure community in online courses. Internet and Higher Education, Science Direct. 11: 65-70.

Byrnes, R. and A. Ellis (2006). “The prevalence and characteristics of online assessment in Australian universities.” Australasian Journal of Educational Technology 22(1): 104-125.

Campbell, J. P., P. B. DeBlois and D. G. Oblinger (2007). “Academic analytics: A new tool for a new era.” EDUCAUSE Review 42(4): 40-57.

Caruso, J. B. (2006). Measuring student experiences with course management systems. Boulder, CO, EDUCAUSE Center for Applied Research.

Coates, H., R. James, et al. (2005). “A critical examination of the effects of learning management systems on university teaching and learning.” Tertiary education and management 11(2005): 19-36.

Dawson, S. and E. McWilliam (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance, Queensland University of Technology and the University of British Columbia: 41.

Dawson, S., E. McWilliam, et al. (2008). Teaching Smarter: How mining ICT data can inform and improve learning and teaching practice. ASCILITE, Melbourne, Australia.

Dutton, W., P. Cheong, et al. (2004). “An ecology of constraints on e-learning in higher education: The case of a virtual learning environment.” Prometheus 22(2): 131-149.

Fresen, J. (2007). “A taxonomy of factors to promote quality web-supported learning.” International Journal on E-Learning 6(3): 351-362.

Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314.

Harrington, C., S. Gordon, et al. (2004). “Course Management System Utilization and Implications for Practice: A National Survey of Department Chairpersons.” Online Journal of Distance Learning Administration 7(4).

Heathcoate, L. and S. Dawson (2005). “Data Mining for Evaluation, Benchmarking and Reflective Practice in a LMS.” E-Learn 2005: World conference on E-Learning in corporate, government, healthcare and higher education.

Jones, D. (2002). Student Feedback, Anonymity, Observable Change and Course Barometers. World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado, AACE.

Jones, D. and R. Buchanan (1996). The design of an integrated online learning environment. Proceedings of ASCILITE’96, Adelaide.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Katz, R. (2003). “Balancing Technology and Tradition: The Example of Course Management Systems.” EDUCAUSE Review: 48-59.

Malikowski, S. (2008). “Factors related to breadth of use in course management systems.” Internet and Higher Education 11(2): 81-86.

Malikowski, S., M. Thompson, et al. (2006). “External factors associated with adopting a CMS in resident college courses.” Internet and Higher Education 9(3): 163-174.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

Markus, M. L. and D. Robey (1988). “Information technology and organizational change: causal structure in theory and research.” Management Science 34(5): 583-598.

Morgan, G. (2003). Faculty use of course management systems, Educause Centre for Applied Research: 97.

Muldoon, N. and J. Kofoed (2009). Second life machinima: Creating new opportunities for curriculum and instruction. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009, Honolulu, HI, USA, AACE.

OECD (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France, Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development.

Phillips, R. (2005). “Challenging the primacy of lectures: The dissonance between theory and practice in university teaching.” Journal of University Teaching and Learning Practice 2(1): 1-12.

Seifert, J. W. (2004). Data Mining: An Overview. U. C. R. Service, The Library of Congress.

Sharpe, R., G. Benfield, et al. (2006). “Implementing a university e-learning strategy: levers for change within academic schools.” ALT-J, Research in Learning Technology 14(2): 135-151.

Weaver, D., C. Spratt, et al. (2008). “Academic and student use of a learning management system: Implications for quality.” Australian Journal of Educational Technology 24(1): 30-41.

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

Woods, R., J. Baker, et al. (2004). “Hybrid structures: Faculty use and perception of web-based courseware as a supplement to face-to-face instruction.” Internet and Higher Education 7(4): 281-297.
