About this page
As we work on the Indicators project we have generated, and will continue to generate, a range of questions about a number of related topics. The point of this page is to collect those questions in one place so that we can track them and, we hope, encourage other interested people to discuss the questions and suggest possible answers.
Each question will be explained in more detail in a related blog post. If you’re interested in a question, click on it and you can reply to the blog post.
It’s still early days in the life of this site; more questions will be added in a number of different categories.
Enabling cross-LMS, cross-institutional LMS usage comparison
One of the major aims of the Indicators project is to enable institutions to compare and contrast usage of their LMS with that of other institutions, potentially running different LMSs. This is seen to be important for a number of reasons. It also raises a number of questions, including:
- How do you effectively represent/compare usage data from a broad cross section of LMS?
- How do you define adoption of a feature?
One approach, and some perceived limitations of that approach, are discussed here.
- What other data external to an LMS might be needed to perform effective comparisons?
- What tools and processes should be used to enable the broadest possible set of comparisons?
- What privacy and related concerns need to be addressed to enable this type of cross-institutional comparison? How can they be addressed?
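To make the "how do you define adoption of a feature?" question concrete, here is a minimal sketch of one possible operationalisation: the fraction of courses in which a feature appears at least some threshold number of times in the activity log. The log format, field names, and threshold are all illustrative assumptions, not a description of any particular LMS.

```python
from collections import defaultdict

# Hypothetical usage records: (course_id, feature) pairs extracted
# from an LMS activity log. Names and values are illustrative only.
usage_log = [
    ("COMP101", "discussion_forum"),
    ("COMP101", "discussion_forum"),
    ("COMP101", "quiz"),
    ("HIST200", "discussion_forum"),
    ("MATH150", "quiz"),
]

def adoption_rate(log, feature, min_events=1):
    """Fraction of courses using `feature` at least `min_events` times."""
    counts = defaultdict(int)
    courses = set()
    for course, feat in log:
        courses.add(course)
        if feat == feature:
            counts[course] += 1
    adopters = sum(1 for c in counts if counts[c] >= min_events)
    return adopters / len(courses)

print(adoption_rate(usage_log, "discussion_forum"))  # 2 of 3 courses -> 0.666...
```

Even this toy definition exposes the judgement calls: what counts as an "event", what threshold marks adoption, and whether the denominator should be all courses or only active ones — exactly the ambiguities that make cross-LMS comparison hard.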
Questions for new patterns
One of the aims of the project is to identify new indicators or patterns that offer insight that might help inform actions around LMSs. The following is a list of potential questions that might identify new patterns.
- What external factors impact on LMS usage? What impact do they have?
This section of a post suggests moving beyond discipline differences to look at the norms and traditions that arise within specific institutional sub-units as a possible influence on LMS usage.
Questions arising from patterns
The following questions arise from the indicators, patterns or graphs that the project is generating. These indicators will, hopefully, highlight areas for further research.
What supplementary research methods are useful/necessary?
Analysis of LMS usage data can only help us identify interesting patterns. Generally, such analysis will not help develop good answers to the questions of why these patterns develop, what impacts they have on students and teachers, and how these lessons might be applied to improve LMS usage.
Patterns we’ve found interesting have generated the following questions:
- What different statistical analysis methods do we need to adopt to improve the value of the patterns?
- Which comes first? Do good grades lead to more participation on the LMS, or does more participation on the LMS lead to good grades?
- A course designed with an instructional designer results in “good” students participating on the LMS at significantly greater levels than the “bad” students. Why? Is this a good thing or a bad thing? Can this be harnessed as a lead indicator? What impacts might this have on course design?
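The "which comes first?" question above is, at heart, a correlation-versus-causation problem. As a minimal sketch (with entirely made-up per-student numbers), a Pearson correlation between LMS activity and grades can show an association but cannot, on its own, say anything about direction:

```python
# Hypothetical data: per-student LMS page views and final grades.
hits = [120, 45, 310, 80, 200, 15]
grades = [72, 55, 88, 60, 79, 40]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(hits, grades)
print(round(r, 2))
```

On this toy data the correlation is strongly positive, but a positive r is equally consistent with either causal direction (or a common cause such as motivation). Answering the question would need longitudinal or lagged data — participation measured early in the term against grades at the end, for instance.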
Using the knowledge generated
There’s almost no point to the project if the knowledge it generates is not used by people to improve the use of LMSs for teaching and learning (or perhaps to recognise the need for a different approach). Questions related to how this knowledge might be used (or not) include:
- Who are the different stakeholders who might benefit from this information?
Often the focus is on management or teaching academics, while students and support staff (e.g. instructional designers) are ignored.
- What type of information would each of the different stakeholders find useful?
- What is the best way to make this information available to the stakeholders?
A common response from IT folk is data mining/business intelligence tools. While these tools are designed for this type of task, they are not integrated into the daily practice of most stakeholders.