Getting Started with Data in Teaching and Learning

In the teaching and curriculum space, there's a lot of data available for evaluating and making decisions about Subject design and teaching practices. In fact, there's so much data available, in different forms and at different levels of detail, that it is easy to get overwhelmed or simply lost, not knowing where to start or where to go for help.

There are many different types of data that you can use for evidencing your practice, informing curriculum enhancement, or benchmarking, and there are also many different ways to gather data. To get a fuller picture of what data and methods are available, you should review the Evidencing your Practice(opens in a new window) page. You might also want to visit the Peer Review @ Western(opens in a new window) page, as well as the How to Research your Teaching(opens in a new window) and Action Research in Teaching and Learning(opens in a new window) pages.

The data outlined below is quantitative data that should be readily available to you as a Subject Coordinator. But there is a much wider range of data and evidence you can draw upon.

The "Big Four" Data Sets

Assessment data. Student grade data. Student feedback data. Learning analytics. All are related in some way, but all different, telling us something different about student engagement or outcomes.

Why should you care about any of this data? Well, this data is evidence of your practice, your innovation, your impact. This data also just tells you how you're going with your teaching, assessment, and curriculum design. You're always making changes in your Subject. What are the effects of those changes?

If you think your Student feedback (on Subject or Teaching) data is telling you enough, you are missing out on a wealth of data and the opportunities that come with collecting it: the power to test, evaluate, modify, re-test, and re-evaluate your practices and design choices over time, and to demonstrate the impact that those changes and innovations are having on your students.

Click through the dropdown elements below to dive a little deeper into each of the four data areas related to Subject and curriculum design.

Assessment Data

Assessment data reflects the outcomes of student assessments (obviously). But you might want to test the impact of changes to a particular assessment task over time.

For instance, Clause 30 of the Assessment Policy(opens in a new window) states:

Early, formative assessment tasks should be included up to the end of week 4 or earlier in all Level 1 subjects to help identify students who are not engaging or who may need additional support. Feedback on these assessments should be provided to the students before subsequent assessments are due to be submitted. Ideally this feedback should be provided before the census date for the term, but this may not always be possible, especially for short terms.

This requirement for early formative assessments appears four times throughout the Policy (Clauses 4(k), 14, 15, and 30).

If you are a new Subject Coordinator for a Level 1 Subject, for example, evaluating the effectiveness of this "early, formative assessment task", and any changes you make to it, is an ideal way to use Assessment-level data.

Some questions a Subject Coordinator might think about:

  1. What is the conversion rate of students completing this "early, formative assessment task" going on to submit and/or pass subsequent assessment tasks?
  2. Are more students going on to complete subsequent assessment tasks in the Subject since I introduced changes to the "early, formative assessment task"? Can a correlational case be made for this impact?
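The first question above is, at heart, a simple conversion-rate calculation. As a minimal sketch (all student records and field names here are hypothetical, purely for illustration, and not drawn from any Western system):

```python
# Hypothetical records: did each student submit the early, formative task,
# and did they go on to submit the subsequent assessment task?
students = [
    {"early_task": True,  "next_task": True},
    {"early_task": True,  "next_task": True},
    {"early_task": True,  "next_task": False},
    {"early_task": False, "next_task": False},
]

# Of the students who completed the early task...
completed_early = [s for s in students if s["early_task"]]

# ...what proportion submitted the subsequent task?
converted = [s for s in completed_early if s["next_task"]]
conversion_rate = len(converted) / len(completed_early)

print(f"Conversion rate: {conversion_rate:.0%}")
```

Comparing this rate across teaching sessions, before and after changes to the early task, is one way to build the correlational case described in the second question.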

Variations of these sorts of questions can be applied in myriad ways to all sorts of assessments.

Student Grade Data

Student Grade Data are the final results for students in a Subject (again, obvious). You'll be familiar with the sorts of descriptive statistics available from this data: averages, ranges, standard deviations, and the overall spread or distribution of grades.
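These descriptive statistics are quick to compute once you have the final marks. As a minimal sketch, using entirely hypothetical marks for a small cohort:

```python
import statistics

# Hypothetical final marks for a small Subject cohort (illustrative only)
marks = [45, 52, 58, 61, 64, 67, 70, 72, 78, 85]

mean_mark = statistics.mean(marks)    # average
mark_range = max(marks) - min(marks)  # spread between highest and lowest
std_dev = statistics.stdev(marks)     # sample standard deviation

print(f"Mean: {mean_mark:.1f}, Range: {mark_range}, SD: {std_dev:.1f}")
```

The same three numbers, tracked session by session, give you a simple first view of whether the distribution of grades is shifting over time.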

If you have been coordinating a large Subject for some time, student data at this level can illustrate the impact of curriculum and assessment design changes over time.

You may be a newly-appointed Coordinator for a Subject with a significant FNS (Fail - Non-Submit) rate that you wish to improve through various changes to assessment tasks, innovative learning activities, enhancements to your vUWS site, or some other curriculum or pedagogical improvement.

Some questions a Subject Coordinator might think about:

  1. What is the FNS rate of this Subject and how does it compare over time?
  2. Have there been any changes in the FNS rate of this Subject since I a) took over the Subject, and/or b) made specific changes to the Subject in an attempt to improve the FNS rate?
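Tracking the FNS rate over time is a straightforward calculation once you have enrolment and grade counts for each year. A minimal sketch, using entirely hypothetical numbers:

```python
# Hypothetical counts per year: total enrolled students and FNS grades
results_by_year = {
    2021: {"enrolled": 200, "fns": 30},
    2022: {"enrolled": 210, "fns": 27},  # e.g. new Coordinator takes over
    2023: {"enrolled": 190, "fns": 17},  # e.g. assessment redesign introduced
}

# FNS rate for each year: FNS grades as a proportion of enrolments
fns_rates = {
    year: counts["fns"] / counts["enrolled"]
    for year, counts in results_by_year.items()
}

for year, rate in fns_rates.items():
    print(f"{year}: FNS rate {rate:.1%}")
```

Annotating the years in which you took over or introduced specific changes, as in the comments above, makes it easier to line the rate up against your interventions.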

Are there other rates or ratios that can help illustrate the impact of changes you've made over time in your Subject?

Student Feedback Data

Student Feedback Data comes in two forms: qualitative (comments) and quantitative (aggregate ratings). Qualitative data in the form of student comments can be useful for exemplifying the impacts of your teaching practice and innovations. The same can be said of unsolicited student emails that you receive after the Subject is concluded--sometimes even years after the student has graduated!

Each teaching session, for each Subject you teach, you are asked to survey your students with two surveys: one for Teaching and one for the Subject. These surveys also provide comparison with School and Institution wide results for the same teaching session.

Response rates can fluctuate over time, meaning some years will have stronger samples than others.

Some questions a Tutor or Subject Coordinator might think about:

  1. What is the trend of my ratings over time? Am I maintaining a high rating from one teaching session to the next?
  2. What is the trend of my differential in comparison with my School and the University?
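The second question amounts to computing a differential (your rating minus the aggregate) for each teaching session and watching its trend. A minimal sketch, with entirely hypothetical ratings:

```python
# Hypothetical mean survey ratings (out of 5) per teaching session
sessions = ["2022-1", "2022-2", "2023-1", "2023-2"]
my_ratings = [4.1, 4.2, 4.4, 4.3]
school_ratings = [4.0, 4.0, 4.1, 4.1]

# Differential: my rating minus the School aggregate, per session
differentials = [
    round(mine - school, 2)
    for mine, school in zip(my_ratings, school_ratings)
]

print(dict(zip(sessions, differentials)))
```

A steady or growing positive differential, across several sessions, is more persuasive evidence than a single session's rating on its own.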

There are challenges with using Student Feedback Data, particularly in a comparative way, because of the range of different disciplines and Subjects included in aggregate results. Different Subjects and disciplines have different teaching and curriculum design demands and require contextualisation. Alongside the fluctuation of response rates, it is thus strongly advisable to use Student Feedback Data in conjunction with other data points to illustrate claims about your impact and innovation.

Learning Analytics

Learning Analytics comes in a variety of forms. This is not assessment or grade data; rather, it is data captured about student behaviour or interactions within the Learning Management System (LMS) vUWS and other related sites or tools, for instance Panopto, Zoom and H5P.

Learning analytics can provide valuable insight into student engagement at the level of activities, modules, and folders, which can supplement other data points, such as student assessment and grade data.

Where assessment and grade data can be viewed as summative data--data that comes at the end of the learning process--learning analytics can be viewed as formative or intermediate data that is captured prior to or in the lead-up to assessment tasks.

Learning analytics isn't the same as student assessment or grade data, but it complements those data, and can provide something of "the picture in between" the summative data points.

For instance, you might insert a series of quizzes into some of your Panopto recordings (a built-in function) to try and enhance engagement and retention of knowledge. You might then compare the data from this intervention to pre-intervention data from prior teaching sessions, and see whether any improved engagement with the video content is translating into improved assessment outcomes.
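A simple first pass at that comparison is to contrast average engagement before and after the intervention. As a minimal sketch, with entirely hypothetical weekly view counts:

```python
import statistics

# Hypothetical average weekly Panopto views per enrolled student,
# before and after embedding in-video quizzes (illustrative only)
pre_intervention = [0.8, 0.9, 0.7, 1.0, 0.8]   # prior teaching session
post_intervention = [1.1, 1.3, 1.2, 1.0, 1.4]  # current teaching session

pre_mean = statistics.mean(pre_intervention)
post_mean = statistics.mean(post_intervention)

print(f"Mean weekly views per student: {pre_mean:.2f} -> {post_mean:.2f}")
```

A difference in means like this is only suggestive, of course; the real test is whether it lines up with the assessment and grade data for the same cohorts.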

Some questions a Tutor or Subject Coordinator might think about:

  1. What are the most important content features of my vUWS site (video, modules, texts), and how do I track engagement with them?
  2. To what extent are students engaging in the Modules, or with H5P, Panopto, or other embedded content?
  3. What changes do I expect to see following my interventions, and how do those interventions relate to other elements of my Subject, such as assessment items?

Learning analytics data can be considered "live" data, in the sense that you can observe it week-to-week, rather than having to wait until the end of semester. You can see, for instance, the views and interactions in Panopto at any time. If engagement is low at a certain point you deem critical, you are then able to use a communications intervention to try and promote engagement.

You can learn more about Learning Analytics on the Online Teaching and Engagement Hub: Learning Analytics(opens in a new window)

Thinking Through the Intersections

It is important to think through the intersections of the above data sets as well. They are not isolated. Together, these data points help to tell the story of your teaching and curriculum activities.

These are not the only data you should be thinking about--or thinking with--but they are the most logical starting point.

Interventions and Metrics

Having read through and reflected on the above examples, you might be thinking about where you could start. The above data points are not collected in a single location. So you will need to go digging.

Student Grade Data and Student Feedback Data are probably the most accessible to you.

If you've been coordinating a Subject for a few years, these data sets would be a good place to start.

Competitive Intelligence and Analytics(opens in a new window) holds this data, so you need to request it from them. But once you've got it, some retrospective analysis is called for. If you haven't planned or evaluated interventions in your Subject before, all you can really do is look at the historical patterns. Do you have a high FNS rate? How does your SFU rating on, say, the Assessment question (SFU Item 3) compare over time and against the School and the Institution?

There are, of course, other ways we identify areas for improvement in our curriculum design or teaching practice--student comments and our own intuition are valid methods. But finding a metric to demonstrate impact is an important part of quality enhancement and evidencing your practice.

Once you've analysed and reflected on your data, identifying potential areas for improvement, you can pivot towards a more conscious and deliberate approach. What is your intervention? Why is it needed? What do you expect to see in the data?

A more conscious and deliberate approach changes the way we think about our data: we are trying to change the data that we will eventually collect; we aren't just collecting it and looking to see what happened. The intervention is the key.

Learning Futures

If you would like to explore the prospect of Learning Futures running a workshop on this topic, you can submit your interest via WesternNow: Customised professional learning workshops to support teaching and learning(opens in a new window)