Friday, January 14, 2011

LAK11 Week 1: Presentation - John Fritz


(Links to the Elluminate session, and the MP3 and slides of that session)

John’s presentation is (as he himself admits) biased toward the LMS/CMS, and much of the talk centered on students’ use of Blackboard.  First let me say that my interest and focus is NOT at the CMS/LMS level, but at the level of individual courses: that is, using analytics to inform and improve instructional design, assessment, and evaluation.  It is interesting to note that one of the published articles John referred to, Mining LMS data to develop an “early warning system” for educators: A proof of concept* (Macfadyen and Dawson, Computers & Education, February 2010), had the following to say in its conclusion (end of section 4.1):
  (*this link is to BYU’s online library – authentication needed to access)
"In summary, this exploratory work has shown that some (but not all) LMS variables are useful predictors of student achievement in an LMS-supported course, but also that the predictive utility of many variables is dependent upon course site design and pedagogical goals. An important finding is that knowledge of actual course design and instructor intentions is critical in determining which variables can meaningfully represent student effort or activity, and which should be excluded. …The initial findings presented in this paper strongly indicate, however, that in order to offer instructors meaningful indicators of student performance, dashboard-style visualizations of student tracking data for a selected course must be highly customizable to reflect pedagogical intent." (emphasis mine)

This confirms to me that data at the LMS/CMS level, though possibly insightful in some ways, must by definition be so generic that it can really only serve as an attempt to measure one dimension of engagement, which of course should correlate positively with course grades.  It cannot, however, tell us much about whether the instructional designer (in these cases, usually the course instructor) was successful in their intent, or how their design might be improved.  And what about all of the fully online courses that may not be under the umbrella of an LMS/CMS?
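To make that point concrete, here is a minimal sketch (my own illustration, not anything from John’s talk or from Macfadyen and Dawson’s actual model) of why the predictive utility of LMS variables depends on course design. Every feature name and number below is invented: I simply simulate a course where the discussion forum carries the pedagogical weight, so forum activity predicts grades while raw logins and file views add little.

```python
# A minimal sketch (hypothetical data, not the paper's model):
# regressing final grades on made-up LMS tracking counts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_students = 200

# Hypothetical per-student LMS activity counts (all invented)
logins = rng.poisson(40, n_students)
forum_posts = rng.poisson(12, n_students)
files_viewed = rng.poisson(60, n_students)

# Simulate grades that depend on forum posts only -- i.e., in this
# imagined course the discussion board carries the pedagogical
# weight, echoing the paper's point that which variables matter
# depends on course design and instructor intent.
grades = 50 + 2.5 * forum_posts + rng.normal(0, 5, n_students)

X = np.column_stack([logins, forum_posts, files_viewed])
model = LinearRegression().fit(X, grades)
print(dict(zip(["logins", "forum_posts", "files_viewed"],
               model.coef_.round(2))))
# In a course designed around readings rather than discussion, the
# "useful" predictor set would look entirely different -- which is
# exactly why a generic, one-size-fits-all dashboard falls short.
```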

One other point I would like to make is about John’s definition of academic analytics.  (By the way, are academic analytics inclusive of learning analytics, or do they merely intersect? Venn diagram, anyone?  And while you’re at it, please include educational data mining. Is that the superset?  Or are these all synonyms for each other?) John states: "academic analytics can be used to profile and even predict students who may be at risk, by analyzing demographic and performance data of former students" and then explains this by comparing it to what Amazon does with its “recommender system”.  I think there is a subtle difference here that we need to notice.  All of John’s presentation (and in fairness much of the literature) emphasizes using this LMS data to identify at-risk students, i.e. those who are failing, or soon to fail if their trajectory does not change, and then intervening with these failing students in some way to help them see and adjust that trajectory.  Though that may be a worthy goal, it is NOT what Amazon does.  Amazon does not seek to identify customers who are at risk of “failing” (i.e. NOT purchasing, or NOT returning); it seeks to identify and encourage already successful customers to become “better” customers (i.e. purchase more, return more) by showing them how other “successful customers” most like them behaved.  In fact, I think if Amazon were to attempt to follow the CMA (Check My Activity) model, it would fail miserably at converting failures (non-customers) into successes (great customers). I point this out because I think we need to re-examine our assumptions if we think the Amazon model and the academic-analytics failure-intervention model are the same (or even close!), much less whether they should be.  The toy sketch below tries to make the contrast concrete.
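This is purely my own illustration, with entirely made-up students and numbers: an Amazon-style nudge that finds a student’s nearest successful peer and suggests the activity where that peer out-does them, versus the failure-intervention approach that simply flags whoever falls below a passing bar.

```python
# A toy contrast (my illustration, not from John's talk; all data
# is hypothetical). The "Amazon-style" nudge surfaces what similar
# *successful* peers did; the failure-intervention model just flags
# who is below a bar.
import numpy as np

# rows: students; cols: hypothetical activity counts
activities = ["forum_posts", "quizzes_taken", "wiki_edits"]
usage = np.array([
    [12, 5, 0],   # student A: active, succeeding
    [10, 6, 8],   # student B: similar to A, but also edits the wiki
    [1,  0, 0],   # student C: barely engaged
])
grades = np.array([88, 91, 52])

def recommend_next_activity(student, usage, grades, passing=70):
    """Amazon-style: among successful peers, find the one most like
    this student and suggest the activity where that peer does more."""
    peers = [i for i in range(len(usage))
             if grades[i] >= passing and i != student]
    nearest = min(peers,
                  key=lambda i: np.linalg.norm(usage[i] - usage[student]))
    gap = usage[nearest] - usage[student]
    return activities[int(np.argmax(gap))]

def flag_at_risk(grades, passing=70):
    """Failure-intervention style: just list who is below the bar."""
    return [i for i, g in enumerate(grades) if g < passing]

print(recommend_next_activity(0, usage, grades))  # nudges A toward wiki_edits
print(flag_at_risk(grades))                       # [2] -- student C
```

Note that the recommender never even looks at the struggling student: it makes good students better, which is precisely why it is a poor template for rescuing failing ones.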

Two more quick but interesting observations from John’s well-done presentation to leave you thinking:

1) Rated highest by students as the most useful CMS application? An online gradebook!  Think about it: of all the CMS features, students want easily interpretable, comparable-across-all-courses analytics!  [Of course this could also be interpreted as students being more interested in their grades than in their actual learning, but I would prefer to naively apply (right or wrong) the former interpretation. :-)]

2) I find it telling that John gained his insights about CMA (Check My Activity) NOT from a Blackboard report, but from Google Analytics dashboards! (Check out slide #34.)


Original T-Shirt Graphic for LAK11 Week 1: Presentation post courtesy kris krüg, modified by M.R. McEwen