This is the blog for the SGC4L project, funded through the JISC Assessment and Feedback programme and led by the Physics Education Research Group at the University of Edinburgh.

As well as this blog, the project wiki contains documents and information on the progress, development and dissemination activities associated with the project.

Friday, August 3, 2012

PeerWise webinar in the JISC Assessment and Feedback series

On 23 July 2012 we gave a webinar, "Enhancing engagement, feedback and performance: the SGC4L project", as part of the JISC Assessment and Feedback series. As people who have used PeerWise know, by far the best way to appreciate its potential is to use it yourself, so the webinar included an interactive session offering participants the opportunity to get first-hand experience of interacting with others via a specially created PeerWise course area. This was prefaced by an initial presentation outlining what PeerWise is and how to access the course space. We concluded the webinar with a presentation and discussion of how the system is being embedded within courses at Edinburgh and some of the findings from the SGC4L project. You can watch the webinar at the link below:


Wednesday, March 7, 2012

Qualitative categorization

Our two final year undergraduate project students are making great progress in categorising samples of the PeerWise questions created by students in our first year Physics class.

They've developed and refined a set of dimensions to be captured from each student-authored question, as follows:

Clarity of Question:
0 – Unclear (including spelling & grammar that make the question unclear)
1 – Clear

Feasible Distractors:
0 – None
1 – At least 2 but not all
2 – All Feasible

Quality of Explanation:
0 – Missing
1 – Inadequate or wrong
2 – Minimal/unclear
3 – Good/Detailed
4 – Excellent (describes the physics thoroughly, remarks on the plausibility of the answer, uses appropriate diagrams, perhaps explains why you would have obtained the distractors)

Quality of Author Comments:
0 – None
1 – Irrelevant
2 – Relevant

Correctness of Question:
0 – Obviously Incorrect
1 – Most Likely Correct

Recognised as Incorrect: 
0 – N/A or not recognised
1 – Recognised as incorrect by students (or disagreement with author)

Diagrams/Pictures:
0 – None
1 – Contextual picture but not relevant
2 – Relevant diagram or picture

Plagiarism:
0 – Potentially plagiarised
1 – Not obviously plagiarised

Context of Question: 
0 – None (formulas, recalling info)
1 – Irrelevant or extraneous context (entertaining, imaginary)
2 – Physics (frictionless, idealized situation)
3 – Relevant real-world context (applicable to daily situations, e.g. cars on racetracks)

Revised Taxonomy:
1 – Remember, Recognise or Recall OR just plugging in numbers
2 – Understand, Interpret or predict (No calculation needed, understanding 3rd law for example)
3 – Apply, Implement or Calculate (1 step calculation)
4 – Analyse, differentiate or organise (multi-step calculation, higher analysis)
5 – Evaluate, Assess or Rank (evaluating various options and assessing their validity)
6 – Create, Combine or Produce (Asked to combine various areas of physics, need to get a structure right to solve whole problem)
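
To make the scheme above concrete, here is a minimal sketch of it as a lookup table with a small validation helper. This is purely illustrative — the students' actual coding tools aren't described in the post, and the dimension names (`clarity`, `taxonomy`, etc.) and the `validate_scores` helper are invented for this example; only the allowed score values come from the rubric above.

```python
# The coding scheme as a mapping from dimension to its set of allowed scores.
# Dimension names are hypothetical shorthand for the categories listed above.
RUBRIC = {
    "clarity": {0, 1},
    "distractors": {0, 1, 2},
    "explanation": {0, 1, 2, 3, 4},
    "author_comments": {0, 1, 2},
    "correctness": {0, 1},
    "recognised_incorrect": {0, 1},
    "diagram": {0, 1, 2},
    "plagiarism": {0, 1},
    "context": {0, 1, 2, 3},
    "taxonomy": {1, 2, 3, 4, 5, 6},
}

def validate_scores(scores):
    """Check a rater's score sheet against the allowed values per dimension."""
    for dimension, value in scores.items():
        if dimension not in RUBRIC:
            raise ValueError(f"unknown dimension: {dimension!r}")
        if value not in RUBRIC[dimension]:
            raise ValueError(f"{value} is not a valid score for {dimension!r}")
    return True
```

A rater's judgement on a question would then be a small dictionary, e.g. `{"clarity": 1, "explanation": 3, "taxonomy": 4}`, which keeps each coded question quick to record while preserving all the dimensions.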

This last category maps the question onto levels in the cognitive domain of Bloom's (revised) taxonomy. After doing some tests to establish an acceptable level of inter-rater reliability, we've let the two students loose on their own sets of questions.
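
The post doesn't say which agreement statistic was used for the inter-rater reliability check, but for a categorical scheme like this one a common choice is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, assuming two raters scoring the same set of questions on one dimension:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    # Observed proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance; thresholds for "acceptable" vary, but values above roughly 0.6 to 0.7 are commonly treated as substantial agreement.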

The analysis is still in progress, but early indications are that, in contrast to a recently published study (Denny and Bottomley, "A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions", DOI: 10.1002/bmb.20526), relatively few of our questions inhabit the lower reaches of this scale, with most in categories 3 and 4 and non-negligible numbers in the highest categories. I'll post more results when we have them.

Here's a nice example of a question classified at level 5 in the taxonomy. 

Wednesday, January 25, 2012

Getting started with biology students

The second year biology (genetics) students are now one week into the course and, after only three lectures, there are already 22 questions mounted on the course PeerWise site. Moreover, all but the two most recent questions have attracted multiple comments. Almost all of the questions involve digestion and re-synthesis of lecture material. Perhaps this rapid and encouraging uptake is due, in part, to better scaffolding provided during the introduction of the course compared with last year. A log of the process and information for setting this task for this class (GGA), including the introductory PowerPoint slides, is available to view on request.

Thursday, January 19, 2012

Thinking qualitatively

We're beginning to think about how to assess the quality of the questions that students have been submitting in their courses, and we have recruited a couple of final-year Honours project students in Physics to help us out.

A starting point is to generate some ideas for the sort of classification scheme we might use. We need to bear in mind that there are several HUNDRED questions for each course, so the classification has to be coarse enough to be done reasonably quickly, but fine enough to capture the essence of the questions across different dimensions. Here are two such dimensions that we might want to consider:

a. A classification based on the cognitive challenge of the questions. We might want to think about mapping onto the various levels of Bloom's Taxonomy, with knowledge and recall at the bottom of the hierarchy, and so on.

b. Some sort of measure of 'physics sophistication'. Is this question a straightforward application of a single physics principle (e.g. conservation of energy)? And so on.

There's a recent paper by Paul Denny and colleagues on this topic (DOI: 10.1002/bmb.20526) that should prove a useful starting point for us.

Watch this space.