
Understanding library users to support change

I went to several sessions at LILAC which focused on ways to understand library users. I am very interested in this – my PhD research is going to focus on information behaviour because I really want to better understand the students I work with! So I was keen to see the methods and insights that other practitioners had developed. Understanding our library users is a necessary first step to making any changes to better support them.

There were two sessions that focused on reading: Elizabeth Brookbank’s talk on leisure reading in academic libraries, and Jane Secker and Elizabeth Tilly’s presentation on academic reading during the pandemic. Both gave an illuminating view of something that is usually quite hidden: we know how many books are checked out, but we don’t know whether or how students are reading them – reading is usually a fairly private activity!

Elizabeth found that 94% of students interviewed said they do read for fun! Most preferred to read in print, usually exclusively. She found some interesting things about genre and recommendation preferences, which I am planning to incorporate into my work on our leisure reading collection at Huddersfield.

Jane and Elizabeth’s presentation on academic reading also found a heavy preference for print. This raises the question: why are most libraries pursuing a digital-first strategy when it doesn’t suit most students’ reading preferences? It is a bit more complicated than that, though: students’ choices of when to read in print and when to read online are context-dependent. Students may prefer online when looking for factual information or reading shorter extracts, but print for long-form, in-depth reading.

We also discussed how we support students with academic reading. Print usage has been declining steadily for a long time, but ebook use hasn’t risen to match, so it’s likely that students are simply reading less overall. If they come to university without much experience of academic reading, can we support them by, for example, offering training on how to use an ebook, and on technology that makes reading online more comfortable (e.g. coloured screen overlays, read-aloud software)?

One of my favourite sessions from the conference was Ellen Nierenberg’s presentation on the results of her almost-complete doctoral study into how students develop as information literate individuals. Ellen is conducting a longitudinal study into the relationship between knowledge, skills and interest (or knowing, doing and feeling), and how this changes over time. It was fascinating to hear her speak, as I am considering a longitudinal approach for my own PhD research! One key point I found very interesting was that students’ level of interest in being information literate is a stronger motivating factor than their perceived need to learn. In other words, intrinsic motivation is more powerful than extrinsic motivation.

On a practical level, I really enjoyed Sheila Webber and Pam McKinney’s workshop on using Theory of Change to evaluate information literacy initiatives (and I am not just saying that because they are my PhD supervisors and might read this!). Theory of Change is an approach to planning and evaluating projects that creates links between goals, activities, and outcomes. This was a hands-on workshop where we had a brief go at completing a Theory of Change framework for a project. Using the table headings below, the idea is that you start with an aspiration for the long-term impact (1), then articulate the current situation (2), then work backwards through the other columns to align outcomes, activities, and enablers. I can see this approach being useful for thinking projects through before they start, and for having an evaluation framework in place from the beginning.

Current situation (2)
- What is the current situation that has prompted this project?
- What needs to change?
- Who are the stakeholders and what is their involvement in the project?

Enablers (5)
- What help and support is needed to make sure each process happens?

Activities/Processes (4)
- What needs to happen to achieve each of the outcomes listed?
- What does each stakeholder group need to do?

Outcomes (3)
- What are the achievable concrete outcomes that can be measured at the end of the project?
- What will be different for each stakeholder (group)?

Longer term impact (1)
- “Blue skies” thinking – what do you hope to ultimately achieve through this project?
- In 5-10 years’ time, what will be different?
Table 1: Theory of Change framework, as shared by Sheila & Pam in their workshop

I also attended several practitioner research sessions, to see what kinds of user research and evaluation were taking place at other institutions. I enjoyed Ruth Jenkins and Christine Love-Rodgers’ account of how they are evaluating online student appointments at the University of Edinburgh – we are doing the same at Huddersfield at the moment, so it was interesting to compare approaches. At Huddersfield we ask questions (or send an online survey if the student doesn’t have time to answer questions) immediately after each appointment, to find out things like how they discovered the appointment service and how easy or difficult they found booking the appointment and joining it on Teams. At Edinburgh they are trying to learn more about the actual impact of the appointments, so they send the survey several weeks after the appointment rather than immediately. I was curious about how this affected their response rate; unfortunately they didn’t have data on this, but it looked like they’d collected a decent number of responses!

Catherine Peppard and Alan Chalkley from the Royal College of Nursing described a small-scale UX study into the effectiveness of their online support guides. Interestingly, they found a real mix of preferences: the guides are produced as videos, PDFs, and interactive tutorials, and participants were split over which format they preferred. All said they wanted to find the information they needed quickly, but which type of guide best supported that differed from person to person. A good reminder of the need to produce guides in multiple formats!

The final session I attended was Rebecca Maniates, from Singapore Management University, talking about how she evaluated her students’ responses to the One Minute Paper (OMP) exercise. Typically an OMP is completed at the end of a class and asks two questions: what did you learn, and what questions do you have? (Wording may vary!) Rebecca found that most student responses showed surface learning, but some included more reflective comments and questions. She is using this evaluation to develop her teaching and to find ways to encourage deeper reflection.
