Dr Thomas Illingworth (Lecturer)
Dr Elizabeth Marsh (Senior Lecturer)
Overview of the context of the project/procedure:
As an institution, the university had to adjust rapidly to the global lockdowns of the COVID-19 pandemic and to the provision of online learning. We therefore found ourselves in a position where hard-of-hearing students were less able to engage actively with live sessions due to accessibility issues. While the use of video during remote sessions supported students who lip-read, the increased bandwidth that video requires, combined with the local internet connections of participants or the academic, often made engaging with the session frustrating and challenging. This manifested as reduced video quality (preventing live lip-reading) and/or audio falling out of sync with the video. During the initial period of the pandemic education response, auto captions were not yet common practice in teaching. Informal student feedback (particularly from hard-of-hearing students) drove this change to improve accessibility.
The approach taken:
Incorporation of auto captions within Microsoft PowerPoint for live synchronous sessions delivered via Blackboard Collaborate.
What were the steps and processes that had to be put in place to implement this approach?
- Installing the auto caption plugin for Microsoft PowerPoint and enabling auto captioning during the session.
- Sharing the PowerPoint presentation via screen share rather than embedding it into Blackboard Collaborate, so that the live captions remain visible to the students.
What worked well?
From student feedback:
- “On the whole I would say that the university and staff did very well to try and accommodate my needs via the virtual platform, using subtitles and cameras where possible. Of course, the virtual learning was still more difficult due to technology feedback, volume and background noise.”
Are there any challenges or limitations to this approach?
- The approach requires multiple screens; because you are “presenting” through PowerPoint, the slideshow occupies the whole screen, so a second computer or device linked to the session is needed to monitor the chat and student engagement.
- From student feedback: “they could be quite inaccurate and often missed key words or whole sentences”. The auto captions also struggled with the “technical language of the subject”.
What have you learnt from undertaking this approach? Is there anything you would do differently next time?
- Auto captioning has a limited ability to transcribe spoken material correctly; while it is mostly accurate, accents and/or scientific terminology can sometimes produce some interesting captions.
- Microphone quality is also critical; for this approach to work, staff need to be provided with good-quality headsets with built-in microphones, which produce more accurate auto captions as well as better audio overall.
- Soundproofing to avoid ambient noise would be ideal; however, this is not practical.
Advice for others:
- Ensure you have an adequate microphone prior to starting.
- Employ the use of a second screen.
- Practise before starting to iron out any problems.
- Ensure you have a good internet connection.
- Try to be clear with the language used; when using technical language, ensure that you do not “rush” the words, as this can trip up the auto captioning.