
dc.creator: Howard, Samuel B.
dc.date.accessioned: 2018-06-01T20:40:00Z
dc.date.available: 2018-06-01T20:40:00Z
dc.date.created: 2018-05
dc.date.issued: 2018-05
dc.date.submitted: May 2018
dc.identifier.uri: http://hdl.handle.net/2346/73819
dc.description.abstract: This study explores the comparative usability of multimodal feedback, specifically the relative efficiency, effectiveness, and satisfaction advantages and disadvantages of audio, screencast, and written feedback (or instructor response) for students, as well as the level of satisfaction instructors experience when giving feedback in each mode. To conduct this investigation, the study tested four hypotheses grounded in prior research: (H1) SUS (System Usability Scale) and product reaction survey results will show that students prefer screencast feedback over audio feedback and audio feedback over written feedback; (H2) screencast feedback will show the highest levels of effectiveness for students, and both screencast and audio feedback will show higher levels of effectiveness than written feedback; (H3) screencast feedback will show higher levels of efficiency for students than audio feedback, but written feedback will show the highest levels of efficiency; and (H4) instructors will express higher satisfaction when giving written feedback than when giving audio or screencast feedback. To test these hypotheses, 20 students were recruited to use one of the three modes of feedback and were observed, surveyed, and retrospectively interviewed about their experiences. In addition, four instructors were observed, surveyed, and retrospectively interviewed about their experiences giving feedback in one of the three modes. Results supported H1a (students prefer screencast feedback over audio feedback), but not H1b: according to SUS results, students expressed higher satisfaction with written feedback. Results for H2 were inconclusive as to whether screencast and audio feedback offer users a more effective experience than written feedback, though there were indicators of unique advantages offered by each mode. H3a was supported, indicating efficiency advantages for students using screencast feedback over audio feedback, but it remains inconclusive whether screencast feedback offers definitive efficiency advantages over written feedback. H4 was well supported: despite experiencing some satisfaction and other usability advantages when using screencast or audio feedback, instructors still prefer written feedback. The study also found that both students and instructors initiate or appear to cause user errors, often because students are unwilling to apply the feedback they receive or unaware of how to do so, and sometimes because the instructor provides too little detail or context. It further asserts that audio feedback does not appear to offer any distinct advantages or affordances over screencast feedback, though a combination of screencast and written feedback is likely to offer students a better usability experience given the advantages of both modes. In addition, the study found that instructors and students are working at cross-purposes in this user scenario: instructors resist adopting multimodal feedback methods even though students would prefer that they do so. The primary implications of this study are as follows: (1) given the apparent usability and learning advantages of screencast and other multimodal forms of feedback for students and instructors, more needs to be done to investigate how to overcome instructor resistance to adoption; (2) instructors need both to train their students to use and apply feedback effectively and to ensure the feedback they give is contextualized and actionable; and (3) instructors need to take advantage of the affordances of screencast and written feedback so that their students receive both more abundant, contextualized feedback and sentence-level feedback.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.subject: Instructor feedback
dc.subject: Instructor response
dc.subject: Written feedback
dc.subject: Written response
dc.subject: Comparative usability
dc.subject: Usability
dc.subject: Multimodal
dc.subject: Multimodal feedback
dc.subject: Audio feedback
dc.subject: Audio response
dc.subject: Screencast feedback
dc.subject: Screencast response
dc.subject: Screencapture
dc.subject: Screen capture
dc.subject: Digital response
dc.subject: Pedagogy
dc.subject: Andragogy
dc.subject: Student experience
dc.subject: Instructor experience
dc.subject: Adoption
dc.subject: Technology
dc.subject: Complex ecosystem
dc.subject: Composition
dc.subject: Technical communication
dc.subject: Classroom
dc.subject: Asynchronous
dc.subject: Satisfaction
dc.subject: Efficiency
dc.subject: Effectiveness
dc.subject: Emotion
dc.subject: Heuristics
dc.subject: Product reaction survey
dc.subject: System usability scale
dc.subject: Retrospective interview
dc.subject: Hypothesis
dc.subject: User testing
dc.subject: User-centered design
dc.subject: UX design
dc.subject: LX design
dc.subject: Learning experience design
dc.subject: Instructional design
dc.title: The comparative usability of instructor feedback given in three modes: Screencast, audio, and written
dc.type: Dissertation
dc.date.updated: 2018-06-01T20:40:00Z
dc.type.material: text
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
thesis.degree.discipline: Technical Communication and Rhetoric
thesis.degree.grantor: Texas Tech University
thesis.degree.department: English
dc.contributor.committeeMember: Moore, Kristen
dc.contributor.committeeMember: Baake, Ken
dc.contributor.committeeMember: Jones, Keith S.
dc.rights.availability: Unrestricted.
dc.creator.orcid: 0000-0002-0937-8590

