The data collected for the study comprised the official documents of the MSc HQ program and interviews with the faculty.
1.1.1.1 Documents
The documents used for the evaluation included published articles and administrative documents of the program, along with the 2015 course syllabi.
1.1.1.2 Interviews
The faculty members who volunteered to participate in the study are referred to as participants in this evaluation. Course directors were invited to participate preferentially; if a course director was unable to participate, he or she could nominate a member of the teaching faculty to represent the course in the program evaluation.
An email (Appendix A) was sent to the selected participants containing the “Letter of Information” (Appendix B), the “Consent Form” (Appendix D) and a copy of the “Program Intended Learning Outcome” (Appendix E). These documents described the evaluation in detail and addressed issues related to participant confidentiality. A link to a Doodle poll was provided so that the instructors could select the most suitable time for the interview. Some participants were unable to fill the scheduling slots on the Doodle poll due to time constraints or technical difficulties, so the evaluator arranged a time and place with them through email correspondence. All interviews were conducted during June and July of 2015.
Semi-structured interviews with open-ended questions were designed for the evaluation. The semi-structured format gave the interviewer the opportunity to manage the conversation and to explore areas of interest in greater detail, including follow-up questions arising from the conversation that the evaluator had not anticipated before the interview. The interviews were structured around three themes: the intended learning outcomes of the course, course activities and course assessments. To keep the process consistent and credible, the evaluator followed an identical protocol for each interview. At the beginning of each interview, the evaluator described the purpose of the interview, explained the reason for selecting the participant and discussed confidentiality with the interviewee. Participants were given an opportunity to ask questions about the evaluation and were informed that they could withdraw from it at any time. In addition, each participant was asked to sign two copies of the consent letter, one for the interviewee and one for the evaluator. The interviews were recorded after obtaining the participants’ permission, and at the end of each interview the evaluator thanked the participant for their time. A total of 8 interviews were conducted, with an average duration of 27 minutes.
The interviews were transcribed by a professional transcription company for analysis. After receiving the transcribed data in Microsoft Word format, the evaluator rechecked the Word files while simultaneously listening to the recorded interviews. No transcription errors were found in the transcribed documents, except for three misspelled names, which were left uncorrected to preserve the individuals’ anonymity.
1.1.2 Drawing conclusions
In this step of the evaluation, the data were analyzed in two parts. In the first part, alignment and misalignment among the intended learning outcomes, course activities and course assessments were studied, and emergent course outcomes, emergent activities and emergent assessments were documented. In the second part, the ‘hidden gems’ of the program were identified: elements indicative of program excellence that were not documented formally.
1.1.2.1 Part 1: Individual course evaluations
The data collected from the office of the MSc HQ program and the transcribed interviews were segregated by course through the steps shown in Figure 2.3.
Figure 2.3: Part 1 steps of data analysis (ILO: intended learning outcome; CA: course activity; CAT: course assessment technique).
The codes generated from the transcripts were then classified as aligned or emergent by comparing each code with the ILOs, CAs and CATs from step 1. If a code matched a code from step 1, it was labeled as aligned and, depending on its nature, placed in the group of aligned learning outcomes (aLO), aligned course activities (aCA) or aligned course assessment techniques (aCAT). If a code did not match any code from step 1, it was labeled as emergent and, based on its nature, categorized as an emergent learning outcome (eLO), emergent course activity (eCA) or emergent course assessment technique (eCAT). A minimal sketch of this classification logic is given below.
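To make the matching rule concrete, the following is a minimal Python sketch of the aligned/emergent classification. It is an illustration only: the analysis in this evaluation was performed manually, and the function names, example codes and the exact matching rule (normalized string equality) are assumptions introduced here.

```python
# Minimal sketch of the aligned/emergent classification described above.
# The step-1 codes, the example interview codes and the matching rule
# (normalized string equality) are illustrative assumptions.

# Step-1 codes, grouped by nature: intended learning outcomes (ILO),
# course activities (CA) and course assessment techniques (CAT).
step1_codes = {
    "ILO": {"interpret quality data", "lead improvement teams"},
    "CA": {"group discussion", "case study"},
    "CAT": {"written exam", "reflective essay"},
}

def normalize(code: str) -> str:
    """Assumed normalization: lowercase and collapse whitespace."""
    return " ".join(code.lower().split())

def classify(code: str, nature: str) -> str:
    """Label an interview code as aligned (a...) or emergent (e...).

    `nature` is one of "LO", "CA" or "CAT"; learning-outcome codes
    are compared against the step-1 ILOs.
    """
    step1_key = "ILO" if nature == "LO" else nature
    known = {normalize(c) for c in step1_codes[step1_key]}
    prefix = "a" if normalize(code) in known else "e"
    return prefix + nature  # e.g. "aLO", "eCA", "eCAT"

# Usage with hypothetical interview codes:
for code, nature in [("Case study", "CA"), ("peer feedback", "CAT")]:
    print(code, "->", classify(code, nature))
# Case study -> aCA
# peer feedback -> eCAT
```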