Strategic Analysis for Accreditation in Saudi Arabia: A Cross-Case Analysis of KAU and PSU

Quality assurance strategies and standards have long been employed—at least since the dawn of the industrial age, which introduced mass operations and mass production to meet the needs of consumers. In any workplace, plans and strategies are systematically devised and guidelines are implemented in order to ensure the conformity of operations and systems to government or privately established and recognized standards. The high expectations applied to the educational sector create particularly strong demand for standardized quality assurance strategies. This has led to the establishment of accreditation institutions—external organizations that have developed and promulgated standards for quality and have evaluated colleges and universities according to these standards (ESIB, 2006; NAPCIS, 2012). In Saudi Arabia, the quality assurance directives of the NCAAA require educational institutions to establish dedicated quality assurance models. The present study was intended to assess the current state of quality assurance implementation at Saudi Arabian higher education institutions and to develop a normative quality assurance model based upon the inferences obtained, to be used as a guideline for future quality assurance systems in Saudi higher education institutions. For this purpose, qualitative research was conducted using two case studies—KAU and PSU—which were cross-analyzed and presented. The present article discusses these findings.


Methodology:
In the document analysis, relevant documents, including government policy documents, were collected and analyzed in order to address the first research question at the macro level. At the meso level, documents relating to quality assurance arrangements at PSU and KAU were also collected, including documents from quality centers, strategic plans, rules and regulations, and university bulletins, which provided empirical evidence. Semistructured interviews with faculty members and administrative staff at both institutions were used as a second source of data, applicable to all seven research questions.

Results:

CATEGORY 1: Current QA mechanisms
In this section, documentation and interview data regarding current arrangements for QA are compared across the two universities. The discussion focuses on current QA processes and mechanisms, in particular in teaching and learning. A summary of the findings appears in Table 1.

Major Processes in Quality Assurance
The adoption of QA standards at PSU illustrates the importance of institutional policies that fully support QA implementation. This institution's comprehensive QA framework, the LTQF, was designed to be a key factor providing a strong foundation for and facilitating QA implementation within PSU. The LTQF was developed to assist staff with QA; in addition to providing staff with clarity on the purposes of implementing QA, reflected within the framework as "achieving excellence in learning and teaching and producing quality graduates" (PSU, 2013, p. 2), it guides the institution in monitoring and maintaining its QA system.
Furthermore, the analysis of PSU documentation revealed that the process of QA implementation was embedded in the LTQF. In pursuit of its main purpose, establishing a high level of continuous improvement at the university, the policy outlines the five-step PIMRI process that each PQAC must complete on an annual basis. This cyclic process was also intended to assist top management in overseeing its several steps each year. According to the PSU LTQF, a detailed PLQAP is prepared by the PQAC of each academic unit at the beginning of each semester so that the quality of learning and teaching in all programs may be monitored and evaluated in terms of learning outcomes.
The interview data indicated that both micro-and meso-level participants at PSU believed that these QA processes were consistent with NCAAA guidelines. For example, QM1 PSU discussed the implementation of these processes as follows: At the beginning of uh-the beginning of uh or the start of any school year, the office, the AAPC coordinates with the administrative office in providing what things are to be submitted, what things are to be accomplished within a year. And this is in conjunction with the administrative policies.
Consistent with the LTQF, the interview data revealed that PSU participants at both levels mentioned the use of evaluations and assessments as part of different processes conducted by different parties in order to receive feedback and information from a variety of sources (students, faculty, and administration). Further, it was evident that PSU participants at the micro level shared the opinion that the use of feedback and assessment was important for internal improvement.
The data also revealed that the process is aligned with the hierarchy within assessment and the interrelatedness of learning outcome data. Another meaningful finding at PSU concerned the use of data in quality decisions, namely, the utilisation of the results of multiple assessments through the development of a statistical database. This was described by several participants as contributing to the QA process. According to QM1 PSU: We generate a database for the university making sure that our statistics are updated in terms of number of students, number of graduates, how many are in a six-month period and how many-what is the percentage of our graduates getting employed in the six months after graduation. All these basic statistics for the university, it's our office that generates these.
This kind of coherent QA implementation was crucially supported at the meso level, facilitating and enhancing micro-level awareness of QA.
In contrast to the situation at PSU as sketched above, the KAU documentation shows that QA policies there seem to be divided into segments. Moreover, policies to promote QA implementation were scattered across different units and, while they reflected the guidelines issued by the NCAAA related to QA at the institutional, programme, and course levels, did not explicitly indicate this in a central, clearly and thoroughly documented QA plan. Indeed, internal documentation of QA was generally lacking at KAU, although some departments did have certain procedures laid out. The investigation thus revealed that KAU had failed to lay strong foundations for the implementation of QA policies, and that there was an absence of a structured, comprehensive procedure and practical processes to promote QA implementation.
As the interview data show, KAU participants did explain that KAU used QA processes to achieve international as well as national accreditation. Both meso-and micro-level participants in fact acknowledged that external accreditation was the driving force behind departmental efforts to implement QA. However, as the data revealed, such QA processes were aimed at compliance rather than improvement, and had not actually been implemented.
Furthermore, KAU participants were not consistent in their beliefs regarding the use of student evaluations. Such feedback was mentioned by participants in some departments as an important support for the QA process, although their departments were not currently using such tools on a regular basis, that is, such feedback was only being gathered from programmes with graduated students. For example, AC1 KAU stated: As you know, the quality assurance program is dependent on graduated students as a measure of performance. This is important because we have only one program [that meets this standard] now, and the other programs are just beginning.
The KAU data also showed that both meso-and micro-level participants reported that the use of performance indicators is among the requirements of the QA process. However, they were commenting on the general KPIs in the KAU Strategic Plan, rather than those required by the NCAAA, of which they had no detailed knowledge.
In addition, the KAU data revealed an absence of programme and course-level reporting in the university's QA procedures. QM3 KAU reflected as follows on this issue: It should be emphasized that there is no program or course's report at all in the quality assurance procedures. This report is a self-assessment carried out by the teaching staff members, through the director of the program, in order to detect defects and advantages, on the basis on which they begin making [a] development plan so as to improve a program or a course.
Finally, the data revealed that the fragmented nature of the KAU QA process was largely due to the focus on external evaluation according to NCAAA requirements and the standards of international accreditation bodies. An effective assessment system remained absent at KAU, and feedback was not used consistently to ensure the integrity of the QA process; that is, rather than enhancing the QA system, information was used solely for ensuring compliance.

Mechanisms for High-Quality Learning and Teaching
With regard to curriculum development, both PSU and KAU adopted and followed the regulations of the Ministry of Higher Education. Both universities have formal procedures for newly developed courses and programmes. Approval depends on a strictly structured process, proceeding from Department Council to College Council. In both cases, the opinions of two external experts on curriculum are required. Whereas the Academic Council makes such decisions at PSU, KAU has a permanent committee in charge of decisions regarding curriculum matters. The relevant documentation from the institutions revealed that similarly clearly defined procedures for programme amendment are followed at both universities.
However, differences exist between PSU and KAU in terms of review of curriculum. At PSU, the Department Council can conduct minor reviews. In addition, PSU requires regular periodic reviews of academic programmes every three years. Regarding the evaluation of teaching and learning, the documents indicated that PSU uses benchmarking to survey current and related programs, identifying courses offered, pedagogical and evaluation approaches, and best practices in programme structure when developing a new programme. The interview data supported this understanding, as reflected by AC1 PSU: We gather that information and inculcate [it]. In contrast, periodic reviews of KAU's programmes are carried out at variable intervals (every two, three, four, or six years, depending on the programme). KAU policy states that the results of a survey of the opinions of faculty, students, and graduates should be incorporated into the plan. Furthermore, a recent additional requirement to ensure conformance with NCAAA requirements prior to submission to the Permanent Committee is AAU approval. However, the interview data revealed that actual curriculum development at KAU varies with the interest or mood of relevant faculty and the disciplinary environment. The potential for manipulation of the process was mentioned by AC2 KAU, who stated: We applied the old content and curriculum. There have been models, but the faculties' members were not adhering to them. Courses had been [approved] in the departments' councils, and then referred to the curriculum committee at the university, agreed upon and approved by the university's council as well.
In fact it is the faculty member who designs his own course.
Thus, the findings of this study suggest that there seems to be a significant gap between PSU and KAU in terms of curriculum reform and updating. It seems that PSU regularly revises curriculum, whereas curriculum revision at KAU varies by department depending on the level of faculty interest.
In relation to ensuring the quality of teaching and learning, a comparison of PSU and KAU again reveals differences. PSU has a clear policy outlining guidelines for evaluating QA implementation and defining the tools to be used. The documentation revealed that the AAPC evaluates the academic performance of programmes by means of a Student Experience Survey administered annually to students and the deans of the respective departments. Furthermore, a Course Evaluation Survey is also administered at the end of each semester under the e-register system of the Deanship for Admission and Registration. Finally, a Programme Evaluation Survey is completed by the student at the time of graduation. These evaluation techniques form part of the cycle of improvement applied to individual courses, programs, and institutional planning, according to policy. This policy serves as a key reference point for several quality evaluation tools used in evaluating programme and course learning outcomes.
The analysis of the PSU interview data highlighted the use of benchmarking as a method for achieving internal improvement, according to participants at both meso and micro levels. At the micro level, the use of quality indicators was also confirmed, and it was revealed that these indicators are evaluated in monthly department meetings. HS1 PSU gave a lengthy discussion of the different types of indicators employed: We use several types of indicators. Uh so for instance, um one indicator is student to staff ratio. As I said earlier, we are around 20-we have such a proportion that there are 20 students for each of our faculty member… Second thing is that uh we also assure some learning outcome-that um we make sure whether students are learning or not… We make this evaluation using different types of methods, for instance, we have a conventional assessment method like exams, assignments, uh class participation, uh and then a formal discussion, a project, et cetera. So the second indicator is-what we are looking for is to make sure that leveling takes place… Regarding research, they are also taking some actions as far as they can take regarding program design and delivery. Of course, we make a comprehensive assessment. Every month, we have [a] meeting. And in that meeting, we discuss the issues. So in those meetings, problems are reviewed and that's the main responsibility of management-to make sure that when problems happen, they are reviewed. So regarding management indicators, these are the things that we do.
In addition, PSU participants at both levels shared information on the use of outcomes-based assessment and emphasized its importance in enhancing students' learning. PSU participants put forward further opinions regarding the value of outcomes-based assessment, which allows them to carry out evaluation effectively, facilitating the composition of assessment assignments and tasks. From this perspective, D2 PSU reported: If you are preparing your exams or assessments based on Bloom's Taxonomy, it is something good. So we implemented that. We tried to implement that. And we found that the teachers were doing it more mechanically than with the real spirit because they had to fill certain forms. Say if they prepare an exam, they have to write, you know, whether this question is testing the knowledge or testing the-testing the analytical skills, testing uh the application side. You know, like that. They have-they have to point out and they have to balance it out.
In contrast to the above scenario encountered at PSU, the KAU data revealed that the university tended to rely on external review. No institutional policies or procedures existed for internal evaluation, although some departments used learning rubrics; thus, academic faculties received no guidance in the area of assessment. KAU documentation indicated that the university monitors and evaluates quality indicators including graduation rates, student persistence, satisfaction of students with academic services, and appropriateness of programmes and services, and an institutional assessment unit was established at KAU to ensure compliance with the minimum requirements of the NCAAA. Nevertheless, there was little evidence of any criteria being applied comprehensively at KAU, with the main exception being the fragmented publication of NCAAA standards on its website and a statement indicating adherence to these. As such, it appeared that consistent assessment strategies had not yet been developed and implemented either in individual academic departments or across the university.
Various learning assessment processes were followed at KAU to determine students' qualifications upon admission and promotion. These policies ensure that qualified students are able to graduate and excel in their respective fields, having received academic guidance from faculty on admission, qualifications, and academic standing. The assessment process begins with admission, continues on to monitor performance, and determines the promotion of students. However, the policy lacked any means of systematic verification of the results of internal evaluation and review, and as a result, the internal assessment process is diverse across disciplines and lacks organisation or coordination, as indicated by QM2 KAU, who stated:

Indeed, quality does exist [at KAU], but what we have been lacking is the documentation.
In the past people applied quality at their own discretion, and according to their own capabilities and perception.
Although the data captured some meso participants' reflections that curriculum development is a key theme to ensure effective teaching and learning, the perceptions of participants at the micro level reflected the opposite. Similarly, the absence of constructivist learning approaches was pointed out by participants at the micro level, and it was assumed that students' skills and understanding would not be reflected by their results due to the lack of constructive alignment. AC2 KAU discussed this issue as follows: When it comes to modern teaching methods that concentrate on constructivism, we have not yet employed this method, although we do believe the student must be the core of the educational process and must be a participant, listener, and commentator. They must think and conclude not only listen and receive. We have curriculum problems here [in that the curriculum has not been changed] for a very long time, for five years the same curriculum has remained.
Turning to student evaluations, KAU participants at both levels mentioned that they help ensure good QA in teaching and learning. Data from the micro level indicated that student evaluation was a common mechanism for assessing teaching and learning, but that misinterpretation was rife due to the absence of a coordinated QA system. When participants were asked to share their experiences related to students' learning assessment, they reported the use of tools focusing on alumni and student satisfaction. QM KAU illustrated the nature of the misunderstanding in the following:

We have done that through questionnaires and evaluation forms on the website and see to what extent the students, [to what extent] we see satisfaction [with their] degrees.
The KAU participants cannot be blamed for the lack of institutional procedures by which the quality of evaluation processes may be ensured; the evaluation process has clearly been detrimentally affected by the absence of an internal QA system, as reflected in the account provided by AC KAU, who elaborated on faculty responses to QA as follows: The missing link to strengthen quality was the existence of documentation and a system that enables us to prepare programs and courses' specifications, to undertake assessments, to measure graduates' outcomes … do you see? Thus, the quality in the higher education depends mainly on a system and the awareness of the procedures to be followed within the institutions.
Finally, participants from some departments mentioned that a feedback mechanism was being used, but not on a regular basis. Thus, rather than leading to any meaningful improvement, the use of feedback served merely as a symbol of compliance. The data also revealed the inadequate implementation of a QA mechanism for measuring learning, as assessment of students' learning for QA purposes was not mentioned by the majority of participants.

CATEGORY 3: Parties involved in QA
This section considers the data on who implemented quality assurance and, more broadly, who was involved in it at the case universities. A summary of the findings appears in Table 15.

Structure of the Quality Assurance System
The analysis revealed that an integrated set of organizational structures supporting QA was present at PSU. At the university level, the documentation revealed a consolidated policy describing the main structural components of the QA system. In addition to the TLC-SC, which oversees teaching and learning review and improvement processes, there was the AAPC, the first centre established in compliance with NCAAA requirements in 2005. The AAPC is staffed by three full-time employees, with the aim of providing coordination and support for QA processes. Its task is to foster QA processes across the entire organization. There was long-established awareness of the AAPC's role and engagement with it at the meso level, as reflected by QM1 PSU's description of the centre's function:

Since the establishment of this center, we look at these progress reports and uh identify certain areas that need to assistance, for example if there is difficulty in implementing or achieving the goal, we discuss it with the department chair or the director and together with the management and find some solutions to address the issues.
The micro-level participants also elaborated on the roles of the AAPC, confirming its supportive function, as for example AC1 PSU: The center provides adequate support services. Of course, we clearly understand that when it comes to quality assurance-they have a critical role to play. They play their role and they have their responsibility for quality assurance development.
At the departmental level, the PQACs are another significant structural component of QA at PSU. These departmental QA committees each consist of a number of academics, along with the relevant department chair. Among other tasks, PQACs implement QA within their academic units. Reflecting on the value of PQACs in supporting QA implementation, QM3 PSU described their function as follows:

So they are the ones who manage the quality for their program so it consists of three to five faculty members. So this is an additional task for them so this is uh-because AAPC support the institution so we can-because we don't have expertise for each program so our strategy is to ___ each department should come up with their committee members so they're the one who is managing the quality for their program. So it's very important that there's a staff teaching staff that will be involved in quality assurance.
AC2 PSU, also at the micro level, expressed employees' commitment to QA, stating that: Over here, every month, we have a meeting-a department meeting where everyone will come and discuss progress and we'll discuss everything.
The interview data thus revealed that most PSU participants, at both levels, understood the roles and responsibilities of the QA structure. The AAPC was found to promote and strengthen QA, to play an important role facilitating the process of QA implementation at PSU, and to create and encourage a good quality environment within all departments by training academic and administrative staff and fostering a quality culture across departments.
In contrast, at KAU, the AAU, established in response to the national QA standards drafted in 2005 to oversee QA at the university level, has only one full-time professor in charge of coordinating activities for external accreditation. The AAU falls under the Vice-Dean for Development, who is accountable to the VPD. Reviewed KAU documentation indicated that the AAU played important roles at the departmental level, namely organising QA implementation and managing quality processes.
It seems that KAU's QA structure lacks the required adequacy and efficiency to effectively promote QA. In keeping with this view, QM3 KAU pointed out the lack of QA committees at departmental level, which he regarded as a result of inefficiency on the part of the head of the AAU unit: There is a big load on the chief of the quality unit and the chief of the strategic planning unit, so we can say that there is no cooperation….
At the micro level, the KAU participants were well aware of the units that existed to ensure accreditation, but had the impression that they were not concerned with academic performance. AC4 KAU shared his views regarding these units, mentioning the lack of an adequate QA mechanism within KAU, which led to the focus on meeting accreditation and filling out forms:

I do not think that there is that level of participation they talk about with many people involved, no, there are Steering Committees that do the work and they may need somebody or some information, but it is not a proper process. The problem is that there are people who are involved and know what is going on and there are those who are not involved.
It should be borne in mind that during the period of data collection by this study, KAU was in the process of reframing its QA structure. A range of units were being set up to supervise departments. The Administration of Assessment and Evaluation Department (AAED) was established to take responsibility for monitoring QA functions. The AAED is directly controlled by the VPD, and is divided into five units, namely the Performance Indicator and Benchmarking Unit; the Consulting, Research, and Scientific Services Unit; the Designing and Reviewing Electronic Questionnaires Unit; the Analysing Data and Reports Unit; and the Administrative, Financial Affairs, and Workshops Unit. The AAED has developed the Evaluation and Quality Assurance of University Performance (EQAUP) as one potentially effective QA measure.

Stakeholder Involvement
At the meso and micro levels at PSU, multiple stakeholders were identified as involved in QA through various committees, departments, and leadership roles. Although some initial uncertainty and resistance to QA was reported among stakeholders, the data suggested that implementation of QA was ultimately achieved through cooperation among multiple stakeholders on its formulation, assessment, and implementation within departments and across the institution. For example, AC3 PSU said:

We are involved. If I would know this thing-that ___ like this one so we are involved in all this ___ like I said the workshops were conducted. Faculty was involved in that. Feedback was given. And then implementation comes, all are involved. One person, one team cannot achieve this kind of a target. All the staff-not teachers, but our secretary and staff [too], they were involved because it's the quality. And quality is for everyone. It's not for like one department. Quality means the process. Everyone should be involved in it. So staff was fully involved.
In contrast to this scenario at PSU, KAU reportedly relied on the professional expertise of external stakeholders to determine quality standards, leading to centralization of QA strategies. As such, it was not surprising to discover that participants saw themselves as having limited capacity to participate in QA activities. Participant QM1 KAU mentioned the appointment of relevant people to limited terms as one reason for this limitation of involvement: Committees in the departments aren't permanent; rather they are formed just in case there is a need. For example, during the completion of National Commission models of the educational curricula and programs, there was a decision by the university Vice-President for Development that each department will have work to accomplish the duties so that Head of Department can conduct follow-up work.
Another major finding was related to the degree and nature of student participation in QA. At PSU, students were considered to be vital stakeholders, according to the document analysis. Their involvement was clear from their integration into the QA governance structure and their involvement in assessing quality in a range of areas through the USC. According to PSU documents, the USC is embedded in university governance and considered the official representative body of students, with officers chosen by student representatives from all departments to ensure appropriate follow-up to student issues where required. Because PSU students contribute positively in this way to judging the quality of their learning, they are a vital part of QA implementation at PSU.
In contrast, KAU's governance structure makes no provision for a student council. There is a Department of Student Affairs, whose function is to manage the miscellaneous affairs of all students not already managed by other university units; these include sport, food, housing, financial awards, and psychological and educational needs.

Discussion
The adoption of QA standards at PSU illustrates the importance of institutional policies that fully support QA implementation. The analysis emphasized the ongoing nature of QA efforts at PSU, in contrast to the case at KAU. PSU has developed a systematic procedure to monitor the implementation of QA through regular evaluation processes, which help PSU promote continuous improvement. In contrast, although KAU did formulate a general strategy, the effectiveness of internal QA evaluation at the institutional level at KAU was not corroborated by the data, which instead revealed the fragmented nature of KAU's QA policy.
Furthermore, meso-level participants at PSU strongly approved the adoption and scope of the QA system. They had substantial knowledge regarding the QA process. This high level of knowledge and approval was shared at the micro level, with some participants perceiving QA as an opportunity to grow academically, with the support of management. This corresponds to previous statements in the literature (Cardoso et al., 2011; Laughton, 2003). As such, support at the micro level plays a significant role in enhancing QA implementation.
The PSU data further revealed that, in conformance with NCAAA standards, the process was aligned with the hierarchy within assessment and the interrelatedness of learning outcome data. Furthermore, one significant finding at PSU related to the data used in quality decisions: the results of multiple assessments were analyzed through the development of a statistical database. The results indicated good practice in using data for continuous improvement. However, there was no effective assessment system at KAU, and feedback was not consistently used to ensure the effectiveness of the QA process. Rather than enhancing the QA system, information was used solely to ensure compliance.
With regard to ensuring the quality of teaching and learning, a comparison of PSU and KAU reveals that implementation of QA in this area also differed, in that PSU had a clear policy defining tools and guidelines for evaluating QA implementation. This policy serves as a key reference point for several quality evaluation tools applied to programme and course learning outcomes. At KAU, however, no institutional policies or procedures were developed for internal evaluation (although some departments did use learning rubrics), and thus, academic faculties received no guidance in the area of assessment. What is more, KAU documentation indicated that the university monitored and evaluated quality indicators including graduation rate, student persistence, satisfaction of students with academic services, and appropriateness of programs and services. Although an institutional assessment unit was established at KAU to meet the minimum requirements of the NCAAA, there was little evidence that such criteria were being applied comprehensively at KAU, with the exception of the fragmented publication of NCAAA standards on its website and a statement indicating adherence to these. Thus, at KAU, consistent assessment strategies had not yet been developed and implemented in academic departments or across the university.
A comparison of the interview data from the two universities further shows that QA procedures at PSU were well recognized and incorporated into work at both meso and micro levels, with participants expressing confidence in their shared understanding of QA. Furthermore, participants at both levels acknowledged QA mechanisms in teaching and learning, which clearly played a fundamental role in QA implementation. In contrast, KAU participants expressed the opinion that QA in teaching and learning was merely equivalent to curriculum development, and no effective QA monitoring system was reported by the KAU participants.
With regard to curriculum development, both PSU and KAU abided by the regulations of the Ministry of Higher Education (MOHE), although MOHE regulation is not consistent with the modern methods required by the NCAAA. Both universities had set formal procedures for the development of new courses and programs. However, the findings of this study suggest a significant gap between PSU and KAU in terms of curriculum reform. It seems that PSU regularly revised its curriculum, whereas curriculum revision at KAU was conducted ad hoc, depending upon department and faculty interest. Furthermore, the data revealed a lack of a QA mechanism for learning assessment at KAU but not at PSU, as assessment of students' learning was not mentioned by the majority of participants. Finally, participants from some but not all departments indicated the use of a feedback mechanism, showing that it was not being used on a regular basis. All in all, rather than leading to improvement, the use of feedback was merely a kind of symbolic compliance.
Noticeably, there were gaps between PSU and KAU regarding the stipulations of QA policies and related practices. At PSU, alongside clarity of process, a central finding was that participants' awareness of the QA mechanisms indicated significant consistency between written internal QA arrangements and actual practices. Establishing strong QA protocols can raise employee motivation and morale; if people in the institution are made aware of the institution's commitment towards maintaining high standards of quality, they will work to the best of their ability to help the institution meet those standards (Brown, 2011).
At KAU, however, the implementation of QA appeared to be merely symbolic. This is in keeping with institutional theory, in that universities can respond to institutional pressures symbolically rather than making genuine, substantive responses.
In addition, it is noticeable that under NCAAA requirements, new programs at both universities are not included at all in the practice of QA, which is limited to established programs that have already graduated students. This observation also corresponds to institutional theory, which assumes that organizations might decouple specific structural elements from the organization's major practices to achieve legitimacy and survival (Meyer & Rowan, 1991).
However, for all their academic programs, universities need to set standards with reference to the enforceability of continuous improvement, which becomes especially critical when the respective professions judge the quality of graduates. Accordingly, to implement an improvement-oriented approach, QA mechanisms should be in place to ensure the acceptance of standards by the relevant professions well in advance of the program's enrollment stage; this will ultimately ensure that the products and services offered to the public are also of high quality and in conformity with predefined standards agreed upon by the professions.
Both PSU and KAU have adopted governance structures in accordance with the regulations of the Higher Education Council. However, while at PSU the data revealed a certain amount of independence in structuring the Board of Trustees, at KAU an objective, transparent approach did not exist. According to the analyzed documents, while the University Council is the highest decision-making body at Saudi public universities, in most cases the Minister of Higher Education, who by law chairs the University Council, delegates this authority to the president of the university. This is a potential indicator that governance might not be exercised at arm's length, which raises a serious concern regarding the management structure and whether the university is in fact being managed appropriately. That is, maintaining transparency and avoiding conflicts of interest should be of particular importance for the higher education system in Saudi Arabia. Such a lack of arm's-length governance might hinder the implementation of quality assurance, and there is a need to change the regulations at the macro level in this regard.
Another major critical finding regarding governance structure relates to student participation. At PSU, the document analysis confirmed that students have been regarded as crucial stakeholders, and their involvement formalized by integrating them in the governance structure. With regard to QA in particular, it was found that students have been involved in assessing the level of quality achievement in different areas through the University Student Council (USC), which is embedded in university governance and considered to be the official representative body of the students, with officers chosen by student representatives from various departments. At KAU, in contrast, although there is a Department of Student Affairs with the aim of managing student services provided by the University, such as housing, food, sports, financial awards, and special educational or counseling services, there was an absence of any student council or similar representative body. This comparative study has provided intriguing indications that student involvement in QA implementation at PSU has been critical to QA, since students play an irreplaceable role in judging the quality of their learning process.
The analysis of the PSU and KAU data suggests that QA structure and management personnel both play a crucial role in institutional QA. The establishment of a QA center was essential for both universities, but a basic difference between the two was that AACP staff and departmental QA committees at PSU worked in collaboration to promote QA, whereas KAU lacked an effective delivery vehicle to implement QA standards, largely due to the absence of proper policies and adequate human resources, leaving the AAU head unable to do more than focus on formal accreditation.
The analysis revealed that both universities used a top-down approach to implementing QA. However, the extent of stakeholder involvement at PSU was high, with multiple stakeholders reportedly getting involved in QA implementation at departmental and institutional levels, whereas in contrast, KAU appeared to have limited involvement and capacity for QA implementation, and instead relied on the professional expertise of external stakeholders in determining quality standards and assessing compliance.
Furthermore, the data also revealed that participants at both levels at both universities perceived infrastructural limitations to QA implementation. While financial support was reported in the case of PSU, the issues of workload and poor working conditions remained problematic for participants at KAU. In addition, the data suggested that at KAU not only staff quality but also program quality (cf. Almstada, 2014), student quality (cf. Al Dawood, 2007, and Alnassar & Dow, 2013, on the very low capabilities of first-year Saudi university students), and increasing student enrollment were considered to be key factors hindering QA implementation.
Another notable difference in KAU participants' responses as compared to those at PSU was the suggestion that a systemic approach and the integration of technology could enhance QA implementation. Here, as elsewhere, the comparison between the two cases confirmed that quality assurance implementation remains a cultural matter (Harvey & Stensaker, 2008) and that culture change in institutions is needed to achieve good QA.
Finally, the study found that the type of university, public or private, may play a significant role in the implementation of quality assurance as a consequence of the pursuit of legitimacy, consistent with institutional theory. The age of the university was also found to be crucial, as PSU is considered relatively new and KAU is not; this is consistent with the argument that new universities have a more positive view of the self-evaluation process and consequently are more adaptable in complying with external demands in this regard (Rosa et al., 2006). The study results support the findings of other studies that participants' demographic variables affect perceptions of quality (Papadimitriou et al., 2008; Rosa et al., 2006; Stensaker et al., 2011).
These factors seem to be largely interrelated, requiring collaborative and integrated action from all stakeholders. For instance, faculty resistance and infrastructure development may be addressed more effectively when management and leadership are committed to providing the necessary funding and professional training in support of QA implementation.