Conducting a program evaluation is an essential process in any organization and should be treated as a critical component of the organization's internal processes. To achieve this, internal evaluators need support from the organization, and the evaluator's role must be integrated into the organization's decision-making processes (Fitzpatrick, Sanders, & Worthen, 2010). The evaluation process should be viewed as a problem-solving and learning process, not as a criticism process (Laureate Education, 2010a).
To conduct a program evaluation effectively, the evaluator needs to perform a contextual analysis that incorporates the political, ethical, and human factors present in program evaluations (Fitzpatrick et al., 2010). To do this, the evaluator must take the time to learn the context in which the program takes place. Analyzing the context means, as Dr. Bledsoe explained, "trying to figure out exactly what is going on in the organization and trying to get a sense of the contextual factors that affect the program" (Laureate Education, 2010b). Knowing what is going on, stakeholders' expectations, participants' needs, the organizational culture, political factors, the environmental context, and the different implementation settings, as well as observing the program in operation, are key factors to incorporate into the context analysis (Laureate Education, 2010b).
Once the different factors affecting the program evaluation are analyzed and the evaluator has a better understanding of the evaluation context, maintaining the objectivity required to provide accurate results calls for a "framework approach to ethics," as suggested by Schweigert (2007, p. 396): basing the evaluation practice on a set of standards and values such as the Program Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation.
Additionally, there are ethical concerns the evaluator needs to keep in mind to ensure the program evaluation meets its purpose. The American Evaluation Association, in its Guiding Principles, states that "evaluators will usually have to go beyond analysis of particular stakeholder interests and consider welfare of society as a whole" (as cited in Fitzpatrick et al., 2010).
Other strategies that increase the validity of the evaluation include learning how to negotiate with the involved parties, clarifying to the stakeholders that both good and bad aspects of the program might be encountered as a result of the evaluation, always protecting the credibility of the evaluation (Fitzpatrick et al., 2010), disclosing conflicts of interest, using quality assurance processes, and following Mohan and Sullivan's (2006, p. 13) advice to "keep records to demonstrate that proper evaluation methods were used."
Different stakeholders of the program will have different interests in the evaluation results, which is why the evaluator should start by planning the evaluation with a logic model that keeps the process organized to achieve maximum results (Molloy, 2006). An organized, visual model helps the evaluator define the evaluation questions and determine, for each one, the instruments that will collect the data to answer that particular question. The logic model also helps identify all the stakeholders who might have an interest in the program or could be impacted by it. As indicated by Fitzpatrick et al. (2010), "generally, the single most important source of evaluation questions is the programs' stakeholders" (p. 316). Evaluation should be an inclusive process that involves critical stakeholders in the different phases of the work, from identifying the concerns, hopes, and fears surrounding the program, through the planning stages, to analyzing the data. Stakeholder involvement increases the validity and fairness of the study, and results are more likely to be used because of the increased credibility of the process.
Additionally, maintaining constant communication with the stakeholders ensures that the different data-gathering methods and sources of information remain reliable and valid and do not create any ethical or organizational breach. Negotiating with the stakeholders also protects the credibility of the evaluation (Fitzpatrick et al., 2010).
Another important factor to take into account in a program evaluation are the biases and values that affect the analysis and interpretation of the data. To mitigate biases, stakeholder involvement is necessary to give room to different opinions and ideas, not only during the planning stage of the evaluation but also in the analysis of data and the reporting of findings (Fitzpatrick et al., 2010). The evaluator should also develop comprehensive evaluation criteria, with measurement techniques and instruments checked for reliability and validity (Vassallo, 2004b). The evaluation criteria should include the seven qualifications identified by Vassallo (2004a): validity, directness, objectivity, adequacy, quantitativeness, practicality, and reliability. If these qualifications form the foundation of the entire evaluation process, biases can be kept under control.
Finally, in the reporting phase it is important to develop a strategy that specifies the reporting methods most effective for the different stakeholders and the pieces of information and results relevant to each of them. Delivering the critical information to the appropriate users increases the likelihood that the evaluation results will be used in the post-evaluation phase.
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
Laureate Education (Producer). (2010a). Challenges in program evaluation [Video file]. Baltimore, MD: Author.
Laureate Education (Producer). (2010b). Contextual factors [Video file]. Baltimore, MD: Author.
Mohan, R., & Sullivan, K. (2006). Managing the politics of evaluation to achieve impact. New Directions for Evaluation, 112, 7–23. doi:10.1002/ev.204
Molloy, L. (2006). Strategic program planning: A recipe for success. Camping Magazine, 79(2), 1–6. Retrieved from the Walden Library databases.
Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30(4), 394–399. Retrieved from the Walden Library databases.
Vassallo, P. (2004a). Getting started with evaluation reports: Answering the questions. ETC: A Review of General Semantics, 61(2), 277–286. Retrieved from the Walden Library databases.
Vassallo, P. (2004b). Getting started with evaluation reports: Creating the structure. ETC: A Review of General Semantics, 61(3), 398–403. Retrieved from the Walden Library databases.