The terms monitoring and evaluation are often used interchangeably, but they refer to two distinct, though related, sets of organizational activities. Monitoring is the systematic collection and analysis of information during the lifetime of a project (Shapiro, n.d.). Its main aim is to improve the effectiveness and efficiency of an organization or project in terms of its activities. The benchmarks for monitoring are the targets set by the organization or project at the outset and the activities planned to be carried out at the various stages. Monitoring enables the organization (1) to remain on track and (2) to identify when things are not going according to plan.
The third category is extra-tester reliability, which means that the evaluator's conclusions should not be influenced by peripheral conditions. That is, such peripheral conditions should have no bearing on the outcome of an evaluation of the evaluation object.
Validity
According to Hughes and Niewenhuis (2005), validity is a measure of 'appropriateness' or 'fitness for purpose'. Just as with reliability, there are three categories of validity:
1. Face validity: This implies a match between what is being evaluated and how it is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it (Hughes and Niewenhuis, 2005).
2. Content validity: This means that what you are evaluating is actually relevant, meaningful and appropriate and there is a match between what the project is setting out to do and what is being evaluated (Hughes and Niewenhuis, 2005).
3. Predictive validity: An evaluation system has predictive validity if the results are still likely to hold true even under conditions that are different from the test conditions (Hughes and Niewenhuis, 2005).
However, Hughes and Niewenhuis (2005) argue that some 'subjectivist' approaches to evaluation would differ on this point.
Transferability
Although each evaluation should be designed around a particular project, a good evaluation system is one that can be adapted for similar projects or extended easily to new activities of a project. That is, if the project progresses and changes over time in response to need, it is useful if the project team does not have to rethink the whole evaluation system. Transferability is therefore about the shelf-life (robustness) of the evaluation and also about maximizing its usefulness (Hughes and Niewenhuis, 2005).
Credibility
The term credibility refers to the idea that people actually have to accept and believe in your evaluation system. The evaluation thus ought to be authentic, honest, transparent and ethical. Hughes and Niewenhuis (2005) outlined three points that need to be adhered to in order to ensure credibility: none of your stakeholders should (1) question the rigour of the evaluation process, (2) doubt the results of the evaluation report, or (3) challenge its validity. According to Hughes and Niewenhuis (2005), if any of these points are breached, then the evaluation system loses its credibility and is not worth using.