Program Evaluation Syllabus: Public Policy

How does program evaluation differ from social research? The role of valuing in evaluation. Key evaluation types and approaches. The roles of internal and external evaluation. Key evaluation design considerations. Fostering the use and influence of evaluations. Using evaluation models to inform practice: social programming, valuing, knowledge construction, use and influence, and practice. The role of program logic and program theory in evaluation: what is program logic and when is it helpful?

How to facilitate a program logic mapping session. Program theory for complicated and complex social programs. From initiation to a final evaluation brief or plan. Day 3: Designing an evaluation plan. Key components of an evaluation plan. The importance of identifying key audience information needs and evaluation scope. Budgeting for an evaluation.

Defining a limited set of key evaluation questions (KEQs). Managing an evaluation: some principles and tips for practice. Types of evaluation: process and outcome evaluation approaches. The emerging role of developmental evaluation. The roles of experimental, quasi-experimental, non-experimental and qualitative designs, including case studies.

Topics to be reviewed will include interviews, the Most Significant Change (MSC) technique, photographs and other visual methods, focus groups, the nominal group technique and the Delphi technique, Q-sort, surveys, and goal attainment scaling.

The role of mixed methods in evaluation: strategies for integrating qualitative and quantitative methods. Analysis principles and approaches will be emphasised on Days 4 and 5 in relation to the major methods addressed.

Day 5: Reporting an evaluation and supporting use and influence. The 1, 5, 25 rule. Considerations in generating valid claims. Issues of rigour and relevance. The role of program evaluation standards and core competencies in evaluation. Disseminating evaluation findings and influencing the use of evaluation. Practical tips to enhance the value of evaluation. Building a culture of evaluative thinking in public sector agencies: the role of monitoring and evaluation frameworks.

Course format: This course is run in a classroom. No equipment is required.

Recommended texts: Patton, M. Q. Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage. Torres, R. T. Evaluation strategies for communicating and reporting: Enhancing learning in organizations (2nd ed.). Thousand Oaks, CA: Sage.

Evaluation should be practical and feasible and conducted within the confines of resources, time, and political context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness.

Many different questions can be part of a program evaluation, depending on how long the program has been in existence, who is asking the question, and why the information is needed.

All such questions are appropriate evaluation questions and might be asked with the intention of documenting program progress, demonstrating accountability to funders and policymakers, or identifying ways to make the program better.

Increasingly, public health programs are accountable to funders, legislators, and the general public. Many programs demonstrate this accountability by creating, monitoring, and reporting results for a small set of markers and milestones of program progress. Linking program performance to program budget is the final step in accountability. The early steps in the program evaluation approach, such as logic modeling, clarify these relationships, making the link between budget and performance easier and more apparent.

While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes. Surveillance is the continuous monitoring of, or routine data collection on, various factors over time. Surveillance systems have existing resources and infrastructure. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer-term and population-based outcomes.

There are limits, however, to how useful surveillance data can be for evaluators. Surveillance systems may have limited flexibility to add questions for a particular program evaluation. In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously.

Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data gathered to answer specific evaluation questions are more flexible than surveillance data and may allow program areas to be assessed in greater depth. Evaluators can also use qualitative methods.

Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model.

Research is generally thought of as requiring a controlled environment or control groups; in field settings directed at the prevention and control of a public health problem, this is seldom realistic. Among the concepts that distinguish research from program evaluation, three are especially worth noting: unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved.

While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.

Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries.

Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.

The CDC Evaluation Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. Program evaluation is one of the ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.

You create the market for evaluation results by focusing evaluations on questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where those questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about those questions and want to take action on the results. The steps in the CDC Framework are informed by a set of standards for evaluation.

The 30 standards cluster into four groups. Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them? Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand? Propriety: Does the evaluation protect the rights of individuals and protect the welfare of those involved?

Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community? Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results? Sometimes the standards broaden your exploration of choices; often, they help reduce the options at each step to a manageable number. Applied to engaging stakeholders, for example: Feasibility: How much time and effort can be devoted to stakeholder engagement? Propriety: To be ethical, which stakeholders need to be consulted, those served by the program or the community in which it operates?

Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program? Similarly, there are unlimited ways to gather credible evidence (Step 4).

Asking these same kinds of questions as you approach evidence gathering will help identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time.

Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the right program evaluation. Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation.

Good evaluation requires a combination of skills that are rarely found in one person. The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise. An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts.

This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants.

The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks.

However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.

Seek additional evaluation expertise in other programs within the health department or through external partners. You can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs.

Be sure to check all references carefully before you enter into a contract with any consultant. To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from local, regional, or national experts otherwise difficult to access. Such an advisory panel will lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.


