Why Do You Need a Program Evaluation System (PES)?

October 4, 2022

A sound Program Evaluation System (PES) marshals the data, software, people, and decision-making processes that enable fast, well-informed, and broadly supported program decisions. A PES brings structure and a common vocabulary to program evaluation. It also allows an institution to increase enrollment through investment in the right new and existing programs while reducing costs.

Without a robust PES, new programs are more likely to fail, and cutting programs doesn’t yield the expected financial improvements. Why? The common weaknesses are poor data, unclear evaluation criteria, and an ill-defined program evaluation process.

While anecdotes, opinions, the needs of powerful constituents, or rules of thumb may be a good starting point for program ideas, they are not a sound basis for program decisions. At some point in the past, relatively uninformed decisions may have led to good results; today, declining enrollment, aggressive competition, and limited funds doom most ill-informed decisions.

In theory, using poor data might at least be faster and less labor-intensive. In practice, it slows things down. When every program proposal covers different topics using unique, unvetted data, it is challenging for decision-makers to evaluate them. To make matters worse, programs are evaluated by several people who may not share a common definition of relevant data or of what makes a good program – and there may be no agreed-upon schedule for the evaluation process.

In contrast, a healthy PES includes the best available data, by program, on markets, margins, academic standards, and mission alignment. To start with, all the data should describe the actual market or markets served by the institution, not states, metropolitan areas, or other artificial boundaries. Most higher-education institutions find this framework self-evident, and it gives them a new, consistent vocabulary for discussing and evaluating the market for an academic program.

The PES defines the data to be used. The PES market data should include student demand, employer needs, competitive intensity, and degree level. Student demand should combine current indicators, such as Google search volume and enrollment data, with more detailed historical data from IPEDS. Employer needs should include data from the Bureau of Labor Statistics, job postings, and the American Community Survey. Each data source – including its flaws – should be understood by the faculty and administrators involved in the evaluation process.
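To make "defining the data" concrete, the sketch below bundles those sources into a single per-program record. It is an illustrative assumption, not Gray DI's actual schema; the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProgramMarketData:
    """One program's market data; field names are illustrative, not a prescribed schema."""
    program: str                      # e.g., "MS in Data Analytics"
    degree_level: str                 # degree level served
    google_search_volume: int         # student demand: current search interest
    current_enrollment: int           # student demand: the institution's own enrollment
    ipeds_completions_5yr: list[int]  # student demand: historical completions from IPEDS
    bls_projected_openings: int       # employer needs: Bureau of Labor Statistics projections
    job_postings_12mo: int            # employer needs: postings in the served market
    acs_workforce_count: int          # employer needs: American Community Survey workforce data
    competitor_count: int             # competitive intensity in the served market
```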

Competitive intensity should not just identify competing schools (there are almost always several competitors for every program in every market); it should reveal whether the market is saturated. For example, is the Google competition index too high? Is the median number of completions at competing institutions growing or shrinking?
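One simple way to check whether the median competitor is growing or shrinking, sketched here as an assumption rather than Gray DI's method, is to compare the median of competitors' completions at the start and end of a multi-year window:

```python
from statistics import median

def median_completions_trend(completions_by_school: dict[str, list[int]]) -> float:
    """Change in the median competitor's completions between the first and last year
    of the series; a negative value suggests a shrinking, possibly saturated market."""
    first = median(series[0] for series in completions_by_school.values())
    last = median(series[-1] for series in completions_by_school.values())
    return last - first

# Made-up example: three competitors' annual completions over four years
competitors = {
    "College A": [40, 38, 35, 31],
    "College B": [22, 24, 23, 21],
    "College C": [55, 50, 48, 44],
}
print(median_completions_trend(competitors))  # -9: the median competitor is shrinking
```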

A PES reveals the economic health of each program, including revenue, cost, and instructional margin (before overheads). It should benchmark costs across the institution’s programs and include benchmarks for the same programs at other institutions. Interestingly, this data often kills rules of thumb that dictate the closure of small programs. While some small programs lose money, many others are contribution positive – if they were cut, revenue would fall faster than cost, leaving the school worse off. These arbitrary rules don’t lead to good outcomes; institutions need to run the numbers as part of their PES.
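A minimal arithmetic sketch, using entirely made-up figures, shows why closing a contribution-positive program can leave the institution worse off:

```python
# Illustrative figures for a small program that "loses money" only after overhead allocation
revenue = 400_000             # annual tuition revenue attributable to the program
direct_cost = 310_000         # instructional cost that actually goes away if the program is cut
allocated_overhead = 150_000  # facilities, administration, etc.; largely fixed in the short run

instructional_margin = revenue - direct_cost                     # +90,000 before overheads
fully_loaded_margin = instructional_margin - allocated_overhead  # -60,000 after overhead allocation

# If the program is cut, the revenue disappears but most of the overhead does not,
# so the institution's bottom line worsens by the instructional margin.
change_if_cut = -instructional_margin                            # -90,000
print(instructional_margin, fully_loaded_margin, change_if_cut)
```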

A PES should include data on the academic performance of each program (e.g., graduation rates) and a description of the program’s alignment with the institutional mission. Programs also significantly impact DEI, so their enrollment, DFW rates, retention rates, and graduation rates should be tracked by ethnicity, gender, Pell status, and academic program. The PES must also document the program’s relationship with the institution’s mission, which requires thoughtful input from the faculty leading each program.
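The disaggregation described above amounts to grouping student-level records by program and demographic attributes. A brief sketch, assuming hypothetical records and pandas (the column names are illustrative, not a required schema):

```python
import pandas as pd

# Hypothetical student-level records
students = pd.DataFrame({
    "program":   ["Nursing", "Nursing", "History", "History", "History"],
    "ethnicity": ["Hispanic", "White", "Black", "White", "Hispanic"],
    "pell":      [True, False, True, False, True],
    "retained":  [1, 1, 0, 1, 1],
    "graduated": [1, 1, 0, 1, 0],
    "dfw":       [0, 0, 1, 0, 1],
})

# Retention, graduation, and DFW rates by program and Pell status
# (the same pattern extends to ethnicity and gender)
rates = (students
         .groupby(["program", "pell"])[["retained", "graduated", "dfw"]]
         .mean()
         .rename(columns=lambda c: f"{c}_rate"))
print(rates)
```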

The PES must be easy to use for experienced users and approachable for relative novices. In addition to raw data (e.g., the number of job postings), each metric should be compared to other programs using percentiles (much like a grade curve), which makes the information much easier to interpret. Color codes should give casual users a quick sense of the data and its implications. Critical data on each program should be summarized on a single page, so no one has to dig through dozens of pages, documents, or systems to find the information they need. The provost should have a summary view showing a few key metrics for all programs, so issues and opportunities (e.g., spikes in student demand) can be quickly identified and addressed.
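To make the percentile-and-color idea concrete, here is a small sketch; the percentile formula and the color cutoffs are assumptions chosen for illustration, not Gray DI's scoring rules:

```python
def percentile_ranks(values: list[float]) -> list[float]:
    """Percentile rank (0-100) of each program's metric relative to all programs."""
    n = len(values)
    return [100 * sum(v <= x for v in values) / n for x in values]

def color_code(pct: float) -> str:
    """Map a percentile to a traffic-light color; cutoffs are illustrative."""
    if pct >= 75:
        return "green"
    if pct >= 40:
        return "yellow"
    return "red"

# Made-up raw metric (job postings) for five programs
job_postings = [120, 45, 560, 210, 30]
for raw, pct in zip(job_postings, percentile_ranks(job_postings)):
    print(raw, round(pct), color_code(pct))  # e.g., 560 -> 100th percentile -> green
```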

But a PES is more than data and software. It defines the process, participants, and schedule for program decisions and implementation. Well-structured workshops enable administrators and faculty to use the data, discuss it, and reach consensus on decisions. The process is data-informed, fast, and transparent, and it strengthens campus relationships. Following the workshop, templates and timelines for implementation ensure that the decisions reached there are approved and carried out.

To be useful, decisions must not only be made; they must be implemented. The PES should set up and track progress on initiatives, such as new program development. Program changes are usually a team sport that should include marketing, admissions, academics, fundraising, and institutional leadership. As a result, the PES should document who needs to be involved in each initiative, their responsibilities, and a schedule. The PES requires a budget or business plan for each initiative, including an estimate of any required investment. It includes regular progress reviews with institutional leadership. As the initiative matures, the PES continues to track the progress of each program using the dashboards and processes described above.

Two decades ago, customer data was difficult to gather, scattered across systems, or lost. Then, Customer Relationship Management (CRM) systems began allowing organizations to better manage this crucial asset – their customer portfolios; today, CRMs are nearly ubiquitous. Similarly, program evaluation is too important to leave to opinion, politics, poor data, and ill-defined processes. A PES can now enable a higher-education institution to gather disparate data to evaluate and enhance its program portfolio. Several of the largest and most successful participants in higher education now use a PES to guide their program decisions. Many other colleges and universities are adopting a PES to spur growth. One day, they all will.

Robert Atkins

CEO AND FOUNDER OF GRAY DECISION INTELLIGENCE

Bob led Gray DI’s entry into the education industry and the development of Gray DI’s proprietary industry databases and service offerings. He has worked directly with many of Gray DI’s education clients, consulting with CEOs and CMOs on business strategy, pricing, location selection, curricular efficiency, and program strategy.

About Gray DI

Gray DI provides data, software and facilitated processes that power higher-education decisions. Our data and AI insights inform program choices, optimize finances, and fuel growth in a challenging market – one data-informed decision at a time.
