AUDIO START: [0:00:00]
Mark Keleher: Good afternoon, everyone, and thank you for joining Gray’s webcast on understanding program economics. All of the data and information we will share this afternoon is derived from Gray’s Program Economics Project. This project has helped colleges and universities compare financial results to understand the revenue, cost, and contribution margin by program. Before we introduce Dr. Massey and pass the baton over to our CEO Bob Atkins, a couple of quick housekeeping items: please feel free to share any and all questions with us in the chat window on the left-hand side of the page. We will be sure to answer all questions at the end of today's webcast. A copy of the presentation, as well as the recording, will be delivered to all registrants via e-mail upon the conclusion of this afternoon's webcast. Without further ado, Bob?
Bob Atkins: Good afternoon, this is Bob Atkins. Just a brief intro of myself: I'm the CEO and founder here at Gray Associates. I've been doing consulting and financial analysis on businesses and colleges for 20 or 30 years now, so I'm delighted to be able to share with you today some new work and a new system we're launching. It was inspired really by work that I've seen from Bill Massey. Bill?
Bill Massey: Thanks, Bob. It's a great pleasure to be here. I think this is an important program today and I'm delighted to be a part of it. As the bio said on the slide, I've been involved in this area now for probably 35 years, starting with my work at Stanford as CFO, and particularly in the costing space some of you may have seen my book, Reengineering the University: How to Be Mission Centered, Market Smart, and Margin Conscious, published a couple of years ago by Johns Hopkins University Press. So, let's go.
Program Sustainability: An Integrated View
We're really going to talk today about program evaluation, but many of us start from the notion of traditional program review. As a general rule, this concentrates on academic capacity -- the traditional kinds of program reviews that we are familiar with are faculty-driven, they bring in teams from outside to evaluate, and so forth -- focusing on academic capacity, educational quality, and other things that universities know are important for delivering programs. Now, what we're going to do today is work with Gray's Program Evaluation Service, which also brings in data from the marketplace. That, of course, is -- or should be -- a very important part of program review, although typically it has not been; the program evaluation service itself does involve it, but it's really something that has not been part of normal program review. Today, we're going to go a step further and add program profitability: looking at costs, revenues, and margins. It's those three things -- the academic side, the market side from program evaluation, and the profitability side -- that let you make better decisions on what programs to grow, what to sustain, and where to intervene. And, in fact, all three kinds of data are necessary for effective budgeting and program management. Okay, let's move on.
Higher Education Consulting
I know that there are lots of concerns about adding profitability analysis, or economic analysis generally, to the mix of program review and market considerations, and the concerns generally come along the lines of, “Hey, you know, looking at that kind of profitability data is going to undermine academic values.” That simply is not true, and I can say that based both on a lot of personal experience and on some pretty darn strong economic principles, some of which I have published in the past. Looking at the right-hand side of this slide, we see what the game is for not-for-profit enterprises -- and I presume most of us here are not-for-profit universities; if there are some for-profits here, that's great, but I'm speaking for the moment primarily to the not-for-profit or traditional universities. Our goal is to maximize what I might call mission attainment -- mission being the answer to “Why are we here? What's in our value proposition as a university? What are our intrinsic values?” Or, put another way, “What are our academic values?”
Now, we can't maximize that without some concern about other kinds of factors, in particular market demand and also program profitability. In this case the profitability analysis is a means to an end: the principles and the models that we work with show how money really comes into the picture without hurting academic values. If you were a for-profit enterprise, on the other hand, you probably are going to be trying to maximize, at the end of the day, program profitability. There, money isn't everything, but it's the main thing -- it might not be the only thing, but it's certainly the main thing, since you're subject to profitability requirements. And you have to do this subject to market demand as well. Okay, let's go forward and see the next slide.
We're going to go back to the not-for-profit end of things. All the programs that we do should further the university's mission. But, you know, some need to make money so that others can operate at a loss. And I didn't put all of this in the slide but, of course, we have to make a level of profitability overall -- or enough positive margin, as I guess I prefer to put it -- so that we can fund necessary reserves, sustain our operations over time, have contingencies, and so on. We'll ignore that for the moment, but basically some programs need to make money while others operate at a loss. And why is it that some programs can -- and indeed should -- operate at a loss? It's because these cross subsidies are what enable us not-for-profit universities to assert our academic values. If we were for-profit, basically we would be doing what the market told us to do. If the market is not supporting an English program, for example -- hey, no English program; and the same for fine arts, and so on down the line.
However, because we are not-for-profit, and we do have academic values, we're going to want to run some of those money-losing programs, even though the market is not currently so interested in them. And we have to have the sources of the cross subsidies in order to do that. I think it goes without saying, in the third bullet, that if we're going to make these kinds of judgments about which are to be our profitable programs and which aren't, we need good estimates of the margins for all the programs, and that's what we're all about today. So, move forward again, Bob.
And just a little background about estimating program margins. First of all, I guess I would say I've been doing work in this area for decades, but it's only recently that university data systems can support good measurement of these kinds of things. The game changer in this whole process is that finally our data systems have gotten to the point where we can use timetable and student registration data to make estimates of costs, revenues, and margins for individual courses, and indeed for individual students as well. And you can't make judgments about which courses and programs are making money and which aren't without having data at the course level.
And then it's these course-level estimates that are aggregated up to the program level. I would say that interest in this space is growing fast; there are other players, but there are significant first-mover advantages, both I think for the programs we're talking about now, but also for universities to start using these kinds of data to make decisions. And with that, let me turn to Bob and Mark, who will describe Gray's approach and how it meshes with their PES programs.
Bob Atkins: Great. Thank you very much, Bill. You know, the first question I guess we should answer is, “Why bother with all this stuff about programs?” And the answer is that it's one of the few things you can do that can both improve enrollment and revenue and reduce costs. Let's just take a couple of examples.
If you're looking at new programs and you do a really good job -- you address the market and find programs that actually do have market appeal and that you're able to teach effectively -- the opportunities for growth are really very substantial. This is a real-life example; we've lowered the numbers here from the actual figures. The school launched ten programs using the approaches we're going to discuss. They grew their enrollment in those ten programs by some 578 students over the course of just two-and-a-half years. They subsequently launched a range of other programs as well, all of which are at or above the plan that they established for them. So this business of looking at markets and understanding where the opportunity is can have a very significant impact on your top line.
Now, the bottom line is a whole other question. To improve that, you've got to have reasonable productivity numbers, and one of the challenges -- or really opportunities, in a way -- that we see in higher ed is in this long tail of small programs. If you look at U.S. higher ed in general, we've got 48 percent of programs in the U.S. producing just seven percent of all completions. That is, 48 percent of academic programs have fewer than ten completions, and together they produce just seven percent of all the completions in the U.S. So we've got a very large number of relatively unproductive programs.
Now many of you may have heard of the recent thing that's happening in the University of Wisconsin system, where they're imposing an arbitrary cutoff of five completions a year, and below that you will have to eliminate the program. We are very definitely not believers in that kind of arbitrary approach. Even for a small program you need to look at the sort of data we're talking about, because -- and this is an example -- that program may have few, if any, incremental courses. So when you cut the program, you may find you actually haven't saved any money at all. You may have cost yourself a few students, in which case you're actually negative.
And then there are a whole host of other issues, academic and otherwise, that might cause you to want to keep a small program. So yes, we should be looking at our less productive programs, but there's no across-the-board rule you can use to pick a program to cut. You're very likely to make substantial mistakes, both from a mission and an economic standpoint, if you do that.
But there are folks out there running some amazingly efficient portfolios. This is Western Governors, and you can see here that 52 percent of their programs produce 96 percent of their completions. So they've got a very concentrated portfolio, and they have very, very few programs at the small end of the chart: just 17 percent of their programs have fewer than ten completions, and if you dig into those, you'll find most are new programs that are just starting up. So you can be very, very efficient. I don't think many are going to be as efficient as Western Governors. They had the benefit of starting from a clean sheet of paper just a little over 15 years ago, which is a help. And, of course, they're also online, so they can scale programs to a much higher level than you can with an on-ground program. Just a note, though, in terms of how good you can get at this: this, I believe, is best practice, and it's very substantially different from the norm.
Bill Massey: I would add, Bob, that Western Governors -- and I think very highly of them, I know many of the people involved -- has a somewhat narrower mission than I think many institutions do across the spectrum of traditional universities; and the broader the mission, the more likely you are to have some programs on the non-profitable side of the ledger.
Bob Atkins: Right. So let's take a look and we'll start -- because program economics are partly a function of the market in which a program operates, I’d like to take a moment and just walk through how you look at a market to identify niches where you can enter and be successful or, for that matter, evaluate the market for your current programs to help you decide whether you should start a new program, intervene on a current program, or for that matter, just sustain one that's doing well.
We think about four major dimensions when we evaluate markets. The first question, and the one on which most people have very little information, is actually student demand. The second is employment: when your students are done, are they going to be able to get a job at a reasonable wage and pay down their student debt? Third is competitive intensity. Is this market already saturated, who specifically are you up against, and are you able to compete against those folks? And finally degree fit, which is: are you the type of institution that grants the type of degree that's required in this market? If you're a two-year college, it's not very helpful if the market requires a PhD. Now, let's take a look at that in a little more depth.
Program Scoring Rubrics: Student Demand
What we would suggest you do is collect the data and score it, and of course we've built systems to do this for you. There are a few key attributes of scoring programs. One is you've obviously got to be able to set thresholds. Think of this as a grading curve: the threshold is “at what level am I going to give a student an A?” In this case, you can see at the top, we have said that if a program has 300,000 or more Google searches, we're going to give that a score of eight. How did we pick that level? When you're wading through market data, that's a very important question: what's the context for this number? Because I certainly don't know offhand whether 300,000 is a lot of searches or a few, and it varies depending on the specific market.
So what we do is percentile the Google search volumes across all programs in IPEDS. As you can see here, the 98th percentile is 297,000 Google searches a month, so 300,000 would put a program in the top two percent of all 1,400 IPEDS programs we track. That's a very high number, and it's appropriate to give it a pretty high score. The scores also help you express your priorities. For example, you may want to be in larger programs, in which case you'd give a heavy weight, as we've done here, to the sheer size and volume of Google searches. You may want to look for faster-growing programs, in which case you would shift your weighting toward the change in search volumes.
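The percentile-and-threshold scoring just described can be sketched in a few lines of code. This is a hypothetical illustration only: the program names, search volumes, and score cutoffs below are made up and are not Gray's actual rubric.

```python
# Illustrative sketch of percentile-based program scoring.
# All names, volumes, and cutoffs are hypothetical.
search_volume = {
    "Nursing": 297_000,
    "Business Admin": 120_000,
    "Medieval Studies": 1_800,
    "Data Analytics": 54_000,
}

def percentile_rank(value, population):
    """Fraction of programs with search volume at or below `value`."""
    return sum(v <= value for v in population) / len(population)

def score(value, population):
    """Map a percentile rank onto a score, mimicking a grading curve."""
    pct = percentile_rank(value, population)
    if pct >= 0.98:   # top two percent of all programs gets the top score
        return 8
    if pct >= 0.90:
        return 6
    if pct >= 0.50:
        return 4
    return 2

volumes = list(search_volume.values())
for program, vol in search_volume.items():
    print(program, score(vol, volumes))
```

In a real system the weights and cutoffs would be tuned to the institution's priorities, as the speakers note, and the population would be all 1,400 IPEDS programs rather than four.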
But more importantly, in terms of weighting, schools place very different weights on student demand versus employment versus competition and degree fit, depending on who they are and what their situation is. If their primary issue is student enrollment, then you tend to see more weight on student demand. For many of our for-profit clients who were subject to gainful-employment regulations, employment became a critical issue for a while, and that was the primary driver. But the setting of weights is really up to you, the mission of your school, and what your strategic challenges are at that moment, so that you pick programs that can help you in those areas.
Of course, if you have the data and you have a scoring system, the world's your oyster. That's kind of the lever, if you will: once you do that, you can score every program under the sun. This is an illustrative list of the top programs; the list runs on for some 1,350 other programs, but you can literally stack-rank them all and see which programs are most attractive in your market. This isn't the end -- you don't pick Nutrition Science just because it happens to be the top-scoring program. At this point you're able to begin a dialogue that says, “Okay, is that an appropriate program for us? What would it take to launch that program? Do we have the relevant faculty?” And so forth. But at least it gives you a sense of what the most attractive programs might be, and you can pick from that list the ones that are right for you.
Program Scorecard: Medical Assistant
Now, obviously you're going to need a little more detail than the sheer ranking in order to decide whether you want to offer a program. We would suggest you create scorecards, and this is an example. It's a lot of data on one page: student demand data in the upper left, competitive data in the gray, job data over on the right with the blue highlighting, and the degree-fit data toward the bottom right-hand corner of the page. And mercifully, I'm not going to walk you through every number on this page.
In fact, one of the key points here is that it is difficult to interpret this many numbers at once. So one of the other things we suggest is to color-code it. As we do here, green is good, pink is bad, and the colors are based on the percentile rank of that particular metric versus the same metric for the other 1,400 programs in the system. As an example, if you look at inquiries, there are 571,000 inquiries for this program -- that is indeed a lot. It puts it in the 98th percentile, hence the dark green color.
In contrast, we could look down and see that the growth rate in Google searches is 5.8 percent, which is actually in the bottom 40 percent of all programs in the system; that's because there are some very small programs growing very rapidly that skew the growth percentiles. But once you know that green is good and pink is not so great, you can scan all these numbers fairly quickly, and it gives your faculty -- both the more quantitative and the less quantitative folks -- a relatively easy way to look at a program and understand the numbers and what they imply for the health of the market for that program.
Program Scorecard: Competition
Now, in addition to that, we find people really enjoy knowing exactly who the competition is in the local market for a program. That's fairly easy to pull out of IPEDS, and I would suggest, if you're looking at programs, it's something you definitely want to do. You can see who else offers the program, and IPEDS also gives an indication of who offers it online. We find that when people look at this, they'll pick a comparator they think they can do as well as, if they're thinking about a new program, and that's a help when they're sizing it and thinking about what the potential of that program is for them.
Program Sustainability: An Integrated View
That's a little bit about the market; now let’s transition over and really focus in on the economics. Again, the economics are partly a function of the market. How competitive it is will often dictate how many students you get; so the two are inter-linked, but they are distinct.
The first question when we start down this path is, “What really is a program?” To us, a program is the sum of all the courses taken by the students in that major. Some of those courses are going to be in the department in which the major is housed; other courses are going to be elsewhere.
And let's use nursing as a case study here. I chose it because I've heard a lot about nursing and we've done work with it, but I had never actually seen the profitability of a nursing program until quite recently, when we built this. And my impression was that nursing was going to be very expensive to teach and most likely unprofitable.
Distinct Financial Entities
So we started by saying, “Okay, what courses do nurses take?” And as I mentioned, you find that many of the courses they take are outside the department. But most of the credit hours -- by far the majority -- were actually inside the department, and in this case concentrated in two or three core nursing courses.
When we look at the cost of delivering those courses, they're all over the place. Much to my surprise, the direct instructional costs of delivering the nursing courses are very reasonable -- toward the lower end of the range we're seeing here -- and some of the smaller liberal-arts courses are actually quite expensive to teach. Now, in this case it all averages out, so the difference between the average departmental and non-departmental course is not as great as the range on this chart -- that's almost $200 from the best to the worst -- but it averages down to $53 per credit hour for courses in the department and $97 for courses out of the department. So even after all that averaging, the non-departmental courses are almost twice as expensive to teach.
Bill Massey: I might add, Bob, that this is a striking case, but it's not necessarily typical. I mean, I've seen numbers like this be all over the board. [0:21:51.6: inaudible] in-major courses are more important. It's also worth noting that an in-major course may not even be in the specific department you're talking about. In physics, for example, the math courses are certainly in-major in a certain sense, but they're not in the department. There's no substitute for going in and seeing what students are actually taking -- including in places where they have options, which math course they pick -- and then seeing what those courses cost.
Bob Atkins: And, you know, the other thing we found, Bill, is that the revenue on these varies a lot too. That's really a function of the individual students, obviously, and to some extent the additional fees that may be charged for certain courses.
But revenue per credit hour is a bit all over the map. Some of those numbers at the bottom of the chart, like the $380, have to be driven by an individual student -- the only student who took those courses -- who was paying quite a bit more than the average. Now, in our case, most of them you'll see hovering around $174 a credit hour; that's the standard for the school. And then the nursing courses come in higher, in part because of fees. Bill?
Bill Massey: It’s especially important in public institutions, where you've got in-state versus out-of-state revenues, you know, so the revenues vary a lot.
Bob Atkins: Well, you know, you've got all kinds of student segments that may have different price points and may also cluster in certain programs. Think of athletes as an example: they're often heavily discounted by the institution and may tend to cluster in certain courses and programs. As a result, they can skew the economics of those courses pretty substantially.
Now, net that all out, and you've got contribution margins that are really all over the place, from a high of $337 down to a low of -$41, by course. Again, here the departmental nursing courses actually have higher contribution than the non-departmental ones. So I was quite surprised, as we worked through this, to find that on a variable-cost and variable-margin basis, nursing actually does pretty well.
Now, let's take a little bit more of a look at how you figure all that out.
Program Economics: Methodology
And so now we'll talk about methodology. And, Bill, I think you had a couple of thoughts on why we might want to use direct variable economics as a starting point?
Bill Massey: Right. The general rule of thumb is that you want to use direct variable economics when you're looking at changes in operations, because you really want to get as close to the action as you can. You want to see what's happening: if you substitute this course for that course, or change the program requirements, or do something of that sort, you really want to know what's happening at the course level to the extent that you can. When you get into broader questions about whether you should have a program at all, or what the price point should be for the program, then you need to deal with the higher levels of cost: the direct shared costs at the departmental and school level, and then of course the indirect costs that are allocated. In general, the closer you are to operations, the more you want to deal with direct variable economics. So how do you do that, Bob?
Bob Atkins: Well, you start with students. As Bill mentioned, these days the registrar's databases and such can give you really good detail on students and what they're paying, and you should net out from that any institutional scholarships. The reason I say “institutional” is that if they're getting a scholarship from someplace else, to you that's tuition revenue. So you don't want to subtract that, but you do want to deduct any institutional scholarships from the tuition for each student.
Then you take that student's tuition -- let's say they're paying $9,000 for a semester, just an arbitrary number -- and they're taking three courses of three credit hours each. Each course would be assigned $3,000 in revenue, and each of those credit hours would inherit approximately $1,000. So for each student sitting in a course, the course inherits some $3,000 in revenue, or $1,000 per credit hour.
Now, the mirror image of that is the instructor cost, and that's going to be allocated out across all the courses that instructor teaches. So if they're teaching one three-credit course and they cost $3,000 a semester for that course, then the course inherits a $1,000-per-credit-hour cost. That's going to link up with the revenue, and both are going to follow the student through into their major. If you will, the major is going to inherit the revenue and cost associated with each student and each course they take.
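The allocation arithmetic just described can be written out as a small sketch. The tuition, course load, and instructor salary mirror the example above ($9,000 net tuition, three 3-credit courses, a $3,000 instructor); the 12-student enrollment is an added assumption for illustration, not a figure from the webcast.

```python
# Sketch of course-level revenue and cost allocation (illustrative numbers).
# Revenue side: a student paying $9,000 net tuition across three 3-credit
# courses contributes $1,000 of revenue per credit hour to each course.
net_tuition = 9_000                 # tuition minus institutional scholarships
num_courses = 3
credits_per_course = 3
total_credits = num_courses * credits_per_course

revenue_per_credit_hour = net_tuition / total_credits            # 1000.0
revenue_per_course = revenue_per_credit_hour * credits_per_course  # 3000.0

# Cost side (the "mirror image"): an instructor paid $3,000 for one
# 3-credit course costs $1,000 per credit hour taught.
instructor_cost = 3_000
cost_per_credit_hour = instructor_cost / credits_per_course      # 1000.0

# With, say, 12 students enrolled, the instructor cost per credit hour is
# spread across all of them, so contribution per student credit hour (SCH) is:
students_enrolled = 12
contribution_per_sch = revenue_per_credit_hour - cost_per_credit_hour / students_enrolled
print(round(contribution_per_sch, 2))  # prints 916.67
```

The key point the speakers make is that both revenue and cost are computed at this course-student level first, and only then rolled up to the major.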
Now when you do that, you can calculate your total contribution by program. What was interesting -- we're doing nursing here -- is that it actually came out in the top ten for this particular school, with about $150,000 of contribution this semester, I believe, so it looks very attractive. What this begins to tell you, if you're thinking about cutting, is that nursing would be an expensive place to cut, because it's actually driving positive contribution margin. And, by the way, it turns out the same is true of almost every other program in the institution: there's an average of a 60 percent contribution margin at this particular school. So at the revenue-minus-direct-instructional-cost level, most programs look pretty good.
Now, given that everything makes money at some level, how do you decide if something is doing well or not? What we would suggest is that you normalize the data -- that is, you get everything on an apples-to-apples basis by taking costs and revenue down to a per-student-credit-hour basis. Then you can begin to compare one program with another in an appropriate way. Here again we've used color coding, and you can see that contribution per student credit hour -- that's what SCH stands for -- is $182, which actually puts this particular program in the top ten percent of all programs offered by this institution. Revenue and cost per student credit hour are also in the top ten percent. Bill?
Bill Massey: Let me interject here, if I can. Stepping back for a moment and looking at the larger picture, it is possible to do some broad allocations of costs and revenues to programs and get numbers that look something like this without going into the kind of economic models we're talking about here. And that is not nearly as good. I've seen models where people allocate costs and revenues basically on a per-student-credit-hour basis, and you can get numbers that look halfway reasonable.
The trouble is, when you start doing analytics of the kind that Bob is talking about now, if you've allocated to begin with on the basis of headcount or credit hours or something, what do you get back? Basically the same numbers you put in, in a different form. What we're doing here is going right down to the grassroots -- the nexus between course and student, with costs identified at that level -- and then aggregating up. And that is a huge, huge difference.
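Bill's "grassroots" point -- building program numbers up from course-student records rather than allocating top-down -- might be sketched like this. The records, majors, and dollar amounts are made up for illustration; each row represents one student-course pairing with revenue and cost already computed at the course level.

```python
# Bottom-up aggregation from the course-student nexus to program totals.
# Each record: (student's major, course, revenue assigned, cost assigned).
# All values are hypothetical.
from collections import defaultdict

records = [
    ("Nursing", "NUR101", 3000, 1000),
    ("Nursing", "BIO110", 3000, 1500),
    ("English", "ENG200", 3000, 2500),
]

program_totals = defaultdict(lambda: {"revenue": 0, "cost": 0})
for major, course, revenue, cost in records:
    # The major inherits the revenue and cost of every course its students take,
    # including courses taught outside the major's home department.
    program_totals[major]["revenue"] += revenue
    program_totals[major]["cost"] += cost

for major, t in program_totals.items():
    print(major, "contribution:", t["revenue"] - t["cost"])
```

Because the detail survives at the record level, drill-downs by course or instructor remain possible after aggregation, which is exactly what a top-down allocation cannot give back.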
Bob Atkins: Thanks, Bill. And across here you can see not just the unit economics but, of course, your total revenue, instructional cost, and contribution as well. It also gives you a sense of the scale and scope of the program: in this case students take a total of 22 courses, there are 165 students in those courses, some 2,800 credit hours, and almost 37 instructors required to deliver all the courses associated with this program.
But the real key point here is that you'd better do this at a unit level -- a student-credit-hour level. That allows you to compare one program to another within your institution. Importantly, it also allows you to bring in benchmark data from other institutions, through either the community college cost benchmarking study run out of Johnson County Community College or the Delaware Cost Study, both of which track contribution per student credit hour.
Once you get in there, you're going to want to find out, “Well, why did I get those numbers?” One of the issues, of course, is that your program is being affected by work done in other departments. This is just an example of a drill-down you'd want to be able to do: seeing where nursing students were taking their courses, how much money was associated with that, and what the margins are in the courses students are taking out of department.
And then you get into instructors and courses. The first question is, “Which courses are my students taking that are generating relatively better or poorer results?” At the bottom of the page here you can see Art 130 is our record holder on the page, with $325 in contribution per student credit hour. So while we may all look at art courses and think they're expensive, this one seems to do very well for itself, thank you. Health 120, on the other hand, has something going on we want to get after. It's actually losing $41 per credit hour on a variable-margin basis, and that's before allocation of any overhead.
So you might want to take a look at that and figure out whether we're just running too few students through that course or, for some reason, the cost structure is out of whack. In this case you can see that its cost, $214 per credit hour, is considerably higher than any other course on the sheet. So that'd be a place to start, but it could be driven by the cost of the instructor or the number of students in the course.
And that, of course, leads you to want to see exactly what's going on at the instructor level. You can get into this and learn whether you've got a problem in terms of the cost of the instructor relative to the volume of students taking that course -- you're just not amortizing that cost over enough credit hours.
Program Economics: Methodology
Now once you're done with all that, you get to another thorny issue, which is the allocation of overheads.
One of the interesting things we found is that, again, at a contribution-margin level, most programs are contribution-positive. That can actually be a problem when you're trying to implement cost cuts, especially ones based on class size; what happens is people say, “Well, you know, I looked at my contribution and it's positive, so leave me alone -- we made money after instructional cost.” Well, there's a whole lot of stuff you have to do in a university besides teach. Admissions, for example, without which there wouldn't be any students in that class, as well as financial aid and dozens of other overhead functions that need to be taken care of to run a university properly. So we need to expect courses and programs to run at relatively high margins in order to pay for the rest of what has to happen at an institution.
So, the way we thought about doing that was actually to enable schools to do an allocation by cost center across one of several cost allocators. In this case, we're only showing the percentage of cost allocated on a per-student basis or the percentage allocated per instructor. You could actually have several other allocators; one might be per program for certain types of costs, and another might be, say, per employee for certain costs. And then you have a broad choice about how much of that cost really should be directly allocated to programs and how much is truly not program related and shouldn't be allocated down to the program level.
All of those are levers you can push or pull in here, and you can set an allocator. Let's take a look at advising and counseling -- that's obviously something that's driven by the number of students, so in this case we've allocated 100 percent of that cost per student. On the other hand, if we think about something like retirement costs, those really have nothing to do with students; they follow headcount. So we've allocated those 50 percent per instructor, because there are a bunch of other non-instructional folks who also get retirement costs, and we haven't allocated all of it out to the instructors.
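A minimal sketch of the allocation mechanics just described, under invented numbers (cost-center amounts, headcounts, and percentages are all hypothetical): each cost center gets a split between a per-student and a per-instructor allocator, and any unassigned percentage simply stays unallocated at the institution level.

```python
# Hypothetical sketch: allocating overhead cost centers to programs
# using per-student and per-instructor allocators. All dollar amounts
# and headcounts are invented for illustration.

def allocate(cost, pct_per_student, pct_per_instructor,
             program_students, program_instructors,
             total_students, total_instructors):
    """Return the share of a cost center allocated to one program.
    Percentage not assigned to either allocator stays unallocated."""
    per_student_share = cost * pct_per_student * (program_students / total_students)
    per_instructor_share = cost * pct_per_instructor * (program_instructors / total_instructors)
    return per_student_share + per_instructor_share

# Advising & counseling: student-driven, so 100% allocated per student.
advising = allocate(500_000, 1.00, 0.00,
                    program_students=120, program_instructors=6,
                    total_students=6_000, total_instructors=300)

# Retirement: instructor-driven, but only 50% allocated per instructor,
# because non-instructional staff also draw retirement benefits.
retirement = allocate(2_000_000, 0.00, 0.50,
                      program_students=120, program_instructors=6,
                      total_students=6_000, total_instructors=300)

print(advising)    # 500,000 * 1.0 * (120/6000) = 10,000.0
print(retirement)  # 2,000,000 * 0.5 * (6/300)  = 20,000.0
```

The point of keeping the percentages as explicit parameters is exactly the "lever" idea: if an allocation produces unintended consequences, you adjust the percentages rather than redoing the whole model.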
And I think there's lots of room here for interpretation and institutional judgment as you set these parameters, but it gives you an easy way to go in and set them up, and then if you look at the numbers and find that a particular allocation methodology creates unintended consequences, it’s relatively easy to come in here and adjust. So once you've got your variable costs calculated, this is the next step, and let's see what happens when you do that. We'll go back to our nursing program.
So here's contribution margin by program -- the top ten after allocations -- and you'll notice the conspicuous absence of nursing on this chart now. It's fallen out of the top ten. Spoiler alert: it's out of the top 20 and the top 50, as well. What happened? Let's take a look at indirect costs for this program.
What you see here is that the indirect cost, in this case, is actually pretty much in line. There are a number of programs that are more expensive as well as ones that are less, but the $17,000 number for nursing doesn't appear to be excessive.
Now, when we get to departmental cost it's a whole other kettle of fish. Department costs by program for nursing are several times the cost of any other program. At $159,000, they're three times the cost of health sciences and over ten times the cost of the science program. So something's going on in there that you want to dig into. One answer, by the way, is that nursing requires its own dean and that dean has to be a PhD level nurse, regardless of overall program size.
Second, in this case, the department really only offers three programs, so there are relatively few nursing programs to amortize that cost over. At other institutions, where you can have both undergraduate and graduate nursing programs, you can amortize that cost more effectively across the full spectrum of nursing programs.
But in this case, all that cost lands squarely on top of two or three programs, and it really drives the cost of nursing through the roof. And at the end of the day, nursing in this institution actually loses money on an allocated basis. So it is important to know the contribution. But once that's done, you do need to go in and look at your allocators. And make sure that, especially with department costs, which do vary closely with number of programs and size of programs, that the individual program is contribution positive after departmental costs.
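Numerically, the nursing story works out roughly like this. The $17,000 indirect and $159,000 departmental figures come from the example above; the contribution figure is a hypothetical placeholder, chosen only to show how a contribution-positive program can flip negative once overhead lands on it.

```python
# Sketch of the fully allocated margin calculation. The contribution
# figure is hypothetical; the indirect ($17k) and departmental ($159k)
# costs follow the nursing example in the discussion.

def allocated_margin(contribution, indirect_cost, department_cost):
    """Program margin after indirect and departmental allocations."""
    return contribution - indirect_cost - department_cost

nursing = allocated_margin(contribution=150_000,
                           indirect_cost=17_000,
                           department_cost=159_000)
print(nursing)  # -26,000: contribution positive, but allocated negative
```

This is the cross-check the discussion calls for: verify that a program stays positive after departmental costs, not just at the variable contribution line.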
Now, “How can you use this data?” is a whole other question. Let's assume that you have good market data and you have good economic data. How can you use that to make decisions?
Integrated Report Cards
First is just to create routine reporting for your institution on program viability. This is an example from Regis -- they were kind enough to share it with us -- that incorporates internal metrics: assessment of internal trends such as program starts, and actual versus budgeted credit hours by program. It looks at resource efficiency, looks at revenue variances, compares those to the overall institution and its growth rate, and then looks at net contribution margin at the college level.
Now at this point, they didn't have contribution margin all the way down to the program level, but recently they've implemented that, so these will all be updated in time with contribution margin at the program level; here it was only at the college level. It then ranks programs on net tuition generated, so you can see each program's comparative size and revenue. And then it starts to look at some success indicators -- the six-year graduation rate, the retention rate of students, and total completions as a measure of size -- again ranked relative to other programs. So each of these numbers is put back into context compared to other programs within the institution.
Now, as you might imagine, they are a client of ours, and so one of the other things they've added here is data on the market and demand for the program. You can see the scores this program received in Gray's ranking systems and what percentile that put them in on each of those scores. So we’re looking at a very effective program here: it's one of the seven largest, it's in a very attractive market, it’s getting 99th-percentile or higher ranks on all four major metrics, and it's in a school that is strongly contribution positive. That 360-degree perspective, if you will, I think is really important when you're looking at programs, and if you can automate these things then you can actually afford to examine your programs on a more frequent basis, which I think is important in a market that's beginning to change at the rate of other, more commercial markets.
I think there was a time in higher education when the velocity of change in programs offered was relatively low. That's not the case anymore, so it's going to become more and more important to look at your portfolio on a regular basis. Schools, I think, are beginning to be required to look at this as frequently as annually under the southern accrediting body. So, you know, it's coming at you, and whether or not it's required by your accreditor, I think it's good practice to look at your programs at least every three years now, where frankly every five or even every ten may have been okay ten or 20 years ago.
And last but not least, I should mention on this page a discussion of mission and institution fit. This is not, you know, an optional or unimportant section just because it doesn't have numbers in it. This is where the faculty and the administrators have the opportunity to really discuss the program and explain its importance to the institution independent of the numbers. It may also identify and flag particular issues around faculty and other things that don't show up in the numbers above. So this is a quarter of the space on the page, but not a quarter of the overall importance.
Program Portfolio Strategy: Workshop
Now the second half, once you have the data, is creating a process in which people can use that data, and here we would suggest workshops. What we do when we run these is bring the data together and get the leading administrators and faculty members in the institution -- the deans, the provost, and such -- into a room for two days to review the entire academic portfolio and decide what programs to start, stop, sustain, or grow. We find it's more effective if you start the first day focused on what programs to start, and I’ll talk about that a little more in a moment. As you begin that day, you may need to take some time to ground folks; if it's possible for the chancellor or president to be there and set the importance and goals for the day, that's a very positive step. In many cases, those folks participate throughout the workshop as well.
We then provide a summary of the approach we're going to use and we share the initial scoring. Importantly, we give the group the opportunity to discuss and refine the scoring system itself. We do that for two reasons: one, they often come up with good refinements that are very much worth incorporating; two, people are much more inclined to trust and use a scoring system into which they’ve had some input. It really changes the dynamic in the room when people have a chance to modify the scoring system.
Then we re-score everything and break up into small groups to discuss which of the scored programs are worth starting. People look at the overall ranking, go through the scorecards and competitive data, and then, importantly, apply their judgment to decide whether an individual program is something the university should consider. Importantly, if people have an idea and bring it to the workshop, those ideas are almost always in the data set. We have all 1,400 IPEDS programs in there, so they can explore and get data for any program they're interested in, not just work off the list. Then folks get back together, read out on the programs they’ve recommended, review them as a group, discuss and prioritize those new programs and ideas, and select a short list of programs worth pursuing.
Now, when I say “short,” you really don't want to come out of the first day of a workshop like this, if you're looking to add five successful programs, with only five nominees, because a selection process is going to follow and the list will shorten. So my rule of thumb is that you want to come out of the exercise with double the number of programs you're able to implement, because some will get winnowed out along the way as they move through the analytical process and go up for board approval and so forth. You've got to have enough in there that you'll have enough left over when you're done filtering. On day two, we turn to current programs. We do it in that order because, by focusing on starts, we set an optimistic tone; we make it clear that this is not a witch-hunt just looking for cuts; and third, people have the opportunity to look at a whole range of programs that are really very good. That helps put the evaluation of current programs in context.
And on the second day, we follow a very similar methodology: folks break out, they use the ranking and scoring to evaluate the current programs, and they select programs to stop or, as Bill said, to intervene on, as well as programs to grow. The vast majority of programs are in the middle -- they're sustained, no action required. What we find different about this -- and I look at, you know, the University of Wisconsin as a bit of a contrast -- is that when you're done, people have made decisions; there's consensus, both in the administration and in the faculty, about what programs to stop and start, and that's often very difficult to get to.
It tends to be a very political process, often creates a lot of heat and consternation in the faculty, especially around stops. And we found when you trust the faculty and you give them the responsibility to find those programs that need to stop, they're willing to do it, even in unionized shops. The real firefight comes when you try and take over that responsibility from them and don't trust them to make mature decisions or don't give them the data with which to make those decisions.
So our experience is you can get cuts made this way -- you can get cuts made in unionized workforces -- and the best way to do that is to invite the faculty in, ask them to act as stewards of the institution, and give them the data they need in order to make good decisions. To give you an idea, we worked with one school recently that was able to reach agreement on 80 programs to stop. Now, I will tell you, they had a particularly large and weedy program mix, but they needed to do it for financial reasons and they were able to reach consensus on it. So you can get the job done on cost cutting, if you need to, in the room.
Now, there are still cross-checks you have to do; for some programs, when you stop them there are actually no savings, because you're going to have to teach all the courses anyway. For others, there are lots of savings. In my experience, there's really not that much money in stopping programs, but it can free up money that you need to invest in new programs to grow. And here again, the consensus of the faculty and administration supporting new programs to start really helps get those off to a good launch. So I think the main message here is: with data and participation in a well-facilitated process, you can get a lot of these tough decisions done, and done in a couple of days.
Bill Massey: Can I [0:46:35.7: inaudible]?
Bob Atkins: Sure.
Bill Massey: Can I just quickly intervene? I want to reinforce -- hello?
Bob Atkins: Yep, you’re there.
Bill Massey: I just want to reinforce what you just said -- that it's tremendously important to involve the faculty. In my experience, too, which is fairly extensive, I have rarely found a situation where, if you work with faculty, they won't respond favorably and effectively. In fact, they like it; they like data. The pushback comes when you're trying to disempower them. So what we have to do at universities, I think almost more than anything, is bring faculty into informed, serious conversations about the relationship between the economics and the academics. And it requires data to do that. When you do it, all kinds of good things begin to happen, and they will only get better as people get more experience working with these kinds of data. Thanks, sorry to interrupt.
Bob Atkins: Oh, not at all, Bill. It’s a really important point, because done wrong, this is an enormously divisive set of decisions that have to be made, and it can really have a long-term negative effect on the institution -- and that's just not necessary. You know, there are turnaround situations in business where you may have to make that kind of draconian decision, and that extreme exists in academia as well, but in our experience, even at the extremes, it's not necessary. And not only that, you tend to make poor decisions. Faculty know a lot about the programs, and trying to drive these decisions without their input is not wise.
Program Portfolio Assessment
So once you've got individual program decisions made, the last step in this process, I think, is to step back and look at the overall portfolio across all four dimensions. Now we're talking about looking at dozens or even hundreds of programs in some institutions and asking, “Where do they stand?” We've begun to put together some frameworks for this, and we would welcome any thoughts others have, or frameworks you may have created to evaluate your program portfolios. But let me show you a couple of charts on one way to look at program markets and program economics in an integrated way across an entire portfolio. So -- I’m a consultant, so I have to have a two-by-two matrix.
In this case, we've taken the market attractiveness score out of the program evaluation, and that is our x-axis, or horizontal axis, here; the median is about a ten in this particular scoring system. On the y-axis, we've taken change in market share -- not market share itself, but whether the school is gaining or losing share -- and that creates four quadrants. The upper right is where you really want to be: you're in an attractive market, and you're gaining share in that market. A less desirable position is to be in an attractive market but losing share in that marketplace.
And then, you know, every school's going to have some weak markets they participate in. If they're gaining share in those markets, that actually could be quite attractive, especially if others are beginning to back away because they don't understand how to succeed in those markets as they weaken. So this can be a good place to be, but it may not be, depending on what's driving the weakness. Recall that when we assess market attractiveness, we look at, for example, employment and student demand. If the weakness in the market is employment, then you may be doing your students a disservice by bringing more and more students in when there are no jobs on the other side. So you have to be careful on that side of the spectrum.
And finally, weak markets losing share, these are where some of your easier decisions will lie. If you've got a small program in a weak market and it’s losing market share, that becomes a relatively obvious candidate for a cut, unless, of course, it's critical to your mission. So if you're a religiously affiliated school and you have a ministry program, those are generally small markets with limited employment opportunities and yet you may have to keep that small program because it's a central part of your mission.
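The two-by-two can be sketched as a simple classifier. The median cutoff of ten comes from the scoring system described above; the program scores in the usage lines are invented, and the quadrant labels are just shorthand for the four cases discussed.

```python
# Hypothetical sketch of the portfolio two-by-two: market attractiveness
# score (median ~10 in this scoring system) vs. change in market share.

MEDIAN_ATTRACTIVENESS = 10.0

def quadrant(attractiveness, share_change):
    """Classify a program into one of the four portfolio quadrants."""
    strong_market = attractiveness >= MEDIAN_ATTRACTIVENESS
    gaining = share_change >= 0
    if strong_market and gaining:
        return "attractive market, gaining share"   # where you want to be
    if strong_market:
        return "attractive market, losing share"
    if gaining:
        return "weak market, gaining share"         # check what drives the weakness
    return "weak market, losing share"              # likely cut candidate, unless mission-critical

print(quadrant(14.2, 0.05))   # attractive market, gaining share
print(quadrant(6.1, -0.30))   # weak market, losing share
```

As the mission example shows, the quadrant is a starting point for judgment, not a decision rule: a ministry program in the lower-left may still be kept for mission reasons.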
Now let's start to layer programs onto this and see what we have. This is an actual school -- the one for which we did the profitability analysis. Now we've added a couple of dimensions: circle size is proportional to the number of students enrolled in the program, and color reflects the program's rank in profitability, with dark green being in the top ten percent and pink in the bottom ten percent. And what you can begin to see here is, first of all, most of their programs are sustaining share, but they've got a fair number here on the left-hand side that are in unattractive markets. One of them, there at the bottom, is losing share at a pretty good clip -- 30 percent over a one-year period. So that program may be attractive now, but as it continues to shrink, it's very likely to end up contribution negative.
Now let’s layer in the rest of their portfolio, and you can see that overall this averages out to be a fairly healthy portfolio. You've got some very large programs in very attractive markets over on the far right that are strongly contribution positive. And then you look at all those pink dots and you realize that a lot of the money from the strong programs is going to subsidize other programs that are not as fiscally attractive. There's also one here that's a real question mark for me -- the pink dot between ten and 20 on the ranking scale, at negative 20 percent -- where, in a good market, it's a decent-sized program, but it's declining 20 percent a year.
I wouldn't be surprised if you took that program and reversed its share loss that it might also become more financially attractive. And we have a cluster up here that would be question marks to me: they're in attractive markets and they're gaining share, but they haven't figured out how to make money in them. And that's dangerous stuff -- growing a program in which you're losing money -- so you have to be very, very careful up here when you see a pink dot.
So that's just one framework for doing this, but I think one of the challenges facing institutions is thinking through how they can map out their overall portfolio and get a sense of how healthy they are on an absolute basis as well as relative to other, similar institutions. And we find, you know, every institution's profile is significantly different; we've seen schools where 75 percent of programs were in weak markets, and we've seen other schools where 75 percent of their programs were in very attractive markets. Long term, I think those schools face very different outcomes. So that's it for our presentation today; Bill, any last comments?
Bill Massey: No, I think we pretty much covered it. I guess the only thing would be that if you -- the listeners out there -- feel a bit overwhelmed, it's not surprising. I mean, this is all really new and advanced stuff. But it is a game changer -- it’s absolutely game changing -- and it will be hard to assimilate at the beginning, but it's worthwhile. The more you do it, the better you are at it and the better your teammates are at it. It’s all a matter of getting the data at that nexus between students and courses, then aggregating up to programs and linking it to market data. And I think you'd be surprised at how it changes your thinking.
Bob Atkins: Thank you, Bill. And let me pause -- I'll take any questions people have.
Mark Keleher: Yes actually, Bob, we did have a couple of questions. First question came on slide 24, the revenue per student credit hour. Is the revenue per student based on student residence status? For example, resident, non-resident, domestic, international, et cetera?
Bob Atkins: Yeah, it would be. Actually, it goes all the way down to the specific level of saying, “Okay, what was the actual tuition, net of scholarships, paid by this individual student?” So if they were out of state, that tuition would be quite a bit higher, obviously, than in state, assuming they didn't get a scholarship. It's on a per-student basis, so it incorporates all those nuances. Importantly, it also allows you to pull groups of students together and track the profitability of segments. We didn't show that today, but the system does enable that segment-level tracking.
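A sketch of what that per-student netting and segment rollup might look like. The segment names, tuition amounts, and scholarships are all invented for illustration; the point is only that revenue is computed student by student, net of scholarships, and can then be aggregated by any grouping.

```python
# Hypothetical sketch: net tuition per student, rolled up by segment.
# All names and dollar amounts below are invented for illustration.
from collections import defaultdict

students = [
    {"id": 1, "segment": "in-state",      "tuition": 9_000,  "scholarships": 2_000},
    {"id": 2, "segment": "out-of-state",  "tuition": 24_000, "scholarships": 5_000},
    {"id": 3, "segment": "international", "tuition": 26_000, "scholarships": 0},
]

revenue_by_segment = defaultdict(int)
for s in students:
    net = s["tuition"] - s["scholarships"]   # net tuition for this student
    revenue_by_segment[s["segment"]] += net

print(dict(revenue_by_segment))
# {'in-state': 7000, 'out-of-state': 19000, 'international': 26000}
```

Because the netting happens per student, residency, scholarship, and discount differences flow through automatically before any segment or program rollup.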
Mark Keleher: Perfect, thanks for that, Bob. We do have a couple of other questions but then if you have any additional questions, please feel free to enter them into the chat window on the left hand side of the page. The second question that came on slide 30, and someone asked, “What external benchmarking do you use for cost per credit hour, class size, et cetera?”
Bob Atkins: Two answers -- well, three answers to that. First, over time we're going to build our own, and the reason we're going to do that is to make sure the data is really all apples to apples. But there are two other external sources for this. One, if you’re a two-year college, is the Johnson County Community College cost study, and it's relatively inexpensive to join. The real cost of it is actually pulling the data together. So when we work together with clients, we're actually going to make those submittals, and we'll collect and refine the data in a way that's consistent with that cost benchmarking database for two-year colleges.
Now the other one, for four-year colleges, is the Delaware cost study. Again, it's very inexpensive to join -- just a couple thousand dollars; the difficulty is pulling the data together, and that's one of the reasons we designed our system in a way that would be consistent with the Delaware cost study, so we can get benchmark data for you and have it be an apples-to-apples comparison. So those are really the two or three ways. You may also find there's a small group of schools you can compare with. There are some cost benchmarks you can get out of accreditor databases as well, but they're pretty primitive; they're not at a program level.
Mark Keleher: Perfect. Thanks for that, Bob. We’ve got a couple minutes left here and a handful of questions as well. Thank you very much for all of your questions. This is probably for you, Bob. “How do you come up with new program ideas? Any recommended insights to start with?”
Bob Atkins: Yeah, this is really where the market ranking comes in. Mark, I don't know if you can scroll back and show them, but that ranking chart where we've taken every program in IPEDS and assigned a market score to it -- that's a pretty provocative list of potential new program ideas. Typically, even a big school might only offer two out of three of all the programs on a list like this, which means a third of them are still available as potential new programs. So that's one way, for what I would call programs that have been around for a while but are still large and attractive.
Now, the challenge is with programs that haven't been around for a while and don't fit the taxonomy of these databases. You know, the numbering structure in IPEDS was last updated in 2010, with maybe a modest update in 2012, so whole programs have come into existence since that numbering system was created. As a result, you've got to take a different cut at those and look at macro trends -- look at data like what NASA is spending money on, what venture capitalists are investing in, what new programs leading-edge institutions are offering, and that sort of thing.
So we do that -- we do a big scan every year or two to check and see what's out there on the horizon that's not yet in the databases. And when you do that, you have to be willing to accept that chances are you're not going to have the same level of data you would have for a more developed program and, you know, much less ability to compare it to other programs with like-for-like data.
The other place to look, though, I should mention, is your employment databases, to see what’s trending in terms of skills and jobs. There are a number of those databases -- we much prefer Burning Glass as a source -- but that's a good place to look for what employers want, which is another stimulus for new academic programs.
Mark Keleher: Perfect, thanks for that, Bob. Another question, and this is for you as well, Bob: “We have difficulty evaluating market demand for potential new programs. Burning Glass doesn't find much in our rural area, and BLS is too broad. How do we evaluate market demand in this case?”
Bob Atkins: We have two other data sets; I actually don't think either Burning Glass or IPEDS -- or BLS -- is particularly good at this sort of work. You know, student demand is student demand; you’ve got to see what students actually want. If you're operating on IPEDS data, its most recent data release is 2017, and those students made their decisions about their majors four to six years ago, if it’s a four-year program, so we're looking at market decisions in IPEDS that were made in 2013 or before.
So you want something much more current than that. We have two things that address that; one is data from Google search volumes, and we track the 200 largest programs in the U.S. at the county level, which should help you out in a rural area. The other is inquiry volumes -- data that comes from aggregators and agencies across the web. We've found it's a pretty reliable indicator; we do things like location selection and predictive models, and it turns out to be a very powerful variable in those predictions. So, interesting data: it tells you what program the student’s interested in, what degree level they want, and whether they want to take it online or on ground, and we have that all the way down to the census tract. The census tract’s about half the size of a Zip Code, so you can clearly see what students are looking for in your area. Always remember, in addition to looking for programs, many students look for schools, and any programmatic data is going to miss those students. So you do need to cross-check against IPEDS in terms of total volume of students, because you're not going to find the students who are picking a college first when you're looking at that demand data by program.
Mark Keleher: Perfect. Dr. Massey, this question is actually for you. You mentioned an article at the beginning of the webcast; can you share the name of that article with our audience one more time, please?
Bill Massey: Remind me what article that was?
Bob Atkins: It may have been your book on evaluating economic programs.
Bill Massey: Okay, well, I have written a good deal about that, but I’m still not quite recalling the article. Most of the Reengineering book is about the economics of programs. There’s also an article about the economics of programs that I wrote some years ago -- I could send this out to people -- so maybe the best thing to do is for me to pull together one or two of these things and you could send them out?
Bob Atkins: And I think Bill is referring specifically to Reengineering the University, which he published some years ago. Good book -- we’ve read it, and we actually have another copy in the office -- so keep an eye out for it.
Mark Keleher: A couple more questions and the slides will be available for download at the completion of the afternoon’s webcast. Bob, this one is for you. “How do you price your services to assess an institution’s academic program portfolio?”
Bob Atkins: Let me see -- I have a clever answer, but I’ll skip it. We base our pricing on school size, so it's higher for bigger schools, on the assumption that they can, you know, afford a higher price, and also because the work is somewhat more complex: they have more programs, they cover bigger markets, and often the meetings -- the workshops -- are larger, which is a little more complicated for us to facilitate. So we price based on the size of the school.
Mark Keleher: Perfect, thanks for that, Bob. We had a slide that we were talking about, Google search volumes; a member of the audience is interested if Google is the only search engine used for volume evaluation.
Bob Atkins: It is the only one we use. This data is kind of expensive to get, and Google has such high market share for this sort of thing that, while the others would be interesting, I don't think they would change the results materially. In the future we may bring in some of the others, perhaps, but for now we just use Google, because it's the big dog and it's got enough market share to be pretty representative of the rest of the world.
Mark Keleher: Okay, perfect, thanks for that, Bob. This one might be for both Bob and Dr. Massey -- I’d be interested in hearing from both sides here: “How do you distinguish, in terms of expenses, between undergrad versus grad programs within the same department? Do you measure the dean's salary and then split it between the two? What about faculty teaching?” [1:04:22.2: inaudible] -- it’s kind of a loaded one; there are three questions there. Do you want me to repeat?
Bob Atkins: Yeah, that's okay. Let's start at the bottom, with how you look at faculty costs. That's pretty straightforward: you look at the wages and benefits for that individual faculty member, and then we divide that by the total number of credit hours that faculty member is teaching. So if there are three credit hours for one course and three for another, then half the cost goes to one of those courses and half to the other, irrespective, at the moment, of degree level. If you believe we should be doing something different, I’d really be interested in your opinion, so please feel free to send us an e-mail. But not knowing anything more at this point, I would allocate it based on the number of credit hours taught, with the cost then based on the number of credit hours in each course. Now, as for departmental overheads, at the moment most of those are getting allocated on a per-student basis. Some of those could be allocated by instructor, and I think that’s a bit of a judgment call by the institution. As you saw in the table we were using, that’s a judgment call we could make with you in real time and adjust all the numbers accordingly. Please let me know if that didn't answer your question; I’d be happy to take another crack at it.
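A minimal sketch of the faculty-cost allocation Bob describes (the salary and course load below are hypothetical): total compensation is divided by total credit hours taught, and each course carries its proportional share.

```python
# Hypothetical sketch: spreading one faculty member's wages and benefits
# across the courses they teach, proportional to credit hours.

def cost_per_course(total_comp, course_loads):
    """course_loads: {course_name: credit_hours}. Returns cost per course."""
    total_hours = sum(course_loads.values())
    rate = total_comp / total_hours          # cost per credit hour taught
    return {name: rate * hours for name, hours in course_loads.items()}

faculty_comp = 90_000                        # wages + benefits, illustrative
load = {"NURS 201": 3, "NURS 305": 3}        # two 3-credit courses

print(cost_per_course(faculty_comp, load))
# {'NURS 201': 45000.0, 'NURS 305': 45000.0} -- half the cost to each course
```

With two equal 3-credit courses, the split is exactly the fifty-fifty case in the answer; uneven credit loads would shift the proportions accordingly, still irrespective of degree level.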
Mark Keleher: Perfect. A couple other questions, “Over what time period do you recommend reviewing contribution margin results? Quarterly? Annually?”
Bob Atkins: Personally, my guess would be by semester. Bill, what do you think? The reason I say “by semester” is that the results may vary a lot from one semester to another. In the data we're looking at, the fall semester was very different, in terms of contribution margin outcomes, from the spring semester. So I would like to have data by semester. You know, there's going to be a cost to doing that, so if you can only do it once a year, you can get by with that -- but when you do run it, run it for both semesters and look at them separately, even if it's on an annual basis, if that makes sense.
Personally, I’d prefer to have it, you know, shortly after the end of the semester.
Mark Keleher: All right, I think that’s all the questions we have this afternoon. I really appreciate everyone's questions. Upon completion this afternoon, the slides will be available for download via the post-meeting survey. You want to close us out, Bob?
Bob Atkins: Sure, well, thank you all very much. It's a pleasure to have this opportunity to speak with all of you. I loved the questions -- the richest set of questions we've gotten on any of our webinars, so that’s very much appreciated. If you have any others, please feel free to e-mail or call me, or Bill, and we’d be glad to answer them. Thank you very much, and thank you, Dr. Massey, for joining us today. It is much appreciated.
AUDIO END [1:07:26.4]