AUDIO START: [0:00:00]
Good afternoon everyone and thank you for joining Gray’s four-part webcast series on Best Practices in Program Portfolio Assessment. Gray combines over 40 critical program metrics with our methodology to help institutions assess the viability of academic programs by location. Running an effective program portfolio assessment process will be the focus of this afternoon’s webcast.
Before I pass the baton over to a senior partner at Gray, Mary Upchurch, a couple of quick housekeeping items. Please feel free to share any questions with us in the chat window on the left-hand side of the page; all questions will be answered towards the end of this afternoon’s webcast. We will also email a copy of the presentation as well as the recording to all registrants. Without further ado, Mary.
Mary: Thank you Mark, and I too would like to add my welcome to the fourth webinar in the Gray series on Best Practices in Program Portfolio Evaluation. In the prior three we talked about the elements of an assessment, the “whats” if you will, and today we are going to be discussing the how, as in how does one actually go about doing a full program portfolio assessment? We’ll share Gray’s approach and learnings: what matters, who to include and how to integrate data into an analytical decision process. Today, we’re focusing on running the effective program portfolio evaluation process itself. We’ve broken it down into two sections: what is to be considered at the start, and how to reach conclusions that will position an institution to take action. I will close with what we view as the critical takeaway points as you consider your own portfolio evaluation effort.
But first, if we set aside the excitement of finding new programs to launch, taking a look at your current program portfolio may be something that is eagerly anticipated by some, less comfortable for others, and for many it can feel like getting a root canal, to be avoided at all costs until it can’t be ignored. The first part of the process is to recognize that there may be barriers. Often, they come in the form of the “uns”: unacknowledged assumptions, unclear expectations or undefined objectives. Checking for and recognizing these will help you position the evaluation process for success and make it more credible to your institution.
Before You Begin
Before you begin, however, it’s usually helpful to make sure there’s nothing major blocking the path to a decision-making process, or otherwise your only output may be one of frustration. Institutional leadership should want to undertake this work and be ready to explain its positive value. There should also be thought given, in advance, as to how any program action or evaluation will be mainstreamed back into an existing process, whether that’s a new program launch or an academic program review. Once you have objectives and directions settled, then it’s time to consider how you are evaluating your organization’s assumptions, and to recognize that there are possibly many that you may not be aware of. Take a step back to see how you think about your programs. Are there silent or loud beliefs that are held? These can be things such as: what you already have is always better than what you could add or replace it with. Is the hope for growth really only going to come from adding new programs? Do you think you can’t afford to stop or sunset a current program? Do you believe looking at your portfolio every three to five years is actually timely enough? It’s important to recognize that programs have lifecycles and to be ready to identify where each of your programs is. Every one of these areas will impact how a program option may be considered and what you’re open to. No process by itself is going to overcome unidentified assumptions. A reminder: undertaking a process like this one is helpful; doing it systematically, however, creates strength, and very critically, it is not something to be done behind closed doors.
Where do you start?
We have to first look at all opportunities, and when doing any evaluation at Gray this means casting the net wide first and considering the 1,400-plus potential program areas, which we know sounds overwhelming. Whether your current program portfolio is small or large, single or multi campus, or possibly even multi state, looking at 40-plus data elements for even 50 programs will give you 2,000 or more datapoints to assimilate and assess. Expand that to two or three markets and you’re now looking at 4,000 to 6,000 datapoints, and my guess is that you and your team have already gotten a headache just contemplating the thought.
Actually, where can you start?
First is recognizing that there are a great number of potential programs for consideration. Factor in that everybody has their favorite, whether it’s a new program or a current program, and each of us has that magic-bullet datapoint or the infallible data source we rely on, and your challenge is already set. The reality is that no one datapoint, no single or for that matter multiple data sources, are going to give you the answer. An effective program portfolio evaluation occupies two dimensions; an examination of and contribution from each are critical to yielding quality recommendations and choices. Inside the four walls deals with the internal institutional knowledge that encompasses everything from academic quality and integrity, skilled and inspiring faculty, and resources that are available for use, whether those are facilities, digital and physical infrastructure, financial capacity and the ability to handle costs. Looking outside the four walls speaks to where market interest lies for you, the speed of evolution of the field of study, and what your communities may need, whether those are key to understanding a new program’s opportunities for employment purposes or the relevance of your current programs. All of these factors combined are critical for your consideration. Putting these together in a process that leverages the knowledge from both is key.
First: Identify the Market You Serve
Start by identifying the markets you serve. Confirm or figure out what your market is and then ask, “do we want it to be more?” Schools often serve more than one market, state or even a region, and oftentimes we don’t think of it in those terms. Start by asking your colleagues about their understanding of which markets you actually believe you’re in. You may be surprised. Gray undertakes a geo-analysis of enrolled students to find out where an institution is drawing its students, both in terms of distance and area, and then we look at both onground presence, if there are onground campuses involved, as well as online presence, to get a sense of how wide-ranging your markets might be, or whether there are any key clusters of markets, as you consider where you want to approach prospective students. Whichever approach you use, once you decide it does represent your target geographic market, then the program data analysis should be set to match it and you will be able to gauge the outside-the-four-walls information on a common comparative footing.
Second: Confirm Strategy
Next, ask yourself about your institution’s mission and stated strategy. Have there been any changes that require a revision? If so, make it explicit, because it will affect how your programs may be selected for consideration. Understanding the direction your strategy points you will make it easier to spot drift in the portfolio, both in your current program portfolio as well as in how you think about new opportunities. For example, declaring a commitment to excellence in education disciplines in engineering and health sciences would flag a certificate program in cosmetology as unfit for the institution.
Third: Decide What is Important
Take a moment to consider what’s important for you. Stop, step back and think about what the most important factors are for your institution, both in identifying a new program and in assessing the vitality of your current portfolio. It’s important to consider these within the context of your capabilities. For example, is your school’s reputation such that by simply offering new programs, students will want to come to you? Or do you need others out there in the market already promoting the program so that you can join in and leverage the awareness that’s being created? Are the needs of the employers in your market a major driver of what you offer? Understanding what may contribute to creating or sustaining a successful program will give you a functional, comparative foundation for choice.
Fourth: Walk Through the Data
Then, take the plunge and step into the data. We have found that creating options to set customized thresholds and scoring allows your institution to look at programs individually and comparatively, reflective of the areas that are of greatest importance to you. Now, we’re not going to spend time today talking about the data itself, as the previous three webinars stepped through that; if you’re interested you can certainly download that material. We have constructed an at-a-glance view to help you sort through those 2,000-plus datapoints. What’s in front of you here is an extract from a scorecard, and the color coding oftentimes is of help. Not surprisingly, pink, or if it’s looking a little darker, red on your screen, would signal something that’s not quite up to the standard that you were seeking. Green would indicate that it’s good. Blue would indicate that it’s fine.
From Data to Evaluation: Process
As I said earlier, data is only half the equation; the other element is the evaluation process itself. We have worked with many institutions to help create and craft a successful process that works for them and their teams, to identify which programs should be considered for start, stop, sustain or grow. At the heart is the Program Scan Workshop, designed to extract the best thinking from the leadership team and to create alignment across the team in its program recommendations. The workshop uses facts and data, and critically it incorporates the judgement of key stakeholders, focusing on identifying the best new programs, not just good enough, and the best opportunities for growth and consideration. It respects academic and institutional knowledge, and it’s focused on earning the understanding and buy-in of the key stakeholder community. It provides a workplan that positions the organization to move forward in a next-steps mode, and it is an efficient process. As one of our clients declared to their team, “in the next two days we will do a year’s worth of work,” and we are pleased to say they found that to be a true statement.
Why Does it Work?
It sounds spectacular, right? Perhaps spectacular is suspicious. Why does something as simple-sounding as a workshop actually work, when other processes may have failed for you? It is because we marry the knowledge and experience from inside the four walls with the data and market knowledge from outside the four walls, to allow you to actually evaluate programs and potential on multiple levels. It works because it is fully transparent, and I cannot underscore that enough. The parallel rather than serial approach, looking at all programs rather than just one at a time, keeps everyone on the same page. It works because everyone has the benefit of examining the quantitative and the qualitative factors, participating in the assessment and the formulation of recommendations for further consideration. There is both accountability and alignment. The process respects each team member’s discipline. For example, oftentimes folks in the liberal arts area feel they are at a disadvantage when so much quantitative data is brought to bear. In truth, skills such as critical thinking are core to what is required and demanded by many employers, and they are a factor. It is a point of analysis, it is a point of discussion, it is not just an opinion, and it is surfaced, discussed and utilized in the workshop. Programs succeed or fail on many levels, and the workshop is structured to engage each stakeholder, asking for his or her experience and input. It is designed to create alignment.
Where Do You Want to Spend Your Energy?
With the data as background, no process per se can ignore organizational dynamics. The workshop approach does not escape that either, but it does offer a clear path to choose between competing internally or competing externally for everyone who may want what you offer as an institution. It is simply a question of where to best spend energy and time for the success of your school and its mission.
Cooperative and Transparent Process
It’s not just the how but the who that makes this process successful. Workshop participants should include a range of individuals, or leadership, in the areas that I’ve noted here. You can see on the slide: the academic areas, enrollment, marketing, finance, operations, institutional effectiveness, student services and others that are key in your organization. It is important to think broadly in this space, because each person brings a wealth of knowledge with them. Over the course of the two-day workshop, the strength of your institution’s intellectual capital gets tapped and brought to bear for the team.
From Data to Evaluation: Workshop Agenda
At this point you might be asking yourself, “Alright, but two days, what are we going to possibly do for two days together?” Well, here’s some information on that. This is a sample agenda, which would be customized for each institution based on specific objectives, sizing and areas of emphasis. But as a general outline, we approach the process with a deep grounding in the data, small and large team discussions, and a focus on recommendations that give you a roadmap for next steps. The first day concentrates on data knowledge, data awareness and adjustment, and evaluation and scoring, along with a start on the selection and evaluation of new program potential. On the second day, we often shift to evaluating current programs, whether they’re onground or online, and determining the vitality assessment related to those.
Collaboration and Decision Approach
An important part of the process is determining up front how the team is going to make decisions. I cannot emphasize enough that not only transparency but expectations are important. People process information differently; they do it at different paces and speeds. Some absorb numbers like a sponge, others have an organized logic stream that they apply. Clarifying the decision process gives everyone an opportunity to participate, and it gives every workshop participant a voice. There are many, many decision models to choose from; there are only five that you can see reflected here. One that we have found to be very successful is a red, yellow, green approach, which allows individuals, as choices or options are laid out, to either declare support by registering green, declare their support with some reservations with yellow, or signal with red that they are not capable of supporting what’s on the table. The critical distinction here is that if you signal that you are yellow, you have an obligation to explain to the group what area, element or factor is not totally there for you, and oftentimes groups use that as an opportunity to talk through or improve the evaluation of their undertaking. If a participant signals red, it is definitely an inability to support what’s currently on the table, but it is not the power of “no,” because it carries with it an obligation to define what is out of alignment for them and give the group an opportunity to talk it through. Oftentimes, when it’s done, the group has landed on a better combination of decisions or selections, and everyone on the team feels that they’ve been heard and they own the outcome.
Collaboration and Decision Approach
The information exchanged can also include and incorporate other data, such as graduation rates, program persistence data or cost data that you would like to add. The team is able to reach conclusions about what actions should be taken. We create a summary scorecard for the team based on its discussion and evaluation over the course of the two days, which can then be used for future reference and updated and refreshed, so that you have a continuous evaluation cycle and it’s supported as you move forward.
The Workshop is Just the Start…
The workshop output is really the beginning. It serves as a platform for your next steps, final decisions, and any action chosen to be undertaken or executed in concert with your institution’s regular processes. You will find that it can take many directions depending upon what your team recommends. The targeted outcome remains grounded in your strategy, but the work doesn’t stop there. Additional input, monitoring and review are key to a healthy portfolio as you move forward.
Use Skills Data
There is a wide range of input options and information that you can consider; I’m going to talk through a few here. One example is skills data, which can be used to either update or inform course content and outcomes, whether you’re looking at a current program that may just be a little bit tired, or considering a new program and trying to get a sense of what would be involved with offering that program itself.
Follow a Program’s Market Health
It’s also important to follow a program’s market health. Once great, always great? Maybe. It’s critical to watch the external data to track a program’s health within the markets that you have designated as your target markets. This happens to be a scorecard extract; those of you who participated in the other three webinars would recognize it. It has a series of metrics that move either monthly or quarterly and, in a few instances, annually. In each instance it’s relative and relevant to your local markets and is a very useful tool as you track your programs.
Know and Track Your Competitive Environment
Competition: you’re not alone, and in most program areas you need to watch what’s happening with others to find out if you’re maintaining your pace, if you’re perhaps outstripping your competitors, which would be fantastic, or whether you’re able to remain competitive moving forward in the market. There are many ways to measure competitive comparisons; this just references several, including how many semester hours might be involved for a program, and tuition and fees, since those are changing, in state and out of state, so that you can see comparisons and then look at how you’re positioned. This can also include share and a wide range of other factors.
Use or Create a Program Dashboard
Then finally, use or build a portfolio metrics set that takes into account those metrics that are most important to your institution. Make it part of a program review process. On this path you’ll be better able to realize your intentions. Here are some examples of new metrics that might be of use, and others that are very standard and that you may already apply. For those institutions that are already utilizing this, it’s a great way to refresh data as you move forward. For those that are trying to institute a systemic and institutionalized assessment process, creating a dashboard is possibly one of the more helpful tools that can be applied to help you realize your goal of a vibrant institution.
As you might imagine, we could spend a great deal more time diving deeper, but let me summarize the most important factors, from our viewpoint, for you to consider in managing your program portfolio evaluation process. The most important takeaway is also the simplest: you do have the ability to consistently, comprehensively and cooperatively evaluate programs based on market conditions. Our advice is, don’t ignore any of the big market elements when you’re looking at them, and remember that every data element has some flaws or limitations. On the flip side, embrace institutional knowledge but don’t allow it to dictate either. Transparency of process is key to securing alignment. Also, don’t make data king in that process; understand it and put it into context for better analysis. The more clarity and consistency of understanding there is in the stakeholder community, the better, richer and more accurate the dialog will be for your institution. Data can and should inform, but it is management and academic judgement that is required to reach informed choices.
Thank you for your time. I think, Mark, we have a few minutes for questions.
Mark: Yes, absolutely, thanks Mary. We have had a couple of questions come through during the presentation this afternoon, and if anyone has any additional questions, please feel free to enter them into the chat window on the left-hand side of the screen and we will be sure to ask them. First of all, Mary, what is the biggest challenge to a successful program portfolio assessment process?
Mary: The biggest challenge? There are probably several challenges, and of course it’s situational to an institution, but overall I think it’s the team’s ability to work with external market data, understand that it is not being presented as the deciding factor, and instead take it as an input that can be combined with the team’s best judgement. Oftentimes I think we’re all accustomed to having something presented with data and information to support a market view, out of which you can reach only a single conclusion set. But I have to say that you can find program data that for a market does not look very strong, but for an institution might be an outstanding program, one that they can stay in for a long time and in which they’re highly successful. It requires the synthesis of and the discussion around that data to reach that kind of conclusion. Avoiding a knee-jerk reaction to whether something looks good or bad is probably the biggest challenge, and being willing to dig in and really exchange perspective, insight and knowledge with fellow stakeholders is the most critical and also the most rewarding aspect of the process.
Mark: Perfect, thanks for that Mary. From your experience, do faculty resist using market data to evaluate academic programs?
Mary: No, though I think that is sometimes the preconception and a possible misconception. Again, as I said at the very beginning, everybody has their favorite data and data source, and they’re always testing anything new that’s brought in to measure up against it. I don’t think that faculty members are any different in that. I think that oftentimes they can be incredibly clear-eyed, and they bring in-depth knowledge of what it takes to consider pursuing a new program. In the course of our workshops, we try to remind everyone in the institution that new programs are exciting and they are critical to making sure you keep pace with change in the world. However, it is far easier to grow a very successful program than it is to undertake a new program. Oftentimes, an institution isn’t aware that a market is actually even better than they realize for a program in which they’re trying to be successful. The academics seem both receptive to that conversation and very realistic when the puts and takes are put onto the table about what would be involved in each area.
Mark: Great, thanks Mary. We have two more questions, but if anybody has any additional thoughts, feel free to enter them in the chat window on the left-hand side of the page. Mary, what decision-making process or structure produces the best outcomes? You talked earlier about the red, yellow, green methodology; just trying to understand, from your experience, what is the best decision-making process or structure?
Mary: Well, the one that’s not the best is where the person in charge decides and they are going to decide no matter what, in which event we don’t need a workshop; we can just provide some data and then they can decide. That one’s definitely not one of the best. The others depend on the nature of the institution and how choices are made; it really is guided by that. Let me also just clarify: the decision process, when I’m talking about it in this context, is only for that two-day workshop. It does not upend the normal faculty council’s regular processes, the work of finance, any of that. It is just brought to bear to help sift through a large amount of information and narrow down the field where additional energy is going to be invested for evaluation purposes, because there’s quite a bit more that has to be assessed before a program is either launched, or even considered for launch, or considered for expansion. I think that’s one of the reasons why I mentioned red, yellow, green. It has been greeted with skepticism by a number of clients I have worked with, whose biggest concern is that people may not speak up. What we have found very consistently is that people do speak up, because it’s not an obstruction; it’s a way to talk through factors to be considered. It’s very important that all points of view are brought to the table, because people have institutional knowledge and people have expertise in their areas. I would say red, yellow, green is probably one of the most effective, but it’s not the only one.
Mark: Right, thanks for that Mary. We have one other question that has come in; unless anybody else has any additional questions, this will be the last question for this afternoon’s webcast. Mary, how often do you recommend that institutions review their academic program portfolio?
Mary: There’s a two-part answer to that. After undertaking a program workshop, or whatever other process your institution chooses, to scan the horizon for new program opportunities, the list of identified possible programs for further evaluation should be a meaningful number. Depending upon your capacity to actually undertake and launch new programs, that might mean anywhere from five to twenty possibilities, because some of those are going to fall out. They will either prove not to be as good as they looked, or they may prove to be impossible to find faculty for; there could be a wide range of reasons, but nonetheless it’s going to be a funnel size that is larger than what you could normally deliver in one year. That’s one look, although you might want to take a refresh look every 12 to 18 months, just to see if something has shifted within the programs you’re considering or something new pops up; that’s not the full-blown workshop structure we’re talking about here. For a current program portfolio, for a heavy review, a complete review, we would suggest every other year, or for some institutions possibly every 36 months. If you’re employing the more institutionalized dashboard, then you start to see, as a matter of routine, if there are changes, either positive changes or areas that may require some additional focus; they would surface naturally there. Market conditions do move, but it’s rare that they move terrifically fast for an existing program. 24 months is a good measure, but it can extend as long as 36 months.
Closing and Gray’s Upcoming Webcast
Mark: Great, thanks Mary. We have not had any additional questions come in. I want to thank everyone for the questions that did come in, and I will turn it over to Mary to close this afternoon.
Mary: Well, let me extend my thanks to everyone on the bridge. As a brief reminder, Gray’s next webcast is going to be on May 24th at 2 o’clock Eastern, 11 a.m. Pacific, and it’s going to be on Demand Trends in Higher Education. Of course, we hope you join us. Thank you for your time today, and if you have any questions, our contact information is listed on the slide, my number and my email as well, and I’d be happy to respond back to you directly.
AUDIO END [0:32:05]
Posted by Mark Keleher on May 17, 2018 2:16:00 PM