Learning Experience Leader

75// Evaluation and the Judgement of "What Is" against "What Should Be" with Dr. David Williams

December 28, 2021 Greg Williams

Today’s guest is my father, Dr. David Williams. Dr. Williams is an emeritus professor from the Department of Instructional Psychology and Technology at Brigham Young University, where he conducted and studied evaluations of teaching and learning in various settings. He currently serves as a missionary with his wife Denise for the Church of Jesus Christ of Latter-day Saints.

Today we discuss: 

  • Everything Evaluation, what it is, and all of the moving parts associated with it
  • The difference between evaluation, measurement, and assessment 
  • Dr. Williams’ 10-part evaluation model, along with multiple examples 

Resources

Join the conversation on the LX Leader LinkedIn page: https://www.linkedin.com/company/learning-experience-leader-podcast 


Dr. David Williams:

What I felt, in my 40 years basically working in the field of evaluation, was that the main lesson I was trying to learn is: how do you help human agents be better evaluators? Because just in the act of living from day to day, we are continually evaluating. Do I want to do this, or do I want to do that? How much of this do I want to do? How much of that? Who do I want to involve? And the people around me have different answers to those same questions I was asking about myself. So what do I do about that? We're trying to do this together: taking what we do as human beings and translating it into better ways to evaluate, using this 10-step model that I've talked about. In a lot of ways it seems overwhelming, but what I've tried to do is help people see that you're already doing it; you are already evaluating. So it's not like you've got to start something new. It's just: how can you do what you're already doing better?

Greg Williams:

From the beautiful state of Utah in the United States: hello, and welcome. I'm Greg Williams, and you're listening to the Learning Experience Leader podcast, a project devoted to design, leadership, and the psychology of learning. This podcast helps you expand your perspective of learning design through conversation with innovative professionals and scholars across the world. Today's guest is Dr. David Williams, an emeritus professor from the Department of Instructional Psychology and Technology at Brigham Young University, where he conducted and studied evaluations of teaching and learning in various settings. He currently serves as a missionary with his wife, Denise, for the Church of Jesus Christ of Latter-day Saints. Today we discuss everything evaluation: what it is and all the moving parts associated with it; the difference between evaluation, measurement, and assessment; Dr. Williams' 10-part evaluation model, along with multiple examples to bring it to life; and much more. This is a special episode for me, because Dr. Williams is my father. I grew up learning about evaluation from him, but it wasn't until much later that I actually got a sense of what it was. So I hope that you enjoy this conversation. You can access all sorts of helpful resources in the show notes listed for this episode. With that, let's get started. Dr. Williams, it's wonderful to have you here on the show today.

Dr. David Williams:

Thank you. Glad to be here.

Greg Williams:

I remember, growing up, you said, "I'm an evaluator," and you know, I struggled to understand what that meant. It wasn't till I was pretty much an adult, and you and I went on a road trip, and I asked you about it, and my mind was sort of blown. I was like, wow, evaluation is really cool. So how did you get interested in this? What got you into the field, and what kept you in the field, as it relates to all of this stuff?

Dr. David Williams:

Well, I got into the field because I was looking for a way to pay my expenses so I could prepare to go to medical school. In order to do that, I was working as an undergraduate assistant to a graduate student who was working for some faculty at the university I was attending. And they basically said, you're going to get paid this pittance that we're paying you to take this class from a fellow named Adrian Van Mondfrans. And I thought, okay, I guess I can do that. As I took the class, at first my criterion was, I have to do this in order to get paid, so I guess I'll do it. But I began to listen to some of what he was saying. He was talking about how in our everyday lives as human agents, we are making so many choices, and we don't really understand how we do that. It's part and parcel of what we do. It's like we're fish in water, not realizing that the water we're swimming in is evaluation. And as I thought about that, I thought, I want to know more about this, because just producing things or working on things, without thinking through why I'm doing it and whether it makes any difference, just isn't very interesting to me. So because of that interest, I said to him, okay, I've been working for this one professor all semester, and I took your class so I could get paid by the professor, but now I'd like to work for you. So he hired me the next semester, and we did all kinds of interesting things where we basically helped people realize that they were sort of acting blind in whatever they were working on. They were not paying much attention to how well they were doing, or how well their products were becoming, or what the issues were for the other stakeholders besides themselves. And I thought, if people are doing this as part of living as human beings, why aren't they doing it better? And why are they kind of ignorant of what it is that they're doing? So I just decided, I want to study this more. I want to learn more about this, see if I can begin to improve how I do it, and maybe be able to share with other people how they can improve what they're doing. I discovered there was this whole field called educational evaluation, or more generally, quality assurance, and that people had formal careers doing this. But even they needed more information about how to do this well and how to keep doing it. So I ended up talking to a lot of these professional evaluators and just asking: how did you develop as an evaluator, from your childhood up through where you are now? A lot of them were surprised by the questions; they hadn't actually asked themselves those questions. As we worked through these interviews, I ended up publishing several of them in an evaluation journal, which we can put in your podcast notes. I just began to realize that people are all learning how to live by learning how to evaluate: how to make decisions, how to make choices, how to live with the implications of their choices, and how to do all of that in the context of the other people around them making their own choices. And that's really what it means to be human.
And so to me, that got really exciting, because I wasn't just being a technologist carrying out some technological task; I was learning how to be a better human, and how to maybe invite other people to be better humans as well, just by introducing them to some of these ideas. So that's pretty much how it worked. I kind of fell into the career, and then, because of the people I met along the way, it expanded beyond a career and beyond my profession to kind of be my life. And that's why, growing up, you probably heard so much from me about evaluation issues and yet still didn't quite understand what it was. Because it's not a simple thing, unless you simplify it like I've been trying to do. And I'm still discovering it. You know, when you asked me to do this podcast, I did an immediate evaluation of myself and said, well, I've been retired for about five years; do I even remember anything? So yeah, it's something that we are going to do all throughout our lives, and personally, I don't think we'll ever stop doing it.

Greg Williams:

So you and I have had many a conversation about what evaluation is, why we do it all the time and maybe don't realize it, and how important it can be. But for you, what is evaluation?

Dr. David Williams:

Okay, the simplest way for me to describe it is: comparing what is to what should be. We have ideals in life, in all kinds of settings, or we are coming up with them, or we are negotiating them with other people. And then we have life: real products, and real people who make those products and develop relationships and so on. So whenever we're evaluating the things that we care about, we're comparing them to some ideal that we have. It could be an ideal that's just internal to us, or an ideal that our society holds, or some combination of those. So in your mind, just consider a visual that I think maybe you can put in the show notes. Imagine a triangle, and across the top of the triangle, at its very point, there's a horizontal line. Down from that horizontal line hang two boxes on strings, kind of like a pan balance. Okay, like a scale or something. Yeah, a scale. The one on the left says "determine what is," and the one on the right says "determine what should be." And you're comparing, basically like you would with a pan balance: how does "what is" weigh out compared to "what should be"?
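
A minimal sketch of that pan-balance idea in code, with purely illustrative names and numbers (this is an editorial illustration, not a formal evaluation method from the episode):

```python
# A pan balance in miniature: evaluation compares "what is" (observed)
# against "what should be" (an ideal). All names and numbers are illustrative.

def evaluate(what_is: float, what_should_be: float) -> str:
    """Weigh an observed value against an ideal criterion."""
    if what_is >= what_should_be:
        return "what is meets or exceeds what should be"
    return f"what is falls short of what should be by {what_should_be - what_is}"

# Example: a stakeholder's ideal is that lettuce stays crisp in the
# fridge for at least 7 days; we observed 5 days.
print(evaluate(what_is=5, what_should_be=7))
```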

Greg Williams:

As I'm thinking about this, I know there's a common lumping together of measurement and evaluation, of gathering metrics or obtaining data. But measurement and evaluation are not the same thing, I would assume, based on the way we've talked about this. Can you maybe shed light on that before we dig into evaluation some more?

Dr. David Williams:

Sure. Yeah, measurement is helping us with the "what is" side of that balance. When you're measuring things, you're seeing how many of this thing there are, or how much of this thing there is. You're not really judging the quality of anything; you're not comparing it to what should be. You're just trying to understand what is.

Greg Williams:

What are the facts? What's happening? There are no values associated with it; it's strictly, like you said, what is. I heard this described once: I could measure my height against a bunch of other people's heights, and I have data that comes from that. But when I ask, is my height good enough to make the basketball team, that becomes an evaluation, because I take that data and lay it against the criteria of what should be, like you said, on that scale. Now I'm comparing what is (my height, which is a piece of data, a measurement) to what should be, which is an ideal state that someone somewhere has described for how tall basketball players should be. Is that a valid example?

Dr. David Williams:

Yeah, that's a great example, and it applies in so many other situations. In this little diagram I'm wanting to show you, over at numbers six and seven of the 10 steps is where you would use different kinds of methods to collect and analyze data. And that's where measurement plays the biggest role.
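
To make that height example concrete, here is a small sketch (the sample heights and the 78-inch criterion are invented for illustration): measurement produces the data, and evaluation only begins when that data is laid against a criterion.

```python
# Measurement vs. evaluation, using the height example.
# All heights and the criterion below are invented for illustration.

heights_in = [70, 74, 78, 81, 69, 76]          # measurement: raw "what is" data
my_height_in = 72                               # another measurement
average_in = sum(heights_in) / len(heights_in)  # still measurement, no judgment

# Evaluation begins once a criterion ("what should be") enters the picture:
ideal_min_height_in = 78                        # someone's stated ideal for the team

makes_the_cut = my_height_in >= ideal_min_height_in
print(f"Average of sample: {average_in:.1f} in")           # a fact
print(f"Tall enough for this criterion? {makes_the_cut}")  # a value judgment
```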

Greg Williams:

Okay, so measurement, in many ways, is kind of a subset of the overall evaluation process. Is that a fair way of looking at it?

Dr. David Williams:

Yeah, it's just trying to help you understand what is, so that you can compare it to what should be.

Greg Williams:

Okay. Well, let's talk through this model. I'll definitely include it in the show notes, so whether you want to look at it as you listen along or check it out afterwards, it'll sort of anchor our conversation. So we have this triangle on a fulcrum, right, the scale, and hanging on its two sides are "what is" versus "what should be." And the first of these items starts over on the right. So you recommend starting with thinking about the ideal state, or what should be. Is that right?

Dr. David Williams:

Yeah. I think a lot of times people just assume the "what should be," and they jump right into describing what is, and then evaluating against undefined and unclear criteria. So to me it's really important, not only for the evaluation process but also to clarify what we are evaluating here and what should be. You know, if I say, let's evaluate how well my refrigerator works, well, which part of the refrigerator are we particularly concerned about? If we're talking about the freezer, we're going to have very different criteria than if we're talking about the vegetable section, right? So clarifying what should be: you have to think that through as you're defining what it is you're going to evaluate.

Greg Williams:

Okay, so we start with these pieces of what should be. What are some of the elements in evaluation, when trying to line that out, that you've seen to be really important and helpful here?

Dr. David Williams:

Okay, so the first item for me is to understand the context or the background for the evaluation. That has a lot of different pieces we could talk about in greater depth: who is evaluating? Is this an internal or an external evaluation? Is this a professional evaluation, or more of an informal evaluation? What brought this need for evaluation about? What are some of the historical issues? Who are the players calling for the evaluation, that sort of thing. It gives you a way to start thinking, okay, that's what the criteria are going to be; they're going to focus in on this or that element. So that's the first one. And then the second one: if we're looking at this diagram as a clock, over at about two o'clock is the first question, about the context or background, and just below it, who are the stakeholders? Who are the people that, as we look at the context and background, care about this evaluation and care about the thing being evaluated? That is a super important element that we could have entire courses on. One of the concerns I've had recently is that I've heard some people in some of the professional evaluation organizations are deciding that society at large, as they define it, should be among the key stakeholders in every evaluation they do. So they're bringing in various values that they hold dear, and that they think all professional evaluators should hold dear, and saying, we've got to apply these to every evaluation, or the evaluations aren't going to be sanctioned by us as a professional organization. So there are all kinds of things to be discussed there; politics gets involved, and power gets involved. All kinds of things go into these context, background, and stakeholder questions.

Greg Williams:

Sounds like a critical part of that stakeholder and context piece, which you've touched on a little bit, is considering what values are on the table that are going to be a really important part of this evaluation. And by values, I guess I mean: what are the things that matter most, and in what language are those things being communicated as more or less important to the particular stakeholders in this context? Is that accurate?

Dr. David Williams:

Yes, that's a good summary.

Greg Williams:

Okay, so we've got the table that this scale is standing on, which is the evaluation context, with the background and the stakeholders. So then we move down the clock a little bit, towards about three or four o'clock. What are those items?

Dr. David Williams:

Well, again, putting everything in a drawing like this, or talking about it linearly like we are, kind of isn't fair to what really goes on, because all of these are happening at the same time, right? We could talk about that more later. But the main point in number three is: what is the thing being evaluated? In evaluation, we call that the evaluand. It could be a thing like a program; it could be materials; it could be people who are supposed to be producing materials and products of various kinds. It could be projections into the future of how things are going to be. It could be an examination of things in the past; for example, historians do almost all of their evaluation on things that have already...

Greg Williams:

Already happened.

Dr. David Williams:

Already happened, or are no more. So, what is the thing being evaluated? That's a big question. And again, that has to be discussed within the context of who the stakeholders are that care about that thing, and that thing might be different for different ones. Let's say you're developing a test. What the test is for the subject matter folks could be very different from what the test experience is for the people taking the test, right? So the evaluand is going to be different for each stakeholder, because they're each going to bring to bear different values that highlight what they care about in that evaluand. And that leads to the fourth item, which is: what are the criteria for judging the evaluand? Criteria are basically concrete ways, if you can get them (and often we don't get very concrete ways), of translating the values that the stakeholders care about into judgments of the evaluand.

Greg Williams:

I'm going to stop us for a second to think through some examples. We've talked about a few different types, so let's look at the model so far through the basketball team, because we mentioned that, and the refrigerator, because we mentioned that too. Let's say we're evaluating whether Greg should be on an NBA basketball team. Not a very hard evaluation, but one we will make nevertheless.

Dr. David Williams:

Depends on how you define the evaluand.

Greg Williams:

That's true. Right? So based on my limited knowledge of what is true right now, or what is fact, for the NBA, I think it'd be an easy evaluation. But to determine what should be, we start with the context and background, right? So we learn all about: what is the NBA? What do they value? What are the teams valuing? Who are the stakeholders? You have not only the teams and the coaches and general managers; you have the fans, you have the teammates, you have, I don't know, the sponsors, like Nike and others, who want to get in on the action. You've got a lot of stakeholders in the NBA, right?

Dr. David Williams:

Then you've got society at large. You know, like in football, you've got people kneeling for the flag and all kinds of things, right? There's a lot more of that going on in the NFL than in the NBA, I guess.

Greg Williams:

Yeah, at least on that particular piece. So there's a whole ton there, which, depending on the type of evaluation you're doing, you could dig into for a long time, or not. Because when we get to the criteria and the evaluand, this could be a swift evaluation. Would the evaluand be me, like, should Greg go into the NBA? Or would it be the NBA?

Dr. David Williams:

Well, the way you're starting to ask that question, the evaluand would be Greg as a basketball player. And you'd be asking how well he matches up to the criteria for being a successful basketball player in the NBA.

Greg Williams:

Right. So, all right, Greg's the evaluand, the thing being evaluated in this situation. Then the criteria could be things like, well, height isn't the only thing that matters, but it's important, and skill in all of these different areas. So there's probably a number of skills it breaks down into that a scout would be looking for if they came and watched me playing, you know, in my driveway, because I don't play anywhere else. And there's a bunch of other criteria they would be looking for, right, to determine if I match the "what should be": an ideal basketball player.

Dr. David Williams:

Right. An example just pops into my mind: the movie Moneyball. The whole theme of that seems to be the Brad Pitt character redefining the criteria for what it means to be a successful baseball team. And he's fighting his own scouts and the fans and a whole bunch of people to follow this quiet guy with all these statistics at his disposal, who's saying: if you really want to win in the long run, these are the criteria that matter the most. And therefore the evaluand isn't any given player so much as the combination of players that gets the most people on base.

Greg Williams:

That's right. Yeah, because who gets on first base is one of the most important criteria for who they should get on the team, right? And that's very different from who's the snazziest player getting a lot of attention from the press. I think that's a great example of how he's tweaking the criteria, and of how hard it is to shift a vision of the ideal state of the game of baseball. Now, there's still the goal of winning games, right, and winning the World Series; that's still a critical piece of what success looks like.

Dr. David Williams:

It really juxtaposes all these stakeholders who have different values and different backgrounds and contexts, and who are therefore defining differently what is to be evaluated here.

Greg Williams:

That's a great example. There's a lot in there, but I'll run the refrigerator example a little quicker. So the refrigerator: what should be? Well, you just want it to work, but breaking down what "work" means could be exhaustive. The context and background: food has to be refrigerated to be edible, right? There's electricity you've got to have, and other things. Stakeholders: you've got not only you, but anyone in your home, or anyone that's visiting. And I guess you also have the manufacturer and the store and the installer and all those other pieces. But ultimately, the evaluand, I would assume, is the refrigerator: is the existence of this refrigerator meeting expectations of what it should be doing? So the criteria would be: the food is cold, the lettuce is crisp, a certain energy or wattage is being used (because there's ENERGY STAR versus not, right, and there's greenhouse gases and all that type of thing). Are there parts that are recyclable or not once this refrigerator dies? How many years is it going to survive? So depending on the stakeholders, they all have their own criteria. Me as a consumer, my list is pretty short, but others might have much more significant criteria, like how easy it is to ship it or to store it at Home Depot, or whatever. Am I getting carried away with that?

Dr. David Williams:

No, those are all great examples. I guess the key point is that you're redefining and refining the definition of the evaluand with each of those statements. You're saying, okay, from this stakeholder's point of view, let's say the manufacturer, the main evaluation is: how many of these refrigerators do we have to take back because they don't work right? Or for the stockholders it's going to be: how much money are we making by selling these refrigerators, and can we afford to invest in another version? A very simple example would be our own refrigerator here at our house. To me, it was working just fine: kept the food at the right temperatures and all that. But my wife discovered a little puddle in front of the refrigerator, and she said, we've got all these visitors coming, and I'm worried that the water dispenser is broken, that it's not going to work. So it wasn't the whole refrigerator that she cared about. I mean, she cared about it, but her worry was: what about cold water for guests? To me, that was not very important; we have plenty of very cold water coming out of our tap, we don't need refrigerated water, especially if it means I've got to go do a bunch of work to find somebody to fix it. So what we did is we just pulled the refrigerator out and turned the water off, and decided we'd watch and see what would happen. So we're gathering some data. Then a few days later we turned the water back on and pushed the refrigerator back in, and we're watching now. The guests still haven't come, but we also haven't gotten to the expensive part of calling somebody; we're just waiting to watch and see. Maybe that water came from somebody just spilling some water somewhere inside the fridge, and it drained out at that point. We don't know. Anyway, that's just a narrow little look at the refrigerator, about two consumers, two stakeholders, who have slightly different criteria. And you could then expand that into our entire relationship and our marriage, right? But that's the way it is with all these things. You can look at them at a big scale and a small scale and lots of things in between. You can look at subparts of an evaluand and decide: the water dispenser, that's the part that we want to make the new evaluand; we don't care about the rest of the fridge right now. You can get down granular like that.

Greg Williams:

And it branches off to other evaluations, like: what does it mean to successfully host guests at your home? Maybe a subpart of that is that they can drink cold water at my house. And if that really matters a lot to you, then you'll probably get a lot more upset about little puddles on your floor than if you're like, you know, everybody in our house has water bottles, or we drink out of the hose here. I don't know. Okay. Well, up to now we've talked about the first four steps, and we've mentioned a few of the other ones, but not formally. So let's keep going around the model. We're down at six o'clock, at the bottom of the clock. What's next?

Dr. David Williams:

So now, based on one through four, you're going to come up with questions: what questions will answer how well the evaluand meets the criteria? Sometimes this is where you actually start in a formal evaluation, because somebody contacts you and says, we need you to evaluate this product that we're developing, or have developed, and we want to know this and this and this. So they've already got the questions. But then you have to work through that with them a little and say, okay, whose questions are these? What are their values? And what criteria for success do these questions imply? Because people don't always think about those things. They just have questions.

Greg Williams:

Right. You kind of have to work backwards a little bit to dig out those first four items.

Dr. David Williams:

Exactly. And sometimes that's the point where you say, you know, if you would modify your context and your background just a little bit, look at these things: there are stakeholders over here whose criteria and whose definition of the evaluand these questions don't address. Are those stakeholders important to you? So the questions are a great place to start, but I didn't put them at number one in this model, because I feel like it's when you start asking those questions that maybe you're going to get pushed back and say, oh, we've got to answer these other things first. So, I don't know, do you want to work through one of your examples on that?

Greg Williams:

Yeah. Well, that has me thinking about the basketball example again. If I just started out with a question out of the blue, like, hey, am I a good fit for the NBA? I could start with that question for an evaluation, right? Am I good enough to be in there? But if I haven't thought about what the NBA values, who the stakeholders are for the NBA, whether I'm the actual thing being evaluated here, and what the criteria are, that question may or may not make a lot of sense by the time I go back through those items. I also think about a typical training request in the corporate world. It usually comes in the form of a question: hey, can you build this for us? Because we've determined it's necessary. Or, can you do X, Y, or Z? And a lot of the initial scoping questions, I think, share a lot in common with these evaluation questions, which is: do you really need an elearning here? Because it sounds like what you want is this outcome. In other words, these three pieces of criteria are really important to you, right? You want sales reps to sell more, or whatever, and you've made the evaluation that what's needed for that successful outcome is a course. But what about these other criteria? Sales reps don't have any time, so they can't take a two-hour elearning, right? Have you thought about that? So I think it's very common, in my experience, that requests come right in at this fifth step, and I have to work backwards a little bit with the stakeholder.

Dr. David Williams:

Exactly. And that can be problematic, because people may not understand that you have to do all that groundwork first. They get frustrated and say, look, I just have these questions; can you just go get the answers? That's what I want. So in a lot of ways, being an evaluator, or being an instructional designer who evaluates, means you have to be an educator and kind of a coach, where you're backing off and helping people see: oh, I can't really answer these questions, or I don't even really know if these are the questions I want answered, until I spend a little bit of time thinking about these other things. Or maybe a lot of time. Through this interview we've talked about a lot of stuff that probably most people don't think has anything to do with evaluation, because for them it means you collect data and make a decision. Or maybe you make a decision and then collect data to support the decision you already wanted to make. It can all start with a question, but why do we have to do all this other stuff? I guess that's one of the main things I've tried to emphasize in my career, as I've been learning about all this myself: evaluation is a lot more than just gathering data and making decisions. It's all this other stuff that people don't think they want to spend the time and money on (and time is money). So they don't want to spend that much energy on these other things, and they slip through all of that really fast and get on over to the methods and the analysis strategies. When people think about evaluation, I think a lot of them think about those: it's testing, or it's measuring, or assessing this and that. And it's true that those are part of it. So we could skip on over to six and seven if you want. Six is: what methods should be used to answer these questions that we've finally come up with? And seven is: how do you collect and analyze the data? That's at about seven o'clock on our little diagram, and it's definitely over on the "determine what is" side of the fulcrum that we're looking at. Now, there are lots of methods, and a lot of my career has been spent helping people realize that most methods are qualitative, and that qualitative methods have been given a bad rap for many years in the social sciences. But really seeing and hearing and trying to understand things from the perspectives of the people that are impacted is vital. Then sometimes some of those methods can be converted into quantitative methods. We can say, okay, we've decided these are the things that are key from our qualitative observing and listening to people, and so on; now we're going to formalize those into some more quantitative measures. And that's where you have to be really good at measurement, because that translation step is tricky. But if you want to get data from a lot of people on a lot of issues, you're going to have to move to quantitative at some point, because in a lot of contexts you can't observe and listen to everybody, right? So that's where all those discussions about quant and qual, and the different approaches to evaluation, come in. There are many evaluation theorists who have written lots and lots of books that are very helpful.
I think, though, that they sometimes overemphasize the methods to the detriment of the evaluation, because they haven't addressed those one-through-five issues very clearly.
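
One way to picture the qualitative-to-quantitative move described above is this small sketch (the themes and observations are hypothetical): qualitative field notes get coded into themes, and the codes are then formalized into counts that a wider-scale instrument could measure.

```python
from collections import Counter

# Hypothetical coded field notes: during qualitative analysis, each
# observation from interviews or from watching learners was tagged with a theme.
coded_observations = [
    "confusing navigation", "too long", "confusing navigation",
    "helpful examples", "too long", "confusing navigation",
]

# The quantitative turn: formalize the qualitative codes into frequencies,
# which could then become items on a survey given to many more people.
theme_counts = Counter(coded_observations)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```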

Greg Williams:

Yeah. So I'm going to extrapolate this a little bit to a tendency that I've had myself; maybe other instructional designers or learning designers can identify with this. Sometimes I feel like, okay, there's been a gap, people need to learn something, so I'm given a question or a request. Already we've skipped to number five. The context, the background, all the stakeholders, the criteria for success, what's actually going to be evaluated when we're done with all this: I've jumped past them to, we need a training. And so I jump to, cool, what do we need in the training? I'm kind of wanting to know what should be, but sometimes the stakeholder just says, well, we need this. So there's an information dump, and I make it look nice in a presentation. And then I think, oh, the last step to add is the E for evaluation. So let me slap a survey on the end; they can give it a smiley face or a frowny face, with maybe a box to fill in how they feel. That's my quantitative method to determine how good it was, and then I'm done. In my mind, evaluation might just be that little smiley sheet at the end. I'm being a little facetious here, but to be honest, I feel the draw and inclination to do this all the time, because it's fast and relatively cheap and easy. But it does skip over the whole point: what's the outcome that we're trying to drive, and is this intervention helping to drive that outcome?

Dr. David Williams:

Yeah, for sure. Of course, we've never seen anybody do what you just described.

Greg Williams:

Not at all. Not at all. I've had previous guests on the show talk about measurement and evaluation. One guest, Dr. Bonnie Beresford, talked about how we think about ADDIE, that first A. We talk about it being analysis, but it could also stand for alignment, which speaks to a lot of what you've been describing: alignment on who my stakeholders are, on the learning environment this is taking place in, and on what success looks like (or what product managers might call a definition of done) before we get going. That's all a part of the "what should be" analysis. And I would imagine, from your perspective, that's simply a part of the evaluation process.

Dr. David Williams:

Exactly. Yeah. On one hand, I'm thinking, what's wrong with relying on the key stakeholders who have asked for the evaluation? What's wrong with just relying on them having understood the context, the background, who all the stakeholders are, what the evaluand really is, and what the criteria for judging it are? Because that's what's happening when they come and say, we want answers to these questions: they've done some version of one through four to come up with those. But those probably aren't great without being explicit about it. They kind of don't really know what the questions should be, because they haven't systematically thought it through. That's why I think it's important for people to just pause when we're doing the analysis at the beginning, which is really a form of evaluation itself. I mean, every step of the ADDIE model could be a full-blown evaluation. But how often are they going to want to spend the time and money to do that, right?

Greg Williams:

Well, there's been a movement in the last decade or two, and I did an episode about this, around agile development, or agile project management, which really speaks to doing iterations as fast as possible and always learning within that model. Megan Torrance, the author of the book I talked to you about, has the ADDIE model broken down with little e's in between each of the A, D, D, and I, right? So you're always evaluating before, during, and after. Developmental evaluation is a term that you've taught me about in the past that I think really aligns with what you're describing.

Dr. David Williams:

Yeah, yeah, Michael Patton. That's right. The way we're talking about it today, it sounds very ponderous, like it's going to take forever. But I can imagine you doing all 10 of these steps many times, rapidly, throughout the design and development and marketing and distribution processes.

Greg Williams:

Right. Yeah. And I think that's really what design thinking and agile project management are about, or, in another book, The Lean Startup: you build, learn, and grow; you're constantly iterating or prototyping as a way of learning your way forward. In my mind, that's what we're talking about with this model. So we've talked about focusing on the "what should be" with those first four steps, down to five. And then, as you're getting really clear on the ideal state, it's about getting a handle on what is happening right now. Where are we right now, as it relates to what should be and the evaluation criteria? You use different methods for that, whether it's talking to people, or observation, or other types of data collection and analysis from systems or whatnot. And that gets us up to the final steps, at the 12 o'clock position, where that evaluation ultimately is being made, if I'm not mistaken.

Dr. David Williams:

Exactly. So step eight is the evaluation: how does what is compare to what should be? That's where we finally do the thing we've been doing all this for. Then step nine is reporting, or deciding what recommendations the study yields. Some evaluation theorists say we should never do that; we shouldn't give recommendations, we should just present the data to people and let them decide. But if it's an internal evaluation, you really have no choice but to also figure out, okay, what are we going to do about this? And then the final step is a term we use called meta-evaluation, which is evaluating the evaluation. It's asking: how well was the evaluation conducted? Is it trustworthy? Can we move forward with the results of this evaluation with confidence? Or is there some big gaping hole, so that we really shouldn't rely on any of the results or any of the recommendations, because the evaluation itself is not what it should be? So those are the steps. And then the idea is that you go back through that cycle, from one through 10, multiple times in the life of a particular evaluand, or with a group of stakeholders who are redefining and evaluating over time.
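
Gathering up the whole conversation, the 10 steps can be sketched as a simple checklist that gets cycled repeatedly, in the spirit of the agile and developmental evaluation discussed here. The wording below paraphrases the episode, and the code itself is only an illustration, not Dr. Williams' formal instrument:

```python
# The 10-step model as discussed in this episode, sketched as a checklist
# that can be cycled repeatedly for a given evaluand. Wording paraphrased;
# function and variable names are illustrative.

TEN_STEPS = [
    "1. Understand the context and background of the evaluation",
    "2. Identify the stakeholders and what they care about",
    "3. Define the evaluand (the thing being evaluated)",
    "4. Set criteria that translate stakeholder values into judgments",
    "5. Frame the questions the evaluation should answer",
    "6. Choose methods (qualitative and/or quantitative) to answer them",
    "7. Collect and analyze the data",
    "8. Evaluate: compare what is against what should be",
    "9. Report and decide what recommendations the study yields",
    "10. Meta-evaluate: judge whether the evaluation itself is trustworthy",
]

def run_cycle(evaluand: str, cycles: int = 2) -> None:
    """Walk the checklist repeatedly, as in developmental evaluation."""
    for n in range(1, cycles + 1):
        print(f"--- Cycle {n} for evaluand: {evaluand} ---")
        for step in TEN_STEPS:
            print(step)

run_cycle("new hire product orientation course")
```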

Greg Williams:

I like your Moneyball example, because the tension between the Brad Pitt character and everybody else is that he's essentially performing a meta-evaluation on how things have been done in the past. There's a term from Dr. Beth Wilkins, who talked with me about this: there are these positive deviants in organizations who do this. That's what Brad Pitt was doing in Moneyball, right? Saying, the way we've always done it might actually be wrong. The way you've been evaluating successful baseball players might not be the thing that gets us to the one piece of criteria we all really care about, which is winning a World Series. So let's evaluate differently. There's a lot of tension in that, anytime someone deviates from the norm and is trying to positively influence outcomes in a way that's different from what people are used to doing. I mean, the role of values in all of this is so critical. And going back to what you were saying about data collection, and the tendency of a lot of the evaluation theorists that you've studied and learned from: do you think the focus on those collection pieces comes from this idea you've talked to me a lot about, of subjective versus objective? Like, we need to do an objective evaluation and provide just the facts so people can make a good, informed decision. Is that actually possible?

Dr. David Williams:

Well, I don't believe it is. I think anything that's human is going to have subjectivity to it. Even if you were to create a robot to do all this for you, in setting up the robot you would bring your subjectivity into the way the robot functions. So I think it's all a matter of controlling for intersubjectivity, right? You have different subjects, different stakeholders, who have different subjectivities, and the idea is to look for ways to counterbalance those. That goes back into number two, who are the stakeholders, and number three, what is the evaluand, and number four, what are the criteria, and going back and forth in that cycle, so that you get to a point where you say, okay, we can go forward with these questions, number five, because we've worked it out as a group of stakeholders. So the job of the evaluator is to help the stakeholders think all these things through. If you're an instructional designer who's been given a certain task by the people asking you to do the work for them, I think you have a moral responsibility to say, well, let's think about who all the people impacted by this are. Let's spend some time working through exactly what the evaluand is going to be, and what criteria we're going to use to judge it, before we spend a lot of money developing fancy measures and all that kind of thing to do the evaluation. And that involves a lot of personal skills, interpersonal skills, where you're willing to listen and you're willing to think things through. That brings this whole thing to a major focus that my academic life in evaluation has had, which is trying to help people see that they already are doing evaluation in whatever they do, whether it's in a profession or in their personal life. They're constantly evaluating; that's what we do as humans. And this gets into all kinds of philosophy about human agency and so on. But what I felt, in my 40 years basically working in the field of evaluation, was that the main lesson I was trying to learn is: how do you help human agents be better evaluators? Because just in the act of living from day to day, we are continually evaluating. Do I want to do this, or do I want to do that? How much of this do I want to do? How much of that? Who do I want to involve? And the people around me have different answers to those same questions I was asking about myself. So what do I do about that? We're trying to do this together. So it's taking what we do as human beings and translating it into better ways to evaluate, using this 10-step model that I've talked about. In a lot of ways it seems overwhelming, but what I've tried to do is help people see: you're already doing it. You're already evaluating. So it's not like you've got to start something new. It's just: how can you do what you're already doing better? That's what better evaluation is. It's stepping back and saying, am I doing this as well as I could do it? What if I tweak this? What if I tweak that? And I think it's a lifelong pursuit. I don't think anybody is going to learn how to do this in graduate school and have it down pat, you know?

Greg Williams:

I'm glad you mentioned that, because it can sound overwhelming. We've talked through all these steps, and it's like, wow, that's a lot, especially when we look at that example of, should Greg join the NBA? That's a really fast, easy evaluation, because, as Daniel Kahneman talks about with System 1 and System 2, we have enough context and experience with the NBA to immediately know Greg is not going to make it; he's not going to start on the next team. We already have a snap-judgment sense of a lot of the stakeholders and the NBA's criteria. I guess if you don't know how well I play, it's not a snap judgment; you'd have to collect a little data by watching me play for about 30 seconds to make your decision. And then the reporting on that is as simple as saying, no, you shouldn't, right? We do go through all those steps, but oftentimes they're automated into processes and automatic thought patterns; they're not careful, thoughtful, systematic, plotted evaluations. And I think what you've emphasized here is that we can get a lot by taking some time to meta-evaluate what we're doing, whether we did it automatically or whether we made concrete decisions that we can go back and review, to learn from them.

Dr. David Williams:

Exactly. I guess that brings up this distinction between formal and informal evaluation. I think most evaluation is, and should be, informal. In other words, we don't write proposals and get extra funding and all that kind of thing for most of what we do. If I were becoming an instructional designer, for example, I would want background in all of these things, but with every activity I engage in at work, I would not say, we've got to have a formal evaluation here before we can do anything: we've got to formally evaluate the need, formally evaluate the process we're using to answer the need, formally evaluate how well it's being implemented, formally evaluate how well it worked. If you do that, nobody's going to fund it, and nobody's going to support you. But if you say instead, I'm going to keep these kinds of questions in mind while I'm doing my work, and I guess that fits with the agile approach you're talking about, then on a constant basis you're just asking: what are the answers to these 10 questions we've discussed? Do I already know them? With something like the NBA question, as soon as you look at the context and background, you say, it's pretty clear I'm not going to be in the NBA; let me run through the rest of these questions real quickly, and okay, I've got that decided. On the other hand, and this brings up another distinction that we make in evaluation, between formative, summative, and developmental evaluation: summative evaluation is what we're talking about when we decide if Greg is good enough for the NBA or not. It's an in-or-out kind of decision, a summative decision: he's not going to make it in the NBA, or he might. Formative would be to say: he wants to be in the NBA, he's got these skills, and he might be able to succeed in the NBA, so let's give him some training, then evaluate how he's doing, and give him some more training. Eventually we might decide it's worth having a scout look at him. Or we might finally make the summative decision: no, he's just not going to do it. He actually doesn't even want to be in the NBA, so let's forget about that; we finally got to his criteria, and he had no desire for it. Anyway, those are important distinctions. Sometimes people think all evaluation is summative, just yes or no, when in fact it might be: we want to try this out, and we want to see if we can form something here that isn't there yet but might get that way. I think it's especially applicable with products that we develop. We can say, we know we want a product that meets these criteria; but is this the product, or can we come up with a better one over time?

Greg Williams:

That makes me think of some terms listeners may be familiar with, like a formative assessment versus a summative assessment: a quiz gives you an idea of whether you're on the right track or not; a test tells you whether you're going to pass or not. But those are assessments, and we've talked about measurement and metrics. The term assessment often gets used synonymously with evaluation, but I get the sense that those are also not the same type of thing.

Dr. David Williams:

Well, they can be. I mean, if you've addressed these other questions in coming up with the assessment, sure. But if it's just, this is the standard assessment and we apply it to everything, independent of who cares and what they care about, then no, it won't work.

Greg Williams:

So some of the tension around high-stakes testing for kids in school comes down to some of these things: what is the ideal future? What's the ideal use of public education? Those questions are all lying under the surface, because we don't see a lot of heated, deep conversations about the criteria for successful public education, necessarily. Instead, it's: can you believe they had to take all these tests, and because of that they were unable to do X, Y, or Z? But those assessments are kind of the tip of the iceberg, so to speak, of how the evaluation is taking place.

Dr. David Williams:

Yeah, I agree with that. All this stuff under the surface: those are the steps we talked about, numbers one through three or four.

Greg Williams:

Hmm. Yeah, and that criteria piece, in my mind, is one reason I feel like the Common Core Standards were so contentious when they were rolled out here in the United States. At the time, I had just become a second grade teacher, so that was pretty fresh. It was essentially the federal government saying, this is the criteria for successful K-12 education across all states, all people, and all groups. And many folks felt like, that's not how our state or our city or our town sees success; that's not the criteria that matters the most to us. It's hard to have that conversation when there are so many other things playing a factor in it. But then everyone has to take the test to determine what is, right? To determine whether everybody lines up to that criteria of success for those kids.

Dr. David Williams:

Right. And of course, you get into issues of the validity and reliability of the test at that point, and ask: was this test really designed to address this particular kind of stakeholder or learner? There have been lots of debates on that.

Greg Williams:

I think that's such a helpful thing to think about. Malcolm Gladwell, in his podcast, explored, I think it was the LSAT, or maybe the ACT, I can't remember, but essentially, how much weight or value many schools, and our society at large, have placed on those tests to indicate whether an individual is college-worthy or not. And sometimes those quantitative pieces get prioritized because they're so much easier to move around; it's so much easier to have just a score than a big report of somebody's, you know, humanity.

Dr. David Williams:

Yeah, easier, but not as fulfilling.

Greg Williams:

Yeah. I'm definitely not saying quantitative is evil, but it can be difficult finding a balance.

Dr. David Williams:

It can be used for evil.

Greg Williams:

It certainly can. And qualitative can too, right? If you say, let's do this deep phenomenological study of one individual, and then let's generalize it to everybody in the world, that can also be wielded effectively, I would assume? Yes, for sure. Okay, so we've talked about a lot of things. We've mentioned formal versus informal, which has been a big part of your focus in your academic life, and summative and formative and developmental. With all these different pieces, I'm wondering: as you've interacted with a lot of instructional designers, at the beginning of your career and then throughout, have there been any common threads or themes that you've hoped to help instructional designers and your students know or appreciate about evaluation itself?

Dr. David Williams:

Yeah, I think I've kind of mentioned these already, but I think it's essential that people realize they're already doing evaluation. It's not a new thing; they've been doing it since they were born, basically. Little babies are looking at people's faces and getting feedback on whether what they're doing is pleasing or not, right? As humans, we evaluate. So I think it's important for instructional designers to realize that they are doing that, and so are their clients, their colleagues, their bosses, whomever. They need to take into account that their evaluations may or may not align with those of the people around them. But if they choose, they could do all of their evaluation better, and they could design better and work with others better, by paying attention to these different steps that we've talked about. To me, that's the main thing. Getting into the details of how to do that, I think, should be part of the formal curriculum in preparing instructional designers, but it also should be part of ongoing professional development for anybody that wants to keep improving what they're doing. So meta-evaluating your evaluations, both informal and formal, throughout your life and your career, is a way to begin to enhance who you are as a human working with other humans who evaluate constantly.

Greg Williams:

Okay, this is really good, because we've touched on a lot here. I think one thing that might be helpful is to talk through another example, maybe more on the formal side. I've got an example we could look at, and maybe in some ways this could be a mini meta-evaluation of a project that I did over the last year or two. One project I did was creating a new hire orientation product training, a pretty common thing in many organizations. When you're newly hired, you come in, and in your first couple of days you learn about the company and about what their product is, to get you up to speed before you move on to your specific team and whatnot. So that's what I did: I helped create a learning experience that would facilitate those types of objectives. So where do we start to take us through the model?

Dr. David Williams:

Yeah, I would start with number one. You've given a little bit of the context or background for the issue: you have a common task that most outfits have, and you've produced something that's supposed to address a need. But if I were coaching you along in this, I'd begin to ask: what else about your personal background is at play here? And what else is in the background or the context of the company you're working for? That would give you a clearer idea of why you're doing this, why you're doing it the way you are, and why you think it's something that's common. Those kinds of issues.

Greg Williams:

Yeah, yeah. As I'm thinking about it, the main thing was: at the time we had a live facilitator. When I was hired to the company, this person came in, explained some things about the product, and gave us some hands-on practice, and it was really great. But it was one person who had to do that on a pretty regular basis, and as we were bringing in lots of new people, one of the main asks was: how could we standardize this and make it scalable across the organization, and also take some of the pressure off this individual who was doing it on a regular cadence? Right. So that was sort of the original ask. But this was also my first big project at this particular organization. I saw it as an opportunity to help build the culture of learning at the company, and also to help various stakeholders get aligned about the customer journey and what the product could do. So those also became things that seemed important to me about doing this project, quote unquote, right, I guess.

Dr. David Williams:

Okay. So as you're describing it, you're kind of addressing the second question of who the stakeholders are. It sounds like you are one of them, whereas people might have thought, oh, the stakeholders are the people who are going to take the learning experience that you developed. Or maybe people would ask: who are the other stakeholders? Are they the ones that expect you to do this kind of thing on a repeated basis with different training needs? So how would you answer that? Who are the main stakeholders that you are trying to address in this?

Greg Williams:

Yeah. So one thing that's interesting about this is that determining the stakeholders is a critical first step in general project management practice, as well as in design practice and in evaluation. So there's an interesting overlap in all of this: the beginning of a project is about thoroughly thinking through your stakeholders. So, of course, the learners, someone who's newly hired to the company. Then there are those building this, which would be myself and two of my peers. But then we thought about our support team and our sales team; when new hires leave this experience, they're going to go on and work with those trainers. So those teams were important stakeholders, because they're already training these people. How could we make their lives easier? What value could we provide for them in this experience that would lighten their load? That was important. Plus the general leadership of the company: those first few moments of your experience in a new organization are very impressionable, and we wanted to make sure we were setting the right tone for what the company was all about, and that it aligned with the values our leaders were trying to lead from for the whole organization. So members of the executive team I considered important stakeholders, as well as, of course, my manager and his manager, that type of thing, too.

Dr. David Williams:

Quite a few people.

Greg Williams:

Yeah, and they're not all the same. In the previous episode, I talked about the importance of gathering the stakeholders, but also thinking through that some are going to be more interested or invested in what you're doing than others, and some have more power or influence in the organization, or over the success or failure of your project, than others. So considering those different factors, and where each stakeholder lands on a two-by-two grid, can be a helpful exercise, a bit like the sketch below. I didn't create a grid specifically for this project, but I certainly thought through some of those pieces.
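
For listeners who want to try that grid, here is a minimal sketch of the kind of power/interest mapping Greg describes. The stakeholder names and 1-to-5 scores below are hypothetical illustrations, not Greg's actual ratings, and the quadrant labels follow a common project-management convention rather than anything stated in the episode.

```python
# Classify stakeholders on a 2x2 power/interest grid (1-5 scales).
# All names and scores here are invented for illustration.

def quadrant(power, interest, threshold=3):
    """Return the grid quadrant for a stakeholder's power/interest scores."""
    if power >= threshold and interest >= threshold:
        return "manage closely"
    if power >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "monitor"

stakeholders = {
    "new hires (learners)": (1, 5),
    "support and sales trainers": (2, 4),
    "people team": (4, 4),
    "executive team": (5, 2),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```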

Dr. David Williams:

Okay. This kind of brings up a question we haven't talked about too much yet: who are the evaluators? Is this something just internal that you're doing as you work on the project, evaluating your process as you go? Or are there others who would be a little more external, maybe outside of your project team? It sounds like there are; some of those people you listed off aren't actually developing anything. They're clients, or users, or people who want their clients and users to use what you're creating, right?

Greg Williams:

Definitely. One stakeholder I didn't mention is HR, or the people team. They oversee the orientation of new employees who have just been recruited and hired, which is all sort of owned by that group. And I wanted to create something that this people team was excited to ask new hires to do, because if they weren't bought in, then we could create the most amazing thing in the world, but it would never actually make it to the learner. If the people team said, you know, we could use those two hours, or whatever amount of time it's going to take, on something else, then it didn't matter. So that was an important stakeholder to consider, even though they weren't building the training, they weren't taking the training, and they weren't formally approving the training; they were tangentially related to delivering the training. That was a key stakeholder I wanted to have bought in going into it. But in terms of who's evaluating this, I think this is something I fall into a lot: nobody was explicitly coming to me with a request of, hey, here's the criteria we want you to build this to, and here's what success looks like, even when I pushed for that. Because sometimes, especially in a startup space, people are more interested in output than outcomes, right? It's like success means you did a thing: you got people to sit in a seat, you delivered training to them, and now they're done. And I don't think people would explicitly say that; it's one of those things that happens automatically, and we don't really meta-evaluate on it much. But the main evaluator-stakeholder for this, I felt, was actually myself, because this was my first project. I wanted to have that report in hand for if and when some person in the future was to say, wait, why are we doing this program again? And I would be able to say, well, here's why, here's how it's going, what do you think, and how can we improve it? Right.

Dr. David Williams:

Okay, that's good. So I guess a key issue out of that, for me, is that it sounds like all of the evaluators in this are internal; they're internal to the organization. You're not going to go out and hire a professional evaluator, for example, to come in and ask these questions and do all this work, right? Correct. And I think, for a formal evaluation where you're going to hire somebody externally, there have got to be some pretty big reasons for doing that, like worrying about conflicts of interest and that sort of thing. Especially in a case like this, where nobody was even asking you internally to do it. It's just that you had learned about the importance of evaluation as part of design and development, and you wanted to try it out, because you were making a moral choice that if you're going to do something, you'd rather do it with higher quality than not, right?

Greg Williams:

Right. Yeah, it's definitely one of my values, and I guess this is where that informal piece comes in. One of my values is integrity: if I say I'm doing a thing, I want to do a good job. And if I don't know what a good job looks like, I want to figure that out, so I can define what a good job looks like and then do it. That's one reason it was so hard for me to come out of the formal education system and into the, quote unquote, real world of work: you don't get grades every day, or even every month, necessarily; the feedback is less frequent. And when you grow up in a system that rewards certain behavior, you know, taking tests or other things that aren't necessarily directly connected to the real world all the time, that transition can be rough. So I've tried to adjust from people-pleasing, or getting A's or whatever, to a sense of integrity: I want to find out what the best in the craft are doing, boil down the criteria of what that looks like, work backwards from that, and do it. That sounds all noble and great. I don't always live up to it, and that's when I experience the most angst, right, when I'm not living in line with my values. But that really was the place I was coming from on this particular project, especially coming out of the gate, it being my first one at the organization.

Dr. David Williams:

Yeah. Well, moving on to number three, I think you kind of began to clarify that the evaluands include you; you're evaluating yourself and how well you are doing these things. But it also sounds like they include the process of going through and examining these different stakeholders in their context. What kind of evaluand did you think you ended up with, as opposed to what you might have started off with?

Greg Williams:

Yeah, so we started off with something that seemed pretty similar to what we ended up with. It was essentially a learning experience; you could call it a program, or an intervention, or a training. We wanted it to play a role in helping set the emotional tone of what the company and its values were all about, but then, at a more cognitive or knowledge level, to ensure that everybody coming into the company was on the same page about what the products of the company could do, why they were important to our customer base, and who our customer base is. From that we derived our learning objectives, which made up the overall intervention. So I'd say that this learning experience for the new hires, as a part of orientation, was the main evaluand.

Dr. David Williams:

Okay. And then you had some other evaluands, including yourself and your evaluation process, which you were hoping to make part of your design and development process, right?

Greg Williams:

Yeah, yeah, that's true. I hadn't thought about it that way, but it certainly was a part of it.

Dr. David Williams:

So what kind of criteria did you end up with for judging this evaluand?

Greg Williams:

I remember thinking through this as we derived the objectives: what does success look like? It was really hard to pin down to something that could be measured, or that could give us confidence about what success looks like, because, again, it's so easy to just say, well, let's launch the program and call that success, especially when one didn't exist in the first place. But we thought through things like: if I'm new to the company, how confident am I feeling about using the company's products? How enjoyable was this experience? How willing am I to apply what I've just learned in my job as I get started at this new company? So those were some of the types of criteria I was thinking about. Some of them were affective in nature, and then a couple of them were more general knowledge, the types of things that can be measured with a knowledge assessment.
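
To make that concrete, here is a minimal sketch of how criteria like the ones Greg lists could be pinned to measurable items. The item wording, scales, and target values are invented for illustration; they are not taken from Greg's actual survey.

```python
# Map each criterion to a survey or assessment item with a target.
# All wording, scales, and targets below are hypothetical examples.

criteria = [
    ("confidence with the product",
     "How confident do you feel using the company's products?", "1-5", 4.0),
    ("enjoyment of the experience",
     "How enjoyable was this orientation experience?", "1-5", 4.0),
    ("willingness to apply",
     "How willing are you to apply what you learned in your new role?", "1-5", 4.0),
    ("product knowledge",
     "Score on a short knowledge assessment", "0-100%", 80.0),
]

for name, item, scale, target in criteria:
    print(f"{name}: '{item}' (scale {scale}, target {target})")
```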

Dr. David Williams:

Okay. And this effort of going through those steps one through four: did you feel like you came up with some key questions that then told you what kinds of methods to use for data collection and analysis?

Greg Williams:

Yeah. Like you described earlier, I didn't sit down with this model and walk through it step by step, and like you've also mentioned, you're kind of doing all these things at a similar time. But one thing I knew from the outset is that I didn't want to just sit down, build the whole thing, and then see if it was good. I didn't do this at a super thorough level, but with my project team we certainly looked at what the minimum thing was that we could create to put in front of someone, to see if it was getting us close to the criteria we wanted. So that was one key question, right? What's the basic thing we could create that's going to reach these objectives and help us know if we're moving in the right direction? I guess that would be a formative evaluation of the project itself. Yeah. Right, asking those types of questions. So we did create a version one, a lo-fi, I wouldn't say prototype, but more of a pilot. And then the questions to ask were more open inquiry: tell me about your experience; what questions do you still have? It was less "did this work or not?" I guess I'm getting into the methods and data collection here, but it was more of just, what did you think of this? How are you feeling? We tried not to bias it with what we were hoping they would say, but to hear what they actually said and how it aligned with what we were hoping for when we looked at the criteria.

Dr. David Williams:

And that gave you some implications for things you might change the next time around.

Greg Williams:

Right, yeah. So we took that feedback, did some analysis on it, and tweaked some things. And as we made those changes, we updated a couple more pieces. While we were working on some bigger items, like videos, which take the longest, we just had images and placeholders, and even placeholder audio for a little while. We got it out to some people who had started at the company recently. They weren't brand new, so they weren't the exact audience we were trying to go for, but we were able to run it by them, meet with them after, and have them fill out a short survey to start capturing some metrics we knew mattered to the stakeholders I talked about earlier, like the CSAT score or the confidence score. So we could start to gather those pieces and see: all right, where are the areas that need improvement? Especially as we listened to what they were saying and what comments they left, we could push on that a little further.
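
As a hedged sketch of the metrics Greg mentions: one common CSAT convention is the percentage of respondents who rate 4 or 5 on a 5-point satisfaction item, and a mean works for a confidence score. The pilot responses below are fabricated for illustration; the episode doesn't say how Greg's team actually computed theirs.

```python
# Compute CSAT and mean confidence from fabricated pilot survey data.
# CSAT convention assumed here: share of 4s and 5s on a 1-5 item.

from statistics import mean

satisfaction = [5, 4, 3, 5, 4, 2, 5]  # 1-5 satisfaction ratings
confidence = [4, 4, 3, 5, 4, 3, 5]    # 1-5 confidence ratings

csat = 100 * sum(1 for r in satisfaction if r >= 4) / len(satisfaction)
print(f"CSAT: {csat:.0f}% rated 4 or 5")
print(f"Mean confidence: {mean(confidence):.2f} out of 5")
```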

Dr. David Williams:

Great. So it sounds like you've kind of integrated maybe steps five, six, seven, eight, and maybe even nine, because you weren't reporting formally to somebody else; you were basically reporting to yourselves. And you were making these evaluations iteratively, kind of like in the agile process you talked about earlier, right? Yeah. And then it sounds like you were probably able, at the same time, to meta-evaluate (how well are we evaluating?) and thereby improve this effort as you went. You didn't have a big formal meta-evaluation at the end, I assume?

Greg Williams:

Not necessarily. I mean, we're still evaluating. It's been more than a year now since this was created, and we have a pattern where every other quarter we put out a new survey and gather information. Because I'm really pleased with where the scores are at, and with the general sentiment from the people I've talked to through all the different iterations, I don't feel like a rigorous evaluation is needed at this point; it's definitely headed in the right direction. But times change, the product changes, and people and the organization shift, so we do have a piece in there that's regularly thinking it through. But something you said was interesting, too, as I think about the reporting piece, the stakeholders, and the meta-evaluation. Through all of this, we had a project team, right, regularly reporting on progress: we've developed this; the videos are complete, watch them here, what do you think? Version one is done; we have these ten results in, what do you think? Who else on this project team has thoughts and might weigh in on this? Are there any stakeholders, even though we're kind of late in the game, who would be interested in this? It was a regular standing meeting, I want to say biweekly, and then I'd send out a weekly email. Just regular reporting and opportunities for stakeholders to give their meta-evaluations of how they thought things were lining up with the criteria they had determined, you know.

Dr. David Williams:

Huh. Yeah. Well, that pretty much takes you through the process, right?

Greg Williams:

Yeah, I think it does. And this example is a pretty big, formal project. Like you said, there are lots of smaller pieces where we still go through all these steps all the time; we just don't sit back and think about them and label them. Right.

Dr. David Williams:

Right. So as I'm thinking about the example you just gave: you asked me before this podcast to think about a suggested action that someone listening could take to become a better evaluator in their own life. And I guess I would say, follow this example you just showed. Because along with doing your job as an instructional designer and part of a team that is designing, you were self-studying, you were meta-evaluating, you were trying to understand: how am I doing with the things I've tried to do, and how can I improve them in the future? And part of that, I'm assuming, was also evaluating how the colleagues working with you were doing, and probably inviting them to do that as well, right? Because you don't want to just impose your evaluation of them on them; you want them to be self-studying as well. So, yeah, that's my suggestion: if people are not already doing this, try it. Maybe try it with something small, something in your informal evaluation life. First, recognize that you're doing it, and then begin to ask yourself: am I doing this very well? What could I do better to take into account these steps one through ten, or some subset of them that you think might be particularly relevant to helping you improve your evaluation in your life and in your profession?
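
One small sketch of that suggestion: turning the ten steps into a personal self-audit checklist. The step labels below are paraphrased from how the steps surface in this conversation (context, stakeholders, evaluand, criteria, questions, methods, collection, analysis, reporting, meta-evaluation); Dr. Williams' own published wording may differ.

```python
# A tiny self-audit helper; step labels paraphrased from the episode.

STEPS = [
    "1. Clarify the context and background",
    "2. Identify the stakeholders",
    "3. Define the evaluand(s)",
    "4. Settle on criteria for judging",
    "5. Pose the key evaluation questions",
    "6. Choose methods",
    "7. Collect data",
    "8. Analyze and interpret",
    "9. Report to stakeholders",
    "10. Meta-evaluate the evaluation itself",
]

def self_audit(notes):
    """Pair each step with a short note on how you're already doing it."""
    for step in STEPS:
        print(f"{step}: {notes.get(step, 'not yet considered')}")

# Example: auditing an informal evaluation, like judging a daily habit.
self_audit({STEPS[1]: "mostly just me; who else is affected?"})
```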

Greg Williams:

I mean, that suggestion connects to something I've talked about with a few previous guests: general reflection. Reflection in and of itself is sort of evaluation, right? Like, sure. And whether that's the habit of journaling, or just doing little reflections on how I'm doing when it comes to exercise or my eating, or, as we're coming into a new year, reflecting on this year as a way to evaluate: what should my new year look like, versus what was my experience of the last year? And it's easy to let all sorts of negative things creep into our evaluations. I guess that's where, you know, I've talked with Sam Whitney of the Arbinger Institute about an outward mindset, but self-deception can sometimes get in the way of our criteria, where we might think we're doing one thing but we're not, maybe because we're not collecting accurate data from all the stakeholders involved in some of our informal evaluations. Yeah. All right. Well, it's been a wonderful conversation. I'm just wondering if there's anything else I haven't asked you about, or resources you'd recommend related to the things we've discussed, that I can include in the show notes for people to be aware of?

Dr. David Williams:

Yeah. As far as resources go, I think those two publications I mentioned about professional evaluators' lives illustrate a wide variety of approaches people have taken and the ways they've come to them throughout their lives. I tried to look for some themes across them, and I think I have maybe 20 or 30 themes in there that I seem to see across people. But on the other hand, everyone's story is unique, and that's part of what I like about it. There's also a book on qualitative inquiry methods, which has sort of been implied throughout all of my comments here, because it's an important way of understanding "what is" in complicated situations, and it's also important for figuring out "what should be" from different stakeholders' perspectives. It's a way to help people understand each other and share what their perspectives are. So there's an online qualitative inquiry book that you should put in there. And then, over the years, I've interviewed lots of people about their evaluation lives, and I've created a blog that has a lot of these stories in it, which I think people would find very interesting as they explore their own evaluation life and how it came to be. So those are the main things I'd stick in there; there are lots and lots of links off of those to other resources about evaluation that people might find interesting. And I think you have some that you've discovered and told me about that I didn't even know about, so any of those you want to pop in there, I think that'd be great. As far as what you haven't asked me: you've pretty much asked me everything I can imagine. I guess the one thing I did want to emphasize again is the need for everybody's perspective. I think inclusiveness is really important, and that's part of why I was put off by a story one of my fellow evaluators told me regarding our professional evaluation organization here in the United States: they're imposing a narrow perspective, in the name of being inclusive, on their fellow evaluators. And I think that's a huge mistake. To say that you can't really do a valid evaluation unless you include this particular perspective, and I'm not against that particular perspective, but to say that all evaluations have to focus on that particular set of values, I think, kind of throws the baby out with the bathwater. We want to have multiple perspectives, but we should have the perspectives that are relevant to a particular evaluation, and not say that all evaluations have to be about one particular value that somebody thinks is so important. So you didn't really ask about that, and it gets into a lot of political and personal stuff that probably isn't appropriate to discuss here. But I just think that when people are meta-evaluating, they should ask themselves: am I looking at all the perspectives that really ought to be looked at here, or am I just focused on the one that I think is so important?

Greg Williams:

I think that can apply to so many things, from what you were talking about with the puddle on the ground by the refrigerator when guests are coming, to interpersonal relationships. If we say, I'm going to prioritize my criteria over yours, that can be dangerous, right? And on the reverse side, I can say I will only use your criteria, but not this other stakeholder group's. Or if we don't take the time to think of all those different stakeholders, we can find ourselves in a narrowed situation where the impact is not what it could be. So I think those are relevant and important points. This has been a lot of fun, walking through these items with you, so I really appreciate your time and your sharing all these insights with us.

Dr. David Williams:

Well, thank you for asking. I really appreciated the chance to think it through out loud.

Greg Williams:

This has been an amazing year of learning. Going into the new year, I'm taking a break from the show as I consider my next steps and the direction of the podcast. I hope you found value in some of the conversations that I've recorded here. Feel free to send me an email at GregorySpencerwilliams@gmail.com or a LinkedIn message with any feedback you may have. Thank you for taking the time and energy to listen to this podcast. Until next time, keep learning.