What if Tech Could Be More Human? An Interview With Kate O’Neill.
In this episode of "What If? So What?" Jim Hertzfeld sits down with Kate O'Neill, tech futurist, keynote speaker, and author of "What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast."
Kate shares with listeners her perspective on balancing innovation with responsibility, exploring how leaders can make thoughtful decisions about technology that prioritize both human experience and business success.
Kate challenges conventional wisdom and inspires leaders to think beyond transactions to focus on what truly matters. Don't miss this thought-provoking conversation!
Special thanks to our Perficient colleagues JD Norman and Rick Bauer for providing the music for today’s show.
Episode 60: What if Tech Could Be More Human? An Interview With Kate O’Neill. - Transcript
Kate (00:05):
What are the decisions and actions you need to take today and every day going forward to close the gap between the most probable outcome and the most preferred outcome? And truly, that's a million-plus dollar question. Like, every organization can benefit from thinking that way.
Jim (00:24):
Welcome to "What If? So What?", the podcast where we explore what's possible with digital and discover how to make it real in your business. I'm your host, Jim Hertzfeld, and we get s**t done by asking digital leaders the right questions. What if, so what? And most importantly, now what? Hey, everybody, welcome to the podcast. I'm really excited for a great guest. She's a lot of fun. I would call you a futurist and part-time provocateur, Kate O'Neill. Kate, welcome to the show.
Kate (00:51):
Thank you.
Jim (00:52):
Kate, you've got a great background, you’ve done a few things. So, what got you here today? What's your story?
Kate (00:59):
Well, I've spent 30 years in tech in one form or another, across a wide variety of fields and disciplines. But, you know, some of the highlights include having been at Netflix as one of the first hundred employees, and their first content manager. So, that was a really fun, interesting, and formative experience for me. I also built the first intranet at Toshiba. I built the first departmental website at the University of Illinois at Chicago when I was managing the language laboratory there. So, I've had this career that was shaped by a bunch of pioneering firsts, in a way. And then I just sort of took it from there and built, over the next couple of decades, some work around trying to help people understand how to build good products, how to make good decisions, how to, you know, balance this need to be innovative and chase the next new thing with being responsible and creating good things for humans.
Jim (02:00):
Awesome. And in fact, building off of one of the books you wrote, I kind of called you a tech humanist, so it feels like what you just described sort of encapsulates that. So, how would you describe tech humanism? I know you've written about it, but for the folks who don't know about it?
Kate (02:17):
Yeah, it's a funny thing. In my life, in my career, and just in general in my interests, I am always drawn to contradictions, or apparent contradictions. And tech humanist sort of embodies that apparent contradiction, but I don't think that it is or should be a contradiction. I think that what we often feel is that tech is at odds with humanity, or that humanity is at odds with tech. And then there's this third kind of hidden piece to that equation, which is business. Because most of the time that tech meets humanity, it is through the rollout of business. Business has determined some need that will be met at scale through the use of technology, and that takes shape in the form of some human experience. And it is not usually that well thought out <laugh>. It's not often, you know, thought through how that's going to reshape society or humanity in some way. So what I have tried to do for the last decade and a half or so has really been to ask those fundamental questions, to be, you know, just a little bit of a provocateur, but mostly to be, I hope, a force for good. You know, someone who's asking people to just slow down a second, just, you know, just enough to step back and think about what's going to happen when we make this decision.
Jim (03:44):
I had a meeting today, and I always challenge meetings that I get invited to with no agenda. So, my favorite question, which I learned from a customer years ago, is to ask: what problem are we solving? It's a bit of a provocateur question, but yeah. You just triggered that memory, because when we're combining technology and humankind and business, we're kind of wading through this question of, like, how do we make money with it? How does it make money? Like, this is a great idea, how do we make more profit out of it? So there's more to it than that, obviously.
Kate (04:13):
Yeah. And the profit question is going to be the driving question in the business operating system. So, we don't even have to ask that question; there's no world in which we are not going to be trying to answer it. So, I think it's the secondary questions that often go unspoken. You know, what purpose of the company are we fulfilling in some greater way by building this function? We could make money or make a profit in any number of ways. When I was at Magazines.com, we used to joke that we could sell pizzas out the back door if we were just trying to make a buck. Right? That's not really what the company is about. So, I think there's an opportunity to bring things back to a sense of purpose and values, you know, but purpose not in this kind of fuzzy-headed, soft sort of way, but as a distillation of what the company exists to do and is trying to do at scale. So really getting back to your "what problem are we trying to solve"
Jim (05:14):
Question, right? Questions and decision-making. I love that. Well, we're gonna get into more of these tugs of war, right? Yeah. These seeming contradictions that we have to get through. So, I love how you've built that into your work. You know, speaking of your work, you have a book coming out, and I want to get into that. There's a lot of change going on in the world for sure. We're recording this in January, mm-hmm <affirmative>, of 2025. And, you know, we've been under a fast pace of change on a number of levels for the last several years. But what has sort of changed in the world for you? What motivated you to write this latest book?
Kate (05:52):
Yeah, I think the big thing is that I hear consistently from leaders how much is changing, how fast the world is changing, how technology is changing, and how hard it is to keep up with. And I feel like we are at a point where the decisions that leaders make about the deployment of technology are having vast impacts on human experience, as well as on the potential futures that we have the opportunity to have. I think that leaders feel that too. I think that leaders feel daunted by the scale and scope of the decisions that they have to make. So, what I think is lacking: I mean, there are plenty of tech trends and predictions, and lots of people who are willing to be a futurist if it means being a provocateur, right? Yeah. Like, I'm willing to say, yeah, I predict this thing is going to happen. But what there is a dearth of, I think, is meaningful, actionable frameworks and toolkits that actually help leaders make sensible decisions. Decisions that balance a need for innovation and competitive advantage with making responsible choices, with figuring out the impact on humanity, on human experience, on communities that are affected downstream of these decisions, how those play together, and how we can make those decisions more responsibly.
Jim (07:15):
So, I have a theory that you and I can workshop at some point, because I think if people are listening to this and thinking that those kinds of decisions are for the board, or for the CEO, they're really for everyone. Mm-hmm <affirmative>. And my theory behind that, which we can workshop later, is that the mass or the gravity of the decisions people have to make, the scope of what they have to decide, is bigger and bigger because we're working with more complex systems. You know, I have a friend who is a product owner at a large retail chain, mm-hmm <affirmative>, and is responsible for what sounds like a really small part: how you fulfill the order when you're doing curbside pickup. In a nutshell, it's a lot more complicated than it sounds.
They have to think of everything for every product category. They have to think of every situation, every region, the weather. So even with seemingly small decisions, you're going to impact people's lives: customers' lives, colleagues' lives, team members' lives. So, my theory is that even at smaller, let's say quote-unquote lower levels of the organization, the magnitude of the decisions is bigger than ever. Sort of like a Moore's law of decision-making. Because what you're getting at is, again, maybe 10 or 15 years ago, maybe not even that long ago, a CEO just had to get some product to a store and get it out the door. It's a lot more complicated these days.
Kate (08:50):
So yeah, there are two things you brilliantly observed, two things that you made me think of. One is that CEOs are making a lot of those kinds of decisions, and they're often ascending to those roles not out of technology backgrounds, right? Yeah. You probably know as well as I do, in the world that we move in, interacting with a lot of executives, very few CEOs ascend to that role out of a technology background. It's usually more marketing or sales or operations, you know, in some cases finance, but rarely. Yeah. Right? Not often does the money guy make it to the CEO seat. So that's one factor: you do find people in that top executive role who are making these decisions and don't come in from a technology background. But even more important is that observation you made, which is that so many decisions are made downstream of that top executive role.
And they're some of the most vivid examples that we can think of, too. One of my favorite stories to tell over the last few years has been that of Amazon Go, and most people by now are familiar with it. Yeah. The just-walk-out grocery concept. And they have hundreds of them rolled out, and they're integrating that technology into Whole Foods stores, so hundreds more of those across the continent. But the interesting thing to me about Amazon Go is what happens when you first open that app, 'cause you have to, you know: you come in through the gates, you scan the app on your phone, you gather up your groceries, and then you scan the app again as you just walk out, and it tallies up all of your groceries. And you never have to go through a cash register or a cashier, you know, in a traditional sense.
But it's, you know, cameras and sensors and a constellation of surveillance technology that's making that possible. What I thought was so interesting is that when you first open that Amazon Go app, you get this kind of onboarding wizard that walks you through how it's going to work, mm-hmm <affirmative>. And one of the things it says is that because you are charged for everything you take off the shelf, don't take anything off the shelf for anyone else. And I've used that for so many years with audiences. In keynotes, I would ask the audience a poll question: raise your hand if you have ever taken something off the shelf for anybody else, or if somebody's ever taken something off the shelf for you. And like 100% <laugh> of people in the room will raise their hands. So, this is not unusual; this is not an edge-case kind of thing.
Yeah. Some Amazon engineer was like, "Hmm, how do we solve for this problem? Well, hardly anybody ever does this. Let's just make it an impossibility in the system." Right? So, what happens there is that now, within an Amazon Go store, we are not allowed to help each other. But as we just talked about, there are hundreds of these instances, and they're rolling out partnerships; Starbucks mobile pickup is using it too. So this platform is going to become one of the dominant paradigms of retail and of how we interact with one another. And you're telling me that that's not going to impact how we actually help one another, how we show up for one another, as time goes on, over generations? You know, maybe it gets fixed, but the point is still there: someone somewhere had an opportunity to say, this is a challenge, I don't know how to solve this, and so the way I'm going to solve it is to simply make it forbidden within the confines of the system. And it's not natural to human experience, the way that played out. So yes, many, many very consequential decisions are happening at quote-unquote low levels, at the operational levels, all throughout organizations. And it's critically important that people are thinking about the long-range impact of the decisions that they make.
Jim (12:44):
The consequences, right? And maybe sometimes the unintended consequences. Right. The road to hell is paved with good intentions. You kind of set up maybe the first of these. Gosh, I don't know what to call them now, Kate, you know, these contradictions, provocations, but they're really, I think, just modern.
Kate (13:02):
I like provocations.
Jim (13:02):
Provocations, modern wisdom. We'll go with that. <Laugh>. So, I wanna run a few things past you, because these are provocations that you brought up. I collected a few of them, and I think it'd be great for the audience to hear them. Because, one, we need to evolve our mindset first of all, which I think you're helping us do, honestly. And I think people are looking for the next tactic, the next move. Like, again, the genesis of this show is how do we make these things real? So I'm gonna run a few of these things past you, mm-hmm <affirmative>, and I want your kind of Kate O'Neill explanation <laugh>, like, what are you talking about? And let's talk about how we can open a mind or provide kind of a new tactic. I'll set up the conventional wisdom, and then I'll hand it off to you to describe what I think is your provocation.
Okay. <laugh>. So, for years, for, gosh, 15 years: I think Forrester came out in 2010 and said, you know, this is the age of the customer. I think that was more or less when this sort of web 2.0, digital transformation era took off. At least, that's how I saw it. So, we started to think about customer centricity. Let's think about the customer; the customer's at the center of everything. And we really got into personas and journey mapping and, mm-hmm <affirmative>, all that stuff, which is great, right? Mm-hmm <affirmative>. Which is great. But I think you're telling us we need to think about the human experience, mm-hmm <affirmative>. So, explain to us: what's the difference between just a customer experience and a human experience?
Kate (14:34):
Yeah. It's so true that there's this terminology, and it seems like the terms should be consistent or that they're in contradiction with one another. But the customer is just a role that humans operate within. And so there absolutely is a time and place when customer experience design or customer journey mapping and that sort of thing is relevant, because when you're thinking about humans in the role of customer, we want them to have the ideal journey or experience or whatever. But we also need to think beyond that customer role into, you know, how this impacts people in their lives beyond that interaction, beyond the transactional nature of a fixed relationship as a customer. For example, with that Amazon Go thing: the customer experience is perhaps fine, being able to purchase your groceries and not have to help anybody else. Maybe that makes you faster in and out of the store if you're in a hurry, mm-hmm <affirmative>.
You know, indifference to other people might be good for that. But how does it affect the lives of people who are there and, you know, maybe have some kind of physical limitation and can't reach the top shelf or something like that? How does this affect the lives of people who rely on a sense of connectedness at the grocery store with other people? Because maybe, for some people who are older, or infirm, or lonely in some way, these little moments of connection are really all they have sometimes when they're out doing their errands. So, I think these little moments give us glimpses into how to think about what it really means to be human, and into how our decisions that seem so disconnected, so many steps removed from those kinds of realities, actually play out in people's lives.
Jim (16:28):
That's great. And I often think of, like, healthcare journeys, where I'm a caregiver, right? I am taking care of my father, and I'm picking up his prescription. These are real stories, yes, we can get into it. Sure. And I love that advancement. So, another one: when we think about customers, I have a lot of clients who say, okay, I get it, I buy into the customer experience. Now, I gotta go figure out what they want. So, wait, who is my customer? And wait, what do they really want? What problems are they trying to solve? Going back to one of my favorite questions. And so we launch these research projects. I have a lot of customers who will say, okay, time out, I want to get it right (we'll come back to predicting in a bit), so I launch this huge research project where I look back at the last two or three years of sales history and customer complaints to get to know my customer. But I think you're suggesting kind of a new adoption of foresight. So insight's great, but is foresight better?
Kate (17:28):
Yeah. So, I come from a background that is heavily invested in analytics, and I definitely am not negative about analytics. I think it's really important to be able to find the data you need to validate decisions, you know, get some confidence in investments you're going to make, and so on. But that's not insight, as much as analysts would love to use that word and conflate those terms. Insight has so much more to do with wisdom, so much more to do with finding seemingly conflicting patterns and understanding the tension in those seemingly conflicting truths. Something that's timeless, something that reveals something to us in a human way, again and again and again. An insight is a nugget, a lens that we can look through. Yeah. And that's what insights do. Foresight is an opportunity to say, as we try to understand the complexity of the terrain we're assessing...
Like, as we ask meaningful questions, and we find that there are multiple conflicting answers to those questions, and we arrive at these, you know, beautiful insight lenses, mm-hmm <affirmative>, we might be able to distill that down to a timely approach. We need to make a decision today; we can't, you know, spend two months deliberating like a philosopher in a cave. What is the decision we need to make? We need to make it now. So that's a timely approach. But what also happens in the distillation to that timely approach is that we have sort of exhaust that comes off, and I call that exhaust bankable foresight. I like to think that with really good consideration, you know, really good insight analysis, really thinking things through, asking good questions, really listening, pondering human experience...
What we come to is this sense that there are things that are most likely going to be true in the future that aren't necessarily true now. Like priorities we can tell we are going to have to face up to, but that aren't necessarily requiring us to adapt today. What we can do is triangulate our decisions with those foresights. We can begin to lay in place timely approaches, day by day, quarter by quarter, that start to move us incrementally in the direction that foresight is pulling us. And I think that doesn't necessarily contradict, like you said, the understanding of the user experience or the, mm-hmm <affirmative>, you know, the analytics about a person, or looking at all of the different data we can gather about human performance or behavior. What those data points do is feed into this better, more well-rounded way of reframing our thinking and our decision-making so that it is more human-informed.
And by the way, you can use AI tools, generative AI, in this process. I think it just requires that we stay awake at the wheel. Yeah. You know, using these tools to review the results that we get back from the crunching of this data and what's suggested back to us. If something comes back from generative AI that has the ring of truth, well, wonderful. That's a great use of the tool. But we have to recognize it as the ring of truth, and then figure out how it applies, how we're going to distill timely approaches, and what kind of bankable foresight comes from us putting...
Jim (21:04):
Human intelligence on it, which we'll come back to. So, you know, I love research projects, though sometimes they're frustrating, even the kind of small, incremental research. Because, you know, sometimes it's satisfying: you do that query, or the survey comes back, and it's like, wow, it didn't really tell me anything new. So yeah, it's satisfying, okay, we're on the right track. That's sort of validating. But then you get excited when there's something you didn't see coming. Like, oh my gosh, now there's something we didn't know about, something that challenges us. Right? But then human intelligence kicks in again, and you're like, well, wait a minute, is that real?
Kate (21:40):
Right. Right. Exactly. The best is when you have two different sources, like an internal and an external, and they validate one another, and then you can just be like, all right, cool, we're good here, we're done. <Laugh> We did our research.
Jim (21:53):
That is nice, but let's agree: it's way more fun when they contradict. Like, you thought this, but you didn't know this is what they were thinking. I love those things.
Kate (22:03):
Yeah. But I always tell my clients you don't pay a consultant to come in and tell you something you don't know. You pay a consultant to come in and tell you, you are very smart. You are thinking the exact right thing.
Jim (22:13):
You're very smart for hiring us. <Laugh>. So, I think that behind what you're saying with this foresight is this ability, and we talked about sort of predicting the future and future-proofing. And I think what's behind that is that we feel like, you know, we get one shot. Like we get this one shot at making the best product ever, and we're either going to be winners or losers, and if it doesn't pan out, we'll never get a chance to do this again. So we have to know everything. Like we have to predict every possible outcome, every possible future, so that we don't paint ourselves into a corner or get stuck or have technical debt or whatever thing we're worried about and scared of. But you talk about not just predicting but preparing for the future, maybe being future-ready. So what's behind that, Kate?
Kate (23:04):
Yeah. There's this term, future-proof. I've always, yeah, had a problem with it. It just strikes me as such a silly articulation of this concept. Yeah. Because you can never be future-proof. That's not a thing that really happens. The idea makes me think of, you know, bringing a kid over. I don't have kids, so it's when a kid would be coming over to my house and I have to kid-proof my living room or my kitchen, right? Yeah. <Laugh>. And future-proofing doesn't make sense like that. I'm not going to create future-proof spaces in my home; that's not how that works, unless there's some sort of vortex of, you know, time-resistant experience, I don't know. But no, it makes much more sense to me to think about being more ready for the future, being future-ready rather than future-proof.
And one of the ways to do that, I think, speaks to one of the things you just said: you know, we think we're only gonna get one chance to do it right. And I think one of the reframes we have the opportunity to make is to think more often about futures, not future. Because the future isn't just one set path. I like to think of it as almost more like a prism: from this moment, there's this kind of radiating realm of possibility, many, many different possible outcomes from the moment we're in. And that feels more liberating, I think, and more empowering, because it shows how we have the opportunity to shape those futures and to narrow the realm of what is possible through the actions and decisions we take.
I'd also suggest one of the most valuable questions that I ask my clients: when they think about the future from this moment, what is the most likely thing that's going to happen, and what is the most preferred thing that they'd like to see happen? So out of the possible futures from this moment, what's the most probable, and what's the most preferred? And then, if you do the math and subtract one from the other, what's the delta between those, and how can you actively move them closer together? What are the decisions and actions you need to take today and every day going forward to close the gap between the most probable outcome and the most preferred outcome? And truly, that's a million-plus dollar question. Like, every organization can benefit from thinking that way.
Jim (25:41):
Well, and I think, you know, we like to take things to the digital realm here. I think one of the ways we also have to think about it is that being ready for the future means giving yourself options. You know, every product strategy or digital strategy that I've come up with or been involved in, I can reduce to either a party or a picnic-planning metaphor <laugh>, because they're fun; everyone can get behind those metaphors. You know: if the weather's good, we're gonna go here; if it's bad, we're gonna go inside. Right? Mm-hmm <affirmative>. So, keeping your options open. And, you know, more flexible architectures and headless commerce and APIs and fluidity, all these kinds of technical things, they kind of exist to give you options. Right. Because you don't know what's gonna happen. We may have to refactor something, we may have to alter that journey, or we may have to give different things to different personas if preferences change. So, you know, I think that's another philosophy that goes into that.
Kate (26:40):
Yeah. And I think it's an interesting opportunity to bring up one of the themes, one of the observations I make in the book, 'cause I think it applies to some of what you're talking about there. Too often, when we talk about digital transformation and we talk about innovation, we talk about them as if they're synonymous with one another. Yeah. And they're really, really not, in my experience, in decades of doing this work. Thank you. Yeah. What I find is that digital transformation is the work of catching up to what the market already expects, and innovation is the work of looking ahead to what is truly novel and what is going to move you into one of those likely futures, or the future where you'd like to be. Those are very different tasks. One is not better than the other.
And it's not like you choose to do one but not the other. You've gotta do the work of catching up, and every organization has both of these tasks, right? Every organization has areas in which they're still catching up to, you know, the way that workflows are happening in other organizations, kind of internally, operationally, whatever. Yeah. That's digital transformation. And every organization is also trying to think about how they can be the ones that set the tone for what is gonna happen in their industry and their market and so on. So both of those are happening. And when you talk about the various pieces of technology that afford us more fluidity and flexibility, I think we also need this other dimension of consideration: understanding that this needs to happen across both a catching-up-to-the-present-moment, mm-hmm <affirmative>, digital transformation perspective, and an innovation, moving-us-forward-into-the-future perspective.
Jim (28:26):
I really like that distinction, 'cause you're right; they do get conflated, I think. Mm-hmm <affirmative>. Everybody's looking for the easy button, right? Or the pixie dust. Like, well, if I digitize it, it'll be innovative. Well, maybe not, right? Not to mention, by the way, it may be innovative, but nobody cares. If it's not solving a customer problem, then, you know, what's the point? So is it really, right?
Kate (28:46):
What problem are we solving?
Jim (28:47):
We have to talk about the distinction between human intelligence and machine intelligence. You touched on it before. You know, how do they, how do we sort of yin and yang and have checks and balances between what we know intuitively, right, or through experience, which is full of its own biases, pitfalls, and risks, and machine intelligence, which is seemingly faster and smarter and has more facts than I do, but is also sort of flawed and biased? Yeah. And hallucinating. How do these work together?
Kate (29:25):
Well, it's funny, I think we all spend a lot of existential-angst cycles on this question. The difference between human intelligence and machine intelligence: where do humans thrive, versus where is AI going to take our jobs or, you know, supplant us or whatever? And I think one of the ways in which it's overwrought is that machine intelligence has been heavily influenced by human intelligence, right? I mean, obviously, it's built out of human intelligence, mm-hmm. But it's also been modeled after human intelligence. So many of the metaphors of artificial intelligence draw directly from concepts of how our brain works. So what we've really done is use human intelligence to build a faster, more connected version of the human brain. Like, what would happen if the brain were able to do more cycles and process faster? It shouldn't feel so threatening. I think it also shouldn't feel so alien to us.
It's <laugh> definitely a familiar model. That said, it is inherently artificial in the sense that it's synthetic, right? Mm-hmm <affirmative>. The ways in which machine intelligence knows what it knows: it's training itself, in some cases, like unsupervised learning, on a lot of random information and looking for connections, which incidentally is often how we learn as well. But what we do in the world as embodied beings is move through the world with a very distinctly human, embodied sense of it. We are constantly sensemaking. And sensemaking is about meaning; meaning is the fundamental attribute of human experience. And it is something that machines don't have the ability to be in touch with in the same way, and maybe never will. At least not until meaning for machines is made from whatever components they will make meaning from. Like, once some kind of general intelligence is achieved, and they're drawing a sense of their own existence in the world, a sort of self-consciousness, from the world, then maybe. But that's not even relevant to our understanding today.
Our understanding today has to be about how we have the most meaningful experiences as humans, and how technology, even advanced technology, fits into that. And so I think the question we really ought to be asking is: how should we be using AI to better fulfill meaningfulness in human experiences? How should we be using AI to supplement meaning in the world around us? How can we be automating experiences in such a way that we don't lose a sense of meaning? You know, I alluded earlier to the idea of loneliness when people are in grocery stores, and this whole Amazon Go thing of taking away that one little interaction that somebody may look forward to. Well, what are the opportunities elsewhere? You know, when it comes to banking transactions, when it comes to wayfinding, when it comes to any everyday type of thing that we have in the world, there are just numerous opportunities for us to think: in what ways could we infuse just that nuance, that subtle sense of being seen and being understood and being connected with others?
What are the ways in which we can do that? And that, I think, is a question well worth every organization asking itself.
Jim (33:05):
I agree. I'm glad you're leaning into that. That's really been motivating me in a lot of my interactions: purpose and meaning are not going away. I think we need to incorporate them more in our planning, in our design, you know, in our everyday work. It's kind of like, I'm really gonna throw it back to "culture eats strategy for breakfast," mm-hmm <affirmative>. I love Drucker, yeah, it's Peter Drucker. The older I get, the more meaning that has to me. Yeah. And I get it. It's like watching an old movie: you see all the adult humor today, you know, yeah, <laugh>, that you didn't see as a kid. You're like, oh my gosh, how did they make this movie?
Right? So, you know, some of these things are coming back, but yeah: culture, purpose, meaning, it's not going away, and it's gotta be embedded in the way we do things. We're kind of out of time here, Kate. There are so many other things I want to ask you, but I'd love to close on one thing you could leave with the audience, some advice they could take away. 'Cause you've given us a lot of contradictions and provocations, and I think what people are looking for is to take some of this conventional wisdom from just the last five to 10 years and advance it. But if there's one thing you could tell folks, what would you ask them to do? What would you give them that they could do today?
Kate (34:26):
I think you hit it when you said that meaning and purpose are not going away, but I want to be even crisper about that. Meaning, as I said, is the fundamental human experience, but at every level we consider meaning, it is about what matters. Meaning is always about what matters. And purpose is the shape that meaning takes in business. So, when we try to think about how those facets of understanding, that wisdom, that human insight can serve us as we consider technology decisions, those are the questions to come back to: what matters in this, and what is likely to matter going forward? That's the question. "What matters next," in the title of my book, really draws from this insight that what matters to us now and what's likely to matter in the future are the big questions we need to be asking ourselves as we deploy technology that shapes human experience. As we build businesses and go about our lives: what matters, and what is going to matter?
Jim (35:22):
And that's whether you're a CEO or a business analyst, or you're contemplating your career move or what you're gonna do next. So, Kate, thanks so much for sharing your wisdom.
Kate (35:31):
Thank you, Jim. This has been great.
Jim (35:33):
And we'd love to talk to you next time.
Joe (35:36):
You've been listening to What If? So What? A digital strategy podcast from Perficient with Jim Hertzfeld. We want to thank our Perficient colleagues, JD Norman and Rick Bauer, for our music. Subscribe to the podcast and don't miss a single episode. You can find this season, along with show notes, at Perficient.com. Thanks for listening.