Jonathan Brill, Futurist and Best-Selling Author on AI, Rogue Waves, and the Octopus Organization
In this episode of “What If? So What?”, host Jim Hertzfeld sits down with futurist and best-selling author Jonathan Brill to explore how organizations can prepare for disruption and thrive in an AI-driven future. Brill is the author of “Rogue Waves: Future-Proof Your Business to Survive and Profit From Radical Change” and “AI and the Octopus Organization: Building the Superintelligent Firm,” which explore how seemingly unpredictable shocks—pandemics, energy crises, political upheaval—are becoming more frequent as global systems collide.
In this episode, Brill explains why traditional three- and five-year planning no longer works and instead urges leaders to focus on the real-world constraints shaping the pace of technological change. He introduces the concept of the “octopus organization,” a new model where decision making is distributed rather than centralized, and AI empowers people across the enterprise with better context, insight, and agility.
He also emphasizes that while technology evolves quickly, cultural transformation can take several years, which means that leaders must start reshaping their organizations now to be ready for 2030 and beyond.
Special thanks to our Perficient colleagues JD Norman and Rick Bauer for providing the music for today’s show.
Episode 69: Jonathan Brill, Futurist and Best-Selling Author on AI, Rogue Waves, and the Octopus Organization - Transcript
Jonathan (00:05):
That means here's the implication for your business. That means that AI plus your people is about to be smarter than Einstein. And not like a little bit, not like a little bit, like a lot. And so, the question is, how do you manage in that world where each tentacle, each sucker on the tentacle is smarter than you are, is faster than you are, knows as much about the organization as you do.
Jim (00:30):
Welcome to What If? So What? The podcast where we explore what's possible with digital and discover how to make it real in your business. I'm your host, Jim Hertzfeld, and we get s**t done by asking digital leaders the right questions: what if, so what, and most importantly, now what?
So today we're joined by Jonathan Brill. He is one of the world's leading futurists. He's a transformation architect and bestselling author of “Rogue Waves,” subtitled “Future-Proof Your Business to Survive and Profit From Radical Change.” I have a couple questions there. He's also a former global futurist at HP, and a couple interesting things I like: he shaped more than 350 products, ranging from AI products to metaverse technologies. I learned about theme park rides, and I was excited to learn about the World's Fair pavilion. I didn't see that one coming, Jonathan, but welcome to the podcast.
Jonathan (01:20):
Absolutely.
Jim (01:25):
Now we've got this book. And now you've got the new one coming forward. So definitely want to dive in on that, I think really timely in a lot of ways that we'll dive into. But before you tell us about your World Fair Pavilion, Jonathan, what did I miss? And kind of your background, kind of what brought you to this role and this calling?
Jonathan (01:41):
Yeah, you know, you look forward and the future seems random, and you look backward and so often there was only one path. My father's best friend was an author, and I was speaking with him the other day and he said, I actually made my money as a keynote speaker. I wish I'd known this like 45 years ago. I wish I'd, like, shortened the path. But what he taught me is that you can invent the future. You can create the future if you take the time to really observe, to know a little more about what's going to drive it. Not because you know exactly what's going to happen, but because you know what's not possible and you know where the uncertainties are. And with that you can lean into, you can take advantage of, things that no one else is even paying attention to. 'Cause they're so busy trying to hit the day, the week, the quarter.
Right. All of that's incredibly important, but that's not what breaks you out into the next level in your family, in your business, in your career, in your life. It's that ability to see around corners to avoid downside risk, which is a lot of what I do with security agencies and, and large financial services companies. And then take the upside, which is a lot of what I do with technology companies. Yeah. Figuring out, okay, well where's that new opportunity with things like artificial intelligence with, with new markets that are available all of a sudden for, for these startups around the Bay Area.
Jim (03:21):
So, there's always something that's a little unexpected, right? Was it Yogi Berra who said, prediction is hard, especially about the future? I probably hacked that one, but yeah, I think what you're saying is being prepared for the unexpected, you know, having the ability maybe to see it coming. And I think that was the focus of Rogue Waves. Great title, by the way. And I say that because a friend of mine, a professor with a PhD from MIT, was actually focused on rogue waves for the petroleum industry, believe it or not. Yeah, for, you know, an offshore rig. So yeah, I don't think that's what you were writing about, but you're talking about...
Jonathan (03:59):
It, it, it's very much what I was writing about, the type of modeling you do for looking at rogue waves. So, let's just talk for a second about what they are. Yeah, yeah. So, in the deep ocean, 120-foot-tall walls of water can pop up out of nowhere. Yeah. When individually manageable waves collide at the same time in the same place. It's called a focusing event. And in those events, it's almost a quantum phenomenon. You don't go from zero to one to three, you go from zero to 10 in a second. And I think the same thing happens in a whole range of complex systems. We see the same modeling being effective when you look at flash crashes in the stock market. You see the same modeling being effective when you start to look at types of pandemics and, and epidemic risk. So, this, this is a way of looking at the world where there are these almost non-computable risks that are suddenly actually more predictable. Not because you know exactly which one's going to appear where, but you understand what drives the frequency. And what we saw up until the 1990s is that we thought that rogue waves were really rare events, you know, one-in-10,000-year events. And what we believe now is there's one going on right now somewhere in the ocean. It's not that rogue waves are rare, it's that the ocean is big. Right?
Jim (05:24):
Right.
Jonathan (05:25):
The challenge is, in some places, on the southern coast of Africa, these things are not rare. They cluster in times and places, in events. When there's a storm off the Southern Ocean that you would never see in Africa, you know, 36 hours later you get rogue waves off of Tanzania or wherever. And my point is that when you step back and you look from a satellite view at what's actually going on, it's very different than looking at it from ground level. Right? When you look at the tide, if you're a little crab crabbing around, right? It's constant chaos. You look up, you get knocked back into the ocean, you have no idea what's going on. If you're human and you're looking at it from a pier, you see the waves going up and down. You see the tides going up and down.
Right. But you look at it from space, and what you see is that the tide actually never changes. It is locked to the moon, which also doesn't move. So, it's this ovoid of gravitational pull. It's an egg, right? It's pointed toward the moon. Yeah. And what happens is that the earth spins around, you know, tens of thousands of miles an hour. And the only place where there's actually a lot of chaos is at that moment, in that place, where the earth hits the water. These are things you would never know if you looked at it from the pier, things you would never know if you were a crab. But if you can take a step back and look at the bigger picture, very, very different things become true.
Jim (07:03):
Great metaphor, by the way. Are you suggesting organizations, or people, quite simply step back? Do we pause to find the next wave? Is that sort of the direction that you advise people? Yeah. And I guess we'll take it forward into kind of your observations on AI and organizational development. How do we see this coming? How do we see what's next?
Jonathan (07:27):
I think there are two things, right? The first is recognizing that as the world becomes more connected, as it moves faster, we're likely to see more collisions of systems. And because we see those collisions of systems, you're going to see more rogue waves, right? More focusing events, more crazy happening at the same time in the same place. Pandemics, land wars in Europe, the rise of populist political leaders, energy shocks, you know, financial shocks, right? These things are all going to become more frequent. And so, the things that were normal growing up in our careers, when the world was getting more harmonized, right, are going to go the opposite direction. And so, you have two strategic approaches here. We can use traditional planning techniques and say, okay, what's our plan? One-year, three-year, five-year plan? I don't know if you've noticed the last five years; I don't think it's gonna work in the next five years.
But what you can do is say, okay, well what do we know about the world five years from now? Right? We'll have big breakthroughs in artificial intelligence. But what we actually know is that there's a certain amount of energy we can produce. There's a certain number of data centers we can build. There's a certain amount of chips we can put in those data centers. There's a certain amount of network bandwidth to get data into those data centers from the edge. And there's a certain rate at which we can change the fleet of edge devices, mobile devices, around the planet. And so, independent of whatever that crazy research and development is that I think is gonna happen, because we're dropping, whatever it is, you know, half a trillion, a trillion dollars a year into this thing, yeah, there will be a conceptual breakthrough. We can't know what that is. But what we can know is that it will take time for that to diffuse into the enterprise, because the physical capital will take time to build, and because companies typically, you know, take about three years after a successful product is launched to really start scaling it.
Jim (09:32):
Yeah.
Jonathan (09:33):
So, we kind of know the world in 2030 with great accuracy for AI today. You know, so, so it's like, think about the big picture. Think about it from space, from the economic viewpoint. What do we know is and is not possible? When you work back from there, you can kind of know when you should be going toward the water and when you should be pulling back.
Jim (09:57):
<Laugh>. So, you kind of highlight some constraints, and I kinda like that model. There's only so much budget, there's only so much silicon, there's only so much energy we can produce. You know, I like it, it's an interesting way of looking at it. And, you know, there's an old model, I'm sure it's still around, the theory of constraints; you know, it gets into operations theory and so forth. I think a lot of organizations look at things that way. They only have so much to go around, only so much time in the day. Mm-Hmm <affirmative>. One of the things that I'm hearing from a lot of organizations with respect to their AI adoption is, boy, we're excited, you know? Mm-Hmm <affirmative>. Definitely starting to see some returns. Three years is a good, I think we're seeing that there's, I think we're...
Jonathan (10:39):
Yeah. 2026, 2027. Yeah. We'll see some hits. And by the way, you shouldn't have yet. Yeah. Like historically, there should not have been a hit in 2025. Yeah. You see companies like Lovable hitting, you know, a hundred million ARR in six, seven months. Right? So we're seeing the possibility of new organization types and new types of adoption. We're seeing the rate of adoption of OpenAI, right? So we know that something is possible here that we've never seen before, but at the enterprise scale, there's no reason to think that in 2025 there should have been a big hit.
Jim (11:14):
Okay. Okay. But back to sort of the application of constraints. Like I said, I'm seeing a lot more, I would just say, comfort level <laugh>, you know, from some of the larger enterprises that we work with. And I think there's sort of this realization now that, you know, they're sort of falling back on what I call legacy thinking around traditional operating models and org structures, and mm-hmm <affirmative>, it's sort of this acknowledgment of the human element, which I would say is still a constraint, you know, in this system of systems.
Jonathan (11:47):
It's the constraint.
Jim (11:49):
<Laugh>. I'm glad to hear that. I think it's actually releasing a lot of thinking; there's a lot of anxiety that's going away. You say, well, okay, okay, we all agree this is the constraint. I think this is what I wanna ask you: is that sort of the motivation for the new book you've written, you know, about AI and the octopus organization? Yeah. Is that kind of the genesis of that? And maybe you could elaborate on that.
Jonathan (12:14):
It is. So, a lot of what I do is hang out with organizational psychologists and think about corporate governance and how we shift corporate governance to manage risk. But the interesting thing about risk is there's two sides. When you measure risk, you know, what you're really looking at is beta, the amount of change over time. And what you really wanna do is take the upside and not the downside of that change. Yeah. And the way you do that is by managing governance, by managing the timing, the sequencing, the hedging of actions. We do that in contracts. That's what our command-and-control structures, communication structures, and organizations are; that's why they're there. Mm-Hmm <affirmative>. The thing that is very exciting to me about artificial intelligence at the enterprise scale and in mid-size companies is this: the reason that we've built our organizations, the architecture, the processes, the culture, the way we do today, is that three things have been true for 170 years, since we invented the modern organization to run the railroads in the United States.
The first organizational chart was printed by the New York and Erie Railroad in 1855. Three things were true then that I think were true until this year, or maybe a year from now. The first is that you couldn't trust semi-literate people with no scientific decision-making skill to make strategic decisions. So we had a couple people at the top. We had the big man, you know, it's the 19th century, the great man theory, at the top, and then there's a nervous system that goes down and tells everybody else what to do. We've been doing that for a long time. We keep talking about how to do something different, right? We hear Amy Edmondson, who's one of my mentors and heroes, talking about psychological safety, right? Have we really, mm-hmm, improved psychological safety in our organizations in the last 20 years? Eh, you know. We still live in that great man theory.
We're just more agile <laugh> about it. The thing that is interesting to me about AI plus your people is that it can provide frameworks, it can provide mentoring, it can provide a coach to bounce decisions off of. As a result, your junior person might be able to operate with less bias and in greater decision-making complexity than most senior leaders today. Let's think about that. AI doesn't have to make the decision to make your people dramatically better decision makers. The second piece is about context. In 1855, if you're on the railroad, you know, maybe you have a telegraph that goes one way. And if you're in the telegraph booth, you get the information. But otherwise, you have no idea what's going on a mile in either direction on 12,000 miles of track. Right? So, you have no context. You have a pocket watch, and you have a schedule.
And if those things are misaligned, you have problems. Thomas Edison, right? Inventor of the light bulb and the multiplex telegraph. Early in his career as a telegraph operator, he was involved in an accident because one of the engineers' pocket watches was off by four minutes. Oh my, I didn't know that. You have no context. Yeah, yeah. In 1855, you have no context about what goes on anywhere. Email made that better. Telephones made that better. Better meeting structure, meeting hygiene, you know, agile, continuous delivery, it's all made it better, right? But people really don't have context. The thing about AI, right, is when you have transcriptions of every single phone call, every single piece of analysis, every single board meeting, blah, blah, blah, and you're able to anonymize it and do the appropriate, HIPAA-compliant things, right? Right. In healthcare. All of a sudden, every single person in the organization can have the kinds of dashboards that your CEO dreams of today.
Yeah. This is not a technological problem. This is a problem of diffusion right now. So, we have better decision-making and we have context. What's the third reason that you have senior leaders today? It's because they've screwed up enough stuff over the last 25 years that they have a mental instinct: that's a bad idea. No data, but a mental instinct that's a bad idea. So, the thing about AI is it might not get good at telling you binary things, yes or no, but it's a probability engine. And so, it's gonna be really good at giving you a range of options. That's literally what it does. It's betting, based on these million things, what's the most likely thing to be in the middle? Right? That's the magic trick. So, it's gonna be really good at scenario analysis and looking through the second- and third-order impacts of decisions, because that's what it does.
So, all of a sudden, your junior person is able to make less biased, more complex decisions, understand the implications of those decisions, and think about, okay, what does that mean across the entire organization? What does that mean across the entire industry? This is a very interesting idea. Yeah. Yeah. Because for the last 170 years, we've assumed you need a great man at the top. And not saying that goes away, but I think to deal with the speed, the agility, the complexity, the ability of a company like Lovable to get from zero to a hundred million dollars with seven salespeople, something has to change. Three-year planning cycles are not it in the near future. And that change is a change in the way we think. The 19th century model is built on a centralized mind, a big brain, someone at the top, or maybe it's a C-suite at the top, or maybe it's a board and a CEO, like some mix of that.
And then there's 50,000 people at the bottom who are all a bunch of automatons. Well, there are other ways of doing this in nature. We have animals like the octopus, which is a crazy, crazy, crazy animal in all kinds of ways. But the most exciting thing to me is that it doesn't have a centralized mind that's up in the brain. It has a distributed mind. Two thirds of the neural tissue of the octopus is not in the brain. In fact, when you look at the arms, and weirdly, these aren't called tentacles, they're called arms, they go out on their own. And there's a reason why they look like they're moving around randomly. It is because they are all looking independently at the world; the edge of the arm is doing its own thing. And then when it thinks it sees something, it smells something weird through its skin, it sends that information up to what's called the shoulder.
And in the shoulder is a kind of tentacle brain. And it says, okay, let me process that a little bit. Cool. And then there's a network between the eight tentacle brains, what's called a neural necklace. And they talk to each other, and they decide what they're gonna do. And so that's why the octopus looks like it's moving randomly, and then suddenly it's coordinated. The big brain probably never had anything to do with that decision. And then those tentacle brains tell the big brain what they've just done. Innovation goes from the bottom up. And it works. It works. We know it works, 'cause these things survive in nature. And they're incredibly agile creatures. So how do we do that in our organizations? The big fear is about command and control. The interesting thing with the octopus is that the big brain can actually take a look at a tentacle. It'll shift an eye over there and look at it and start to control the tentacle. So, it's not losing control by giving up command. It can take it back at any time. Right. And I think this model is incredibly important for how we move forward as organizations. The big brain can focus on one thing at a time, as you know; otherwise you end up with CEOs with, like, you know, the attention span of fruit flies, and nothing happens <laugh>.
But as the octopus does this, the other seven tentacles are all doing their own thing. They're all coordinating separately. And so, this is a really powerful way of thinking about the future of organizations. And it's a thing that I don't think was really possible at any scale before artificial intelligence. Now we see it in irregular armies. We see it in games like basketball, right? That are highly collaborative and highly intuitive games versus games like football where there's one person who calls the play. Hmm. And so, we know we can do this as humans, it's just that we haven't been able to do it at scale before. That's the breakthrough.
Jim (21:04):
Jonathan, this is great to hear. I think more people need to hear this, in the sense that it's not just great advice (better than a trip to the aquarium, by the way), but it's a change of thinking, you know. Because when I'm listening, I'm thinking of other paradigms, for lack of a better term. Well, if we just, we need more BI, right? We need more business intelligence, more analytics. But this is going beyond that. This is not just information flow.
Jonathan (21:29):
It's not about rolling information up. Yeah. It's about putting the information where innovation is possible. Right? Right. It's about pointing the organization to innovate.
Jim (21:41):
To get an understanding of that is to rethink how you're applying AI to your business completely. Because there's so much momentum around automation. Like, oh, I can just automate this, I can make this faster, I can make this cheaper.
Jonathan (21:54):
We have to do that too. That's <laugh>, yes, that's a baseline. And if you do that in the next year, all of your competition's gonna do that too. Right? That's easy. That's table stakes. You gotta do it. The question is, what's the differentiator, right? What's the thing that allows you to move faster, be more agile, take more opportunities with greater granularity, right? And that's not about optimization; that's a baseline. It's about coordination.
Jim (22:23):
Yeah.
Jonathan (22:24):
You know, in the next year, AI will probably improve the quality of output by about seven times. That's a little less than what it's done in the last year. Multiply that out over five years, take out a couple numbers, so you're at 30 times improvement of output, right? I think we might be thinking that, like, Copilot plus 3x is what we're gonna get. <Laugh>, that's not what I'm talking about. And in that world, literally, like, when you look at ARC-AGI, right? And this is pulling a little edge case out of the air. ARC-AGI is this metric that we use to look at abstract problem solving with artificial intelligence. So, you know, like those weird things from tests in school, maybe, where there's like a triangle, a circle, a square, and one's pink, one's blue, one's orange. Now figure out what number it's representing.
Like that weird, like, you know, you know what? People suck at that <laugh>. We suck at it. So, it's not a surprise that AI is getting better than us at that thing fast. Sure. <laugh>. The thing is, in the next quarter, maybe the next year, if you just kind of look at the line, AI's gonna get better than your average human at that thing. And when you think about what someone like Albert Einstein was really good at, it was: I have no idea what the framework is, I have no idea what the algorithm is, I have no idea what the problem is, but I've got a bunch of random facts here. What am I gonna do with them? AI can get much better than humans at that thing.
Jim (24:02):
Yeah. What am I looking at <laugh>?
Jonathan (24:04):
That means here's the implication for your business: AI plus your people is about to be smarter than Einstein. And not like a little bit, yeah, not like a little bit, like a lot. And so, the question is, how do you manage in that world where each tentacle, each sucker on the tentacle, is smarter than you are, is faster than you are, knows as much about the organization as you do? And for legacy firms, this is going to be an issue. There will be a lot of Yellow Pages moments, right? <Laugh>, there will be a lot of Kodak moments in this conversation. But the real issue is not technological. Like, the technology will get there. It will get there. You know, maybe I'm off by a year, maybe I'm off by two in my assessments, but I'm not off by five. And it will get there faster than the culture change.
So, the question to be asking is, how do I shift my culture now? When I think about culture, here's what I mean. When you look at the way firms are organized, there's the architecture, right? Who's in command, who's in control, how do we communicate? There are the routines, or the processes, depending on which business school you went to, and these are how we get things done. For me, I define culture as how we align around what happens next. Is it the great man? Do we do consensus decision-making? Do we just roll the dice a lot? Right? Lots of cultural ways to get from here to there. Lots of systems to do that. But what I know is, in a world where all of your people are smarter, faster, and better than you are with AI, the culture has to change from one where we assume that people can't make decisions, they don't know what's going on, and they can't think through the implications of their decisions. We know that's gotta change. We've gotta become octopus organizations.
Jim (25:58):
Jonathan, this is such a mind shift, I think, for many people. I'm really glad you wrote the book. Just to give our listeners maybe one thing to do, one thing to sort of think about, because yeah, this has to start now, right? The shifting can start, and it starts with maybe some small steps. What's one piece of advice or one action that you think listeners could take to get started on this shift?
Jonathan (26:26):
I'm gonna give you two. Okay. The first is to talk about the timeline for this transformation. When you do an M&A deal, and I've talked to a bunch of CEOs, I've talked with M&A experts, you know, it takes about two years in reality to get everybody in the right place, to get the organizational structure right again. And in the middle is a whole bunch of chaos, right? But it's about two years; if you do it in 18 months, you're doing good. Maybe that's about right. When you do a technology transformation, you bring in SAP, you bring in Oracle, and you shift your supply chain, your finance organization, your human resource organization, that's about a three-year process. A year to plan it, a year to politic it, and a year to do it. And then a year to fix everything you just broke <laugh>, right? And maybe you will do it in two and a half years. You're not doing it in one if you're an enterprise, right? Maybe if you're a small organization, you can do it much faster. But if you've got 2,000, 10,000, a hundred thousand people, that's kind of the timeframe. Now, how long does it take for an enterprise to change its culture? He will tell you six months. That's not been my experience. Hmm. And that's not been the experience of HR executives and CEOs that I've met. And he is a genius. I know him well. He's great. I just disagree on this point, that you can actually do sustained rapid transformation of culture. And what I've experienced is that it's about a seven-year process. You know, every single member of the C-suite needs to move away.
Mm-Hmm. Every single layer of the organization needs to get restructured. And this is what we saw at Microsoft with Satya Nadella, who is hands down the best technology CEO on the planet at this. So, if you believe that the change kicks off in 2030, 2032, like the real one, the one where things are 30 times better, that means that we probably need to be starting the cultural transformation now. We need to be thinking about the technology transformation in 2027, 2028. We need to be thinking about how we rearchitect our firms in really deep ways in 2028, 2029, 2030. So that's shift number one. Shift number two is, how do you get your brain around this? Like, we're talking about some science-fiction-level stuff, right? And I think you make a list; you make three lists. One is, what are the things we can't do because our brains are limited?
What are the things that, like, if our organization had super robots in it, what would it do? How would it work? Right? And some of that stuff will happen, and it'll happen in weird, spiky ways. It won't all happen at once. And the second is, what shouldn't we do? When I was a middle manager, right, this was not my job description, but I spent 65% of my time in alignment meetings, just trying to get everybody going in the same direction. Yeah. Right? We're great apes; like, the feces flinging isn't gonna stop <laugh>. But what we do know <laugh> is that AI can help us make decisions much faster and in different ways. So maybe the politics changes really dramatically. And then the third thing. Like, those first two things are really exciting to talk about. We're gonna build AI God, we're gonna shift our labor force.
Those are really exciting to talk about. They're also table stakes. Those are red ocean. There's no opportunity there. We may or may not have to do those things. They may or may not happen. Mm-Hmm <affirmative>. What we do know is that there's a whole bunch of stuff that we won't do. Like, either it's just not worth a hundred dollars an hour of loaded labor, or it just sucks, right? Like, I don't make lists. I hate making lists. I suck at making lists. You know why I use ChatGPT every single day? 'Cause it takes my gobbledygook and turns it into lists so I can give it to other people so that they can get the work done. That's the thing I won't do. I used to hire a person to literally make my lists for me <laugh>, because I'm that bad at it.
What are those things that don't make sense at a hundred dollars an hour, but at 10 cents an hour, 40 cents an hour, a dollar an hour, would be absolute no-brainers? That is the blue ocean. That is the big opportunity of this transformation. That's where all of the revenue, that's where everything is. When you look at something like Lovable, we were talking about this, it's a vibe-coding tool. So if you aren't a programmer and you can explain what you want in English, say you're a product manager, it can jam out a relatively simple piece of code so that you can give it to a grownup developer <laugh> to make right. Right. <laugh>. Like, I just wanna be really clear: I don't think we're actually replacing developers. I think that stage where, you know, the first three months of the first three prototypes are not working, that goes away, because the product manager can say, here's exactly what I need. Much better for everybody. It doesn't mean that the amount of coding goes down; it means that it goes through the roof, because all of a sudden you can produce so much more value.
Jim (31:36):
Right? Right. Yeah.
Jonathan (31:37):
And I think that's what we're missing. This is not a moment where labor goes away. This is a moment of labor abundance. Everybody's gonna increase their expectations at the same time. And the number of things we can do is about to go through the roof.
Jim (31:52):
It's exciting. I think we're starting to see some of that. Again, I think, the mind shift, the direction, some practical things you shared, I appreciate that. Jonathan, we're out of time here, but I think you have a lot more to say <laugh>; there's a lot more to see. By the way, you called out some years I wanna call out: it's the end of 2025. So, if you're listening to this sometime in the future, or in the near future, it'd be great to see how these play out. But you are a futurist, so I'm taking stock in that.
Jonathan (32:20):
Yeah. And, you know, everything I say is, you know, statistically based. I'm not making, like, these numbers might not be right, but they're not numbers I'm making up. They're based on historical projections, on experience, on interviews with hundreds of executives, to get to where we're talking about.
Jim (32:38):
It's fantastic. And I would encourage people to check out the book, and visit their local aquarium, because there's a lot about the ocean in this conversation. Absolutely. Alright, Jonathan, thanks for the time. Thank you. Looking forward to talking to you again.
Joe (32:52):
You've been listening to What If? So What? a digital strategy podcast from Perficient with Jim Hertzfeld. We want to thank our Perficient colleagues, JD Norman and Rick Bauer, for our music. Subscribe to the podcast and don't miss a single episode. You can find this season along with show notes at perficient.com. Thanks for listening.