
Susan Etlinger, AI Analyst and Industry Watcher on Building Trust in AI

In this episode of “What If? So What?”, host Jim Hertzfeld talks with AI analyst and industry watcher Susan Etlinger about the true disruptive power of AI in today's rapidly evolving digital landscape. Susan, whose journey into technology began in the humanities, explains why adopting an AI-first mindset isn't simply about automation; it involves critically assessing when and how AI genuinely adds value.

Jim and Susan explore the nuances of Generative and Agentic AI, offering insights to executives on how to integrate these technologies responsibly while preserving essential human judgment.

Tune in to learn practical strategies for navigating AI’s opportunities and challenges and ensuring your organization remains competitive without compromising its human core.

Subscribe and Don’t Miss an Episode

Listen on Overcast

Special thanks to our Perficient colleagues JD Norman and Rick Bauer for providing the music for today’s show.

Episode 67: Susan Etlinger, AI Analyst and Industry Watcher on Building Trust in AI - Transcript

Susan (00:05):

You know, I've had this conversation actually with a lot of people about what do you think AI-first means? Do you think that you use AI before you use anything else? And actually, I think one of the most thoughtful responses I got was from someone who said, no, it's not that you use AI before you use anything else. It's that you consider AI for the task before you consider other technologies. Meaning, is there an opportunity by using AI to do something in a way that you could not have done otherwise?

Jim (00:32):

Welcome to What If? So What? The podcast where we explore what's possible with digital and discover how to make it real in your business. I'm your host, Jim Hertzfeld, and we get s**t done by asking digital leaders the right questions. What if? So what? And most importantly, now what?

Hey everyone, I'm really excited to be with Susan Etlinger. She is an analyst, a researcher, a trend watcher, and, I'm gonna say, a forecaster of cool things to come. So, Susan, welcome to What If? So What?

Susan (00:58):

Thank you, Jim. It's great to be here.

Jim (01:00):

So, Susan, give us a little bit of background. What brought you into your role? What interests you? What drives you? How'd you get here today?

Susan (01:08):

So, you know, I am one of these people who actually started in technology through the humanities route. Meaning that when I was younger and I was in college and I was studying, I was really convinced I was gonna be a professor at some like, leafy little university and, you know, think deep thoughts and read great books and do all that stuff. And I went to Berkeley and I moved to San Francisco. And when I was at Berkeley, I studied rhetoric. And when I moved to San Francisco, of course the, the big gig in town was technology. And by a series of happy accidents, I found myself working for a technology firm first time out. And I realized that a lot of the interests that I had had as a translator, as a kind of, I mean, I wasn't a literary critic, but like as somebody who really, you know, read things, someone...

Jim (01:54):

Someone with an opinion.

Susan (01:55):

Yeah. That I was just as interested, if not more, in technology. And so, I started out like as a little baby technical writer and worked my way through and did a lot of comms work, marketing work. And I was an industry analyst for about a decade. And I'm just wrapping up a stint at Microsoft. So, it's been a really kind of a winding road. And I think you talk to a lot of people who have been down a winding road, right? I was one of those people.

Jim (02:23):

That's great. I love that background, because I find a lot of people in this industry with different backgrounds. You'd think computer science majors. But you know, what I also find is, like, music majors. And I know there's been a lot written on, you know, the relationship between coding and composing music. There's something there. There's something deep there. But, you know, I also think that there's a lot more depth and thought and psychology and humanism, you know, ironically. We'll get back to that on the topic today. Like, how does that relate to technology? Because at the end of the day, we're all end users.

Susan (03:00):

I mean, honestly, like, just to kind of take it quickly now, there's, you know, you talk to a lot of people who will say, I started, you know, in my garage, you know, taking apart Commodore computers and putting 'em back together. And that's a really, it's a really familiar story. I started like in my bedroom, taking apart books and putting 'em back together. And what's interesting is that when you think about sort of where we are with language technologies and even vision technologies, a lot of it is about understanding the inner workings of language and the inner workings of how we see and perceive things in the world. And so, so that's kind of the orientation that I, that I bring. And I think it's, you know, it's as valid and at this moment it's, you know, it's actually really, really important.

Jim (03:45):

I totally agree. And we're gonna jump into that. We're gonna focus on AI today. It's kind of a new topic... just kidding. So, you know, there are a lot of waves, though. You've been in this space for a while. You've seen it grow up and evolve, you know, sort of waves of emerging technologies and waves of old ones. It seems to be moving very fast. There's a lot of fear, uncertainty, doubt, threat, hope, excitement. So right now, in this current wave of AI, do you think it's truly as disruptive as it appears to be? As everyone's saying?

Susan (04:18):

Oh yeah. I do. I do think it's as disruptive. And you know, it's funny, like, in some ways AI is 70-plus years old, right? If you go back, right, to the Dartmouth Conference in 1956 and the coinage of the term artificial intelligence. But we have been going through, we, everyone on this planet, have been going through these technology platform shifts, you know, for years and years and years. You can even go back to the Gutenberg Bible, right? Or even further, probably. It's part of human history that new technologies come and they disrupt and they change the way we do things, and we learn from them, and we move on, and we figure out how to use them in new and innovative ways, sometimes in ways that accrue to the good, and sometimes in ways that don't accrue to the good.

But this platform shift in particular. In fact, we were talking about Kevin Kelly earlier, and The Inevitable. And one of the things that he talks about in The Inevitable is that in order for AI to succeed, you need three things. You need copious amounts of data, you need reasonably priced computing power, and you need algorithms. And we're at this moment now where we have just insane amounts of data, and all sorts of types of data we can talk about; computing power is at a place where, you know, obviously the capabilities that we have today far outpace, mm-hmm, even what was possible 10 years ago. And as a result of those two things, algorithms have gotten so much better. And so now the pace of change, I mean, that flywheel is moving faster and faster.

Jim (05:52):

Yeah. Yeah. I agree. It is fundamentally disruptive. We talk about it a lot in our practices, you know, an AI-first mindset. And, you know, sort of in that theme of deconstructing, my mind kind of goes there a lot: what does it mean to be AI-first, or to have that mindset? And it's not just thinking about AI or learning about AI; it is really, how does that fundamentally change the way we design systems? Like, I think we think of systems as, you know, I send a query, and it looks up a database and returns a result. And what we're talking about with generative AI and with agentic AI is not simple retrieval of information. And so, I think that's the really powerful part. I find that challenging to explain, easy to demonstrate, you know, like, well, this is how you used to work with it, and now we're gonna send this task or this thought or this question to an LLM, and here's how it's going to handle it differently.

And so yeah, seeing that, that's, to me, the most disruptive part. But it also seems like, I would say five years ago, when we would talk about AI, speaking for myself, we would kind of go back to machine learning. You know, that was, oh, AI, that's just machine learning, or that's visual learning, or a vision system. Three years ago, it became generative AI. And in the last seven or eight months, agentic is the new AI, which, of course, is powered by, or made interesting by, generative AI. So in this agentic moment, if we can call it that, why do you think that is so compelling? Why do you think agentic has sort of taken over the moment?

Susan (07:39):

Yeah. Well, let's go back to AI-first for a second, because there's an interesting nuance about AI-first. You know, I've had this conversation actually with a lot of people about what do you think AI-first means? Do you think that you use AI before you use anything else? And actually, I think one of the most thoughtful responses I got was from someone who said, no, it's not that you use AI before you use anything else. It's that you consider AI for the task before you consider other technologies. Meaning, is there an opportunity by using AI to do something in a way that you could not have done otherwise? And not only is there an opportunity, but is it the right one? Because when we talk about AI, of course, we're talking about multiple models, multiple technologies, right? Multiple definitions.

You know, it is a massive, simultaneous equation of what AI could possibly mean. So going back to AI-first, I think it's important because it stops you for a moment and makes you think, what is it actually that I'm trying to do? And can I Occam's razor this? Or should I Occam's razor this and do sort of the simplest possible thing? Is that enough, or is there enough upside on the other side, whether in terms of efficiency, innovation, or both, mm-hmm, to make AI, whatever it is that I'm using at that moment, worth the squeeze? And I think that's really important. And also: should we be using AI in that context? Because, of course, AI has, you know, environmental consequences, right? And so, I think, you know, what I'm arguing for here by talking about AI-first, first, before we talk about agentic, is that we are on a very fast-spinning hamster wheel.

And I think there's a lot of FOMO out there, particularly by leaders who are thinking, I have to get into this. I have to do this. And they do, 'cause they have to learn, right? This is a learn-by-doing kind of technology. Yeah. And at the same time, you know, I think it's really important to consider what is it that I'm trying to do, and is this the right approach? And even if it's not quite the right approach, can I learn something in the process? Yeah. So that's part one. And then part two, in terms of sort of the agentic thing, is that, you know, generative AI has opened up the potential to do so many different things, to automate so many different kinds of processes. And agents, as we think about them, are simply a way of automating things beginning to end.

And, you know, just one simple example is when you order something in ecommerce, you know, there's a lot of friction in that process. Probably less friction than some other things. But like, you know, you order something, it gets shipped, mm-hmm, it gets delivered to you. You know, that is a fairly straightforward process. There are a lot of other processes, though, where you want more humans in that loop. So, with the agentic moment, I think the big question is, again, like, can we, but also should we? Like, is this something that we actually want to automate and delegate to technology, or should we keep it? And why? And that, I think, is one of the most interesting questions today.

Jim (10:51):

Well, I love the experimentation. You know, I've been kinda reading and hearing that we're getting past this sort of POC-everything stage. I mean, I still have customers that are, you know, interested in some sort of lab because they're a little bit behind or they've been hesitant. But there are a lot of organizations that feel like they've POC'd it to death. They think, we kind of get it, but they're still hesitant. And so, I also hear stories about treating AI as, like, something you buy, like it's a platform. Like, well, I have to decide which AI I'm going to use. And you mentioned understanding models, and what models are the best fits, and do I train that, or not train, but tune that model. So, you know, there's so much, again, changing the mindset. But a lot of people who've gone through the POCs, they maybe fundamentally get it, but they haven't sort of found the killer use case.

Jim (11:43):

Or the killer business case. I know a customer recently that spent quite a bit of money, I'm not gonna mention the organization nor the technology, and they couldn't quite make the finances work in their mind. Right. Whatever hurdle they have to get over. So I'm seeing that. But what other challenges are you seeing or hearing about? Because again, we both agree, I think we all agree, it's real. The promise is there. It is fundamentally disruptive. We kind of know what it's gonna take. But there still seem to be barriers. Like, what other barriers are you seeing out there?

Susan (12:14):

I mean, I think one is the business strategy, right? What are you solving for? Do you have the right use case? And there are lots of different ways to consider use cases. This is not anything new to any kind of C-suite leader. Yeah. The next one is like your platform. You know, do you have the data that you need to tune, like the proprietary data to tune the models? Because out of the box, like, every single model is pretty much the same. But in terms of your competitive advantage, company A versus company B, if you're both using, you know, a specific model, you're both getting the same advantage unless you're fine-tuning it with your proprietary data. Mm-hmm. Mm-hmm. Right? So that's really what I mean there. And then there's the whole question of, like, you know, we all know AI runs on data; data's the fuel that powers AI. You know, clean, accessible, all that stuff is critically important. And the computing power, of course, is important, and the security and all those things. So that's the sort of technology side of it. And then there's a side around sort of understanding organizationally, like, do you have an operating model for this? You know, are you sort of just trying experiments, or do you actually have a way that you are building out to be able to scale something across an enterprise? 'Cause a single departmental use case is quite different from scaling that entire use case across an enterprise. The governance is critical. Like, so there are lots of...

Jim (13:36):

Different, right. And I was just gonna chime in on that. Yeah. Governance...

Susan (13:39):

Responsibility.

Jim (13:40):

Responsibility. Oh my God, we just jinxed on responsibility. That was great. <laugh>

Susan (13:45):

All right, well, I guess we end the podcast here then, probably <laugh>.

Jim (13:48):

It's over now.

Susan (13:49):

Until you say my name. <laugh>

Jim (13:51):

<Laugh> Oh, that's great. Well, it's interesting you mentioned the foundational models are, I don't wanna say total parity, right? I was making a joke about that with people. It's kind of like, you know, what's your favorite, and why? It's like choosing between In-N-Out Burger and Five Guys. Okay, where do you go? Or my new favorite, the Habit, I'm new to that, the Habit Burger. So, I'm glad you mentioned data as well, because I think I'm running into more use cases where there's definitely an expectation that the AI just sort of knows things, and they're sort of bypassing that data problem. And I think that's becoming more and more evident: while there is an abundance of data, we have more data, we're producing more data points than ever for lots of reasons, we don't necessarily have it in a usable place or a usable manner, and we still don't have the data that we want to really pull off what we want.

We still have to get things out of people's heads, you know? Like, who knows about that machine over there? Who knows about that customer? Well, that's this person, and it's all in their head. I don't know how to get that out of this person's head. Yeah. But we've gotta do that. But that kind of brings up the human factor. So, we hear a lot, you know, the term human in the loop. I think that's sort of become a mainstream term now. You hear grandmothers talking about the human in the loop. So, it's becoming known, but there are many other human factors. You know, how do you see this role of humanity, especially given your background? How do you see humanity being properly, I would say, integrated into an AI-first business model?

Susan (15:25):

Yeah. I mean, I think this is where you get to the distinction between agents and agency, right? And agency is that power, it's that capability, it's that intention. And it goes back to sort of like, okay, can we automate this process beginning to end? Do we have the data, do we have all those things? And so let's say, you know, all things considered, yes, we have essentially what it needs to do that. And now there's the question of, should we? And I'll just sort of illustrate with an example from early in my career. And this is something I'm actually thinking about a lot, and I don't fully know how to articulate it, so we're all along for the ride now. Early in my career, I worked for a CIO. I was her comms manager, and I ran her quarterly business review.

And in order to do that, for a 700-person technology organization, I had to pull inputs from 80 different people in the organization. And very often there were things like the finances, things like, you know, operations, you know, the peak-to-average ratio in terms of the ability of the systems to handle the workloads, and all that kind of stuff, outages, call center SLAs, all those things. And so, it was 80 different people I had to talk to. And it was a huge, massive spreadsheet. And it was me at my desk at 11:00 PM, crying a lot of nights, because I was asking people for data that their managers and their skips and their skip-skips and their skip-skip-skips hadn't seen. And I was gonna show it right to the CIO, yeah.

Because that was the process. So, I went into my manager's office, and she was the SVP of that team, and I was pretty much in tears, which was a big no-no. And I said, I don't know what I'm doing. Like, I'm just like a humanities girl. I have no idea what this is, and what I'm doing, and why I am doing this, and how it's gonna help my career. And she says, like, first of all, pull yourself together, sit down. And she says to me, you are doing IT strategy work right now. You are responsible for pulling together the data that is going to inform our IT strategy for the years to come. That is your job. And I was like, that's my job? That's what I'm doing? I thought I was pestering people, and p**sing them off, and annoying them, and trying to figure out how to get the data out of them.

She's like, that's called stakeholder management, Susan. Interesting. Now, today, if we were to do this the way that organizations are doing this, you can automate a lot of that, and you should automate a lot of that. There's no reason that the next Susan should be crying at her desk at 11:00 PM. But I learned so much. I learned stakeholder management, I learned finance, I learned operations, I learned how systems worked. So, one of the things I think is so important: how would I learn that? How will the new people coming up learn that? And we see this with developers, too, mm-hmm. And so, I think we have to, that goes back to, like, can we take a moment and think expansively? That, to me, is one of the biggest questions that I have about how we use agents intelligently, so that we're not automating away the cognitive load that we need.

Jim (18:30):

I love that nuance. And I think we are starting to see that. I've heard some stories about sort of the equivalent of, I over-indexed, sorry, over-relied on a generative solution to do the thinking, in the sort of ingestion and the processing and the training of the faculty, you know, the brain faculty and the memory. It's the equivalent of cramming for an exam the night before. And I've actually done this with Doug: so, tell me more, you sent me an interesting memo, now tell me what you meant by that. And they can't tell you, you know, because they didn't really go through the rigor of writing it and putting the humanity into it. So that's an interesting dilemma that I think we have never had before, you know? Yeah. Or, when my kids started driving, they wanted to take the GPS to go down the street, and I said, no, you're going down the streets; you're gonna have to learn how to navigate on your own and develop that capability. So, do you see that as just a momentary adaptation? Or do you see this as one of many sort of responsibility impacts that AI is having, not just on business, but on society? Like, how do...

Susan (19:41):

Yeah, no, I mean, I see it as momentary, but only in the geological sense, right? <Laugh>. Yeah.

Jim (19:46):

Yeah.

Susan (19:46):

In the geological sense, it's a blip. But I think, yeah, you know, it's gotta be really important for us. And, you know, if my impassioned Susan-crying-at-her-desk story isn't enough, think of the value chain, right? Because right now, what are we doing with our large models? We're pulling from thousands of years of human history that have been encoded into whatever data is being used to train those models. And, you know, there are reports out there that say, like, historians are gonna be automated away, and, you know, all sorts of other roles. And then the question is, okay, so then who's gonna write today's history? Right? Who's going to do the work to find the sources that are not already accounted for in the models, right? Who's going to provide the perspective on those sources?

Who's going to do that work? And what will happen in the value chain 10, 20, 30, 50 years from now? So, I'm trying to think about it. I think that's something we'll acclimate ourselves to, in the same way we've acclimated ourselves to every platform shift that's happened, mm-hmm. But if we look back at all these different shifts, whether we're talking about mobile or the internet or cloud or PCs or, you know, whatever, the telegraph <laugh>, the telephone, cars, there are things that we've learned about how we could have, should have, would have used them. Mm-hmm. And so, what I'm arguing for is: sure, let's take that moment now.

Jim (21:17):

Yeah. Well, I don't know who said this, but it was, you know, like, the side effect of the automobile industry is car accidents; the side effect of electricity in every home is, you know, electrocution. It's really dark. But, in a way, it's not a fatalist kind of view. The difference here is that, you know, I think Gilda Radner said reality is just a shared hunch. I might be wrong, so don't hold me to a direct quote on that. It was kind of the idea, like, hey, if we all kind of agree this is it, then this is it. You know, AI is interesting because, and I think there's a lot of notion of model collapse out there, when the model just starts feeding on itself, then, right, what is that giving us?

So that is big, that is really heavy stuff, Susan. We got really existential here. So, for our listeners: we've talked about sort of how this is evolving, the different ways, I think, to think about humanity and going beyond the human in the loop. But just for the average listener, you know, who's in this agentic moment, if they haven't dipped their toe in yet, or they're thinking about what to do next, what's one piece of advice, one actionable piece of advice, that you would give the listeners?

Susan (22:24):

Yeah. I'm gonna frustrate you by going back for a second and just reminding ourselves that, like, there are thousands of languages in the world, and our models really only represent a small fraction of those. And those languages represent people, real people in the world. So that is one thing. You know, there's a professor at Stanford, Tomney, and a number of different people who are working on trying to make sure that we have access to the data and the information from multiple languages, and that multiple people are represented. And I think that's really important too. And of course, that carries its own chain of responsibilities, right? But in terms of the one piece of advice, I mean, I think it's: find something compelling and relatively low friction that you can automate and learn from. And the difference with this technology, one of the many differences, is that it's so accessible. And, you know, people have said this to me many, many times, and I'll repeat it: you just have to start. Yeah. Because it is a compounding competitive advantage. If you start today, you'll compound from today forward. If you started five years ago, or three years ago when generative AI came on the scene, you're seeing the compounded results of that, both positive and possibly also negative, but you've learned. And so, I think that's the important piece.

Jim (23:45):

I agree. I agree. I have a few places I like to send people. And you're right, the accessibility of it is what's fascinating. I was getting my daughter into this a few years ago in school, and the data sets that are out there, the training data sets, not the model training sets, but, you know, the educational data sets, it's pretty cool what's out there. And I agree with you that the tools are democratized, I guess, is the word that comes to mind. Another thing that makes this really fascinating. So, well, Susan, thanks for your perspective and your humanity today. <laugh>

Susan:

Thanks for having me.

Jim:

It's the least I can do. <laugh> Alright, take care. Thanks.

Joe (24:21):

You've been listening to What If? So What?, a digital strategy podcast from Perficient with Jim Hertzfeld. We want to thank our Perficient colleagues, JD Norman and Rick Bauer, for our music. Subscribe to the podcast and don't miss a single episode. You can find this season, along with show notes, at perficient.com. Thanks for listening.