What if AI Was the New Knowledge Worker in Town? An Interview with Brian Flanagan.
In this episode, Jim and Kim talk with Brian Flanagan, Digital Experience Strategist at Perficient, about the rise of generative AI and its potential to replace or augment human workers. Is this the end of the road for half the workforce or just the beginning of a new era in productivity? Get ready to ponder the humanness of it all in this thought-provoking episode.
Special thanks to our Perficient colleagues JD Norman and Rick Bauer for providing the music for today’s show.
What if AI Was the New Knowledge Worker in Town? - Transcript
Brian Flanagan (00:00):
It's interesting how advanced it is. When people find an error, it's like, oh, I found something that's wrong. Which, you would expect there'd be lots of errors, right? When we started going down this path and then actually wrote the email, nobody would know this is AI.
Jim Hertzfeld (00:18):
Welcome to What If? So What? This is the podcast where we ask what's possible with digital and figure out how to make it real in your business. I'm Jim Hertzfeld.
Kim Williams-Czopek (00:26):
And I'm Kim Czopek.
Jim Hertzfeld (00:26):
Today we'll ask What If?, So What? And most importantly, Now What? This week Microsoft released a new version of its Bing search engine powered by ChatGPT. Have you heard of it? Building on its multi-year, multi-billion-dollar partnership with OpenAI, you can sign up on bing.com for the preview waitlist. You will have to use Microsoft's Edge browser, which is kind of an interesting wrinkle to the story. To me, this signals a new era in tech relevancy. We've made AI real, I think, for millions of people, and it's really opened things up in the public consciousness. In fact, ChatGPT hit a million users in record time. For comparison's sake, Netflix took about three years to hit a million users. Spotify, seven months. Instagram, two and a half months. ChatGPT, five days. There are tons of stories coming out in the last couple of months about ChatGPT passing bar exams, creating recipes, writing fiction, even writing code. That brings us to today's what-if: what if AI-driven content could replace or augment knowledge workers with more data, more knowledge, better thinking, and better institutional knowledge? Is this the end of the line for half the knowledge workers? Kim, what do you think?
Kim Williams-Czopek (01:49):
Hmm, this is tough. We've talked about AI before. I think the big so-what here is just another reality check. All of the potential pitfalls that we've talked about before: the accuracy, the ethical questionability, privacy, IP ownership, can it scale? You know, my personal conflict is the humanness of the experience. Although I have to say, in the past few weeks, playing around with some of these AI tools is the first time I felt a little scared for my own job. It's getting more and more human. You know, to your point, what does that mean for our knowledge workers, and how do we really leverage AI for good and not for evil? We have someone here today who's going to help us figure this out. So to talk about the now-what, we have Brian Flanagan. He's an AI strategist with Perficient, and he's spent a lot of time thinking about and implementing different AI solutions. Brian, what do you think? What have we got on our hands here? Are the robots taking over or not?
Brian Flanagan (02:55):
Yeah, not yet, but it is an amazing catalyst. ChatGPT, as it got unleashed on the world, is really getting people to rethink the way that they work and create. It's an amazing product that allows people to generate new ideas and act upon those new ideas very quickly. It's really accelerating traditional productivity. It represents one form. We've classified ChatGPT as a large language model: it has the ability to understand text input and present information in a way that's really understandable for humans, and that's the real power of that platform. But generative AI is the type of artificial intelligence that can create new and unique outputs based on a variety of inputs. And this might be many different forms of content, such as text, images, audio, video, as Jim mentioned, code, and synthetic data. So it's generating data as well.
Brian Flanagan (03:51):
This is much different than how artificial intelligence has been historically leveraged. It used to be relegated to doing the heavy lifting behind the scenes, and the output that you received was typically some form of analysis, like image recognition. So we were using artificial intelligence to comb through a large repository of images, classify them, tag them. That's the type of usage that we've seen in the past. But with generative AI, you can provide an input and the system goes and does some work for you. That's really the difference: it's not just spitting back a result. It's going and doing work for you on the backend and then providing that information in a concise response. So it might be creating an image, writing a press release, or even creating a workout plan for you. There's lots of potential with the large language models to handle all kinds of tasks.
Kim Williams-Czopek (04:41):
But what about humanness? What do you think?
Brian Flanagan (04:44):
We've definitely been looking at how to make the interactions with artificial intelligence more human. Some of that is really classifying the way that these systems interact with users. So, we need to bake in empathy. In a clinical or healthcare environment, sometimes you have people reporting issues that may be very personal. So, we want to show empathy. If they mention, oh, I think I broke my arm, we're going to say, oh, sorry to hear that, and now let's make sure that you're okay, that it's not an emergency, or we can redirect you to the next stage of the process. So, showing that empathy upfront is important; that's one way to respond. Then, preparing people for information. It's not about just outputting a bunch of information to a user; it's setting it up: hey, here's what I found, or I put this together for you, which will actually help people transition from the query to processing the result that they got.
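The classify-then-empathize flow Brian describes can be sketched in a few lines. This is a toy illustration, not a real clinical system: the intent labels, keyword rules, and canned responses are all hypothetical.

```python
# Hypothetical sketch of an empathy-first response flow: classify the
# user's message, lead with an empathetic acknowledgment, then route to
# triage or the next step. Keywords and wording are illustrative only.

INJURY_KEYWORDS = {"broke", "broken", "hurt", "pain", "bleeding"}

def classify(message: str) -> str:
    """Crude keyword-based intent classification."""
    words = set(message.lower().split())
    return "injury_report" if words & INJURY_KEYWORDS else "general"

def respond(message: str) -> str:
    intent = classify(message)
    if intent == "injury_report":
        # Empathy first, then make sure it's not an emergency.
        return ("Oh, I'm sorry to hear that. First, is this an emergency? "
                "If so, please call emergency services. Otherwise, I can "
                "help you with the next steps.")
    # "Preparing people for information": frame the result before giving it.
    return "Here's what I found for you."
```

A production system would use a trained intent classifier rather than keyword matching, but the shape of the flow is the same: acknowledge, then inform or redirect.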
Brian Flanagan (05:41):
The other piece there is really increasing the transparency of how this information was pulled together. We really push the concept of explainable AI, so that you can understand where that answer came from. Within the new Bing, Microsoft is featuring that they're actually putting in citations, so you know that it's coming from an actual source, which really validates the data. It makes you understand that it's not just all made up. Sometimes AI can fill in the gaps based on its understanding, and it may not always be accurate. We call those hallucinations. So it is, you know, just creating content that it thinks is appropriate, but it doesn't really have the information. So being able to have the citations and explain where that result came from is very valuable.
Jim Hertzfeld (06:31):
One of the funny stories I heard reading about the Bing launch is that people were verifying the accuracy of the Bing results with a Google search, which I thought was interesting <laugh>. You know, I got this return, and it checked out on Google. But that's interesting to hear. Now, I was also thinking, Brian, about some of these checkers that came out to verify, if you were looking to scan text, or I suppose any content, whether it was AI-generated or not. You can kind of pick up on the tone in some of the work I've seen.
Brian Flanagan (07:07):
Yeah, there are certainly some patterns that you'll see. You know, one of the challenges is that it will repeat ideas and use the same words over and over. The longer the text you have, the more common that is. Interestingly, OpenAI released their own analyzer to determine if content is AI-developed. Which is funny, because you have ChatGPT developing the content and then you have ChatGPT evaluating whether the content was developed by AI <laugh>. So they go back and forth.
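The repetition pattern Brian mentions can be illustrated with a crude word-frequency score. To be clear, this toy heuristic is nothing like OpenAI's actual classifier; it just shows the intuition that repetitive text scores higher.

```python
from collections import Counter

def repetition_score(text: str) -> float:
    """Fraction of word occurrences that repeat an earlier word.

    Higher scores mean more repetitive text. Longer texts tend to
    accumulate more repeats, echoing the point that length makes the
    pattern easier to spot. A toy illustration, not a real detector.
    """
    words = [w.lower().strip(".,!?") for w in text.split() if w]
    if not words:
        return 0.0
    counts = Counter(words)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(words)
```

Real detectors use model-based signals such as token likelihoods rather than surface counts, but surface repetition is one of the cues a human reader picks up on.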
Kim Williams-Czopek (07:34):
But Brian, why would anyone use AI, either personally or professionally? What are some of the benefits that you're seeing with some of the advances in the space?
Brian Flanagan (07:45):
Yeah, I think there's a lot of opportunity for leveraging AI in everything that you do, right? Whether that's a creative profession creating images or just getting ideas. So, from a design perspective, we use it as inspiration and say, hey, where can I go with this concept? Gimme some ideas. Rapid ideation is really important as a content creator. These tools can generate content, and it's not always perfect, as I mentioned, but it really accelerates that process, and we generally recommend having a human in the loop to review the responses and make revisions. You know, having somebody that really understands the topic can be helpful to enhance what you're providing as a response. So, that's important. These tools will really help accelerate that process. As a customer, you know, somebody that's being targeted with content, there's a lot of value there, because now you can develop personalized content that's more relevant for me; it's not a one-size-fits-all product description. In e-commerce, retailers are developing personalized product descriptions that are tailored to individual customers and highlight what is meaningful to them.
Brian Flanagan (08:54):
So, if it's a car shopping experience and maybe it's a Gen Z buyer, you're going to use a certain perspective that focuses on, you know, their more energetic lifestyle and some of the things that they want to do. You know, going about the town and how they're going to use the vehicle. Versus if it's a grandmother that you're targeting, maybe it's around safety and the ease of use of the vehicle, right? Those are things that you can emphasize. So those customers benefit by having more of that relevant content. It's as if you're talking to a friend or an advisor that is giving you direct advice on how it meets your needs, which is really valuable. For businesses, the benefit is that the cost of developing content is decreasing, right? So you can generate more content much more rapidly, which means that they can accomplish more with their marketing dollars.
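The segment-tailored descriptions in Brian's car-shopping example often come down to a generation prompt keyed by customer segment. A minimal sketch, where the segment names, copy angles, and template wording are all hypothetical:

```python
# Hypothetical sketch: build a generation prompt per customer segment.
# A real pipeline would send this prompt to an LLM and have a human
# review the output, per the human-in-the-loop point above.

ANGLES = {
    "gen_z": "an energetic lifestyle, getting around town, and versatility",
    "senior": "safety features and ease of use",
}

def build_prompt(product: str, segment: str) -> str:
    """Return a description-writing prompt tailored to a segment."""
    angle = ANGLES.get(segment, "overall value and reliability")
    return (f"Write a short product description of the {product}, "
            f"emphasizing {angle}, in a friendly, advisor-like tone.")
```

The design choice here is that personalization lives in the prompt, not in hand-written copy, which is what drives the cost-per-variant down.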
Brian Flanagan (09:42):
What used to take hours to write can now take minutes. You have the power of the platform plus the ability to train artificial intelligence to understand how you want to develop content. What are some of the parameters that you want to enforce? What's the tone and voice that you want to make sure are part of that content creation? All of that is really increasing efficiency. Now people are just going to do more, right? <Laugh>. I'm trying to get to the point where we have a four-day work week, right? Where AI can do more of the tasks and we get a four-day work week. The reality right now is people are just doing more with the technology, mm-hmm <affirmative>. And then if you think about this, you know, from society as a whole, the impact is potentially enormous.
Brian Flanagan (10:28):
There are a lot of unknowns here. One of the great use cases is that tools like ChatGPT are democratizing access to knowledge and information across the globe. People that may not have access to some of the more advanced training and knowledge can now acquire that through something like ChatGPT, and it enables education to be a truly personal process, right? How do I want to learn? I can ask questions if I don't understand the response. The ability to follow up with a question, so it's not that one-size-fits-all approach. It can be very personal to me: if I haven't had the training on a particular topic, I can go and get that training or information there and, you know, follow up with my questions.
Jim Hertzfeld (11:14):
I think that's one of the more interesting features of ChatGPT: the persistence of the line of questioning, right? It remembers what you asked. It's constantly contextualizing. I think that's a really powerful aspect in terms of education.
Brian Flanagan (11:27):
Yeah. And it's interesting how advanced it is. When people find an error, it's like, oh, I found something that's wrong. Which, you would expect there'd be lots of errors, right? We started going down this path and thought, hey, let's write an email from AI, and it'll be funny because it'll be all wrong, right? And then we actually wrote the email, and nobody would know this is AI. You know, our experiment, right? It's not proving funny. It's like you wouldn't even know it. I asked ChatGPT about four-letter words that end with -ail, and it gave me a list, but one that wasn't in there was mail. So I said, well, what about mail? And it goes, oh, I'm sorry, you're right. Mail's <laugh> a four-letter word that ends with -ail.
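Brian's "mail" anecdote illustrates a task that is trivial to check deterministically, which is part of why the model's miss feels surprising. A sketch against a small sample word list (the list here is illustrative, not an exhaustive dictionary):

```python
# Deterministic check of the task Brian gave ChatGPT: four-letter words
# ending in "ail". Unlike a language model, this filter can't overlook
# "mail" if it's in the candidate list.

CANDIDATES = ["mail", "fail", "sail", "tail", "rail", "trail", "ail", "jail"]

def four_letter_ail(words):
    """Keep only four-letter words that end with 'ail'."""
    return [w for w in words if len(w) == 4 and w.endswith("ail")]
```

The contrast is the point: an LLM generates plausible lists rather than exhaustively filtering a dictionary, so simple verifiable tasks are a good place to keep a human (or a script) in the loop.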
Kim Williams-Czopek (12:13):
Well Brian, how do organizations then on the business side get started? Are these net new capabilities? Are we building on existing capabilities? What are some of the enablers that are needed to really leverage what we're seeing as a catalyst now in the AI space?
Brian Flanagan (12:29):
I think organizations have to take an AI-first approach, right? So really looking at their challenges, they need to consider how artificial intelligence can help. It's not about just choosing a technology and then finding a fit for it. It's looking at what their use cases are, what the things are that they're trying to achieve as a business. Then let's see what the right AI solution might be for helping them achieve that. So that might be things like content creation or video editing. We talked about training, even application development. It starts with what the need is. Then, let's look at potential solutions and see where we can leverage this great new power. The other thing that we've seen in our strategy work is that the landscape is changing extremely rapidly, which means that these long-term roadmaps are very quickly obsolete, right?
Brian Flanagan (13:20):
Anything that's three years out, you have really no idea what the technology is going to be at that point. So, at Perficient, we like to talk about agile strategy, and that approach has never been more relevant. We're really looking at what the near-term targets are. We have plans for how we want to evolve our solutions, which may have some technical components, but we're accounting for technical change. As we develop dependencies in building some of the core functionality, we know that that's going to impact the rest of the efforts down the line. So, it's really important to stay up to date, to learn what these new capabilities are, and to be able to apply them in an agile fashion.
Jim Hertzfeld (14:04):
That’s great advice, Brian. And yeah, I will certainly endorse your agile strategy approach.
Kim Williams-Czopek (14:09):
<Laugh>. Thanks, Brian. I am truly feeling a lot more optimistic about the robots and the future of AI. Some really, really good examples of how you can benefit from using the technology both personally and professionally. But Jim, we like to close our episodes with a true "what can you do today," right? To leverage what we've been talking about. What do you think?
Jim Hertzfeld (14:34):
What do we do with all this direction that Brian gave us? Thanks again, Brian, and I really appreciate all the work you've been doing exploring this space for a few years now. If you haven't done it already, sign up for ChatGPT, Bing, or the platform of your choosing, depending on the day that the next one comes out. I think one of the things to do is reflect on your daily life. What can you ask it to do professionally? Maybe every day you take one small task and you ask it to reply to an email for you or plan a meeting agenda. I know as a team the other day, we looked up common reason codes for dialing into a contact center because we were trying to guide a client. We came up with a hundred different ideas, and it was directionally interesting. So, I think every day you try it and you learn a little more about it firsthand. And I know I've got three questions that I'm going to ask when we're done here. And that is: What If? So What? Now What?