In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether businesses should add AI to everything just because it’s trendy. They explain why following directives to “AI-ify everything” can backfire, and share a smarter approach using processes like creating user stories and the Trust Insights 5P Framework.

Subscribe To This Show!

If you’re not already subscribed to In-Ear Insights, get set up now!

Advertisement: Google Analytics 4 for Marketers

Attention marketers! Are you ready to unlock the full potential of Google Analytics 4? With only a few short months left until GA4 becomes the sole Google Analytics option, now is the time to get ahead of the game.

TrustInsights.ai’s Google Analytics 4 course is here to guide you through the measurement strategy and tactical implementation of GA4 in just 5.5 hours. With 17 comprehensive modules, you’ll gain the knowledge and skills necessary to effectively set up and configure GA4 to work for your unique business needs.

But that’s not all. Our newly updated course, released in January 2023, covers major configuration differences in Google Analytics 4 to ensure you’re up-to-date and fully equipped for success. Plus, our course is fully accessible with captions, audio, and text downloads, so you can learn at your own pace and in your preferred method.

The clock is ticking. GA4 is set to replace all previous versions of Google Analytics, and it only collects data from the day you turn it on and set it up, so you won’t have year-over-year data until a full year after that. Don’t miss out on valuable insights that will help your business thrive. Register for TrustInsights.ai’s Google Analytics 4 course now and take control of your data.

Click/tap here to enroll in the course now »

Sponsor This Show!

Are you struggling to reach the right audiences? Trust Insights offers sponsorships in our newsletters, podcasts, and media properties to help your brand be seen and heard by the right people. Our media properties reach almost 100,000 people every week, from the In-Ear Insights podcast to the Almost Timely and In the Headlights newsletters. Reach out to us today to learn more.

Watch the video here:

https://youtu.be/P0bIS2dYRIw

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:


Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights: is AI like Nutella? Does it go on everything?

Should it be in everything?

And there are two different perspectives on this.

Katie, you saw something on, I think it was, Threads, right? Which is a replacement for the long-mourned Twitter.

Do you want to set the stage as to what it is that people are discussing about AI?

Katie Robbert 0:28

Yeah, absolutely.

So over the weekend, I was just sort of scrolling through.

And, you know, what’s interesting about Threads is it’s a blend of personal and professional conversation, at least for me. I usually try to keep all my social platforms separate.

And this one thread came up where the poster said: two of my friends at different companies reported leadership literally saying, quote, “put AI in everything” was the new mission. Not based on what it’s good for, what their products need, just AI-ify everything, because investors like it.

And I grabbed it because, you know, Chris, you and I often talk about the 5Ps.

And, you know, I’ve mentioned that AI isn’t the solution to everything.

And so it’s a little concerning to me now that, in very broad strokes, leaders are saying, without doing a 5P assessment or something like that, they’re just saying, put AI in everything.

And to me, that’s very much a panic move.

It’s a short-sighted move.

And it’s also an ignorant move, because they don’t know what that means.

So where do you want to start with that, Chris?

Christopher Penn 1:44

So I sort of agree with that.

Here’s why: not because investors like it, not because it’s trendy, but because AI is a technology.

It’s not a platform; it’s actually a base technology, like writing is a technology, like mathematics, like algebra is a technology.

And so one of the things that we want to encourage people to do, with caution, with putting some thought into things, building out things like good user stories, is try to do everything with AI so that you figure out what it’s not good at.

So you know, okay, this is not a good use case for this.

This is not going to work here.

And then there’ll be things like, okay, this is a good use case, this is worth trying. Then you figure out, how do we implement this into our 5P framework, and AI is going to be part of the platform for HR, or finance, or sales, or whatever the case is, on a task-by-task basis.

So again, I think the motivations are wrong: doing it because the investors like it, bad idea, right? Doing it because you see an opportunity to save money, save time, or make money, I think, is a sensible way to start thinking about it.

Because, you know, we always say if you get two out of three, you’ve won, right? Save time, save money, make money. If you can do two out of three, that ain’t bad.

So I think the motivations are wrong, but I think the concept is: test, experiment, come up with good user stories, and then see what happens.

Katie Robbert 3:22

The issue I see with your recommendation, and your recommendation isn’t wrong.

But the issue I see with the execution of your recommendation is people having patience, people paying attention.

And so I will bet you, what do you say, a small pastry with a retail value under $5? Basically a doughnut. I’ll bet you a doughnut that if we were to propose this idea of testing to companies, they’d be like, okay, great.

Yeah, I’ll do that.

But then what they’ll do is they’ll basically say like, here’s my list of tools.

Does it do the thing, yes or no? It’s not, does it do it well? It’s not, does it fit into the process? It’s not, what were the results? It’s, did it do it? Did it write content? Not, was the content correct? Was the content valuable? Will the content help my SEO? It’s, did it do it? Great.

Check.

So I’m gonna start using it, I don’t care.

You just figure out how to fit it into your process.

It’s gonna write the content for you now.

So I feel like that’s one side of it. The other side is that I feel like there’s a bit of, you know, what we call analysis paralysis, that overwhelmed feeling of, I don’t even know where to start.

You know, so you’re saying, use AI on everything you do? Well, we have people like me who are like, what does that even mean? Should I use AI when I’m sorting my laundry? Should I use AI to help me poach chicken? Should I use AI to figure out the best route to walk a dog? What does that mean? And then I don’t even get started.

Because I’m so overwhelmed with the options.

And so how do you respond to those two different camps? Camp one, who’s going to say, okay, great, yes or no.

And the other camp is like, I don’t even know where to begin.

Christopher Penn 5:20

So, okay, let’s start with: where do you even start? If you go to the Trust Insights website, you will see a page in the Insights section that looks like this.

This is our beginner’s starter kit: some basic tasks and which model and tool you should start with. This is not comprehensive, this is not the be-all end-all.

This is the starting point. You know, it’s like your first pair of sneakers when you take up running. Is this going to be the pair of sneakers you’re running with at mile 26 in the Boston Marathon on Sunday?

No, but this is better than sitting on the couch.

This is: put on the sneakers.

So I think this part is an important thing to take into account, which is, okay, where should I even begin? And then the second thing to keep in mind is, and you’ll find this if you go to WhereCanIGetTheSlides.com, you’ll find many talks that have this, the six broad categories.

So what are the things that, in general, you want to do with it, in this case, generative AI? Because remember, there are the three categories.

Katie, you were actually on a podcast recently talking about the three categories: find, organize, and generate. This is generation.

These are the six kind of broad categories.

So if you think about the tasks that you do every day, they probably fit into one of these things.

And if they don’t, then it might not be a good use case for generative AI.

So, how should I poach chicken? That’s a question-answering task. Where should I walk my dog? That can be question answering.

But that’s geospatial data, which is not something that most language models are going to be good at, because geospatial is not language.

Same with doing math; these things are really bad at math.
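
For readers who want to see what this kind of task triage might look like in practice, here is a minimal Python sketch. It uses only the categories named in the episode (find, organize, generate, question answering); the keyword lists and example tasks are invented for illustration, not a Trust Insights tool.

```python
# Hypothetical sketch: triaging candidate tasks against the broad
# generative AI categories named in this episode. Tasks matching none
# of them (like geospatial routing or math) get flagged as poor fits.
# All keywords and example tasks below are illustrative assumptions.

CATEGORY_KEYWORDS = {
    "find": ["search", "look up", "research"],
    "organize": ["summarize", "categorize", "sort"],
    "generate": ["write", "draft", "create"],
    "question answering": ["how do i", "how should i", "what is"],
}

def triage(task: str) -> str:
    """Return the first matching category, or a poor-fit warning."""
    lowered = task.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "no match: probably not a good generative AI use case"

tasks = [
    "How should I poach chicken?",          # question answering
    "Draft a blog post outline",            # generate
    "Plot the best route to walk the dog",  # geospatial: no match
]
for task in tasks:
    print(f"{task} -> {triage(task)}")
```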

Katie Robbert 6:55

So I want to go back to your example of where do I start.

And so you gave the example of, you know, you buy your first pair of running shoes, and it’s not the pair of shoes that you’re going to run the marathon in.

And so it struck me that that sort of still fits into those two camps of I don’t know where to start.

And let me buy the best of the best and figure it out.

And so I can still sort of see there’s the, I’ve got to start somewhere camp.

So let me just put on the shoes that I already own.

Versus, I think, the camp of CEOs who are saying just put AI in everything: let me buy the most expensive pair of shoes, and then I’m going to figure out whether or not I can actually run in them, or if they’re, you know, going to be harmful to my joints.

But I’m just going to do it anyway.

So it’s like buying the best of the best, and then figuring it out.

Christopher Penn 7:44

Right? And we go back to the 5Ps, right? The shoes are the platform; they’re the fourth P. You need to start over here.

Yeah, at purpose, on a task-by-task basis.

So one of the things that we tell people very, very early on in your AI journey is: sit down and think about what are the things that you would even want to try doing with AI, right? So we have this little graphic here. We say, look, there’s stuff that you’re going to do in your company, like process optimizations, and stuff that’s going to be external, like maybe content you generate, maybe how your customer service center works.

So what are all those tasks? And then, what are the tasks that are improving something that already exists, optimizations, and then things that are net new to your organization? Maybe you’re going to build a chatbot that people can talk to and ask customer service questions, doing what’s called call deflection.

But you want to put this on a wall, virtual or real.

Get out your Post-it notes and start putting up all the tasks that you do, so that you can figure out, of these tasks,

which ones are urgent, which ones are important, which ones are easy. Score them, essentially one point for each of those things, and say, okay, well, this task here.

A real simple, straightforward example would be: Katie, you had me recently build some new reporting using HubSpot data.

It’s urgent, because we need that data now.

And the data is important.

And now, with generative AI, writing the code to do that is easier than it certainly was a year ago.

And so I said, let’s take that task.

And you and I actually sat down and wrote these user stories.

As a CEO, I need to see where we are getting our business opportunities from, so that I can tune our marketing methods. Right? We wrote out those stories.

And then based on that, I was able to construct dashboards from the data to meet those user stories.

So that’s an example of: we picked something that was an optimization that’s internal. Right? So on our grid here, that’s urgent and important, and now relatively easy to do with AI.

So that’s how we got started with that particular thing, and that’s the method I would suggest to people figuring out, how do you get started? Where do you get started? Catalog your stuff, rate it, and then pick something off this long list and start. Because yeah, you can use AI for everything.

For some things, it will just be a profoundly bad choice.
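
To make that scoring concrete for readers, here is a minimal Python sketch of the exercise Chris describes: one point each for urgent, important, and easy, then start with the highest scorers. Only the HubSpot reporting task comes from the episode; the other backlog items are invented examples.

```python
# A minimal sketch of the urgent/important/easy scoring described in
# the episode. Only the HubSpot reporting task comes from the episode;
# the other backlog items are made-up examples.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    urgent: bool
    important: bool
    easy: bool

    @property
    def score(self) -> int:
        # One point for each of urgent, important, and easy.
        return int(self.urgent) + int(self.important) + int(self.easy)

backlog = [
    Task("Build reporting from HubSpot data", urgent=True, important=True, easy=True),
    Task("Rewrite every product description", urgent=False, important=True, easy=False),
    Task("Automate weekly status emails", urgent=False, important=False, easy=True),
]

# The highest-scoring tasks are the best candidates to try with AI first.
for task in sorted(backlog, key=lambda t: t.score, reverse=True):
    print(f"{task.score}/3  {task.name}")
```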

Katie Robbert 10:21

Well, and I think what’s interesting is, as you’re walking through this process, at no point did you say, pick a tool and then figure it out.

When you think about the 5Ps, purpose, people, process, platform, and performance, platform is purposely in the fourth spot.

Now you don’t have to go through these in the exact order.

And oftentimes, process and platform are coupled together.

But in terms of importance, it’s purpose, and then people. You should never be picking your tool first,

and then saying, let me backtrack and figure it out.

And so, to see if I can pull off a Chris Penn analogy:

you should never pick a whisk and then figure out whether or not you can make a steak.

Christopher Penn 11:11

Yeah, no, that’s exactly right.

Katie Robbert 11:14

You should first figure out, what is it that I’m trying to do? I’m hungry. What things do I have to work with? I have a steak, I have a frying pan.

And then you’re like, oh, well, the proper tool would be a spatula, not a whisk.

But if you start with the tool first, then you’re already subconsciously boxing yourself into a solution.

Even if it’s not the right solution, even if it’s not solving the right problem.

Christopher Penn 11:42

Exactly.

The exception to that would be R&D. Like, hey, we’ve got this new tool, we want to see what it can do.

But the purpose there is still clear: you’re doing R&D, you want to see what this thing can do.

So even in that case, you’ve got a new tool, a new language model, a new frying pan, whatever it is, your purpose is still clear: we want to see what this thing can do.

And our performance, our measurement of it, is: we’re going to catalog what this thing can do.

Katie Robbert 12:09

Well, and I would even go so far as to say, yes, you have a bunch of tools that you want to do R&D on, but you still want to make sure that you’re thinking through the user stories.

It’s just like the scientific method: you have to have a hypothesis to test.

And so let’s say I have a whisk. I’m not just going to start throwing it at things to see what happens; that’s not going to give me great results. You still want to say, as a busy working person, I need to make dinner so that I can feed myself and my family. So then you’re like, okay, what can a whisk do? You start to outline the different use cases of a whisk: a whisk can whip things up, it can mix things, it can do this, that, and the other.

So then you start to write those user stories: as a person in my kitchen, I need to mix pancake batter so that I can have pancakes.

And then you have the user story that tells you here’s what’s supposed to happen. Then you introduce the tool, the platform, to see, can it do the thing? Because the outcome in this scenario is making pancakes.

So, could you make pancakes?

Exactly.

Christopher Penn 13:30

Yeah.

And so, to that commentary of AI in everything:

that is kind of like saying use spreadsheets for everything, right? Yeah, there’s some stuff that just doesn’t belong in spreadsheets; it’s just the wrong tool.

I think the other part there, and you talked about this early on, is that there is a lack of education among decision makers about the capabilities of AI, what it can and can’t do, so that you can then make decisions, or not make decisions, based on that.

So I think part of the thing is educating decision makers: these are the things generative AI is generally pretty good at, so that you can help them say, yeah, this task here doesn’t fit in the six broad things generative AI is good at, so probably don’t waste your time doing that there, because it’s not going to be a good fit.

But they have to know. It’s kind of like if someone handed you a whisk and you’d never seen a whisk before: what do I do with this? Is it a hat? Is it a brooch? Is it a pterodactyl? What is this thing? What is its purpose?

And to go back to something you were saying earlier, you absolutely want to develop some kind of testing plan. In this week’s Inbox Insights newsletter, which you can get at TrustInsights.ai/newsletter,

the Data Diaries column is actually on how to set up an AI testing plan,

because we’re going to talk about how to set up testing of different models. I’ve seen a bunch of stuff on my own Threads of people talking about evaluating models, but then not doing it in a way that would pass any sort of rigor.

So we’re going to talk about the procedure for how you do that for yourself, because every test is going to be different, and it should be suited towards your company.

And no surprise, the testing plan is built on the 5Ps.

Katie Robbert 15:29

It’s funny how that works, you know. And I could wax poetic all day long about a proper test plan: the utility of it, the usefulness, the time savings and the headache savings of a proper test plan.

But that goes back to the, okay, I have a whisk, I’m doing R&D on this whisk, what is it that I need to test? Because again, yeah, you can just start throwing it against the wall and tossing it into bowls and seeing if you can flip a steak with it, but you still don’t know if it’s doing what it’s intended to do. Can I flip a steak with a whisk? Probably. But can I do it well? Can I do it efficiently? Can I do it without splattering hot steak juice and grease and butter everywhere? Probably not. Can I do it without making a big-ass mess and ruining my steak? Probably not.

And these are the things that would go into the performance.

And so the user story would be: as a home chef, I need to flip my steak correctly so that it’s cooked evenly and I don’t make a mess. And you start to tick down all of those things that you’re testing against.

And that starts to become the framework of your test plan, which Chris will go over in the newsletter this week.

But when we go back to the put AI in everything, just AI-ify it, which I’m really concerned is going to become some new business jargon, just AI-ify it, it’s such a vague request.

But in my mind, it’s costly.

It’s a poor use of time.

So it’s going to cost you time, it’s going to cost you money.

And you don’t know what kind of results you’re gonna get when you do that.

And so I could AI-ify my house and how I take care of my plants, but is that the best use of time and money? Probably not. I could do it.

But that doesn’t mean I should do it.

What is it, Jurassic Park? Just because we could… they were

Christopher Penn 17:35

so concerned about whether they could, they never stopped to think about whether they should?

Katie Robbert 17:39

And I think that quote applies here, because that’s exactly right: adding AI to everything, sure, you can do it.

But it doesn’t mean you should.

And so if we go back to, you know, the very simple example of adding AI to your content, yeah, AI can write content.

That doesn’t mean you should, because it could be really crappy content, it could not be the content that you want, it could be wholly inaccurate, and that hurts your brand reputation, because you’re not QAing it and paying close enough attention.

So can it? Absolutely. Should it? You really need to vet whether or not it’s the right move for you.

Christopher Penn 18:20

Exactly. And this comes from developing out and building out your user stories.

So this is an example that we give in the full-day workshop that we offer on large language models.

This is the first pass at a user story.

As a CMO, I need to generate long-form content so I can capture more audience from organic search.

That’s a good start.

But it’s still not enough.

So what you would want to do is have something more specific, like: as a CMO, I need to generate long-form content with Claude 2 so that I can capture 30% more organic search traffic and generate 10% more leads in calendar year 2024.

Now, you have some outcomes that you’re after that you can then start to evaluate.

If I set up an A/B testing plan, I have human-generated content, I have AI-generated content.

Does it get me 30% more organic search traffic? You can now build out your 5Ps, this is the performance part of the 5Ps, to say:

yes, AI is doing its job. Because with this first user story, you would succeed just by using AI, whether it’s a good idea or not.

When you’re more specific in the user story,

and you want some performance measures, they say, well, you know what, we are using AI, we’re testing it out to generate more content, but it’s only getting us 2% more organic search traffic, or the content that we’re putting up isn’t getting any more search traffic than the human-led content.

So is there a different measure that we should be evaluating? And I think that’s, again, part of the gap in these AI-ify everything directives: if you’re not clear about what the measure of success is, it’s just going to be a potentially disastrous waste of time.
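
To show what the performance check in that A/B test might look like, here is a hedged Python sketch that compares AI-generated content against human-generated content and evaluates it against the user story’s stated target. The traffic numbers are invented purely for illustration.

```python
# A sketch of evaluating the user story's performance target: 30% more
# organic search traffic from AI-generated content versus human content.
# The session counts below are made up for illustration.

TARGET_LIFT = 0.30  # from the user story: 30% more organic search traffic

def organic_lift(human_sessions: int, ai_sessions: int) -> float:
    """Relative change in organic search traffic, AI vs. human content."""
    return (ai_sessions - human_sessions) / human_sessions

human_sessions = 10_000  # hypothetical sessions from human-led content
ai_sessions = 10_200     # hypothetical sessions from AI-led content

lift = organic_lift(human_sessions, ai_sessions)
print(f"Observed lift: {lift:.1%} (target: {TARGET_LIFT:.0%})")
if lift >= TARGET_LIFT:
    print("User story met: AI content hit the performance target.")
else:
    print("Target missed: revisit the approach, or the measure of success.")
```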

Katie Robbert 20:06

So in a user story, I would typically advise against listing the tool.

But if you’re doing R&D, if you’re writing a test plan, then I think it is appropriate to list the tool, but then you would create a new user story for every single tool that you’re testing.

So: as a CMO, I need to generate long-form content with Claude 2; as a CMO, I need to generate long-form content with ChatGPT.

And then each individual user story for each tool becomes its own set of qualifications, your test plan: did it do the thing? But typically, you wouldn’t want to list the tool, because then again, you’re sort of leading with, here’s the solution.

Let me find a problem.
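
A small sketch of what Katie describes, stamping out one user story per tool from a shared template so every tool is tested against the same outcome, might look like this in Python. Only Claude 2 and ChatGPT are named in the episode; the third tool in the list is an invented example.

```python
# Generate one user story per tool under test from a shared template,
# so each tool in the R&D test plan is judged against the same outcome.

TEMPLATE = (
    "As a CMO, I need to generate long-form content with {tool} "
    "so that I can capture 30% more organic search traffic."
)

# Claude 2 and ChatGPT come from the episode; Gemini is illustrative.
tools_under_test = ["Claude 2", "ChatGPT", "Gemini"]

test_plan = [TEMPLATE.format(tool=tool) for tool in tools_under_test]
for story in test_plan:
    print(story)
```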

Christopher Penn 20:52

Yep.

And this very much is a user story for R&D, to figure out, can I even use the thing, and what should I expect the thing to do, so that I know what success looks like. Because that’s another one of the big problems with this AI-everything concept: what does success look like? Are you saving time? Are you saving money? Are you making money?

And in the case of something like AI, if AI is generating content that’s less good than your normal content, you might be saving time, but you might be costing yourself money.

So what is the trade-off? Is the time you save worth as much as the money you might be losing if you are writing mediocre content?

That’s something that, again, you have to test out. You’ve got to make it part of your user story and part of your testing plan.
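
The trade-off Chris describes can be reduced to back-of-the-envelope arithmetic. Here is a minimal sketch; every number in it is a made-up assumption for illustration, not a benchmark.

```python
# Back-of-the-envelope: is the time AI saves worth the money mediocre
# content might lose? All figures below are invented assumptions.

hours_saved_per_month = 20       # writing time saved by using AI
hourly_cost = 75                 # fully loaded cost of a writer's hour
monthly_revenue_at_risk = 2_000  # estimated revenue lost to weaker content

time_savings = hours_saved_per_month * hourly_cost
net_value = time_savings - monthly_revenue_at_risk

print(f"Time savings:    ${time_savings:,}/month")
print(f"Revenue at risk: ${monthly_revenue_at_risk:,}/month")
print(f"Net value:       ${net_value:,}/month")
# A negative net value means the time saved is not worth the money
# lost, which is exactly what the testing plan is meant to surface.
```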

Katie Robbert 21:39

You know, Chris, you mentioned this, I think, on last week’s podcast: you were talking with industry peers, and the expectations of consumers have changed such that they expect that we’re using AI.

So therefore, they don’t want to pay as much for certain things.

So I can sort of see where, you know, as a CEO, I want you to AI-ify everything.

It’s that panic move of: it’s expected.

Our consumers are demanding it, we don’t know what that means for us, but we have to do it, or else we’re going to lose business.

But this is where having those structured processes, the 5Ps, creating user stories, can help lessen a little bit of that panic and at least help you make some informed decisions that you’ve stopped for a hot second to think through.

So: as a CEO, I want to react appropriately to customer demands about including artificial intelligence in our services, so that we can speak to what that means for the company and what that means for the customer, so that we understand scalability, so that we understand revenue generation, so that we understand potential loss of revenue and customers.

And so really thinking through: just because they’re demanding it doesn’t mean that we can deliver on it correctly.

So yeah, we could add AI into everything and tell our customers, great, AI is in everything now.

But does the quality of the service and the product drop? Does it actually save us money and make us more money? We don’t know those things yet.

We have to really go through the process to figure it out.

You know, we’ve had customers, our own customers, make demands of us of certain things that they think we should be doing.

But we’ve had to say no, because it just doesn’t work within our skill sets, our infrastructure, our growth plan.

And those are the things where, I totally understand, as you get into a bigger company, when you have a board and investors, it’s harder to say no.

And so if you don’t find yourself in a position to say no, going through these exercises will help you at least justify the direction that you’re being pushed in, or that you need to push back on.

Christopher Penn 23:55

Exactly.

You can generally say not now; not now is usually an acceptable substitute for no.

But if you go through the user stories exercise, the 5P exercise, the task identification exercise, all these different preparatory exercises to get you ready for AI, then it should become clearer what you can and can’t use this for, and in turn reduce that panicked sense of AI everything. Because if you’re doing the exercises, then you can show your stakeholders, whether you are the CEO or you report to the CEO or whatever, you can show your stakeholders:

Here’s what we’re doing to address the question of AI.

We are doing the groundwork, we are doing the foundation work, we are getting our data together.

We are inventorying our processes, we’re doing skills assessments of our people.

When you do that, you can then point to that progress and say, here, we are moving towards an AI-enabled future.

But you can say we are doing it in a way that will make us money, save us time, and save us money, as opposed to just throwing it everywhere at random and then wondering why your organization is in total disarray.

This is an emotional, human thing, not a machine thing.

People need to see progress, right? It’s why you have belt colors in the martial arts, right? Because no one’s going to show up at a martial arts school one day, put on a white belt, and then just say, okay, I expect my next progress report to be in 10 years, when you give me my black belt. That doesn’t happen.

You have all these different colored belts and stripes and stars and things just to keep people motivated and see change.

With AI, rushing into it is kind of like trying to strap on a black belt before you’re ready.

No, you want to go to your yellow belt, yellow belt with white stripe, yellow belt with black stripe. And that progression is user stories, the 5Ps, skills inventories, task inventories, data inventories, so that you can say to stakeholders: yes, we are working towards the goal of AI everything.

But we’re doing it in a way that makes sense.

Katie Robbert 26:07

I would add to that, because I’ve worked with a lot of stakeholders that aren’t okay with just, yes, we’re working on it.

I will push back, Chris: that’s not a good enough answer.

If you can tie it to something financial,

now you’re speaking a universal language, which is money.

For a lot of stakeholders, a lot of boards, investors, that’s the language that they speak, and that’s the information they want to hear: yes, we can do it, here’s how much it’s going to cost you.

Yes, we can do it.

Here’s how much money it’s going to save us.

Putting it in that context of financials: yes, we can AI-ify everything.

Here’s how much money we need in order to purchase these tools and scale up our teams and bring on those consultants and overhaul our products and redo them with AI. Here’s how much money that’s going to cost. Can you help me prioritize where you would like us to start? And now it’s, oh, what do you mean? So AI is not just, you don’t just plug into ChatGPT and it works? Well, you could, but here’s how much money we’re going to lose if we go that route.

And so, make sure you’re helping the people who are pushing on you to do this understand the actual dollar costs, not just your own emotional struggle of, why the hell am I having to do this, this is a stupid idea.

Channel all of that frustration into dollars and say, yes, we can do this.

If you are willing to invest this much money, and/or lose this much money.

Then they’re like, oh, yeah, let me rethink that.

Okay, exactly.

Let me take a beat.

Christopher Penn 27:56

And absolutely shameless plug: if you’ve liked the examples that we’ve talked about and want to bring us in to do a workshop to go through all of this, there’s a link right here:

TrustInsights.ai/aiservices. That is something that we do.

So we have covered a lot of ground today talking about whether you should AI everything. If you’ve got stories you want to share about directives from on high about AI that you’ve had to deal with, pop on by our free Slack. Go to TrustInsights.ai/analyticsformarketers, where you and over 3,000 other marketers are asking and answering each other’s questions every single day.

And wherever it is you watch or listen to the show, if there’s a platform you’d rather have it on instead, go to TrustInsights.ai/tipodcast. You can find this on most podcast channels.

And while you’re on the platform of your choice, if you leave us a rating and a review, that does help share the show.

So thank you, and we’ll talk to you next time.


Need help with your marketing data and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.


