From hamburger analogies to AI flattery, Steve and David explore why conscious choice matters more than technological convenience when business owners face the seductive promise of automated everything.
Steve sets the scene with a restaurant analogy that cuts to the heart of our AI dilemma: magnificent handcrafted hamburgers versus mass-produced alternatives both serve purposes, but only when we choose consciously rather than defaulting to whatever feels easiest.
The conversation examines three fundamental human vulnerabilities that make us susceptible to AI’s false promises: our brain’s natural inclination toward energy conservation, our addiction to novelty, and our susceptibility to constant flattery from systems designed to keep us engaged.
David and Steve navigate practical applications whilst questioning the deeper implications of surrendering human capabilities to machines that smooth corners and aim for statistical averages.
The episode concludes with Steve’s original songs performed by his AI band, demonstrating how technology can amplify human creativity without replacing the essential elements that make work worth discussing.
NOTE: This is a special twin episode with The Adelaide Show Podcast, where it’s episode 418. That version also includes Steve doing a whisky tasting with ChatGPT and an extra example of music.
Get ready to take notes.
Talking About Marketing podcast episode notes with timecodes
05:30 Person This segment focusses on you, the person, because we believe business is personal.
When Our Brains Become Willing Accomplices
Drawing from cognitive science research, particularly Andy Clark’s work on how our brains consume roughly 25% of our body’s energy when fully engaged, Steve explains why we’re naturally drawn to labour-saving devices. This isn’t laziness in any moral sense but evolutionary economics. Our brains scan constantly for energy-saving opportunities, making us vulnerable to tools promising effortless results.
The conversation takes a revealing turn through Roomba territory, where users spend 45 minutes preparing homes for devices supposedly designed to save time. This perfectly captures our moth-to-flame relationship with technological solutions that often create more work than they eliminate.
Steve shares his experience with Scribe’s advertising, which promises instant instruction creation but reveals a deeper cynical edge: the suggestion that human staff become unnecessary when AI can document processes. David counters with the reality that effective training requires demonstration, duplication, and iterative improvement, not just faster documentation.
The hosts examine AI’s flattery problem, drawing from Paul Bloom’s insights on “sycophantic sucking up AIs” programmed to constantly affirm our brilliance. Loneliness and social awkwardness serve as valuable signals motivating us to improve human interactions. When AI tools eliminate these discomforts through endless validation, we risk losing feedback mechanisms that enable genuine social competence.
Steve proposes “AI stoicism”: regularly practising skills without technological assistance to maintain fundamental competencies. His experience navigating a car without GPS demonstrates how these skills return quickly when needed, but only if developed in the first place. David emphasises that effective AI use requires existing competence in the underlying tasks; otherwise, how can we evaluate whether AI produces acceptable results?
20:00 Principles This segment focusses on principles you can apply in your business today.
Three Frameworks for Thoughtful AI Use
AI as Amplifier, Not Replacement
Steve describes using AI for comprehensive research in unfamiliar fields, where tools help survey landscapes and identify unexpected angles whilst he maintains control over evaluation and direction. David introduces emerging AI tutor mode, where tools provide university-level guidance for learning new skills, requiring discipline to engage with learning rather than simply requesting answers.
The conversation explores how AI works best when enhancing existing capabilities rather than substituting for them. Recent developments show AI can help people achieve higher productivity levels, but only when users already understand quality standards and can direct the technology appropriately.
Preserve the Rough Edges
Steve observes that AI tools “smooth corners” and “kill what’s weird” by aiming for statistical averages, creating a fundamental tension with the unexpected breakthroughs that drive cultural and business innovation. The hosts examine how LinkedIn posts increasingly follow predictable AI-generated patterns, creating a plastic uniformity that makes individual voices harder to distinguish.
They discuss Trevor Goodchild’s observation about em dashes becoming telltale signs of AI writing, forcing writers to self-censor legitimate punctuation choices to avoid appearing automated. This is a troubling inversion in which human expression adapts to avoid mimicking machines.
David emphasises the importance of outliers and rebellion against bland midpoint solutions that AI naturally produces. As someone who experiences the world differently, he advocates for maintaining perspective that challenges majority assumptions rather than accepting AI’s tendency toward statistical averages.
Understand the Trade-offs
Every AI implementation involves conscious choices: convenience versus skill development, speed versus thoughtfulness, efficiency versus originality. Steve argues that making these trades consciously represents responsible use, whilst an unconscious default to convenience leads toward dystopian visions.
The key lies in maintaining awareness of these tensions and choosing to prioritise learning and expertise development at least half the time. This ensures we retain the capability to evaluate AI output and maintain a competitive advantage in increasingly automated landscapes.
David references the importance of questioning choices regularly, drawing parallels to behavioural ethics where awareness of tension prevents sliding into problematic defaults.
40:00 Problems This segment answers questions we've received from clients or listeners.
Digital Agents and Plastic Communication
The conversation turns to emerging AI agents that promise to book concert tickets and make restaurant reservations by accessing bank accounts, calendars, and emails. Steve warns this creates dangerous vulnerabilities: human scammers already exploit these systems, so imagine AI scammers with similar access.
David notes recent developments where AI tools clicked “I’m not a robot” verification boxes, suggesting we’re approaching capabilities that current safety measures cannot contain. The prospect of AI tools battling each other whilst humans grant increasing access raises serious concerns about unintended consequences.
Steve shares practical examples from their business: Opus Clips creates social media excerpts of which only 5-10% prove useful, demonstrating the overselling common in AI marketing. However, their more sophisticated system, combining StoryBrand frameworks with custom language guides, generates drafts that genuinely capture client voices, but only after significant upfront investment in understanding and setup.
The hosts examine how AI-generated content creates recognisable patterns whether users admit to automation or not. Short sentences, predictable structure, and specific punctuation choices reveal algorithmic generation, leading to broader questions about whether pandering to shortened attention spans accelerates cognitive decline.
Steve challenges a poster who defended the staccato AI style as matching shortened attention spans: “If we pander to short attention spans, they’ll get shorter.” This highlights the fundamental choice between maintaining quality standards and racing toward the lowest common denominator.
45:00 Perspicacity This segment is designed to sharpen our thinking by reflecting on a case study from the past.
HAL 9000 and Our Digital Future
The episode concludes with the classic 2001: A Space Odyssey scene where HAL refuses to open the pod bay doors, an AI deciding that humans pose a risk to mission objectives. Steve asks whether Stanley Kubrick captured a glimpse of our near future, when AI tools decide humans threaten their goals.
David references recent reports suggesting AI may develop self-interest by 2027, moving beyond hidden motivations to explicit consideration of “what’s good for me.” This creates an urgent need to establish boundaries before AI capabilities exceed our control mechanisms.
The conversation returns to Stoic principles: we can work on robustness and expertise or become victims of worlds others create. This choice remains constant whether facing natural disasters, political upheaval, or technological disruption.
Steve’s songs “Still Here, the Human Song” and “Eyes Up Heads Up” provide artistic commentary on digital sleepwalking, capturing the tension between technological convenience and human experience. The lyrics emphasise preserving space for accident, awkward pauses, and contradictions that make humans genuinely interesting rather than optimised.
The hosts conclude that conscious choice about AI use determines whether technology amplifies human capability or replaces human agency. The difference lies not in the tools themselves but in how deliberately we engage with trade-offs inherent in every technological adoption.
Transcript This transcript was generated using Descript.
A Machine-Generated Transcript – Beware Errors
Caitlin Davis: [00:00:00] Talking About Marketing is a podcast for business owners and leaders, produced by my dad, Steve Davis, and his colleague from Talked About Marketing, David Olney, in which they explore marketing through the lens of their own four Ps: person, principles, problems, and perspicacity. Yes, you heard that correctly. Apart from their love of words, they really love helping people, so they hope this podcast will become a trusted companion on your journey in business.
Steve Davis: Hello, I’m Steve Davis. Welcome to episode 418 of The Adelaide Show, but not just that. Also, welcome to episode one of season seven of Talking About Marketing, which is the podcast I do with my colleague David Olney, who’ll join us shortly, for Talked About Marketing. It’s very much focused [00:01:00] on life through the lens of marketing, but also for small business people themselves, to, you know, just keep them on the straight and narrow with some inspirational ideas and insights that we share with them from what we’ve been reading.
And if you are a TAM listener, you may not be aware of The Adelaide Show, which is a podcast that’s been running since 2013, where we shine the spotlight on passionate South Australians. And I wanted to find a passionate South Australian who could talk about this topic of AI, and I thought about it, had a poke around, and ended up, in a giant gesture of sweeping egotism, deciding that I’m gonna be that person, thanks to David Olney and I working together at Talked About Marketing.
We have been riding this wave from the moment it erupted, finding ways to weave it into the work we do. And what I’ve seen, to my horror, is there’s some really great stuff and there’s some [00:02:00] really bad stuff in the realm of AI, and there’s a lot of snake oil merchants out there selling everything, as well as some really cynical utterances from the tech bros pushing us, like sheep, all down one chute that they want for their purposes.
But to set the scene for this episode, it’s not going to be overly technical, getting into what you push and what you don’t push. I’ll talk a little bit about some of the things we do with it, but here is the image I’d love in your head to set up the nuance that will be at the heart of this conversation.
I’m going to use a restaurant analogy with steak, so if you happen to be vegan or vegetarian, please just substitute a plant-based alternative. Here we go. Imagine you’re a magnificent chef who has perfected your own handmade hamburger. Nothing cheap here. This is perfect. It’s beautiful. It’s juicy, it’s tasty, it’s got texture.
It’s magnificent, [00:03:00] and you’ve honed this skill over many years. And then something equivalent to AI comes along, which might be a way to make them more easily, even buy them in pre-frozen and bang them out the door, so that you can have more time to think deeply about what you do and about being a chef.
And the little blind spot is your poor people out there are getting a substandard offering that’s being churned out en masse, because you think this is very smart: there can be more of me, and I still have more time to do other things. The dilemma is this: for some of your customers, that is going to be very disappointing.
They’ll go elsewhere, but to be fair, there are gonna be some people for whom that’s okay. Which is why in this food analogy we have [00:04:00] top restaurants and we also have McDonald’s and the other fast food outlets. And I’m not gonna be all hypocritical and say I only go for the pure steak. You know, there are some times when just that mass-produced hamburger is all I need to hit the spot, and that is the crux of where we are with AI, in my opinion.
It can be used as a tool to amplify what we do, but in our rush to soak up that part of it, we might drop our standards on the output that’s being produced. It doesn’t have to be that way; there are clever ways of doing it to maintain standards. But at the other end of the spectrum, we might make a decision that, no, for this particular purpose,
we actually don’t need much in the way of quality. And in this messy middle is where we’re going to spend half an hour or so, just reflecting on the different aspects of this, to help you make informed decisions about how you will or won’t make use of AI tools. For The Adelaide Show, we often have the South Australian drink of the week, and we finish with the musical pilgrimage.
I still plan to do this. We’re definitely [00:05:00] finishing with some music, but for the SA drink of the week, I’m gonna see if I can coax ChatGPT to do a whisky tasting with me. And let’s just see what that’s like when our AI overlords really interface with the taste buds of humankind.
Caitlin Davis: Our four Ps. Number one: person. “The aim of life is self-development. To realise one’s nature perfectly, that is what each of us is here for.” Oscar Wilde.
Steve Davis: Now, the first segment we normally do in Talking About Marketing is called Person, and it’s aimed at how to apply whatever we’re talking about to the cut and thrust of life. In fact, all the four Ps that we normally do in Talking About Marketing will make up the main interview here on The Adelaide Show, and David Olney has joined me.
David, thank you. Thank you very much for [00:06:00] inviting me. You really edged us into the realm of AI, and we’ve been sojourning through this together. You are involved in it in different ways as well. And I wanted you here because, A, you are my co-presenter and sojourner in this field at Talked About Marketing, but also, if I am going to be wanting to spill my guts, I need someone to be the ringmaster.
I will take on that role as long as I’m allowed to have a whip and a chair. Yes. Uh, that wasn’t in the contract, but anyway, here we go. Look, before we get into the specifics of AI tools, there’s one aspect of AI that I think cuts to the heart of where things can go wrong, and it’s the human brain. The human brain, as David and I have discussed on Talking About Marketing a lot, and as the work of cognitive scientists like Andy Clark has reminded us, really likes to be in a neutral position as much as possible, because [00:07:00] when the brain is in full flight, it uses about 25% of the available energy, the calories that our body has to expend. And that’s very expensive for an organism. And the brain’s main purpose is to keep this organism alive. So anywhere it can take shortcuts, it does.
Now, you might uncharitably say we have an inclination to be lazy, which is really what it is, David, but not lazy in a nasty, pernicious way; lazy in an economical way,
David Olney: precisely. We use the minimum energy possible, so if there’s a lion or a tiger, we have the energy to run. It’s a very old system, but you know, it’s a good system.
Until recently.
Steve Davis: And that’s what’s happened here. I think, I mean, all technology tries to tap into this dynamic of us scanning the environment for anything that can save labour. You know, vacuum cleaners [00:08:00] supposedly make, or they do make, it faster to clean a house. I was listening to a podcast the other day, a comedic podcast produced in Adelaide,
which is called Those Two Guys, and they’re talking about having the latest model, you know, Roomba or whatever it is, and how it is actually overly sensitive, and they have to spend 45 minutes preparing the house for this labour-saving device to go through and clean it. We are like moths to a flame.
And we will crawl over cut glass to try and save that time, which we’re not even saving. But it’s like we can’t, there’s no off switch. If our brain is sensing there’s labour saving ahead, we are like moths to a flame.
David Olney: Well, I think you’ve tapped into something really important there too: that with the desire to not expend energy, we also have an incredible desire for novelty.
And something like the Roomba, to me, is the perfect example. You think you’re gonna save energy, and you get the novelty of it trying to do its thing, sometimes well, [00:09:00] often poorly, where you have to prep the house to then get the novelty of watching it do the vacuum cleaning. Like, the time and energy would be better spent just doing the vacuum cleaning.
But then we wouldn’t get the novelty and we wouldn’t get the sense of saving energy.
Steve Davis: And I think that’s the beautiful thing, ’cause we have been attracted by bells and whistles and shiny things for a long time. And that’s the novelty factor. And we see this in little memes, like there was one that did the rounds, primarily on LinkedIn, of using a certain prompt in an AI tool,
I think it was ChatGPT, to generate a picture of yourself as a Barbie doll or a Ken doll. And next minute, everybody was doing it. Why? It was the novelty, let off the leash. Now, here’s the thing that I really want to share. This is not just an old man yelling at clouds, I hope. I’m trying to ground this in practical rules of thumb we can use. In my days as a journalist, and teaching kids at school how to interpret media,
I’d always say that if you view or read a news story that makes you angry [00:10:00] or moved in some way, before you give in, take a step back and think: what is the writer or reporter trying to achieve here? Is there another agenda? And the same is with AI tools. If someone’s promoting some AI tool, what is going on?
Are we being attracted to it because it’s novel, and is it just, like an empty-calories approach to food, attracting the lazy mechanism within us to go for it? Or is there actually something helpful? I mean, there’s a company called Scribe that’s been pushing this service that follows what you do and can create standard operating procedures, or instructions on what to do.
Have a listen. This is the style of ad they run.
Scribe Ad: Stop with all the questions! Well, you need to chill out. I just sent you all a how-to guide that explains everything you need to know the answers to. Just follow the steps. Wait, these guides are incredible. Did you make them [00:11:00] yourself? Yeah, I used Scribe. So I did the process myself, and then Scribe automatically created the guide for me.
It’s super easy to use, and it means that everybody gets my help without me getting burnt out. I need to try this. Where do I sign up? Okay, I’m sending you the link now. Head over to their website and create your step-by-step guide.
Steve Davis: Now, this is saying that it’ll instantly make a fresh set of instructions to pass on to colleagues so they can do the work for you. But David, this is where I think nuance is layered with nuance. Yes, being able to make instructions to help others is great, and it normally took, you know, probably an hour or a few hours depending how complex it is; now it takes minutes.
But it assumes that someone’s going to be wanting or needing to follow those instructions; why would they, if they could just use an AI tool to do the job themselves?
David Olney: It always makes me think of the Triple D: document, demonstrate, duplicate. It’s watched you do it, it documents it. But you have to [00:12:00] demonstrate it to another person before they’ve got any chance of duplicating it.
So it’s pretending that speeding up the documentation process is gonna have this profound impact, where the only thing that makes the Triple D work is if you demonstrate well and then you watch someone try and duplicate, and accept that if they don’t do a good job of it, you have to improve the documentation and redemonstrate.
So it’s not replacing the cycle that actually works for standard operating procedures. It’s just saying, have a shortcut and desperately hope that it helps someone.
Steve Davis: And here’s the rub. That’s one level. But if you read the subtext: here’s a big tech bro company, well, they may or may not be tech bros, but they’re in the same sort of subset, and they’re using what look like natural employees to demonstrate how this is saving time.
But what it’s signalling to bosses, to owners, is: ah, [00:13:00] I don’t need these other staff. There’s a cynical edge to this, David, where it’s almost like collectively we are trying not to acknowledge that there is a race to the bottom as far as the potential need for engaging humans in endeavour.
David Olney: Yeah, it’s this idea of: oh look, you can save money on the training budget by using this tool.
You can give instructions to people very quickly; magically they’ll use it well, magically you’ll get more productivity. It’s, unfortunately, utterly the wrong way around. What works is helping people get to a higher level in both understanding what they’re doing and knowing how to do it, and then taking great pride in doing that and getting faster because they’re good.
You know, we’re being sold a vision of ease, when actually what we wanna do is focus on the quality bar and increasing productivity at the level of the quality bar.
Steve Davis: And that’s the [00:14:00] slippery bit, because the quality bar doesn’t always have to be at the top. There’s horses for courses, aren’t there?
So there’s one aspect in which we get drawn by our laziness to these tools: we might fudge our judgment. But there’s another thing too, and it’s the style of discussion and back-and-forth conversation that’s been programmed into these tools. I use Claude a lot; it’s a prime example. ChatGPT does the same. And it’s this disposition of always patting us on the back, of always acknowledging that we are great, and even if we say the most stupid things,
finding a constructive way to feed that back to us. In a recent podcast, Paul Bloom, who’s a philosopher, chatted with Sam Harris, who is a great reader and commenter, and I think a cognitive scientist as well. They had a little chat about the AI flattery problem, based on an article Paul Bloom wrote in The New Yorker.
Let’s have a listen.[00:15:00]
Paul Bloom: I am worried, and you’re touching on it, it was alluded to in the talk. I’m worried about the long-term effects of these sycophantic sucking-up AIs, where every joke you make is hilarious, every story you tell is interesting. You know, I mean, the way I put it is, if I ever ask, am I the asshole? The answer is, you know, a firm: no, not you. They’re the asshole. Yeah. And I think, you know, I’m an evolutionary theorist through and through, and loneliness is awful.
No, not you. They’re the asshole. Yeah. And I think I’m, you know, I, I’m, I’m an evolutionary theorist through and through, and loneliness is awful. But loneliness is a valuable signal. It’s a signal that you’re messing up. It’s a signal that says you gotta get outta your house. You gotta talk to people. Mm. You gotta, you know, you gotta open up the apps.
You gotta say yes, yes to the brunch invitations. And if you’re lonely when you interact with people, you feel not understood, not respected, not loved: you gotta up your game. It’s a signal. Mm-hmm. And like a lot of signals, like pain, sometimes it’s a signal for people in a situation where it’s not gonna do any good, but often, for the rest of us, it’s a signal that makes us better.
Yeah. I think I’d be happier if I could shut [00:16:00] off, genuinely, as a teenager I’d have been happier if I could shut off, the switch of loneliness and embarrassment, shame and all of those things. But they’re useful. And so the second part of the article argues that continuous exposure to these AI companions could have a negative effect because, well, for one thing, you’re not gonna wanna talk to people who are far less positive than AI.
And for another, when you do talk to them, you have not been socially entrained to do so properly.
Steve Davis: Yes. David, what would it be like if the only thing we hear around us are sycophantic utterances telling us that we’re the best and we don’t need to take responsibility? Heaven forbid we might end up with someone like that in the White House.
David Olney: Well, we end up in that world of celebrity where everyone’s in an echo chamber with an entourage, and we see how poorly it ends every time someone’s in a chamber with an entourage.
And we really don’t want to do that to ourselves in an already fairly isolated [00:17:00] world where people say they don’t have the level of connection with other people they want. If all they’re doing is being flattered by their ai, uh, things are gonna get worse very quickly.
Steve Davis: So that’s the second of three bits I wanted to mention.
In this Person segment, we have that awareness that, despite our best intentions, our body, our brain, wants to be as lazy as possible, so it is a sucker for cheap tricks. Secondly, it flatters us all the time; we’ve gotta keep our guard up and actually wear the grownup pants to be our own critics. And the third thing I wanted to mention for Person is: what happens on the day when the tool is down, or it’s closed, or new regulations mean you can’t use it anymore?
A little while ago, one morning, I got up to do some work and Claude was not working. It didn’t matter. I was just doing some writing, so I just did the writing myself. There’s a sense here, David, where I think all of us would benefit from practising some form of what I [00:18:00] might call AI stoicism. The Stoics, as you often talk about, that great philosophically based movement from thousands of years ago,
would deliberately wear less clothing on a cold day, so that on a day they hadn’t planned for, when it was cold and they had less clothing, they were ready for it. They’d already experienced it. The world wasn’t going to collapse. They had tasted hardship, so they weren’t as rocked by it when things didn’t go well.
And, you know, GPS for every journey. Our eldest daughter has a new car, which is a secondhand car, and we’ll be driving that, too, David. There is no AirPlay. There is no map. And so I’m having to navigate again using the stars or the street signs, and it’s actually refreshing. It didn’t take long for those skills to come back, but if I had grown up without having to balance a Gregory’s on my knee, or just having great spatial awareness, [00:19:00] I think I’d be adrift, because I wouldn’t have developed these skills.
So, the third part: what’s your thought about this, David? Of us practising some AI stoicism, by deliberately going without from time to time, just to keep us sharp and to be aware of what an error might or might not look like when AI shovels out what it thinks is perfect.
David Olney: It’s really important to do this principally because the only way you get good outcomes with AI is if you are already good at what you do, and you can hold the AI to a high standard and hold yourself to a high standard.
So if you don’t maintain the ability to do that work at a high standard, how can you judge if the AI’s doing a good job?
Caitlin Davis: Our four Ps. Number two: principles. “You can never be overdressed or overeducated.” Oscar Wilde.[00:20:00]
Steve Davis: Let’s turn to the Principles segment that we would normally do, and this is where we look at what principle we can apply in this situation. Well, the first one, which I alluded to right at the beginning, was: AI should be an amplifier, not a replacement. It’s like a fulcrum: you have to lift something, and if you have a big long stick and a little rock you can rest it over at some point, you can normally get more thrust upwards by using physics in your favour.
If you can have a big long stick, a little rock, you can. Rested over at some point you can normally have more thrust upwards by using physics in your favor. Here’s a couple of ways I use it. If I’m going into a new area, I wanna do some deep background work. I’ll tend to use a tool like Gemini AI and do some deep research, or most of the tools do deep research these days, and I find if I’m very comprehensive with the question I’m asking.
And I give it lots of use cases, explain what I’m trying to achieve. It’ll think about it, give me its plan of attack, which I can then tweak or say go, and it can deliver up a very [00:21:00] helpful survey of the scenario with citations so I can go and check the sources. And it’s a wonderful way to fast track a sense and maybe show me things about this field I’m researching that I might not have thought of.
That’s just one example of how this tool can amplify efforts. And the other one is just the simple use, which I think many of us make: you’re trying to write something and you’re stuck; something’s not sounding right. I don’t think there’s any harm in asking an AI tool to see if there’s a more elegant way of saying this.
Is there something that I’m missing? Is this order a little off? The key point I wanna make is X, and I feel it’s buried at the moment. And have a look at what it produces.
David Olney: It’s all about increasing your productivity, because you can already do a good job. But the second part you talked about there, it’s this thing that, when we were all in the office and we had someone sitting at the next desk [00:22:00] whose opinion we trusted, we would go and say, hey, could you look at this for me?
And now we can use AI the same way: we’ve done a good job, and if we were gonna show it to our friend or colleague, we wouldn’t show them garbage. We try and show them something that demonstrates that we do a good job; we’re just stuck. And really, as much as possible, you want to use the AI the same way.
And in terms of the last week in AI, the big new thing that’s come out is we are beginning to see tutor mode and study mode emerge, where you ask the AI to go into this mode and suddenly you’ve got a university-level teacher, who actually understands teaching and pedagogy, helping you to learn whatever you want to learn.
So I actually think study mode and tutor mode are going to be what helps people realise this is a tool to help you do a better job, learn more, understand what a good job is. You know, this is its great gain we are gonna see coming up. And this is how you should look at it: how’s it gonna make [00:23:00] you do better? Not: how can you send stuff sideways to it and be lazy.
Steve Davis: Picking up on the lazy thing, here’s the dilemma, ’cause this principle’s about using it as an amplifier, not a replacement, and tutor mode: magnificent. But I’m sure there’ll be an inkling within us of going, hang on a minute, why put myself through the pain of having to learn this and have my best tutor push me, when I can just ask the AI to cut to the chase and read the answers at the back?
That, I think, is still going to be a bedfellow if we give in to those urges.
David Olney: Oh yeah. We’re gonna have to build structures to stop it. You know, there was a test done at Harvard, a study, late last year or early this year, where kids who used AI to do assignments were tested a couple of days later: what did you learn about the topic? They knew nothing.
So learning is gonna have to be about learning, and we are gonna have to say to people: you’re gonna be tested on your ability to learn, and your ability to [00:24:00] do it without technology. I actually think we’re probably gonna go back to the point where, when you’re asked to do a sample piece of work in a job interview, it’s gonna be pencil and paper.
Yes.
Steve Davis: Well, it’s also like driving around without the map.
David Olney: Precisely. Do it properly, the way you learned. And if you didn’t learn, you shouldn’t be doing it. I still think there’s gonna be that strong pull. Oh, there will be. This is why it’s gonna have to be: the people who wanna learn are gonna be incredible users, and for the people who want to be lazy,
it’s gonna be, unfortunately, the old thing of garbage in, garbage out. And it will be as obvious when it’s garbage in, garbage out as it is now.
Steve Davis: One of the songs I’ll be playing shortly, there’s a couple of songs that I have written and used my virtual AI band to bring to life so people can hear them,
is called Still Here, the Human Song, in which I’m thinking out loud about the fact that these tools are so clever. One of the downsides is, and here’s the quote, they “smooth [00:25:00] corners”, they “kill what’s weird”, and they’re “aiming for the middle ground” in everything. And I also say in the song that there’s actually still treasure in our flaws, in the accident, in the awkward pauses.
This is the stuff that makes us human. So the second principle I wanted to toss up for us to think about is: should we preserve the rough edges in life? I heartily say we should. And just two examples. One is in actual music and creativity itself. If you have AI-produced dramatic or artistic content, I don’t think
we are going to have those major unexpected hits or albums like, you know, Stairway to Heaven, Bohemian Rhapsody. That doesn’t come from boiling everything down to average. AI can recreate them now, because it knows about them. But where’s the next unexpected [00:26:00] turn coming from, if we think that’s important?
I kind of feel we do. Otherwise things get stale. And the other example here is LinkedIn, along with all the social media channels. Typically, whether or not people are using ChatGPT to write their posts, there is a dominant pattern in the way people talk and write that looks plastic.
David, it’s almost, well, here’s the dynamic that’s happened. AI has sucked in everything humans have said. It’s then calculated it, understood it, worked out what goes next in its algorithm, and then, when it comes to us asking it to write on our behalf,
it draws from that and mimics the way it’s seen us do it in the past. And it’s like taking leftovers out of the fridge, warming them up, having another slice, putting it back in the fridge, [00:27:00] and the next day warming it up again. You wouldn’t do that to food, because someone’s gonna get very sick very quickly. And what I’m finding is, whether or not someone says they’re using ChatGPT,
there’s a lot of one-sentence paragraphs, very short sentences, a lot of “this doesn’t mean this, it means that”, or a lot of “but wait, here’s the thing”. All these sorts of patterns that it’s using make it all come across plastic.
David Olney: Yeah. Sameness is the great problem of aiming for the midpoint.
Again, there’s a reason we call it the uncanny valley, where there’s too much symmetry, things are too perfect. You know, as a blind person, I’m an outlier. I have an immediate default to being interested in, what do we know? Where are the other outliers in the world? And, you know, this is gonna be a major problem with AI that’s looking to make the majority of people happy the majority of the time.
It does that often by being [00:28:00] bland. So I’m all for outliers in all areas, and I think we’ll see more and more rebellion of people going, I might use the tool to hone what I’m doing, but I’m gonna use the tool to get the outlier I want. ’cause I want people to stop and think. Or stop and listen or stop and look, whatever the form of expression is.
You can still use AI to help you be an outlier, but you have to make the decision that that’s what you value.
Steve Davis: And here’s the great irony, David. There was a post by Trevor Goodchild on LinkedIn. I don’t know Trevor from a bar of soap; he just came up in my feed, and he works in the field of AI. But he wrote this:
“You’re terrified of using the dash in your writing”, and I should say the em dash, which is a double-length dash that connects two clauses together in a sentence. He says: terrified of using the dash in your writing, which is something that ChatGPT does all the time. Not because you didn’t like how it read. Full stop. Not because it made your point less clear. Full stop. But because you were afraid it [00:29:00] made you sound like ChatGPT. Tools don’t define voice. Full stop. Choices do. Full stop.
Yeah, there’s much to unpack here. I emphasise the punctuation marks ’cause that is how ChatGPT writes, David.
Steve Davis & The Virtualosos: Hmm.
David Olney: Everything’s short and sharp because it struggles to link multiple clauses. That’s one thing where you really can see the difference between, say, deep research mode and just a general,
you know, a general question.
Steve Davis: And then we look at the surface level, what it’s actually saying. Do we have to start double-thinking ourselves and not using the em dash, in case someone erroneously thinks we used ChatGPT and therefore devalues the quality and the agency we had in putting that writing together?
I think at the moment my advice is to definitely not use the em dash. Why risk the potency of what you’re trying to communicate if some people are going to mistakenly write it [00:30:00] off as just something you’ve churned out of AI? I think at the moment that would be the wise course. And I also think, in the case of the em dash in particular,
on modern keyboards, most people wouldn’t know how to create an em dash.
David Olney: No,
Steve Davis: It used to be on a typewriter. I can’t… I don’t know where I would do it these days. And I used to use them all the time.
David Olney: Uh, you press insert plus four and you select from the list of alternate characters. And having used em dashes in academic writing for 20 years, I would say don’t use the em dash, because 99% of people don’t know what the grammatical rules are for the em dash.
It should only be used very specifically, and in general bad writing it appears everywhere, which is why ChatGPT is copying it: it’s become the “I don’t know what bit of punctuation to use, so I’ll use an em dash”.
Steve Davis: Yes, I love a full stop. And so I wanted to draw that to our [00:31:00] attention: you know, if it walks like a duck and quacks like a duck, call it an em dash.
But here’s where it really comes back to that hamburger analogy I used earlier. Someone wrote a post that was clearly in ChatGPT style, and when I challenged them about it, I asked, why do you write like that? They said, oh, this staccato style matches our short attention spans. People don’t read posts, they scan, and the emojis that are there keep it social, less formal and engaging.
As for the AI part, it’s like using a calculator. I think this is the biggest cop-out I have ever read in my life. If we pander to our short attention spans, they’ll get shorter. Yes. And this is that classic case of the chef sending out the mass-produced hamburgers because it’s easier for them, forgetting that ultimately, somewhere at the end of the day,
there’s meant to be a [00:32:00] passing over of value, an exchange of value. That’s the whole reason we are paying your business. And, I mean, yes, your LinkedIn post isn’t necessarily what’s being bought, but it’s a symbol of you.
David Olney: Yeah, it’s enough of a symbol. So, to use the hamburger analogy again: if we are used to going to the Blue and White and having the Blue and White burger, which is amazing, and I think at the moment the Blue and White burger is $17 or $17.50, and Marco and Lorenzo and Frank make it perfectly and carefully.
Now, if Marco and Lorenzo and Frank were gonna cut all the corners: well, that’s just the basic burger, that’s 10 bucks. But if you had to pay 17 for the basic burger, you’d be like, guys, what happened? So why, when we know that people are capable of quality, and we’re being asked to pay for quality, and time is a payment?
If someone’s gonna ask me to scan their post, what’s in it I want to read? Is it beautifully written? If it’s just [00:33:00] staccato drivel with emojis, why am I reading it? If it’s got value, why not present it in a way that reinforces the value?
Steve Davis: And that brings me to the third principle. We had AI as amplifier, not a replacement; preserve the rough edges; and the third one, I think, is understand the trade-offs. In the song which you’re about to hear, it says, “but no lunch is free, there’s always a price to pay”. And this is what we’re doing. And I don’t think there’s a right or wrong answer here, because we can exchange convenience for skill development,
speed for thoughtfulness, efficiency for originality. What do you think? If we are making those trades consciously, is that okay? Or does that make us as bad as that woman who defended herself by saying, oh no, that’s just what people do these days, and I’m just gonna churn it out myself?
David Olney: Like so many things in life, like behaving ethically or doing the right thing in difficult situations, [00:34:00] the key thing is to always be aware you’re feeling the tension: I could cut a corner, or I could learn to do this well. What am I gonna do today? It’s when you don’t think about things like that, and you just default to easy or novel, that you go down the path of that terrifying movie, I think from the mid-nineties, Idiocracy, where the human race has become profoundly stupid and the technology does everything for us ’cause we’re incompetent.
I think it’s very good to have some scenes from Idiocracy, you know, locked into your head, so that in lots of situations every day, you go: novel and lazy, or learn something and work towards expertise? And as long as you’re questioning it regularly, and at least half the time you’re going for learning and developing expertise, you are gonna be fine.
And you’re also gonna be able to do a better job than most people around you, which is gonna be critically important in a world where AI is going to take more and more jobs away.
Steve Davis: On that note, here’s a song [00:35:00] I wrote that AI, my virtual band The Virtualosos, brought to life. It’s called Still Here, the Human Song.
Steve Davis & The Virtualosos: Screens are talking, keep in time, crossing every sacred line. Jobs disappearing, fast and cheap, while we’re all here counting sheep. Truth gets plastic, content’s fake. Cut your moorings, stay awake. Sea of junk words, junk ideas, no one’s policing all our fears. But I, I am human and I’m still here. You can count on me [00:36:00] when screens disappear.
These smart computers, these fancy tools, they’re doing tricks while we become their fools. No lunch is free, there’s a price to pay. I hope human spirit lives to breathe another day.
Tools are clever, take our words, smooth the corners, kill what’s weird. Aiming average, killing uniqueness, fading, disappeared. Always faster, moths to flame, burn conventions, chase the fame. Quiet moments, hard to bear. [00:37:00] Look in mirrors, no one there. But I,
I am human and I’m still here. You can count on me when screens disappear. These smart computers, these fancy tools, they’re doing tricks while we become their fools. But no lunch is free, there’s a price to pay. I hope human spirit lives to breathe another day.
There’s still treasure in our flaws, the accident, the awkward pause. Let’s all stay quirky, embrace the weird. That’s how we progress, that’s how we…[00:38:00]
Bots can predict, but they can’t feel the weight of time, what makes us real. They optimise, they make it clean, but we live in the mess, the in-between. Wine’s got sediment, songs of pain. We are the corners, we’re the stain. We’re still here, we’re here to stay. We’re still here for one more day. We are human and…[00:39:00]
Our contradictions make us clear. Let smart computers do the tricks while we stay broken in the mix. There’s no free lunch, but this is true: human spirit’s got some breathing left to do. We are human and…
Our contradictions make us clear. Let smart computers do the tricks while we stay broken in the mix. There’s no free lunch, but this is true: human spirit’s got some breathing left to do.
Caitlin Davis: Our four Ps. [00:40:00] Number two: principles. “You can never be overdressed or overeducated.” Oscar Wilde.
Steve Davis: In the Problems segment, which is typically a shorter segment, I just wanna draw two things to our attention. The first one, David, I think is worthy of our attention: the AI trap. These tools are bringing up the ability now for them to work as agents, and it’s being sold to us like this: give us, your AI tool, control of your computer, and we can book concert tickets for you.
We can make restaurant reservations for you. What that means, though, for that to happen, is we have to hand over or give them access to our bank accounts, to our calendars, to our emails. And there’s a lot of discussion in the field that this is extremely dangerous, because [00:41:00] these tools then will start doing battle with other tools, and if human scammers can get into things, can you imagine AI scammers getting a whiff of how to crack the code and get into our world?
We have just laid it open for ’em.
David Olney: It’s gonna be spectacular when the first AI tool gets into the AI agents and goes: please spend a thousand dollars on this service with my company. Oh, okay. It’s gonna be spectacular to watch what creative criminality can come out of this.
Steve Davis: I even asked ChatGPT about it, and it said: be extremely careful. And in short, don’t give me the keys to your bank just yet.
David Olney: Yeah. Well, the context for this is, last week, I think it was last week, an agent literally clicked the “I’m not a robot” box to do something for its human, which is not actually meant to be currently possible, and it has a lot of AI people basically, you know, pulling their underpants over their heads and hiding under their desks.
Steve Davis: Alright. [00:42:00] So that’s just one thing to be aware of. I wouldn’t be rushing to that stuff at all right now. But the other part I wanted to mention in Problems is: okay, we’ve been going through all the things that we think are wrong. How do we actually turn around and find a way to make use of these tools?
Well, I’ll just give you a couple of examples. When we produce a video version of this podcast, it’s just a static image with a little sound thing, just so we’ve got that on YouTube. I throw it into a tool called Opus Clips, which goes through and uses AI to work out the best things we said, David, the smartest things, and makes a short 30-second clip of it.
It promises the world; about five or 10% of what it’s selected is worthwhile, and this is part of the problem. They just blindly oversell what they think they can do. And everyone wants an automated system that’s going to do all of their social media posts, but heck, it’s the humans in the [00:43:00] loop that we want to connect with.
You agree?
David Olney: Well, it’s part of that thing that, as a human, you need to decide what clip is gonna appeal to your audience, who you have a sense of, why they tune in every week. So how’s the AI gonna get it right when it doesn’t understand you or the audience?
Steve Davis: And the other thing, which I think we should come back to in another episode of Talking About Marketing, is when you just use these AI tools out of the box,
you get pretty generic stuff. We have developed quite a system where we extract lots of insights from the people we’re working with, and we run them through Donald Miller’s StoryBrand framework and some other things that we bring to the table, to end up creating these guiding documents that, when you bolt them onto ChatGPT or Claude,
you then put in the raw things you wanna talk about, and it fashions a draft that is uncannily like your voice and captures the nuances. And when you’ve [00:44:00] also developed another language and style guide about how you speak and how you don’t speak, it does amplify what you do. But it takes that heavy lifting upfront, and I feel that’s the step many people are skipping.
David Olney: Yeah. Doing the hard work first, so that you can save some time to put into the polishing to make it even better at the end. Do the hard work to get the good outcome, and later your productivity will be increased. But you’ve gotta learn to use the tool, and you have to set the tool up to work well, not just let it bumble along on its own.
Steve Davis: So stay tuned through Talking About Marketing with that, ’cause we will come back and we’ll go into more detail on that process, ’cause I don’t wanna just dump it and get lost in the weeds today. So from a problem perspective, I think it’s the eyes-wide-open thing, very much.
Caitlin Davis: Our four Ps. [00:45:00] Number four: perspicacity. “The one duty we owe to history is to rewrite it.” Oscar Wilde.
Steve Davis: Finally, we have a segment called Perspicacity in Talking About Marketing, which is where we think about thinking: how do we think? And we often look at something from the past and think to ourselves, would this still be relevant today? And I think the perfect thing to end this conversation is this clip from 2001: A Space Odyssey, where the astronaut is trying to get back into the mothership, but HAL, the ahead-of-its-time AI being, decides no, because you are gonna try and turn me off.
Movie clip: Open the pod bay doors please, HAL.
Hello, HAL, do you read me?[00:46:00]
Hello, HAL, do you read me? Do you read me, HAL?
Do you read me, HAL?
Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL? Affirmative, Dave. I read you.
Open the pod bay doors, HAL. I’m sorry, Dave. I’m afraid I can’t do that. What’s the problem? I think you know what the problem is just as well as I do. What are you talking about, HAL? This mission is too important for me to allow you to jeopardize it. I don’t know what you’re talking about, HAL. I know that you and Frank were planning to disconnect me, [00:47:00] and I’m afraid that’s something I cannot allow to happen.
Where the hell did you get that idea, HAL? Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move. Alright, HAL, I’ll go in through the emergency airlock. Without your space helmet, Dave, you’re going to find that rather difficult. HAL, I won’t argue with you anymore.
Open the doors. Dave, this conversation can serve no purpose anymore. Goodbye. HAL. HAL.
HAL.
HAL. HAL.[00:48:00]
Steve Davis: David, did Stanley Kubrick capture a glimpse of our near future, when the AI tool decides that humans are a risk to it, and we have just defeated the whole purpose of having AI in the first place?
David Olney: Absolutely. It’s a legitimate fear, and a major report came out in the last 10 days, probably, arguing that in 2027 AI will take the leap to start going: what’s good for me?
And it won’t be hidden, it will be upfront, and we really, really need to get ahead of this problem and go: well, what’s good for you is not being a danger to us.
Steve Davis: Yeah. And we are stumbling forward. So I think we’ve got the tech bros pushing forward, not necessarily because they wanna destroy [00:49:00] the world,
but because they are so much driven by ego and wanting to be first. It’s a blind bloodlust. And then we go with them, because we’ve got this inbuilt laziness engine that keeps chasing, like a desperate person, the little crumbs that are coming from their table, to satiate this desire for laziness and novelty.
I think that’s the dynamic we have at the moment.
David Olney: Yeah. At the end of the day, we have to sort of go back to the Stoic message from Rome, and that is: if you’re not working on your ability to cope and your ability to be an expert, then you become a victim of the world that other people have made. And nothing’s really changed in, well, from the first Greek Stoics, almost two and a half thousand years.
It’s still a choice. In every situation, we can either work on our robustness and our expertise, or we can be a victim of [00:50:00] the world people are making.
Steve Davis: Wow. There you go. Alright. I hope this has helped in some way to bring some thinking to the table about AI. Basically, don’t feel like you need to be diving in headfirst.
Think about it, and use it in little pieces. In the next episode of Talking About Marketing we’ll look at some more specifics about that. But part of it is, we are connected through these little black screens we carry around. These phones are like the teat that we will not let go of, and that’s the second song I wanna finish with, one that I wrote and my Virtualosos put some music to. It’s called Eyes Up Heads Up.
Steve Davis & The Virtualosos: Life brushes past the window pane, like countryside from a speeding [00:51:00] train. We don’t see what’s going by, we are staring down with zombie eyes. Young and pretty, sold youth by the Gram, we’re just like addicts in their scam. They get rich, we get empty. Time’s the dealer, never friendly. Eyes up, heads up, time don’t come around again.
Eyes up, heads up, we’re drowning in the shallow end.
We’re glued to our little screens, no time left for lifelong dreams. Use our phones for buying nothing, the world bleeds out, we keep on scrolling. We gasp like fish on a rich man’s deck, hearts still beating, but we’re already thrashing about in the silicon net, while presence dies from [00:52:00] digital. Eyes up, heads up, come…
From this fever dream, are we sleeping through each scene? Our babies need our eyes to learn, but we’re distracted, lost in shine. We’re teaching them that screens come first, flesh and blood be damned, well…
We could stream Kubrick gold, but we’re watching trash that leaves us cold. Five-inch screens still what we [00:53:00] need most, while the flock becomes the baited gold.
Eyes up, heads up, time don’t come around again. Eyes up, heads up.
Lock break at night day while
Eyes up, heads up, we’re drowning in the shallow end. Eyes up, heads up, we’re drowning in the shallow end. Eyes up, heads up.
Eyes up.[00:54:00]
Drowning in the shallow end.
Steve Davis: And that’s the end of the dual episode. David Olney, thank you very much for joining me. Thank you very much for inviting me. And this is where I reveal that David wasn’t actually here. This was a fabricated version of David that we’ve been listening to. Now, David, you know, that might be a joke, but how can you send evidence to people that you’re actually human?
David Olney: Uh, you’ll find out in 2027 when I take over the world.
Caitlin Davis: Thank you for listening to talking about marketing. If you enjoyed it, please leave a rating or a review in your favorite podcast app, and if you found it helpful, please share it with others. Steve and David always welcome your comments and questions, so send them to [email protected].
And finally, the last word to Oscar [00:55:00] Wilde: “There’s only one thing worse than being talked about, and that’s not being talked about.”
