How Will AI Impact B2B Founders?
Time to Market S01 E03 – How Will AI Impact B2B Founders?
Étienne Garbugli: Hey, Sean! Have you heard about this thing called Chat GPT? It's everywhere. It's on LinkedIn. It's on Twitter. It's on the Internet. Have you been trying out Chat GPT?
Sean K. Murphy: Yes, we have a subscription to the Open AI service, and we are working with the team at Libraria to add extensions for a chatbot on the SKMurphy website. So we are experimenting with it in different ways.
Étienne Garbugli: What's the motivation behind your exploration right now?
Sean K. Murphy: The SKMurphy team, with one exception, are very down-to-earth, practical people who like to get things done. I know you find that hard to believe listening to me yammer on these podcasts, but the people I work with are very focused.
We’ve been looking at some basic applications and we've found two useful ways to leverage Chat GPT.
The first is to ask some basic questions and get a quick and dirty start on new content. You often have to adjust the tone and verify that Chat GPT is not hallucinating some quotes or references, but it gives you a basic outline you can compare with your rough outline.
Second, you can ask it to summarize a document or a website page. That has two benefits. You can read the summary to determine if the document or website is worth reading. If you wrote the document and the summary does not agree with the key points you were trying to make, then you probably have a problem. The latter is a quick test for whether the positioning in a white paper or home page will be perceived the way you want it to be by prospects.
The limits on this summarization capability are probably a thousand words or so. With longer documents, say more than three thousand words, it really starts to hallucinate. And it’s a very smooth liar: it’s not intelligent, it’s fluent. It can also start breaking out fake apologies, in effect, “I am sorry you were too stupid to ask the right question.”
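For anyone who wants to try the summarize-and-compare workflow described above, here is a minimal sketch. It assumes the `openai` Python package (v1+), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these specifics come from the conversation. Long documents are split into chunks to stay under the size where, as Sean notes, the summaries start to drift.

```python
# A rough sketch of "summarize this document so I can compare it with my outline."
# Assumes the `openai` Python package (v1+) and an API key in the OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
MAX_WORDS = 1000    # beyond roughly this size, summaries become unreliable


def chunk_by_words(text: str, max_words: int = MAX_WORDS) -> list[str]:
    """Split a long document into word-limited chunks before summarizing."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def summarize(text: str) -> str:
    """Summarize one chunk; always verify the output against the source."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative choice of chat model
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points. "
                        "Do not invent quotes or references."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


def summarize_document(text: str) -> str:
    """Summarize each chunk, then summarize the combined partial summaries."""
    partials = [summarize(chunk) for chunk in chunk_by_words(text)]
    return partials[0] if len(partials) == 1 else summarize("\n".join(partials))
```

Comparing the final summary against your own outline is the positioning test mentioned above: if the model cannot recover your key points from the document, prospects may not either.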
It reminds me of a new trend in customer support where the agent starts to focus on your emotional state, “I’m sorry you are having a bad day.” But it’s just a canned answer.
Étienne Garbugli: That's not really new, right? Agents have been using text shortcuts to provide canned answers for a while. So the person is not actually feeling sad, right?
Sean K. Murphy: No, not at all. And, by the way, I'm pretty confident that Chat GPT is not feeling sad at all, either. So you must be looking at it as well.
Étienne Garbugli: I've been testing it quite a bit as well.
My curiosity was triggered when everyone in technology started talking about it. I've been using it in a few different ways. I began by trying to get some content out of it. I found it can be good for quick emails.
I've been building it up from there, asking simple questions, and trying to get answers. But then, one evening, I started using it for comparing and ranking options. I'm looking for a place to live in Italy. So I was comparing different parts of the country and trying to do research. One example: "Can you tell me, out of these three cities, how would you rank them in terms of cost of living? How would you rank these in terms of this?"
The results were pretty good, so I started thinking, maybe this is the future. So I started trying to fit it more and more into my workflows. I gave it a full shot for the newsletter I did last week.
I planned to rely on everything from Chat GPT: I had it read some content and then asked it to summarize the content. I asked it to come up with a subject line for the email. I asked it to develop a summary for a book I was reviewing. I used it for all of these different pieces of content.
And it took the exact same amount of time. I was trying to save time using Chat GPT. I thought it might save me an hour writing the newsletter, but it did not work out at all because, as you said, it was a very competent liar.
I would give it an article and ask, "Can you summarize this in two sentences?" Since I had read the content before, I could tell when it was telling me something that was not true, so I knew it was not giving me the correct information. It seemed to use other questions I had asked before to infer what this article probably was and very confidently told me a lie. So I would ask it to revise it.
It would give me bad results again. And so I went back and forth and back and forth. It would give me one good sentence, and the other would be wrong. I would ask it to change that sentence, but it wouldn't work. So I tested it for tasks like "Can you give me a subject line?"
It would give me things that did not work. At the end of the day, it just shifted the type of work I did without reducing the total amount. I went from creating the first draft, which I don’t find frustrating but is definitely a certain amount of work, to doing much more editing. Usually, editing my newsletter is straightforward, but Chat GPT added a lot of complexity to the editing process.
I went from taking six hours to create my newsletter by hand to six hours to create my newsletter using Chat GPT.
I see it as potentially very useful and think there is a great future for AI being able to generate content. But Chat GPT does not feel faster than tools we already have, like Jasper or Copy.ai.
So I'm left wondering after my experiment: are we looking at a solution that addresses specific pains or particular problems, or are we looking for problems to solve with Chat GPT?
Sean K. Murphy: Chat GPT has a lot of capabilities, but I want to contrast it to something like Grammarly, which I use quite a bit. Grammarly helps me because it works with me the same way that human copy editors did when I worked with them in marketing at Cisco or writing one of my books. Like them, it has more attention to detail than I do, and its recommendations are provided in context and under my control. It shows me a word, phrase, or sentence, says here is what I think is wrong, and suggests a change. I can say OK, or I am not taking your suggestion.
I look at that as a kind of intelligence amplification as opposed to artificial intelligence.
What worries me about Chat GPT is how opaque its decisions are and how hard they are to control.
Étienne Garbugli: Yeah, definitely.
Sean K. Murphy: If I contrast Chat GPT to Google Search, Google provides a list of search results, and I go to the underlying website and form my own opinion of that author.
And if I decide that I don't trust that author or that author has particular weaknesses, I can adjust my assessment of the source and the information quality. Also, if I find that author has valuable insights--let's say I end up on the Lean B2B site--I tell myself, "I have to read more stuff by this Garbugli character." And I read more on the site and follow links from the site without having to go back to Google.
But that's not how these LLM models work: they abstract the references from the source material, which is a real problem.
Étienne Garbugli: Definitely a problem. I got curious and asked Chat GPT, “What’s the Lean B2B methodology?” And it gave an answer I wish I had come up with on my own. It was excellent.
But then I asked, “What are the sources for this? Where did you get the information? I want to trace it back to where I described it so eloquently.” And it was not able to tell me where it got the information. So it makes you wonder whether this is based on material the algorithm should even have had access to.
There are ethical questions about the source of the information. Not everyone wants their information to be publicly indexable by a private tool. So there are all these things that I wonder about. And I was reading a report by an AI scientist last week, and he was talking about how it gives roughly 60% correct answers, which is more than 50%, so already better than a coin flip.
But at the end of the day, this bot always said things confidently, even when it was wrong. It will repeat or change its answer depending on how and when you pass the query.
There are many risks in these new AI tools, of which Chat GPT is just one example, but perhaps the most visible for the moment. Open AI certainly had an effective go-to-market strategy.
But let's take a step back and look at the AI situation more broadly. How do you see this playing out? How do you think this evolves?
Sean K. Murphy: I think there are a couple of things going on. If we look at search engines as an example of something I thought was a huge breakthrough, whether they were behind the firewall as intranet search or Internet search like AltaVista and then Google, they allowed you to find a lot of useful information.
I remember putting On Location on my hard drive and being amazed at how it could find the documents I needed in seconds. This was a task that had earlier taken me five or ten minutes or even longer. Sometimes it would take so long that I would give up and stop looking.
The test for these technologies is whether they help you get your job done more quickly, more efficiently, or with fewer mistakes. The privacy and intellectual property issues you alluded to argue for smaller private data stores that are searchable with controlled access. The accuracy issues will require you to figure out how to incorporate expert feedback and curation.
If you look at this as intelligence amplification, not artificial intelligence, then you follow Grammarly's example. You put the user in charge. You allow individual rules to be turned off or fine-tuned. But the next level up, and what Grammarly is really enabling, is more effective collaboration among knowledge workers. It's amplifying what they can accomplish together.
It's fundamentally a mistake to talk about AI. Instead, we should talk about semantic technologies and tools for taxonomy and ontology definition and management.
The functions in Chat GPT will be unbundled into separate tools to make them more comprehensible and controllable. I say this because Chat GPT acts like a glib mechanic with a big toolbox who lies to you, risks your credibility, and exhausts your patience before you get what you need.
You want a screwdriver, a wrench, and a hammer that can act as extensions of your hand.
The glib mechanic seems like less work for you until you realize how dangerous your lack of visibility and control is. These are severe problems, not in the sense that AI will take over the world, but in the sense it will waste a lot of people's time.
Étienne Garbugli: My question is, why are we talking about AI now? What's new? I went to SaaStr in San Francisco in 2015, and they talked about how every product would have AI in it. That was when I was working for an AI startup that had its own algorithms.
There are a lot of tools that we use, like Grammarly, that embed AI functionality. I am from Montreal; there has been a lot of funding for AI research studios there.
Why are we talking about Chat GPT today when we were not talking about AI as much just six months ago? Is it the content generation, the ability to create images, or the fact that OpenAI found an excellent way to attract attention? Has the chat interface made it more natural and more concrete for people?
Sean K. Murphy: I think they've seized on what the future of search should look like, which is that you would like to ask questions. We've become a little oblivious or inured to the idea of typing in keywords to find documents that match.
What you really want is a user interface that will answer your questions, which is how people naturally think about their needs. If you talk to a reference librarian you don’t say, “here are the keywords I am looking for, do you know any books that contain them?” You ask them questions, which is what Chat GPT has accomplished. They’ve delivered a working demo or even a working prototype of the interface that many people prefer to use.
Étienne Garbugli: So, is Chat GPT more of a great UI?
Sean K. Murphy: I think that's why people are talking about it. Many people who have no desire to learn a new AI tool can chat and ask questions. They have lowered the barrier to trialability at least.
Étienne Garbugli: Do you think Chat GPT is intended to be a B2C product? Or is this wide access just a step on the journey to a B2B play? I know they got a lot of funding from Microsoft, and now they're going to be integrated into a lot of Microsoft tools, starting with Bing.
Was that always the direction? Are they basically using an army of people doing queries to get the use cases to come out? Are they using it as a way to get attention so that B2B buyers eventually gravitate towards their solution? Or are these all things that are just going to happen but were not necessarily intended?
Sean K. Murphy: You know, I have not been following them that closely. They started out as an open source project and they've now kind of reoriented. They have taken more than 10 billion dollars from Microsoft.
I saw that Seth Godin has put a chatbot on his website, and I thought our visitors also wanted to ask questions about entrepreneurship based on the content on our website. We discovered Libraria based on an announcement on Hacker News. The idea of a virtual representative to help visitors navigate a website has been around for a while; I can remember talking to a firm in '98 about the concept, but at that point they required extensive programming.
Étienne Garbugli: Well, so let me re-ask the question then. Is it a B2B tool that is looking for the right applications? It seems like everyone's talking about it and considering ways to leverage it or related AI tools.
We have all this data that we'd like to use. Can we just feed that to a Chat GPT-like product? It feels like they have found a really clever way to shortcut the decision-making process and find the early adopters within organizations. If I am looking at this from a purely strategic perspective, then I can just flip it around: look at which organizations have actually signed up for Chat GPT, what they are using it for, and whether I can repackage that to sell them products afterwards.
It's a little bit like what Amazon or Google do when they build extension products or add-on products off their data sets.
Sean K. Murphy: We went through this with Watson, where IBM made confident and optimistic promises that didn't bear fruit. I suspect the average Microsoft Enterprise salesperson remembers this when they get inquiries from a major account about "What are you guys doing with Chat GPT?" I don't think they want to make a lot of promises about Chat GPT to a major account until there are well-established use cases that reliably deliver results.
The examples they cite sound like the Jeopardy contest that Watson won, due as much to the ability to time the buzzer lockout very accurately as to general knowledge. I don’t know if anybody has unlocked a breakthrough. Take your example: you converted six hours of writing effort into six hours of arguing with Chat GPT. I guess one man’s argument is another’s “prompt engineering.”
Mixed in with apologies, excuses, and “I’m sorry, Dave, I can’t open the pod bay doors.”
Étienne Garbugli: There are two discussions here: large language models are enabling AI, and then there is the user interface. I do see the Chat GPT interface as the best language-based UI that I've ever used. If you compare Chat GPT to all of the bots we are forced to deal with because banks bought bot solutions that were not good, it comes out far ahead.
Also, I see an extreme amount of value in AI, especially where companies curate their own unique data sets with employee domain knowledge. Where they can use their unique information, or information gathered from or related to their customers, they can create a lot of value.
I see a tremendous amount of innovation potential in both of those approaches.
Sean K. Murphy: I agree; I see value in the UI and in a carefully curated private data store. And once you talk about a restricted domain or prepared users, then you are talking about B2B.
In the case where you put a Chat GPT enabled agent on your website, to pick one example, then the thousand visitors to your site are not paying Chat GPT. You're paying Chat GPT. And so, to me, that's a B2B application. It may have a whole bunch of "consumer" users, but the person who is writing the check--paying for each little micro transaction--is you.
So from my perspective, that's a B2B play.
Étienne Garbugli: Yeah, definitely. That was a clever way to bring it back to B2B. If we connect this to our previous episodes, where we discussed when it makes sense for entrepreneurs to start a business, where do you see opportunities to leverage this new UI model or enhanced query capability in restricted domains? UIs and data models will be exploding in terms of variety and quality.
Sean K. Murphy: So you have to have a data store that doesn't get digested by Open AI. Most companies using Chat GPT will not consent to turn over internal corporate information to a third party where it can be commingled and remixed.
GitHub Copilot is not a model for B2B success. Larger firms, and even many freelancers, will not sign up to turn over their source code to help competitors. That's a key aspect of any new startup model. We put a chatbot on our site, but it only has access to the public information on our site, information already in the Google cache, the Bing cache, and the OpenAI cache.
How do you use a Large Language Model without giving away your insights, or at least the data you gather and your team's decisions? A startup faces a similar challenge when competing against Google. One way is to pick niches that are large enough to bootstrap in but too small to get their attention.
On that score, Google has tried to get into enterprise search--tools for corporate intranets behind the firewall--and failed. And that's a large market. Another example is Google Glass, an effort at augmented reality they attacked as if it were a consumer fashion good. Many other players are prospering in that market.
Startups cannot safely use the Open AI tools directly without doing a significant amount of pre-processing, post-processing, or both to mask their insights. They will need to develop specialized taxonomies or ontologies that leverage their team's expertise in a narrow domain where they can provide high value to businesses.
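As one illustration of the pre- and post-processing Sean describes, here is a rough sketch of masking sensitive details before a document is sent to a third-party model and restoring them afterwards. The patterns, placeholder scheme, and client name are all hypothetical; a real deployment would build on a curated, domain-specific taxonomy rather than a handful of regular expressions.

```python
# Sketch of masking sensitive tokens before sending text to a third-party LLM
# and restoring them locally afterwards. The patterns and the client name are
# hypothetical; a real deployment would use a curated, domain-specific list.
import re

SENSITIVE_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d\s().-]{7,}\d",
    "CUSTOMER": r"\bAcme(?:Corp)?\b",  # hypothetical client name you never want to leak
}


def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive spans with opaque placeholders; keep the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, match in enumerate(re.findall(pattern, text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping


def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response, on your side."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text


masked, mapping = mask("Contact jane@acme.com about the AcmeCorp renewal.")
# `masked` is what leaves the building; `mapping` never does.
```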
Étienne Garbugli: But you hinted at this earlier: markets evolve in alternating phases of bundling and unbundling. I could see Chat GPT as a horizontal platform that is good at everything but not expert at anything. I can see a future where they become the default UI or offer a complementary data set that allows other firms to make sense of private data stored elsewhere.
I think privacy and corporate IP concerns, particularly company confidential information and trade secrets, may come to the forefront and drive the unbundling and creation of smaller or more focused apps.
I think startups also have to be worried about the speed of innovation. Open AI has been launching major new models every six months, which presents a different set of challenges to startups.
Sean K. Murphy: There is a software vendor riddle I first heard in the 1980s that had already been around for a while. How was God able to make the world in six days? He didn’t have an install base.
You can always move fast in the beginning. It's not clear to me how fast they are today or where they will be in a year. But I agree with your assessment. There's going to be a lot of opportunities here.
I believe it's going to look more like B2B than consumer. I think with Chat GPT we're at the point in 2011 where Watson won Jeopardy. What happened next was that a lot of firms signed up, including several large hospitals who wanted to use Watson to improve medical care. IBM sold Watson Health to a private equity firm last January.
Étienne Garbugli: It is not the first attempt and it probably won’t be the last. Although if they gain control of the AI operating system we all use that could be problematic.
Sean K. Murphy: The thing that scares me most about it is its efforts to be convincing. When you were arguing with it, so to speak, for several hours, you would've preferred it to say, “You know, I'm not really sure about this.” If you'd hired somebody who had been so confident and wrong so many times, at the end of that six hours, you would have said, “I agreed to pay you for six hours; here's your money. Thanks. I'm gonna call you back when I need you again, which will be never.”
Étienne Garbugli: I don't think it will steal our jobs, at least not in the next five years. And I don't think it will necessarily take over the job that books do. It's too fragmented and inconsistent in terms of knowledge association and in creating value after you make the associations.
I feel safer today than when it first launched, and I played with it and realized it was actually good. Of course, there are still things that need to be reconsidered, but now I see more opportunities, so this may be the normal panic cycle of a massive innovation coming up.
Sean K. Murphy: I'll leave you with one thought. You've authored four books that are well researched and well written. I've only read two of them, but I'm assuming the other two are as good. And you self-published them, so I think it would be of interest to you to be able to add a chat agent on your site that could answer questions based on the content of those books.
Perhaps the answers are limited to three paragraphs so I can’t use it to download your entire book. But it seems to me you have a body of work that you would like to make available to people visiting your website who have questions about B2B entrepreneurship. That would be a force multiplier on what you've already done.
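For listeners curious what such a book-backed agent might look like, here is a sketch of retrieval followed by constrained generation. The `openai` package, the embedding and chat model names, and the placeholder passages are assumptions, not anything the speakers specify; the three-paragraph cap from the conversation is enforced in the prompt.

```python
# Sketch of a question-answering agent grounded in a private set of passages,
# e.g. excerpts an author chooses to expose. Library, model names, and the
# placeholder passages are assumptions; the pattern is retrieve first,
# generate second, with the answer length capped in the prompt.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passages = [
    "Excerpt one from the book, supplied by the author ...",
    "Excerpt two from the book, supplied by the author ...",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])


passage_vectors = embed(passages)


def answer(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages, then answer from them only."""
    q = embed([question])[0]
    scores = passage_vectors @ q / (
        np.linalg.norm(passage_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n\n".join(passages[i] for i in np.argsort(scores)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative choice of chat model
        messages=[
            {"role": "system",
             "content": "Answer only from the provided excerpts, in at most three "
                        "paragraphs. If the excerpts do not cover the question, say so."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```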
Étienne Garbugli: Well, let's do it this way then. If you are listening and have the ability to do this, please reach out to me on Twitter or by email.
I think this is probably a good breakout point for the conversation, Sean. Thanks so much for taking part, and I will see you in the next episode.
Sean K. Murphy: Thank you, sir.