
Kevin O'Marah

January 17, 2023

OpenAI’s massively successful launch of ChatGPT, an AI chatbot that answers questions conversationally based on internet documents, is here and it’s hot. In just a few days it signed up over 1 million users (including me) and, by some estimates, will have a billion users before 2023 ends. We recently published a set of supply chain predictions including one that says “ChatGPT breaks trust”.

Is ChatGPT bad for supply chain? 

No. It’s good. But it’s also bad, and it’s going to get ugly if you aren’t smart about it.

The Value of Trust 

Supply chains depend on trust. We count on suppliers, logistics providers, and our own operations teams to report truthfully what’s happening. And yet, the sizzling hot demand for “end-to-end” visibility proves that trust is still in short supply.  

ChatGPT will worsen the trust deficit, at least for now, by enabling people who know nothing about a topic to sound like experts. Supply chain leaders should learn how to use this new tool/toy if they want to maintain and build trust with customers.

The Good News 

ChatGPT works. I tested it with a simple question: What are the risks of sourcing in China? The answer, rendered instantly, was a respectable, albeit high-level, five-point answer including: 

  • Quality Control Issues
  • Intellectual Property Protection
  • Language and Cultural Barriers
  • Regulatory Compliance
  • Long Lead Times

Each point came with a decent-sounding 20–30-word explanation.

I asked the same question for India, Vietnam, Mexico, and Russia. The answers for each varied appropriately and offered some meaningful distinctions across the five points. As a starting point for an important discussion like diversifying manufacturing away from China, ChatGPT was OK.

The mechanics are simple, and reminiscent of James Surowiecki’s bestselling book, The Wisdom of Crowds. ChatGPT chooses the most likely words to appear in answer to a question, rendered in the conversational format we use in real life. It’s more than search because it infers what words, in what order, make sense as an answer.
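That next-word mechanic can be sketched with a toy bigram table. Everything here, the table, the word probabilities, and the `generate` function, is invented for illustration; a real model like ChatGPT learns billions of statistics over subword tokens rather than looking up a hand-written dictionary:

```python
import random

# Hypothetical bigram model: for each word, possible next words and their
# probabilities. A real LLM learns these statistics from training data.
NEXT_WORD_PROBS = {
    "sourcing": {"in": 0.6, "carries": 0.3, "requires": 0.1},
    "in":       {"china": 0.5, "india": 0.3, "mexico": 0.2},
    "china":    {"involves": 0.7, "requires": 0.3},
    "involves": {"risks": 0.8, "planning": 0.2},
}

def generate(prompt_word: str, max_words: int = 4, seed: int = 0) -> str:
    """Repeatedly sample the most plausible next word from the table."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:          # no known continuation: stop generating
            break
        next_word = rng.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("sourcing"))
```

The point of the sketch is that nothing in the loop checks whether the output is true; it only checks whether each word is statistically likely to follow the previous one, which is exactly why fluent-sounding answers can still be wrong.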

The Bad News

Crowds are sometimes mobs, and as anyone who has watched the lemming-like behavior of a volatile stock market knows, mobs can do some crazy stuff. They overhype on the way up, accelerate crashes on the way down, and often throw the baby out with the bathwater. ChatGPT, which for now is limited to a dataset that runs only through 2021, could easily do for sober-sounding research and analysis what social media has done for lifestyles: magnify backward-looking conventional wisdom to the point of cartoonish disaster.

Tomas Chamorro-Premuzic, a Professor of Business Psychology at University College London, points out the fatal flaw with ChatGPT in terms of how it helps with research and analysis: asking good questions in the first place. As Chamorro-Premuzic says, “when the answers to all questions are openly available and accessible to all, what matters is the ability to ask the right questions.” This reminds me of a conversation I had this week with Kevin McCoy, a visionary manufacturing leader at New Balance in Boston, who constantly asks what needs to happen to make something that seems impossible, possible. ChatGPT cannot figure out, as New Balance is now, how to cost-effectively make shoes in America.

It’s also clear to me, as a former Amazonian, that ChatGPT won’t help supply chain leaders write a good proposal document of the kind that helped launch Prime, last-mile delivery, and other operational innovations at Amazon. It can answer questions, but it can’t imagine operational breakthroughs.

The Ugly

On the surface, ChatGPT is beautiful. Its answers sound totally reasonable, and they arrive almost instantly. How nice to short-cut the Google-driven process of doing research for your job!

But it’s not research. There is no citation trail, no way to click through to other sources, and no way to know whether your question is brilliant or stupid. Rendering incorrect information that looks legit is something computer scientists call “hallucinating”, and that can be very ugly.

Beneath the surface, once the hallucination fades, you’ll look bad if you haven’t gathered your own data, bulletproofed your logic, and asked what it all means for the problem you’re trying to solve.
