I bust BS through a different lens. I’m big on the scientific concept of falsifiability: actively trying to disprove everything, and only tentatively accepting for further investigation those things that cannot be disproven. A structured analytic technique called Analysis of Competing Hypotheses (ACH) is great for this. In practical terms, whenever someone asks me a question such as “find out where the weapons of mass destruction are in Iraq” or “find me where the opportunity is in the FinTech market,” we need to unpack it and see what evidence actually supports weapons of mass destruction being in Iraq, or an opportunity existing in the FinTech market — and, in the latter case, whether that market is more desirable than the alternatives, in terms of both market potential and the challenge of becoming a leading player in it.
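To make the ACH idea concrete, here is a minimal sketch of how the technique works mechanically. The hypotheses, evidence items, and scores are entirely hypothetical illustrations (not from any real assessment); the key point is that ACH ranks hypotheses by how much evidence *contradicts* them, and the least-contradicted hypothesis survives for further investigation.

```python
# Minimal Analysis of Competing Hypotheses (ACH) sketch.
# All hypotheses, evidence, and scores below are hypothetical examples.
# Each evidence item is scored against each hypothesis:
#   +1 consistent, 0 neutral, -1 inconsistent.
# ACH ranks hypotheses by their inconsistency count: the hypothesis
# with the FEWEST inconsistencies is the one we tentatively keep.

hypotheses = [
    "Strong opportunity in the FinTech market",
    "FinTech market is saturated; better opportunity elsewhere",
]

# evidence item -> scores against each hypothesis (same order as above)
evidence = {
    "Market growing 20% year over year": [+1, -1],
    "Five incumbents hold 80% of share": [-1, +1],
    "Regulatory barriers to entry rising": [-1, +1],
}

def rank_by_inconsistency(hypotheses, evidence):
    """Return (inconsistency_count, hypothesis) pairs, least-contradicted first."""
    ranked = []
    for i, h in enumerate(hypotheses):
        inconsistent = sum(1 for scores in evidence.values() if scores[i] == -1)
        ranked.append((inconsistent, h))
    return sorted(ranked)

for count, h in rank_by_inconsistency(hypotheses, evidence):
    print(f"{count} inconsistencies: {h}")
```

Note the disconfirmation-first logic: we are not asking which hypothesis has the most supporting evidence, but which one the evidence fails to knock down — the falsifiability mindset in miniature.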
What upsets me the most about my industry are players peddling poison by using AI as a buzzword to sell something that either (a) fundamentally doesn’t work, or (b) has not been proven to produce better results than baseline human intuition.
In many circles, AI has devolved into rebranded statistics or is used in ways the technology simply does not support. ChatGPT is the most well-known example in popular discourse, and in my strategic planning projects with kindreds, I’ve had to flag many cases of ChatGPT being used in ways that detract from, rather than add to, a strategic assessment — such as using ChatGPT as a calculator, or asking it to rank in priority order the factors that influence buyer behavior.
It is not just small and midmarket companies struggling with this. A law firm got caught using ChatGPT to generate citations that did not exist. Zillow applied a model that bought houses at unwarranted prices and lost almost a billion dollars. Walmart had issues where AI-driven hyper-personalized recommendations surfaced offensive and hateful products. AI-curated newsfeeds deployed to replace journalists have been caught generating fake news articles and offensive polls — in one case suggesting that a murder might actually have been a suicide.
Honestly, I think the rush to slap “AI” onto a product or service is often driven by a desire to cut corners, or by a get-rich-quick scheme.
I will conclude this post by reaffirming that I am by no means anti-technology or anti-AI. On the contrary, this week I am submitting a proposal to speak at a ’24 conference on how I’m incorporating predictive AI into models designed to forecast how a competitive landscape could evolve over time, in support of strategic planning decisions. My frustration is with industry players peddling harmful applications that can destabilize entire companies — and entire industries.