If you like this post or my Substack, please share!
I wanted to start this piece with a quote about how AI is expected to render many jobs obsolete. I asked Google’s Gemini for one, and it answered:
Here's a quote that captures the concern about AI making jobs obsolete:
"The robots are not just taking our jobs, they are taking over industries." - From the book ‘The Rise of the Robots’
I asked Gemini who the author was and when the book was published, and it said:
I apologize for the mistake in the previous quote. The quote I provided isn't actually in the book "Rise of the Robots."
Gemini is therefore like a Russian or a Kazakh stockbroker, but I’ll come back to that, as it is part of what I originally intended to write about.
Knowing little about AI, I’m nevertheless going to discuss it. That’s the kind of guy I am. Though my understanding is shallow, at least I won’t make anything up, as AI does. Because I’m ignorant, I had to start by Googling the definition of AI, which yielded a succinct one on IBM’s website: “Artificial intelligence … is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.” Sounds awesome.
AI achieves this by reviewing massive amounts of data and integrating it, either with human assistance (machine learning) or on its own (deep learning). This seems to work well when the problem is one that a human would try to solve by poring over reams of documents, e.g., medical diagnoses. While AI is not a doctor, it can quickly generate ideas that point diagnosticians in directions they might not otherwise have considered – useful when speed is of the essence.
Another perfectly apt task that I gave AI was to identify the subject, Margaret Rose, in a painting I own by the artist Alice Neel. A plain Google search had been of no use, but within seconds Gemini found Rose’s name in a biography of Neel, which I otherwise would’ve had to track down and read.
Given that AI learns by combing through available data, it starts to get in trouble when the data is conflicting, or limited, or non-existent. I have spent 30 years investing in Eastern European companies, and although my fund publishes a monthly letter that ChatGPT could read, the letter only distills what we have experienced the prior month into digestible summaries for our limited partners. What is omitted is the sausage-making – the difficult analysis that would be too long and complicated to describe in one page but is the actual work. AI will not find it written anywhere, because it resides in our heads.
Pattern Recognition is Much of the Job
Experts in many narrow specialties rely on pattern recognition to evaluate scenarios and solve problems. Chess masters assess board positions quickly not by looking many moves ahead, as you might think, or as a computer might do, but by rifling through the thousands of games they have played or watched, which are stored in their brains. David Epstein in his book Range1 tells a story about chess prodigy Susan Polgar, who after a brief glance could recreate the mid-game position of 28 pieces on a board. When Polgar was shown a board where the pieces were set up in a position that could not occur in an actual game, she couldn’t memorize it. There was no referent in her head for the second board, whereas she had instantaneously broken the first one into parts that followed familiar patterns – what psychologists call “chunking”.
Similarly, research by psychologist Gary Klein showed that firefighting commanders made 80% of their decisions instinctively and in seconds based on repeating patterns seen in flames and buildings about to fall. As with the chess players, the more unfamiliar the structure, the harder it was for them to assess quickly. Long experience becomes a less powerful tool in an unusual situation.
When the fund I co-manage, Firebird, considers a stock, we look at the financial statements, the management, and other specific company factors, but our initial gut reaction and the final decision often rest on comparing it to a prior investment. The story of Eastern European banks, retailers, utilities, etc., varies from country to country but follows a limited set of patterns. Does the bank have a dominant or growing market share? Does it know how to lend to small business? Is the management technologically sophisticated and properly incentivized? There’s more, but it’s not a long list.
We could program an AI-assisted computer to search for candidates, as quant traders do, but still the onus will be on us to input the myriad factors that have made a difference between gain and loss. Will we remember to tell AI not to trust a CEO who wears two-toned shoes, or to count the number of stray dogs at a factory or the Porsches in the parking lot of a supposedly unprofitable company? Successful quant funds have needed a human, like the late Jim Simons, or a small group of humans at the top inputting the salient factors and helping the computer to separate signal from noise.2
Still, I decided to give AI a chance, asking the question: “Give the name of a bank in a country on the accession path to EU membership that is listed on a stock exchange and has high market share.” Gemini linked to some EU resources, then basically told me to go away and figure it out for myself. Microsoft’s Copilot had an answer: PrivatBank in Ukraine. Oops – there’s a war on, and besides, Privat was nationalized after being looted by its previous owner. ChatGPT did best, recommending a Croatian bank that looks pretty good, except 1) Croatia is already in the EU, 2) ChatGPT confused the bank’s price-book ratio with its price-earnings ratio, and 3) it failed to note that recent earnings were enhanced by one-off gains. Overall, not a great showing from AI, though admittedly my prompting needs work.
When Patterns Don’t Work
Pattern recognition does best with the bottom-up part of investing, especially when a manager is expert in a narrow field like healthcare or energy or Eastern European stocks. It starts to fail when applied to macro situations, such as whether the U.S. Congress will cap drug prices or what the consequences will be of an oil spill. Putin’s 2022 invasion of Ukraine, which came contrary to the predictions of about 90% of Russia experts, is a prime example.3 A fund manager can try to analogize from past events but each time a political chessboard is set up, the pieces are in novel positions.
Even very experienced people can perceive false patterns, sometimes in the quasi-delusional hope that a wonderful new opportunity is going to succeed. For example, Firebird has observed a pattern that newer EU members and countries on the accession path tend to return to reforms, even when their leaders periodically take them off track. This is because the people’s desire to attain higher EU living standards, as well as EU subsidies and job mobility, makes them vote out governments that jeopardize these goals – as happened in Romania and recently in Poland. But it doesn’t always work out that way. In Bulgaria, our purchase of shares in a bank, based on a pattern of prior good bank investments, ended in failure.
A life spent seeking patterns can easily slide over into conspiracy-mongering. The smarter an investment team is, the more they can connect bits of information into a theory that conveniently tends to support their pre-determined course of action. Firebird has made big mistakes by “identifying” vast conspiracies when Occam’s Razor was actually in effect: i.e., the best answer was the simplest one. That said, I believe in most conspiracies, and I have some great ones to share if anyone has a couple of hours free, starting with the Laurel Canyon conspiracy (look it up).
Mistakes in the market provide feedback quickly – often quicker than I would like. The late Daniel Kahneman demonstrated that pattern recognition works better in an environment where feedback is quick, and cause and effect is clear (a so-called “kind environment”). Investing failures can be more informative than successes; not only do they generate a checklist of mistakes to avoid, but they also reveal your personal investing weaknesses. For example, while I have an accurate “bullshit detector” for Eastern European brokers and promoters, I can be fooled by British and Canadians, around whom my guard has tended to be down since they’re “the same as me.”4 I’ve also learned that Russians and Kazakhs want to seem knowledgeable, are ashamed when they don’t know the answer to a question, and will reply even if they make something up. It is in this way that Gemini and ChatGPT are like them.
What Jobs are Safe From AI?
Humans in narrow fields that enable them to identify patterns, most of which occur in real life and are never written down, should retain a significant advantage over AI. At the other end of the spectrum, jobs that continually present new scenarios (“wicked” environments) and require emotional intelligence and nuanced judgment can’t be done by a computer. A profession that has elements of both should be the safest of all. Diplomat comes to mind, as does Freudian psychoanalyst/dream interpreter.5 But the #1 job that I can’t picture AI doing is talent agent at CAA. At least not until movie stars are (literal) robots.
I wanted to end with a quotation I saw the other day from the physicist Niels Bohr. It had scrolled by on Bloomberg, and I didn’t write it down, so I thought it would be fitting to retrieve it by asking AI. I am positive that Gemini did not make this one up:
“An expert is a person who has made all the mistakes that can be made in a very narrow field.”
Range (Riverhead Books, 2019)
I have noticed that in some Wordle games up to 10% of the humans “see” the word on the second or third move, while the Bot is still diligently eliminating letters. When you beat the Bot, it gets angry and gives you a low skill score, saying you just got lucky. Data analysis of humans vs. Bot in Wordle is needed.
As of mid-February 2022, Firebird had rated the risk at 25-30%.
British stockbrokers are much less honest than Russians, in my experience. Russians, operating under weak rule of law, have been dependent on the relationships they build with clients, especially ones with long-term commitments to the market. They almost never took advantage. The British philosophy is said to be that if you can make money by outwitting even your best client, it’s nothing to be ashamed about.
Yes, there still are people hanging in there with Freud, including a fellow congregant at Brotherhood Synagogue. One time after Saturday services she interpreted a recurring dream where I’m late for a flight and nervously awaiting my business partner in a hotel lobby. She explained that it was a general anxiety dream about my son, which I thought was not bad work for five minutes over a challah.
AI seems vaguely behavioristic, which brings to mind Chomsky’s “The Case Against B.F. Skinner” in the NY Review of Books, c. 1970. Skinner claims that the proper response to physical threats is conditioned after several instances of an injury following a threat, so that a person exposed to the threat in the next instance will understand what follows the threat and behave properly to avoid the outcome. Chomsky points out that if Skinner is correct, then “it would appear to follow...that a speaker will not respond properly to the ‘mand’ [Skinner’s technical term] ‘Your money or your life’ unless he has a past history of being killed.”
Skinner spent the last 20 years of his life torturing dogs, so I guess they weren’t sufficiently warned by his verbal threats.