Whenever a new piece of technology is introduced into the marketplace, particularly one that is quite disruptive, it is always difficult for people to gauge its potential impact on the future. When the iPhone was introduced in 2007 it was roundly criticised as a solution in search of a problem; many considered it little more than a toy that would not last beyond Christmas. Fast forward to the current day: some 2.2 billion iPhones have been sold worldwide, and there are apparently some 6.84 billion smartphones in circulation worldwide.
To me, the same appears to be true of the burgeoning AI market. It seems that a lot of commentators, particularly those I would consider old school, are struggling to understand the impact that AI will have in almost every arena of human endeavour. I have to state at the outset of this piece that I am a fan of AI and I use it on an almost daily basis for everything from responding to emails to generating summaries of recent research articles. This is not to say that I am, by any stretch of the imagination, an expert. I am really just an interested consumer, which is why I am interested in the opinions of other consumers in the field.
It is for this reason that I took particular interest in an article that appeared in the ASX newsletter. The article in question is titled "ChatGPT no substitute for research, advice". The article contains a series of misunderstandings, if not outright mistakes, about what the current version of AI is and what it will be able to do in the future, and I want to look at these problems from the perspective of someone who is not only a user of the tool in a professional capacity but who also has an understanding of how the world of financial analysts works. So, a disclaimer at the beginning: everyone is well aware of my opinion of financial analysts, and a summary of this opinion can be found here in this video.
Like Google, ChatGPT is affected by the algorithms set by its programmers and the available information. So, it can provide an answer, but can it provide an accurate evaluation of the issue to which it is responding?
This is not strictly correct, as it gives the impression that ChatGPT and its descendants are closed boxes. Large language models such as the current breed of AI are more appropriately thought of as learning machines: they are capable of recognising their own errors and adjusting their answers accordingly. They are arguably better at self-correction than the majority of humans, and they can do this because of a lack of ego.
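As a minimal illustration of this sort of self-correction, the sketch below asks a model a question and then prompts it to critique and revise its own answer. It assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and the prompts are illustrative only, not a definitive recipe.

```python
# A minimal sketch of prompted self-correction, assuming the openai
# Python package and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    # Send the running conversation to the model and return its reply text.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# First pass: ask the question.
history = [{"role": "user", "content": "Summarise the risks of trend trading."}]
first_answer = ask(history)

# Second pass: feed the answer back and ask the model to critique and
# revise it, which is what self-correction amounts to in practice.
history += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Check your previous answer for errors or "
                                "unsupported claims, then rewrite it."},
]
print(ask(history))
```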
In preparing to write this article, I thought I would ask ChatGPT to write an article for me about how it supports retail investors (this is NOT that article). Interestingly, it produced a 472-word piece, starting with:
I specifically asked ChatGPT-4, which is the current iteration of this engine, to provide me with direct investment advice, and it refused. It stated that, as an AI engine, it was incapable of doing so. This is one of the built-in safeguards that currently exist in this engine. That is not to say that in the future there may not be instances where these safeguards are circumvented, but at present none of the existing engines will give you direct advice on what to buy or sell. They merely provide factual data.
Arguably, in the future, ChatGPT will be able to provide information, insights, and analysis that might help investors to make informed choices.
The problem at the moment is that the information is date limited.
This is incorrect. The latest iteration of ChatGPT-4, which is built into the Microsoft search engine, is fully up to date; its information is in no way time-limited. It is only the earlier iterations that suffer from this limitation.
For example, I asked about the names of the Chair and CEO of an organisation, and not only was I advised that the platform only had information accurate as at September 2021, but it then gave me the wrong names for both positions as of that date.
This raises questions about its current functionality for investment purposes – it may have useful historical information, but you’re not going to be able to receive accurate sharemarket results. Investment advisors need not be worried yet.
Again, this is an error that arises from not being familiar with the subject at hand. If you were familiar with what is happening in AI and its rapid pace of development, you would be aware that you were using a legacy version. For example, I asked the version I use a series of questions relating to the financial reports of several companies, and it was able to access and summarise the latest reports of every company I asked about. I too asked it to give me the names of the CEOs of several companies, and it was able to do so without any problem at all. It was also able to list the resume of each of these CEOs, and as far as my digging went it seemed to be completely correct as to their past experience and educational qualifications.
Many people say not to follow the herd when investing. Yet, there is a real likelihood that ChatGPT – informed by programmers and the internet – will be the source of information for the herd in the future.
This is a criticism of investment methodology more than anything else. If you are not following the herd, then you are continually buying companies that are going nowhere. Trend trading relies upon following the herd; it relies upon individuals acting in concert to move an instrument's price.
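To make that concrete, the toy sketch below implements trend following in its simplest form: a moving-average crossover that only holds an instrument while the crowd is already pushing its price above the longer-term trend. It assumes a pandas Series of closing prices; the window lengths are illustrative only, not a recommendation.

```python
# A toy sketch of a trend-following (herd-following) rule, assuming a
# pandas Series of daily closing prices; the window lengths are
# illustrative, not a recommendation.
import pandas as pd

def trend_signal(prices: pd.Series, fast: int = 20, slow: int = 100) -> pd.Series:
    # Hold the instrument (signal = 1) only while its short-term average
    # sits above its long-term average, i.e. while the crowd is already
    # pushing the price upward; otherwise stand aside (signal = 0).
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)
```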
There is also the potential for gaming of the system. If we reach the point where people make decisions on whether to buy, hold, or sell shares, based on ChatGPT’s recommendations, the temptation may be for a company or individual to flood it with one type of information that skews its answers.
My guess here is that the author has never heard of a pump-and-dump scheme. But there are always bad actors in every environment; there is simply no way around this. Once the Internet was created and people realised that it could be used as a mechanism for the dissemination of information, there was always going to be a need for individuals to apply some degree of scrutiny to the veracity of the information they were receiving.
Even before the introduction of tools such as social media, the finance industry, in the form of stockbrokers, was forever ramping up the prices of instruments. This notion that because new technology has appeared the basic tenets of human nature will somehow change is somewhat naive. Besides, I have never met an analyst who didn't think every company they covered was a raging buy, despite the fact that it was heading down the toilet. They would then do their best to convince their clients that it was the best thing since sliced bread. Remember, there were analysts recommending Babcock and Brown as a raging buy right up to the time it was delisted.
Without human intuition, judgement, and creativity, it is arguable that the platform will not be able to differentiate between information and misinformation, even if it is happy to tell you otherwise.
Humans at present, even without the influence of AI, are incredibly poor at sorting information from misinformation. This occurs as a function of the narrative fallacy that bedevils all humans: we love a story, and in particular we love stories that agree with the stories we already have in our heads. Investors do not seek out information that contradicts their existing belief system; they seek out, and are drawn to, information that confirms their existing beliefs. This is why echo chambers exist. To believe that financial analysts are without bias is profoundly naive and flies in the face of all the available evidence.
As the technology evolves and the database informing the answers grows larger and more contemporary, it is possible that AI will become more informed, accurate, and useful.
Real-time analysis would certainly be useful, and AI’s ability to give an answer quickly may give some people an edge, but there will always be questions around the biases of the programmers who created it and the algorithms they develop.
What is interesting here is that perhaps the world's single largest analytical firm, Bloomberg, has already invested in and launched its own AI engine. Clearly, they understand that the world is changing and that the way information is consumed is changing. I will state quite clearly that if you are a financial analyst, in five years' time you will either have adapted to the role AI performs or you will be out of a job. In fact, at the present speed of evolution, I find it hard to conceive how many data professionals such as analysts, data scientists, and software engineers will still have a job in ten years' time. It is also necessary to state that AIs like the one built into the Microsoft search engine are not constrained by their programming, inasmuch as there are no biases built in. If there is a point to be concerned about with AI, it is the fact that even its creators do not really know how it works.
If we continue to head towards AI investment advisors, however, regulators will need to work hard to keep some control over the system to protect the consumer.
All I can say regarding this point is good luck with that, and let me know how it works out. I take such a cynical view because regulators have never had the welfare of investors at the heart of their legislation or any part of their remit. If they did, they would enact very simple legislation, such as banning super funds from attempting to be active investors and forcing them all to invest in the index.
If you are curious about this new technology, you may want to sign up for a free account and ask ChatGPT a few questions for yourself. Or ask it to write you a poem or limerick.
Alternatively, you could sign up for one of the latest iterations, ask it to generate a course in conversational French, and have it sit there for an evening talking to you in French and correcting your grammar. Or you could ask it to generate resources on how to learn calculus. You could then sort those resources on the basis of which ones were free, which ones were offered locally, and which ones offered some form of progression and perhaps a certificate at the end.
I use AI extensively, and I use it because it performs certain tasks with extraordinary speed and with much greater precision than I can. I have used it to generate complex formulas for Excel, which has saved me countless hours researching how to build them because they were beyond my abilities. For fun, I have had another AI engine generate a presentation for me on the Dutch tulip boom, which it did with remarkable accuracy. I then asked it to convert this to a video with narration, which it did. I then asked it to take this video, transcribe it, and apply timestamps to it, which it did with remarkable ease.
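For anyone who would rather script this sort of task than use the chat interface, the sketch below shows one way of asking a model for an Excel formula programmatically. As above, it assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and the task description are illustrative only.

```python
# A minimal sketch of asking a model to draft an Excel formula, assuming
# the openai Python package and an API key in the OPENAI_API_KEY
# environment variable; the model name and task are illustrative only.
from openai import OpenAI

client = OpenAI()

task = ("Write a single Excel formula that returns the compound annual "
        "growth rate of monthly closing prices held in B2:B61.")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": task}],
)
# The reply is plain text, e.g. a POWER()-based formula you can paste in.
print(response.choices[0].message.content)
```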
The point is that what we are seeing now are the embryonic iterations of a new technology, one that learns for itself and does not seem to be bounded by the simplistic, very linear rules of something like Google. This is a tremendously powerful tool that will not go backward; it will only get better and become more pervasive. The speed of change and improvement is somewhat disturbing.
From my perspective, humans evolved to deal with a world that moves at an analog pace; what we have built with AI is an instrument that moves on a digital timescale, a scale we cannot readily conceive of. Its speed is therefore disturbing, but that is more of a social issue than one that is germane to this piece.
My final words are simple: AI is coming for your job. It may not be tomorrow, and it may not be next year, but it may be in five years. And for those with young children, it will certainly be within their lifetime.
Photo by Andrea De Santis on Unsplash and Hitesh Choudhary on Unsplash