ChatGPT is better than you know and dumber than you hope
We don’t often start our blog posts with a click-baity or controversial headline, but - hey - “new year, new we.”
There has been a lot of hype in the media around the capabilities of GPT and its likely impact on a number of sectors, including the legal sector. As a company focused on creating and refining the most reliable data for lawyers, and having spent some of the past 12 months working with generative models like GPT-3 (see here), we found most of the hype to be somewhat detached from reality. At this stage, we will not comment directly on the probable products and likely impact of this new technology. Instead, we decided to record a conversation about what it means to have an intelligent machine with our friend, Dr Stephen Enright-Ward, the CTO of a regtech / fintech company, Bilby.
Our conversation was meant to last 15 minutes but ended up running for about an hour. We edited it down to this 25-minute video, which covers the following topics:
Capturing the minds of experts in silicon
Is ChatGPT anywhere close to “human intelligence”?
A historical recap of what people have considered “human intelligence”
“Blabbering” as a way to mimic intelligence
How much of intelligence and creativity is a parameter search problem?
Don’t trust GPT-3
Parallels between contract drafting and coding
Our reluctance to admit we don’t know enough