Imagine putting someone from hundreds of years ago on a FaceTime call today.
The idea that we could not only talk to, but see, a person on the other side of the world in real time would have been unimaginable.
They didn’t even know there was an other side of the world.
When it comes to artificial intelligence, we have to remind ourselves of this idea every day.
It’s natural for us — i.e. old people — to think we have it all figured out. We’ve lived long enough, read enough, learned enough to know what’s what.
But you don’t need to look too far back in history to see we don’t know what’s possible — until it is.
FaceTime debuted in 2010. The thought of walking around with a computer in our pockets that could video call people any time, anywhere, started the year my 13-year-old was born. Just imagine what could happen by the time she’s 20.
The problem is: we can’t imagine it. We’ve been taught not to.
What capital E education teaches us about learning
In all your years of schooling, how much time was dedicated to imagining, thinking outside the box, questioning — literally — anything?
A quick read into it will tell you everything you already know about the way Education is set up today. The modern education system was designed to teach future factory workers to be "punctual, docile, and sober," as Quartz put it.
And in no way am I saying this is the fault of teachers or even administrators who work so hard today. Changing the entire system, the entire idea of schooling, is not going to happen one person at a time.
(Or maybe it can. See above about not knowing what’s possible until it is.)
What the tech bros can teach us about learning
If one person read eight hours a day for 170,000 years, they could read the amount of data that large language models are trained on today, according to Meta’s chief A.I. scientist Yann LeCun.
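That number is easy to sanity-check with some napkin math. The figures below are my assumptions, not LeCun's: a reading speed of about 250 words per minute, and training corpora on the order of 10^13 tokens, counting roughly one word per token.

```python
# Napkin math on the "170,000 years of reading" claim.
# Assumed figures (mine, not LeCun's): ~250 words per minute,
# 8 hours a day, every day, and ~1 word per training token.
WORDS_PER_MINUTE = 250
HOURS_PER_DAY = 8
YEARS = 170_000

minutes_spent_reading = YEARS * 365 * HOURS_PER_DAY * 60
words_read = minutes_spent_reading * WORDS_PER_MINUTE

print(f"{words_read:.1e} words")  # → 7.4e+12 words
```

That lands in the same ballpark as the roughly 10^13 tokens big models are said to be trained on, so the claim holds up to a back-of-envelope check.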
At school, we learn by reading: text. We learn by listening: text. We are assessed by writing down what we know: text.
And now, without any further advancement whatsoever in artificial intelligence, we are unimaginably behind.
But LeCun continued, explaining that a 4-year-old human has seen 50 times more data than the biggest LLMs trained on all the text publicly available on the internet.
That 4-year-old hasn’t read a single word of text. Or attended a single day of school.
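LeCun's "50 times" figure survives a napkin check too. The assumptions here are mine, not his: roughly 16,000 waking hours by age four, an optic nerve carrying something like 20 MB per second, and an LLM corpus of ~10^13 tokens at about 2 bytes per token.

```python
# Rough comparison: visual data a 4-year-old has taken in vs. an
# LLM's text corpus. All figures are assumptions for illustration.
SECONDS_AWAKE = 16_000 * 3600          # ~16,000 waking hours by age 4
OPTIC_NERVE_BYTES_PER_SEC = 2e7        # ~20 MB/s down the optic nerve
visual_bytes = SECONDS_AWAKE * OPTIC_NERVE_BYTES_PER_SEC

llm_bytes = 1e13 * 2                   # ~10^13 tokens at ~2 bytes each

print(round(visual_bytes / llm_bytes))  # → 58, in the ballpark of "50 times"
```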
What does that mean for formal Education? I don’t know. But I’ve followed enough A.I.-in-education experts to know a ton of brilliant educators are trying to figure it out as we speak — or text.
How that looks in today’s (and tomorrow’s) robots
A month ago, OpenAI (maker of ChatGPT) announced a partnership with robot creator Figure.
Two. Weeks. Later. They released a video. They took the robots that Figure was already making headway with and put in their most up-to-date (publicly available) large language model, GPT-4.
Robot + all that language it would take one human reading eight hours a day for 170,000 years to learn.
Then yesterday, Nvidia, a tech company most people have never heard of but one with a higher market cap than Google or Amazon, announced its entry into the field. Start at 1:46 for the part most relevant to this conversation.
Robot + the real-world learning that that 4-year-old has gained.
The OpenAI and Figure model is impressive, if a little slow for the moment. But knowing what you now know about the way large language models and 4-year-olds internalize data, just imagine how quickly Nvidia’s model will transform what we think we know today.
We are just the current model of the people who didn’t even know there was an other side of the world.