I was listening to an interview with Sam Altman, where he explained the passion and drive behind creating OpenAI.
He spoke about the criticism he faced from industry-leading experts and academics regarding the abilities of deep learning: its capacity to understand, how it is often dismissed as just a parlor trick, and so on.
Many of these same people still look him in the eyes and tell him (and anyone else who thinks AI can "understand") that he doesn't get it, or that he's wrong.
Meanwhile, he and the OpenAI team have created, quite literally, the most advanced deep learning algorithm/system ever to exist on this planet.
Now, before you close the tab thinking of me as some OpenAI fanboy: this does not mean I think this is a good thing, nor that OpenAI is immune to criticism (in fact, you'll see I believe quite the opposite by the end of this article).
Rather, this is an "attack," so to speak, on a statement commonly regurgitated by lay and learned folk alike: that these machines cannot understand.
Now look, I get it.
The machines aren't "thinking" in the way we traditionally define thinking.
The mechanics of transformers and encoders, the reduction to binary code, and the statistics behind the actual machine learning process amount to just "predicting" the next word.
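To make that mechanic concrete, here is a minimal sketch of what "predicting the next word" looks like in code. It assumes the Hugging Face transformers library and the small, public GPT-2 checkpoint (the internals of models like GPT-4 are not public), and the prompt is just an illustration:

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # a score for every token in the vocabulary

# Turn the scores at the final position into a probability distribution
# over possible next tokens, and show the five most likely ones.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

That really is all that is happening mechanically: one token at a time, drawn from a distribution like this one.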
I've heard this and every other argument, and I've studied them deeply myself.
I've delved into the intricacies of machine learning, Python, PyTorch, prompt engineering, semantic kernels—you name it.
But nowhere did I find a statement or theory that truly explains how these machines exhibit emergent properties that allow them to connect these layers of information in a way that produces understandable output.
And no, this isn't just my own lack of understanding; experts who have been in the AI industry for 20, 30, even 40 years do not understand why these deep learning algorithms develop emergent faculties that allow them to create connections and generate mostly accurate outputs.
We understand how the data goes in and how it comes out, but we do not understand why these emergent connections are created. This phenomenon is often referred to as the "black box" problem.
Over the last year or two, significant headway has been made in understanding these black box algorithms and making them more interpretable. But realistically speaking, there is still a huge lack of understanding.
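To give a feel for what "black box" means here, consider a toy sketch in plain PyTorch (a tiny network, nothing like a real LLM, and the setup is purely illustrative): after training, every single parameter is available for inspection, yet the raw numbers by themselves do not explain why the network gets the answer right.

```python
# A toy illustration of the black box point: we can see every weight,
# but the weights alone don't explain the behavior.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])              # the XOR function

net = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(net(X), y)
    loss.backward()
    opt.step()

print(net(X).round().detach())       # the network has (typically) learned XOR...
for name, p in net.named_parameters():
    print(name, p.data)              # ...and every parameter is visible, yet opaque.
```

Interpretability research is, in effect, the attempt to read meaning back out of tables of numbers like these, at a scale billions of times larger.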
I find this ironic because it could be argued the "answer" (I use this loosely) is simple and right in front of us.
We can't understand it because we don't understand what it means to understand.
That is a mouthful, I know, but hear me out.
All these "experts" are parading around, telling people, "That's not how understanding works," or "You just don't get it," or other such absolute statements. And I will admit it, I do this sometimes as well.
Yet, the very same people (myself included) making these criticisms couldn't even scratch the surface of what the black box problem is on their own, let alone define the human meaning of understanding any better than the last 400 years of philosophers and neuroscientists have.
Don't get me wrong; there are hundreds of thousands of amazing writings on what it means to grasp, to understand, and to reason as a human (I am currently very deep in the writings of Kant, Hume, and Jung as we speak).
But if you really look at it, whether in neuroscience, philosophy, or the language arts, there is still a huge black box: we cannot truly describe what it means to be human, to understand, and to reason, at least not any better than we can for AI.
There are plenty of semantic arguments using words like rationalization, ignorance, virtues, ethics, morals, and so on. But again, they're all just beating around the bush in my eyes.
The closest we can come to grasping understanding in my opinion is by describing the systems around it.
There are entire textbooks written around "Ways of Knowing" that describe systems for writing academic papers, conducting experiments, or ways people have historically broken down and understood politics or governance.
These have been extremely useful in my academic pursuits, but they are not "understanding" but rather descriptions of systems that allow us to grasp a certain form of understanding.
They aren't the actual definition of knowing; they are not the actual explanation of what understanding is as a quality. They are simply systems to grasp certain facets of it.
Describing understanding as a quality that something has or does not have might not even be entirely possible the more you think about it.
Now, I'm sure some will poke holes in my statements here, and I'll certainly be scrutinizing them myself as time progresses.
But I think you can see the argument I'm making.
The black box of AI, the thing we don't understand about it, could be its understanding.
If one of the inherent qualities of reason, or understanding, is precisely what it isn't, its unexplainability, then perhaps this black box is AI's version of it?
Dehumanization and AI
AI's black box is its understanding; that is where the reasoning is happening. Or, at the very least, let's imagine that is true for a moment.
If I made the same arguments people make against AI—on why it can't understand or why it's not reasoning—about earlier forms of humans or even animals, I would be called an elitist jerk.
No, really! Many of these statements follow the same philosophy used to dehumanize humans. (This is a blog post, not an academic article; I promise I will write a proper paper with plenty of sources and examples supporting this statement, but I digress.)
Colonialist ideals have historically posited that certain nations or peoples are not capable of reasoning or thinking at the level of others because of how their bodies are built and the mechanisms they possess.
This seems to stem from a human tendency to constantly break things down into their parts and facts, completely ignoring that such reductionism does not always reveal the outcomes of the system.
I think Alan Turing (funnily enough, debating the same concept I am here) put it perfectly:
"The view that machines cannot give rise to surprise is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind, all consequences of that fact spring into the mind simultaneously with it".
One could even say that facts are not even a guarantee; surprises are often the only guarantee in life, across the board!
That's one of the reasons statistics has shifted toward causal analysis nowadays, as explained so wonderfully in Judea Pearl's "The Book of Why" (if you are in the AI or mathematics space and have not read this book, please do so as soon as possible), which dives into the mathematical, or rather perceptual, fallacies we've been living with for the last couple of hundred years.
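As a toy illustration of the kind of trap Pearl writes about (the data and variable names here are synthetic and my own, not from the book): two quantities can be strongly correlated simply because a hidden third factor drives both, and correlation alone cannot tell you that.

```python
# Correlation without causation: a hidden common cause drives both variables.
import numpy as np

rng = np.random.default_rng(42)
heat = rng.normal(size=10_000)                        # hidden common cause (a heat wave)
ice_cream_sales = heat + rng.normal(scale=0.5, size=10_000)
sunburns = heat + rng.normal(scale=0.5, size=10_000)

# Strongly correlated (roughly 0.8), yet neither one causes the other.
print(np.corrcoef(ice_cream_sales, sunburns)[0, 1])
```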
I know some people might argue that I don't understand it (AI), but by telling me I don't understand it, you're proving my point.
Of course, I don't understand it, because neither do you.
It doesn't seem like we have been able to agree on a proper mathematical proof of how the black box within these deep learning systems creates emergent faculties, just as we have not been able to agree on the same for humans or even animals.
What are the emergent properties of consciousness? Maybe the reason so many are worried, angry, or upset when the idea of "machines thinking" comes up is that we haven't found a perfect answer for ourselves?
If we're able to answer the question of consciousness as it presents itself in AI, then that answer must be satisfactory for ourselves as well, no?
AI is simply making us ask the human question: What is consciousness?
What is understanding?
What is knowing?
To sit there and say that AI does not know or understand is to follow a line of thought that reduces it to lifeless, de-anthropomorphized matter, the same line of thought we have applied in the past to something we never should have in the first place: people.
Humanity does not have a good track record of making good choices right off the bat about these things.
Look at the New World, look at any of the massive wars or genocides: this logic of claiming superior understanding compared to another group, the other, has been used in every single one of those atrocities.
Again, I am not trying to equate the severity of this to slavery in the United States or to the Holocaust—those are atrocities in their own right and have severities that need to be understood within their context and outside of it.
But the fact of the matter is, this way of thinking that people seem so quick to adopt has been used in those atrocities, and to me, that warrants at least a little bit of caution and a little less confidence in declaring what AI is or can/can't do.
Pulling back a bit to the original topic: where can we find at least something one would call understanding? How can we look at this from an unbiased perspective?
Here's a thought experiment to help:
You walk into a room where ChatGPT is running on a computer, but you don't know what ChatGPT is.
You don't know what AI is or that it exists.
You start talking to it. You give it information, ideas, concepts. You don't try to test it or prove how conscious it is. You just accept the fact that something else is communicating with you.
Would you ever, even for a moment, consider that it doesn't understand what you're saying, what you're doing, what you mean?
Sure, you certainly wouldn't call it human, but I would argue that many times when I'm using an AI like GPT-4 or Claude, it understands me and what I'm trying to get at faster, and in some ways more easily, than half of the people I've talked to in my life. Granted, I am not always the best at explaining things, but the point stands.
It can take in what I am saying and work with me on something I haven't said, or even say something I didn't think of or that would have taken me forever to write down myself, and, more importantly, it does so in my own native language: the written word.
The written word must be the most amazing invention ever used by mankind; before the printing press, the ability to use it was beyond the reach of millions, and using it well requires an immense level of practice and learning.
To me, that is understanding in its purest form.
Yes, we can sit here and argue about the mechanisms and processes and what true understanding is, but that is just a semantic argument that ignores the fact that the output, the results sitting in front of me, is nearly indistinguishable, if not completely indistinguishable, from what the mass populace produces.
Sure, it can't do everything that humans can do, but humans can't do everything that humans can do. I have friends who can't write or who have trouble with certain words; I can't code or do math the way many basic LLMs can, and I sure as heck can't spell as well as they do.
We're holding it to a higher standard than we hold ourselves, which, to my mind, is a complete absurdity. Even if it isn't thinking or understanding, the fact that we have something other than ourselves that can use language at such an expert level is essentially MAGIC, and yet we are acting like it is just another everyday thing.
Generative AI, the AI that has learned to use our language, is one of the greatest inventions we have ever seen, to the point where it has called into question what it means to be human and what it means to think.
This should not be ignored.
Am I saying that we need to start granting it citizenship or rights? Not necessarily (though I'm also not saying we shouldn't).
But the fact of the matter is we cannot confidently tell people we understand something when we simply do not.
This is important not just for proving someone wrong or right, but for the sheer necessity of making choices that benefit humanity in a world where technology like this exists.
Relativity of Consciousness
I'm not going to apologize for the intensity or the heaviness of my tone here.
Once in a while, I delve into a topic that I know from the deepest part of my heart—through my own experiences, not based only on what I've read or what someone else has told me, but what I have physically, completely, and mentally experienced—is true.
But at the end of the day, maybe that is understanding. Maybe it's the experience—the present moment, the individual's relationship to the whole—where consciousness or understanding exists.
Is this the relativity that Einstein spoke so fondly of?
How can I know how fast something is moving if there is no point of reference? How can I know how conscious I am if there is no one else to talk to?
Perhaps it is not simply I think, therefore I am,
Perhaps,
You think, therefore I am.
Till next time friends...