Generative AI as a metaphor for consciousness
Tl;dr: What if we think of GenAI as an immature black box consciousness?
Generative AI goes by lots of names these days: ChatGPT, LLM, AI, Chatbot. I’ll be using GenAI from here on out. The field overall contains a lot of interesting and distinct things (machine learning, advanced statistics, large and small language models). What’s captured the imagination of most of us is Large Language Models: the thing you can chat with, which chats back and sounds pretty human.
Sounds Pretty Human is the (folk) criterion for passing the famous Turing Test – the idea being that if you could only communicate with somebody by typing, and they seemed human, then that somebody would be intelligent. GenAI systems passed the Turing Test quite some time ago, causing a lot of scrambling to further define what makes sapiens different from computers.
For the most part I buy that GenAI isn’t sapiens-equivalent, even just in writing. After all, it is basically a predictive system, and relies on previous input to generate output (even if some of it seems novel).
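To make “predictive system” a little more concrete, here’s a cartoon of the idea in Python. Real LLMs are transformers trained on enormous corpora; this toy bigram model (training text and all, invented by me) only illustrates the basic move of picking the next word based on what came before:

```python
from collections import Counter, defaultdict
import random

# Toy illustration of "predict the next word from previous input."
# This is NOT how a real LLM works internally -- it's a bigram cartoon
# of the same basic idea: output whatever the training text makes
# most likely to come next.
training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample proportionally to how often each continuation was seen.
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Scale that trick up by many orders of magnitude and you get something that, as the next paragraph says, is hard not to be impressed by.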
But for whatever it is, it’s pretty stinking amazing. It produces human-level written communication (and voice-overs, and lots of other things), using a medium which is VERY different from us Meat Machines.
I think, intuitively, we pooh-pooh LLMs for many reasons:
- Reliance on prepared input (scanning vast volumes of text)
- Seemingly unending confidence in questionable results
- Its (poor) ability to explain how it got to a result
- Its frequent lack of factuality
- And it sometimes goes crazy
What we want is Star Trek GenAI. “Computer, tell me how many digits of Pi I need to accurately calculate the perimeter of my bicycle tire.” Then the lady computer voice simply states that there’s such and such an answer (which is implicitly correct). That’s not what GenAI is.
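For what it’s worth, that bicycle-tire question has a surprisingly small answer. Here’s a minimal sketch, assuming (my numbers, not the computer’s) a roughly 700 mm road-bike wheel and a 1 mm tolerance:

```python
import math

# Hypothetical numbers: a ~700 mm wheel and a 1 mm tolerance.
DIAMETER_MM = 700.0
TOLERANCE_MM = 1.0

true_circumference = math.pi * DIAMETER_MM

# Try pi truncated to more and more decimal digits until the
# circumference error drops below the tolerance.
for digits in range(0, 16):
    approx_pi = math.floor(math.pi * 10**digits) / 10**digits
    error = abs(true_circumference - approx_pi * DIAMETER_MM)
    if error < TOLERANCE_MM:
        print(f"{digits} decimal digits of pi are enough "
              f"(error is about {error:.4f} mm)")
        break
```

Under those assumptions, three decimal places of Pi will do. The point isn’t the number, though; it’s that the Star Trek computer would just state it, correctly, with no visible reasoning.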
And that’s not what people is either (yes, Are, I know, it’s Are). When we talk to one another, we almost always think that we understood what the person we’re talking to said. Even worse, we almost always think they understood what we said. A simple game of telephone will show you the truth: person-to-person spoken communication is fraught with errors. Spell-check any email chain and you’ll find the same for written communication.
I think our brains take ambiguous input and predict what the person we’re talking with might have said. And we reply back with what we think they might want to hear. Sound familiar?
Undoubtedly there’s more going on in the 3 1/2 lbs of me (my brain) than in ChatGPT 4.0, but it’s hard to deny the model is doing at least some very human things.
Here are some questions I think might be interesting for us:
- How different is sapiens’ spoken or written communication from an LLM’s?
- How does the energy consumed by an LLM compare to the energy used by a sapiens when formulating a response? (My guess is that LLMs are VASTLY less efficient – see the back-of-envelope sketch after this list)
- Why are sapiens so much more efficient?
- What would it mean if LLMs had the same level of efficiency as us?
- What can we learn about our own biology, using the Artificial Intelligence of LLMs as a metaphor?
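On that energy question, here’s a rough back-of-envelope sketch. The brain’s ~20 W draw is a commonly cited figure; the per-response LLM numbers below are my own assumptions (published estimates vary by an order of magnitude or more), so treat the ratio as a guess, not a measurement:

```python
# Back-of-envelope: energy to formulate one response.
# Assumption: the human brain runs at roughly 20 W, and a person
# spends about 5 seconds composing a short reply.
BRAIN_POWER_W = 20.0
HUMAN_THINK_TIME_S = 5.0
human_joules = BRAIN_POWER_W * HUMAN_THINK_TIME_S  # ~100 J

# Assumption: one LLM chat response costs somewhere around
# 0.1 to 1.0 Wh of datacenter energy (a rough, much-debated range).
LLM_WH_LOW, LLM_WH_HIGH = 0.1, 1.0
llm_joules_low = LLM_WH_LOW * 3600    # Wh -> J
llm_joules_high = LLM_WH_HIGH * 3600

print(f"Human: ~{human_joules:.0f} J per reply")
print(f"LLM:   ~{llm_joules_low:.0f}-{llm_joules_high:.0f} J per reply")
print(f"Ratio: roughly {llm_joules_low / human_joules:.0f}x to "
      f"{llm_joules_high / human_joules:.0f}x more energy for the LLM")
```

Even with these crude numbers the LLM comes out well behind the brain, which is roughly what I guessed above; a real answer would need actual per-query measurements.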
Enough for today – gotta go finish dinner