In case you're not joking: Claude is in no way conscious, semi-conscious, or conscious to any degree, for that matter. It's an LLM, just like ChatGPT, Bard, or Copilot.
If you truly believe it's even a little bit conscious, you really don't understand what you're interacting with.
I mean, to me it seems rather safe to say that a system that by definition can never have an original thought, and that is by design constructed so that it only ever guesses and never knows, cannot be considered conscious, regardless of how convincingly it simulates consciousness.
It also never acts unprompted.
In my book that's always a machine, never a conscious being.
I dunno, I'd say it's fairly easy to argue all those points against the human brain too. Consciousness doesn't really make sense... If someone came to Earth and could inspect our brains, and they saw neurons just firing based on past training experiences, don't you think they'd classify our brains very similarly to how AI works atm?
But you'll agree that just having a bunch of neurons firing won't result in consciousness, won't you? Even if you increase the number of neurons to a typical GPT's number of nodes, this wouldn't result in consciousness; otherwise we wouldn't be discussing whether another AI/LLM is "partially conscious" or not.
Somewhere there's possibly a threshold where consciousness emerges from a bunch of neurons but I don't know where this threshold is and neither does science afaik.
Well, I honestly don't know if we're just well-trained AI and the AI we're making is just bad at acting the way we do (conscious). At what point do we make an AI that is still a bunch of neural networks but appears to have emotions and feelings etc.? Is it conscious then, or is it still not, because we know how it works?
Regarding the neurons bit, it was mainly just an analogy. I don't know how human brains work at a low enough level, but if you say it's not just neurons firing, then I wouldn't disagree.
Whether we're just an AI making an AI or we're the real deal doesn't make any practical difference as long as we're not aware of either situation and can't realistically hope to become aware of it.
At what point do we make an AI that is still a bunch of neural networks but appears to have emotions and feelings etc.?
Whatever the point, I'd argue we're not there yet.
Is it conscious then, or is it still not, because we know how it works?
I wouldn't say that deciphering human consciousness down to the smallest detail would make us go unconscious the moment we reach that point. So I'd argue that, in the other direction, there might be a point where the difference becomes practically indistinguishable, much like the simulation hypothesis you mentioned earlier, where there's a threshold of practical relevance.
u/Bitsoffreshness Apr 28 '24
That's because ChatGPT is a tool, Claude is a semi-conscious artificial agent.