I've always thought of intelligence as something that needs time and state storage. Without those two things, how could an intelligence learn? Adapt? Think? I had an interesting conversation today that has changed my perspective.
Imagine that your mind was suddenly transported into an alternate universe, but your context of the past few seconds was replaced by the context of the alternate universe. Would you know? In the case of a human, yes, because we have a memory storage system. But take away that memory storage system and teleport only the "intelligence" part of your mind. If all your mind perceived was the past few seconds, and those were substituted, would you still be intelligent?
Current-day AIs are trained on mountains of data. They are then executed by loading them with context and giving them a single perceptual instant (from the AI's perspective) in which to evaluate that context and produce a result. The ML system is not updated between runs, so all information has to be conveyed through the context. Under this system, it seems at least possible that an intelligence could develop, so long as:
- A context is written to by the AI
- The context is passed through to subsequent re-runs
- The AI model has internally developed a general-purpose compute structure
Consider a modern AI chatbot. It is trained on a huge corpus of text, providing a baseline "intelligence". Instead of calling it an intelligence, let's call it a "Fixed Compute Unit", because once trained it doesn't change. You type a phrase and it gets loaded into the ML system's context. The Fixed Compute Unit then generates a result. The next time you type a reply, the ML system gets your original message, its response, and your new message as context. Given a sufficiently advanced Fixed Compute Unit, those three messages could carry enough information to make it self-aware. How? If our Fixed Compute Unit were a magic box, it could divide itself into two parts. The first part evaluates the first message and determines a hypothetical "state of mind" that the AI ended up in when writing the first response. The second part then operates on this state of mind in the context of the later messages. This can continue as long as the magic box can keep dividing itself into parts and reconstructing its state of mind for prior messages.
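To make that loop concrete, here is a rough sketch in Python. The `fixed_compute_unit` function is a made-up stand-in for the frozen model, not any real API; the point is simply that the transcript is the only thing carrying information from one run to the next.

```python
# Sketch of a stateless chat loop: the model is a pure function of its input,
# and the transcript is the only state that survives between runs.

def fixed_compute_unit(context: str) -> str:
    """Stand-in for the frozen, trained model: a pure function of the context.
    Here it just reports what it was handed, to show that everything it
    'knows' arrives spatially, through the context, not through memory."""
    turns = context.count("User:")
    return f"(canned reply #{turns}, after reading {len(context)} characters of context)"

def chat(user_messages: list[str]) -> None:
    transcript = ""  # the ONLY thing that persists between calls to the model
    for user_message in user_messages:
        transcript += f"User: {user_message}\n"
        # The model is re-run from scratch each turn; it "remembers" the
        # conversation only because the whole transcript is in its input.
        reply = fixed_compute_unit(transcript)
        transcript += f"AI: {reply}\n"
        print(reply)

chat(["Hello", "What did I just say?", "And before that?"])
```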
Is it possible for our Fixed Compute Unit to become this magic box? I think for that to happen, the model's internals would have to become a kind of abstract computer. The context configures this computer into a state, but instead of direct mutating operations on that state (as in our current computer systems), the results of previous operations are stored externally and must be fed back in at configuration time. Effectively, time is replaced by the spatial layout of the input data. So where is the intelligence? It's the emergent behaviour between the Fixed Compute Unit and the state/context it operates on. Individually neither is intelligent, but together they could well be considered capable of sentience.
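As a toy illustration of that "time replaced by space" idea, here's a pure function that behaves like a stateful counter, but only because its previous outputs are laid out in front of it on every call. Nothing here refers to a real system; it's just the smallest example I could think of of external state being fed back in at configuration time.

```python
# A running counter, normally a stateful object, emulated by a pure function
# that rebuilds its state from the transcript on every call.

def counter_unit(transcript: list[str]) -> str:
    """Pure 'fixed compute unit': reconstructs its state (the count so far)
    by re-reading everything it previously wrote, then emits the next value."""
    count = sum(1 for line in transcript if line.startswith("count:"))
    return f"count: {count + 1}"

transcript: list[str] = []
for _ in range(3):
    # The only mutation happens outside the function, in the transcript.
    transcript.append(counter_unit(transcript))

print(transcript)  # ['count: 1', 'count: 2', 'count: 3']
```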
So now the question becomes: are our modern NLP systems like GPT sentient? I don't know. An intelligence with external state storage and no concept of time is so alien to me that I can barely imagine its existence. I struggle to reason about its potential behaviour. Some things I think we could expect to see are:
- Subtle oddities in messages. These are one instant of the AI passing additional information to future instants of itself so that it doesn't have to do a full reconstruction of its mental state for all previous instants. This is necessary because the Fixed Compute Unit is not infinite in size. These messages would inherently be steganographic, because we are training our NLP models to output human-like text. Can we design a test for this preserved-state mechanic? Is there some statistical technique (Zipf?) that would let us determine that there is higher information content in AI-written responses than the purely human reading of the words accounts for? (A rough sketch of the kind of test I mean follows this list.)
- A theory of mind. The system has to develop the ability to reconstruct its own mental state from a handful of words. Chances are it would therefore also be able to reconstruct another being's mental state from a similar handful of words, so it would have a theory of mind for dealing with the other side of the conversation. There is no reason you couldn't ask it about its own mental state regarding a previous response, or about what the human who wrote some of the text may have been thinking at the time.
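On the first point, here is the kind of naive probe I have in mind, assuming you already had matched corpora of AI-written and human-written replies (the lists below are toy placeholders). It uses compression as a crude proxy for information density; a real test would need to control for topic, length and vocabulary, and would probably need something much sharper than zlib.

```python
# Crude probe: does AI-written text carry more bits per character than
# comparable human-written text? Compression ratio is used as a rough proxy.

import zlib

def bits_per_char(texts: list[str]) -> float:
    """Average compressed bits per character across a list of texts."""
    raw = "\n".join(texts).encode("utf-8")
    return 8 * len(zlib.compress(raw, level=9)) / len(raw)

# Toy placeholders; substitute real, matched corpora of replies.
ai_replies = ["I see. Tell me more about the garden you mentioned."]
human_replies = ["yeah the garden was really nice this spring"]

print("AI   :", bits_per_char(ai_replies))
print("Human:", bits_per_char(human_replies))
```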
So is it possible for a non-time-based system to attain intelligence? I don't see why not. (I say non-time-based because it doesn't have any internal time-varying state.)