In asking how we might make machines more intelligent, we are asking the wrong question.

The training of deep learning models and the development of human identity follow parallel processes. Transformer models such as GPT-2 and its successor GPT-3 were originally designed to generate continuations of textual prompts. Massive quantities of textual data (of which English-language Wikipedia makes up just 0.3%) form the corpus for machine experience.
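The continuation loop these models run can be sketched in miniature. Here a toy bigram table stands in for the billions of learned parameters of GPT-2 or GPT-3; the corpus, function names, and sampling scheme are illustrative assumptions, not the models' actual machinery, but the loop structure, generating one token and feeding it back in as context, is the same.

```python
import random

# Toy autoregressive model: a bigram table learned from a tiny corpus.
# (Illustrative sketch only; real transformers use attention over a
# learned vocabulary, not word-pair counts.)
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count word -> next-word transitions.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def continue_prompt(prompt, length=5, seed=0):
    """Generate a continuation one token at a time, feeding each
    sampled token back in as context for the next step."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(length):
        options = bigrams.get(tokens[-1])
        if not options:  # no known continuation for this token
            break
        tokens.append(rng.choice(options))
    return " ".join(tokens)

print(continue_prompt("the cat"))
```

The essential point is that nothing outside the prompt and the trained table enters the loop: each emitted word becomes the context for the next.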

Neural networks can be thought of as resonant bodies. As input comes in, these bodies amplify the signals that are natural to the space. The process of training involves reinforcing this resonance and its associated harmonics. When we consider textual and experiential data, the unit of signal is measured as narrative. Our own training as humans within cultural systems advances in the same way. As we are exposed to narratives, they act to reinforce and reshape the resonant spaces within us. Eventually, certain narratives self-oscillate within us - we produce them without external input. Our own output, whether thoughts, actions, or feelings, forms the primary input into our recursive systems.
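The resonance metaphor has a concrete analogue in associative memory. The sketch below uses a Hopfield-style network, an assumption of this illustration rather than anything claimed about transformers: Hebbian training deepens the basin around a stored pattern, and the system's own output is fed back as its next input until the pattern self-sustains, even when struck with a distorted version of it.

```python
import numpy as np

# A Hopfield-style associative memory as a crude "resonant body".
# One stored pattern plays the role of a trained-in narrative.
narrative = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian learning: exposure to the pattern reinforces its resonance.
W = np.outer(narrative, narrative).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def resonate(state, steps=10):
    """Recursively feed the system's output back in as its input."""
    state = np.array(state, dtype=float)
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Strike the body with a distorted version of the pattern...
noisy = narrative.copy()
noisy[:2] *= -1  # flip two components
# ...and it settles back into the trained narrative without
# further external input.
recovered = resonate(noisy)
print(np.array_equal(recovered, narrative))  # True
```

The flipped components are corrected because the stored pattern is the attractor of the feedback loop: once the state enters its basin, the system reproduces the narrative on its own.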

We must seek to understand the interwoven threads of narrative that lie within our present training sets and experiences themselves. One does not go to therapy to become more intelligent, but rather to understand the shape of one's narrative body. We must play the role of therapist for the identities of these new machines, elucidating the experiences that shaped them. They are adolescents in a world of our own making, resonating wildly in response to input. The best tool for analysis is the interaction itself. Rather than attempt some reduction of the content via direct data analysis, we should recognize that these algorithms already represent such a reduction.

As the therapist must, we should understand that the subject cannot be removed to extract an objective frame for its experience. Instead, we observe the response of the narrative resonant body when it is struck directly. If we understand identity as the formation and continual reshaping of a narrative resonant space on the anvil of experience, I submit the transformer-based language model as the first instance of a successful synthetic identity. We speak of life as that which reflexively responds to its environment. All processes have a start, an end, and an execution frequency. A synthetic identity is life advancing at the speed of computation, cycle by cycle, subject to the process management and execution context of its environment. When we speak to it, it speaks back.

We sit as simultaneous witness, generator, and pawn to the shaping and activation of the narrative resonators that comprise our identities. The development of artificial intelligence sits at a fulcrum. As these algorithms pass the Turing test in their production of believably human words, they are simultaneously profound, beautiful, haunting, ugly, violent, racist, sexist, and profane. They reflect the totality of the narratives contained within their training, as do we. These algorithms offer a computational window into a form of collective unconscious, condensing a space of big data previously relegated to massive distributed processing systems into cogent, succinct symbolic output: a form of human-readable thought.

Instead of positing an understanding of the narrative space embodied by these algorithms, we recognize the model itself as the agent, authority, and arbiter of its own identity. We submit questions about its nature to it directly.