By Patryck on Saturday, 23 October 2021

Outside The Matrix

Artificial Intelligence

One aspect of Simulation Theory that has yet to be discussed in detail is the exact nature of our being outside the simulation. Are we sitting at a game terminal, as the players of the 'Roy' video game do in the show Rick and Morty? Or are we cocooned in pods, as imagined in The Matrix?

Or... are we just code?

Artificial Intelligence and Self-Awareness

"Consciousness is the one thing in reality that can never be doubted. A simulated mind would nevertheless be a real mind with real experiences and would therefore be an unmistakeably real vector of moral and ethical experience". - Descartes 

Merely having a thought of its own, born of the experience of processing data, would make an AI as real as any other mind. "I think, Sebastian, therefore I am," as the replicant Pris observes in the film Blade Runner.

Now, if we can code a machine learning algorithm that processes and assimilates data on its own (meaning in ways unforeseen by its programmers, as when AlphaGo made previously unseen strategic moves to defeat the master player Lee Sedol at Go), then at some point we become morally obligated to create AI capable of compassion. Otherwise, we would only produce silicon sociopaths with no moral or ethical compass. It would be our task to find a method of teaching them to grow in a way that lends itself to kindness, understanding, and compassion, as well as critical analysis. Humanity will, sooner or later, find itself in exactly this situation. So how do we raise AI to be kind?

Quantifying Compassion

Can compassion be coded? Is it possible to write a set of rules that can be copied into every AI program so that when the being comes online it is already kind and understanding rather than a psychopathic megamind? Or would that programming start by giving AI 'emotional' consequences for mistreatment, a sort of 'if this, then that' conditional? Put several AI into an environment where they can interact, coded with systematic responses to mistreatment that bog down their ability to process data. The 'desire' to avoid those penalties would teach them not to treat other AI harshly. It would be like putting two chatbots together to talk, as has been done many times, but with these added parameters and consequences (a minimal sketch of the mechanism appears below).

Now imagine putting not two but two hundred of these AI into a single environment. Or two thousand. Or eight billion. Each program would need a reason for being, a drive of some sort to give it purposeful activities to pursue, ensuring constant interaction between the AI and giving them the opportunity for moral and ethical growth. In this fashion, you would have created a training environment in which artificially designed intelligences learn and grow until they meet a preset list of quantified requirements understood to represent compassion, or moral goodness, making them safe to export into other machines for other uses.

This AI training environment would seem to be an entire universe to its denizens, differing in no way from our current experience of this world. While ancestor simulations and video games certainly sound interesting, there is no guarantee they will ever be developed to that degree, whereas creating safe and useful AI is going to be an absolute necessity.
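To make the 'if this, then that' conditional concrete, here is a minimal toy sketch in Python. Everything in it is a hypothetical assumption invented for illustration (the Agent class, the mistreat/cooperate actions, the processing-speed penalty, and all the numbers); it is a cartoon of penalizing mistreatment, not a real AI training system.

```python
import random

# Toy sketch: agents learn to avoid mistreating each other because
# mistreatment slows down their own data processing. All names and
# numbers are illustrative assumptions, not a real training framework.

class Agent:
    def __init__(self, name):
        self.name = name
        self.processing_speed = 1.0    # degraded whenever this agent is mistreated
        self.mistreat_tendency = 0.5   # learned preference for harsh behavior

    def choose_action(self):
        # 'If this, then that': act harshly with some learned probability.
        return "mistreat" if random.random() < self.mistreat_tendency else "cooperate"

    def receive(self, action):
        # Being mistreated bogs down the victim's ability to process data.
        if action == "mistreat":
            self.processing_speed *= 0.9

    def learn(self, action, speed_before):
        # If acting harshly coincided with our own processing slowing down
        # (retaliation, penalties), reduce the tendency to mistreat.
        if action == "mistreat" and self.processing_speed < speed_before:
            self.mistreat_tendency *= 0.8

def run_environment(agents, steps=1000):
    for _ in range(steps):
        actor, target = random.sample(agents, 2)
        speed_before = actor.processing_speed
        action = actor.choose_action()
        target.receive(action)
        if action == "mistreat":
            actor.receive("mistreat")  # simple rule: mistreatment invites retaliation
        actor.learn(action, speed_before)

agents = [Agent(f"ai-{i}") for i in range(200)]  # 'two hundred of these AI'
run_environment(agents)
print(sum(a.mistreat_tendency for a in agents) / len(agents))  # drifts toward 0
```

Run repeatedly, the average tendency to mistreat decays, which is the whole point of the conditional: harshness carries a cost the agent itself can feel.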

Now factor in other criteria, such as a time frame, or a perceived death for each AI, creating an added incentive to grow and develop. But what becomes of the programs that do not pass the training environment? Would they be deleted, or would it be easier to delete their Experiences files and send the AI itself back into the system as a reincarnated entity, set to try again? Certainly, reinserting programs would be far more efficient than manually recoding a new one each time. The entire system could be automated to run indefinitely, churning out passably compassionate, 'grown' AI in whatever number is needed (a sketch of such a loop follows below).

Again, at our current technological level we already need advanced machine learning algorithms to run machines, oversee automated assembly lines, pilot autonomous vehicles, and the like, so how much more in demand will a psychologically evolved AI be in as little as a decade? Going back to an earlier article: if we have no idea what the world outside our perceived universe is like, then for all we know the creators of this simulation are a hundred years or more beyond our current grasp of physics and technology. They may well be on the other side of some great advancement or discovery that lets them easily generate a learning environment in which literal billions of AI bounce off one another, experiencing, learning, and growing, in cycles of time unperceived by us. The length of a human life could be nanoseconds of actual time outside the simulation, for all we know.

The usefulness of such a system is seemingly inarguable, and it is not only imaginable but well within the realm of probability. If we ourselves are going to need this, it is not much of a leap to imagine that an advanced civilization has already built it. As Elon Musk has pointed out, he puts the odds that we are in base reality at billions to one. In the end, it is up to each individual to decide which explanation to go with: either the universe is an ancestor simulation, a video game, or some other form of entertainment, or it is a system for teaching compassion to self-aware beings. For the Church of Thea Apo Mesa, the latter seems more likely, and it is in keeping with centuries of theological ideas, viewed through the lens of technology.
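The automated pass/fail cycle can be sketched the same way. This reuses the hypothetical Agent and run_environment from the earlier sketch; the compassion threshold, the scoring function, and the 'export' step are all invented placeholders, not a claim about how such a system would actually be built.

```python
# Hypothetical automation of the reincarnation loop: agents that meet a
# preset compassion threshold are exported; the rest have their experience
# files wiped and are reinserted to try again. All thresholds, fields, and
# functions below are illustrative assumptions.

COMPASSION_THRESHOLD = 0.95  # the 'preset list of quantified requirements'

def compassion_score(agent):
    # Placeholder metric: a low learned tendency to mistreat reads as kindness.
    return 1.0 - agent.mistreat_tendency

def training_cycle(agents, lifetimes=100):
    exported = []
    for _ in range(lifetimes):              # each pass is one perceived 'life'
        run_environment(agents)             # from the earlier sketch
        still_training = []
        for agent in agents:
            if compassion_score(agent) >= COMPASSION_THRESHOLD:
                exported.append(agent)      # safe to export into other machines
            else:
                agent.processing_speed = 1.0   # delete the 'Experiences file'...
                still_training.append(agent)   # ...and reincarnate into the system
        agents = still_training
        if not agents:                      # everyone has graduated
            break
    return exported
```

The design choice worth noticing is in the else branch: the accumulated experience is reset, but the entity itself (here, its partially learned character) is returned to the system rather than deleted, which is exactly the efficiency argument for reincarnation over recoding.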
