opus.stedden

The Lisens: a license to simulate the sense of self

For the past year, my viewfoil project has led me to expand my use of identity publication (aka social media) and consolidate my public technological identity footprint into one portal online. My long-term goal with the project is to form an artificial intelligence simulation of my personal online behavior. In other words, I aim to someday build an algorithm that can replicate and potentially replace the portion of my mind that generates behavioral artifacts on the internet.

But this forces me to ask a question:
If I can pursue self-simulation with the data I am collecting, should any entity that comes into possession of such data be allowed to pursue a simulation of my mind?

When researching this question, I realized that I couldn't find a specific resolution to it in our current legal and ethical discussions about the intersection of data usage and sociological modeling. To close that gap, I came up with the idea of intentionally licensing the ability for others to simulate my own consciousness, which I've dubbed the lisens.

What would it mean to simulate consciousness?

What it means to be oneself is one of the toughest questions to answer. Some current experiments simulating public figures with AI make us question where the line should be drawn. Is knowing how I might finish a sentence the same as knowing how I think? Probably not. Truth be told, our current machine learning models are limited in their ability to replicate human decision making at all.

But this limitation won't necessarily be the case forever. As we increase our capacity to simulate people's behavior, some notable figures have begun to point out that a sufficiently elaborate simulation could in fact start to take on properties of self-awareness that could make the lived experience of simulation and reality indistinguishable. A form of this idea was evoked memorably in the Matrix films.

There are many reasons* that such a simulation could be invented, but perhaps the most popular explanation would be that as our society improves our simulation techniques, we could start to build simulations that reconstruct our past. As an example, imagine that a sociologist of the future is trying to figure out what the hell happened in 2020, so she builds a model of how every person in the country voted in that election.

In the most extremely detailed simulation imaginable, the people simulated would actually feel as though they were really alive and making the decisions that you and I are making now.

The ethics of consciousness simulation

If all of this consciousness simulation sounds incredibly unethical, that is because it certainly is. Of course, it becomes decidedly less unethical if you can obtain informed consent from the person whose consciousness is to be simulated. I assume that future researchers will be at least ethical enough to require this kind of consent, but this leads me to wonder whether there will be a lack of willing subjects from our era of history. In light of this, I asked myself whether it would be a good idea to publicly declare my own consent to aid future researchers of human consciousness.

This caused me to arrive at the concept of a lisens (license of sense of self). The exact content of the lisens is still a work-in-progress, but you can see the current contents on my personal website.

I will address my reasons for feeling comfortable lisens-ing myself in another post. At the very least, I hope that publicizing my decision to adopt a lisens will raise awareness among future researchers that they should definitely NOT simulate anyone's mind without first determining whether the person in question adopted a lisens. As deep-learning impersonation models improve over the next few years, this may become important sooner than one might expect.

If you are thinking about adopting a lisens for yourself, please feel free to reach out to me. If there is enough interest in this concept, I want to start building an easily accessible index that could allow future researchers to find those who would be willing to participate in a future simulation of themselves. Or, perhaps the present simulation...
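To make the idea of a searchable consent index a little more concrete, here is a minimal sketch in Python. Everything in it is an assumption on my part: the `LisensRecord` and `LisensIndex` names, the fields, and the example URL are hypothetical, not any existing standard or implementation.

```python
# Hypothetical sketch of a lisens consent index. All names, fields,
# and URLs here are illustrative assumptions, not an existing spec.
from dataclasses import dataclass, field


@dataclass
class LisensRecord:
    """A public declaration of consent to consciousness simulation."""
    holder: str       # public identity of the person granting consent
    lisens_url: str   # where the full lisens text is published
    # Optional scopes the holder consents to, e.g. ["historical-reconstruction"]
    scopes: list = field(default_factory=list)


class LisensIndex:
    """A minimal in-memory index that a researcher could query by identity."""

    def __init__(self):
        self._records = {}

    def register(self, record: LisensRecord):
        self._records[record.holder] = record

    def has_consented(self, holder: str) -> bool:
        """True only if this identity has published a lisens in the index."""
        return holder in self._records


index = LisensIndex()
index.register(LisensRecord("opus.stedden", "https://example.com/lisens"))
print(index.has_consented("opus.stedden"))   # True
print(index.has_consented("someone-else"))   # False
```

The important design point is the default: anyone absent from the index is treated as having *not* consented, so the burden falls on the researcher to find an affirmative record before simulating.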

*PS: I actually suspect the most likely reason we would be simulated would not be to reconstruct the past for its own sake. I think a better explanation would be to validate whether future consciousness simulations work well by testing them against data from the past.