Aug 23, 2021
Who will own your upload? And how well does all this fit with tech giant business models?
If it were possible to create a digital entity based on the specific workings of your brain at a specific moment in time, would you own it? Would the company own it? Would it own itself?
We as organic human beings have basic human rights. And by ‘we’ I at least mean people who enjoy the same privileges as I do, both in the place I was born and the place I live. But what about uploads?
Through proprietary technology, and further through mergers and acquisitions, power in the tech world, and to some extent the physical world, has been steadily centralizing. There are now only a handful of big players with growing empires.
It seems a reasonable bet to me that if the technology for uploading a digital simulation and approximation of how your brain worked at a specific moment in time into a computer program became a reality, one of these big players would end up owning it, either because they developed it in-house or because they bought out whoever did.
With all the talk about the metaverse these days, I could easily imagine Facebook providing a service that seems free, where you get into their high-tech scanner that monitors and registers how your brain works or reacts to a wide variety of impulses, situations, scenarios, and so on, with a level of detail we can now only imagine. Maybe it’s a sensory helmet that you have to wear as you go through your life for a while. Or maybe, if someday they do get something workable going with this metaverse and their social network becomes a virtual reality environment, the way you plug into it will provide for brain scanning. They do this scanning until enough data has been gathered to create a basically functional digital entity based on that data, plus of course all the other data they’ve been collecting on you for years. And this digital entity is then let loose in Faceworld, a SIM created for these characters, as an offshoot of the metaverse. Wouldn’t it make sense that in this scenario FB would own all of it: the SIM, the data, and the characters? At least I imagine that’s how they’d want it.
They would provide this service for free because it fits perfectly into their business model, which is explained at length in Shoshana Zuboff’s book ‘The Age of Surveillance Capitalism’. Basically, what Facebook does is collect data. (Of course, it’s not just FB. Google, Microsoft, Amazon, and the Chinese giants work pretty much the same way. Apple is a bit different, I read. I’m just using FB as an example.) No person is sitting there observing what you do. It’s all algorithms. The process is automated, and everything is registered: what your networks are, what your likes are, what you engage with, how you respond to things, what catches your attention and for how long, and so on. The aim is to predict behavior, through algorithms, in the service of advertising. That’s where they get their income. The better they can predict behavior, the more attuned the targeted advertising gets, and the higher the income. So the more separate data streams, the better.
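To make those mechanics a little more concrete, here’s a minimal toy sketch of that loop in Python. Everything in it is invented for illustration: the signal names, weights, and bids are assumptions of mine, not anything any real platform discloses.

```python
# Toy sketch of the prediction-for-advertising loop described above.
# All signals, weights, and bids are made up for illustration.
from dataclasses import dataclass
import math

@dataclass
class UserProfile:
    signals: dict[str, float]  # e.g. dwell time, likes, who you interact with

def click_probability(profile: UserProfile, ad_weights: dict[str, float]) -> float:
    """Logistic score: more, better-attuned data streams -> sharper predictions."""
    z = sum(ad_weights.get(name, 0.0) * value for name, value in profile.signals.items())
    return 1.0 / (1.0 + math.exp(-z))

def pick_ad(profile: UserProfile,
            inventory: dict[str, dict[str, float]],
            bids: dict[str, float]) -> str:
    # Revenue-maximizing choice: expected value = P(click) * advertiser bid.
    return max(inventory, key=lambda ad: click_probability(profile, inventory[ad]) * bids[ad])

# Invented example data: none of these signals or numbers are real.
user = UserProfile(signals={"dwell_time": 0.9, "likes_sports": 0.2, "late_night_scrolling": 0.7})
inventory = {"sneakers": {"likes_sports": 2.0}, "sleep_app": {"late_night_scrolling": 2.5}}
bids = {"sneakers": 0.40, "sleep_app": 0.55}
print(pick_ad(user, inventory, bids))  # -> "sleep_app": the model monetizes the 3 a.m. scrolling
```

The point of the sketch is the incentive structure: every extra signal sharpens the click probability, which raises expected ad revenue, which is why more data streams are always better for the platform.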
Now imagine they had the brain scanner described above: how much more efficient would that make the whole operation? It would be next level. Of course they wouldn’t expect you to pay money for it. Their main concern would be how to convince people to sign up, how to cultivate the blind spots I wrote about in my previous articles (1, 2). They’d have to get people to accept automatically that by going through this process they will get to experience the amazing wonders of Faceworld with all its bells and whistles, and have a chance to live forever. At least that’s how it would go in a movie or TV show. This is basically the TV show Upload, just less religious and with a stronger focus on the power and control of big corporations.
In reality, of course, you’d still be you, and there would be a digital entity based on you active in the SIM. People wouldn’t be fooled for long. There would have to be other benefits. Maybe services provided. Maybe you could watch your digital clone live its life. Maybe you could interact with it. Maybe you could become invested in its success by providing things for it. There could be a whole market of things you could procure for it. Lots of added revenue. And the culture would move along with it. It could be promoted as an enrichment, with people writing books about the journey of their connection with their digital clone, and so on. Without preventive legislation, people could sign away the rights to their digital clone for a cash payment. A TV company could buy the rights to a digital character and then subject it to all sorts of situations or events for entertainment purposes, in ways that organic humans are legally protected against, or that would be impossible to subject someone to without this technology, going into the fantastical and the absurd. You could mess with the very parameters of the character’s reality. Go full-on David Lynch on them.
It’s starting to seem like we’d need universal rights for digital humans as well, which would become a hairy discussion given how little we understand consciousness. There would be a lot of money in convincing people that simulated consciousness, or what appears as such, is not real, and that therefore ethics do not apply and they’re fair game. Like in Westworld, there would be a whole new underclass to exploit. Can you even call it suffering if there’s no physical nervous system doing the suffering? If it’s simulated consciousness, is it then also simulated suffering? Would there be a digital human rights movement? Could digital clones rebel against their meat overlords while literally being their property? Could FB technicians just manipulate the code so the clones stay in line, like a technological limiter on the expression of their self-determination?
I digress.
To get people on board, the company would have to try to keep them blind to how much extra power they are giving FB over them by signing up for this. Or make it so alluring and addictive that people stop caring, stop asking the questions, and it becomes the new normal. The more attuned the prediction models are, the more attuned and convenient the content will be, and the more of a struggle, I imagine, it will be to resist. Most people are not concerned with this. The more convenient, the better. Why fight it? Just say thank you. At some point, the company might know people so well, so much better than they could ever know themselves, that it could cater to them so perfectly it would become impossible to resist, with physical reality seeming bland in comparison. At least, that seems to be where what’s going on now leads when followed to its conclusion. How much of that is consciously the intention and the aim, I don’t know. Nor do I know how likely they would be to succeed. How many assumptions are they making about human nature?
Fatigue might become a thing, with people feeling they need to get out of it all. It’s an open question whether people would allow themselves to be chained to such an extent. Sadly, the extent of social media addiction seems a pretty strong indicator. How many would dive in headfirst without looking back, and how many would resist what seems like a project of subjugating human nature by channeling and amplifying impulses? Brave New World again? Maybe those who have difficulties, fatigue, stress, or anxiety can take medication to keep going and stay well adjusted to the system?
There might, of course, be a massive issue with what this technology fails to capture. It is assumed that you are how you behave online, that the data captures you, that the picture that comes out of it is your identity, your essence even. It takes little account of why people behave the way they do, how many factors are in play, or how people change. Brain scanning to fill in the blanks?
Or perhaps it’s more about containment. If a very detailed model can be established, that is perhaps something you can get trapped inside of, just like the clone is trapped in the SIM, either blocking your growth as a person or determining the direction of that growth through what’s being fed to you, in a way more suited to the system’s interests. This could happen without human intention. I could imagine an AI coming up with a scheme like this without any planning by a human, just the system optimizing for efficiency. I would expect that the more AI comes to govern these processes, the more of an issue this will become.
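As a rough illustration of that kind of unplanned containment, here’s a toy feedback loop in Python. Everything in it is invented: the ‘interest’ vector, the learning rates, the update rule. The point is only that when the system’s model of you also decides what you’re shown, the model can narrow you without anyone intending it.

```python
# Toy feedback loop: a profile model that also drives what the user sees
# can narrow the user toward its own prediction. All numbers are invented.
import random

random.seed(0)
interests = [0.5] * 5                         # the "real" person: evenly spread interests
model = [random.random() for _ in range(5)]   # the platform's initial guess about them

for step in range(200):
    # The platform serves content matching its current model of you...
    shown = max(range(5), key=lambda i: model[i])
    # ...repeated exposure nudges the person toward what they are shown...
    interests[shown] = min(1.0, interests[shown] + 0.01)
    # ...and the model updates toward observed engagement, closing the loop.
    for i in range(5):
        model[i] += 0.1 * (interests[i] - model[i])

# One interest has been amplified to its ceiling; the others never got the
# chance to develop. Nobody planned this; the loop just optimized itself.
print([round(x, 2) for x in interests])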
Here’s a direct quote from a recent Douglas Rushkoff article, because he’s better at this than I am:
Our digital fundamentalists see human beings as an engineering problem to be solved. Behaviors and thoughts that do not conform to our algorithmically generated profiles are to be eliminated, and humans shepherded into the reality tunnels that obey the laws of rationality alone. We are right now being programmed by (…) fundamental materialists (…).
I used Facebook, of course, just as an example. It could be any of the big ones. You might even get to choose between the competing Facebook, Google, Apple, Amazon, Microsoft, Baidu, etc. SIMs as the home of the digital simulation and approximation of how your brain worked at a specific moment in time. You could even have multiple clones on different platforms, the quality of which would be determined by the quality of the tech. Maybe Microsoft’s uploads will be more buggy than Apple’s, who knows?
Will these companies be able to do with this what they want and how they want it, making their own rules, as is largely the case at present? It’s an argument you hear a lot in the discussion about censorship and de-platforming on social media, that these are private companies so they can do what they want. It’s a measure of their success, and that of neo-liberal capitalism, that so many people think this is a valid argument. You hear people say that only the government can censor, and as these are private companies, however extensive their global dominance, it’s not censorship. If they have power in determining the laws that govern them, then I think our traditional ideas of government, however illusory those already were, are seriously outdated.
If we want to avoid techno-dystopia, however utopian it seems to some, we need to break up the power of these corporations and regulate them extensively, in line with the future we want as human beings. I know that what that entails is not something that is agreed upon, but it should be explored and continually treated as a work in progress. We cannot just sit by and let market forces or corporate ideologies decide for us. It’s too important.