Imagine the world has moved to a state where LLM-based AI plays a significant role in society and the economy. It might not be a majestic AGI that solves all of humanity's problems, but it still powers lots of industries and activities, so the companies and governments that provide AI solutions have tremendous resources.
Now, the companies training newer and more powerful models run into a problem: there is not enough data for the task.
So what do they do, besides synthesizing data for training and other clever tricks? They expand their sources of real, high-quality data, of course!
Now imagine that everyone can get a pair of new XR glasses. They are so functional, versatile and useful that you cannot even imagine your life without them! All the influencers wear these fancy devices, and there are collaborations with clothing brands to suit any audience. Also, they are dirt cheap.
What makes the glasses so cheap, you might ask? Ads shoveled right into your eyes? A governmental Big Brother that watches you and controls what you say? Nope, though the second guess is somewhat close.
The low price comes with one condition: you share everything you see, hear and say while wearing the device with one of the Big Tech companies (in an anonymized, absolutely safe, privacy-conscious way, naturally). They use your data to train AIs, the AIs get better, and society's wellbeing improves. So you don't just get a beautiful, functional product and services for almost nothing, you also help humanity! Together with 7 billion other people who wear the glasses as well.
Do you like this vision of the future? I'm sure you do. So let's use some more imagination and ask ourselves additional questions about such a world. For example, if most people wear XR glasses and a minority does not, what does that mean? Are those who refuse to wear them seen as harmless weirdos, or as evil criminals?
Or, another interesting one: are all people equal with regard to their "usefulness" for training data? Is it possible that what many people see, hear and say is so generic and typical that their "life logs" are almost useless to AI models? The models have already been trained on similar experiences and do not benefit from more of them. If so, there will naturally be a category of "more useful" people who lead unusual lives and share rare thoughts. Any company would probably want more such users and would somehow incentivize them to use its devices. Will we actually see these people marked out as special, with some kind of open ILS (interesting/impactful life score)?
One more thing that comes to mind: not all the experiences people share will be "good" or "moral". But they could still be very useful from a practical point of view. So, is there a black or gray market for such data?
And of course, as the pool of AI training data grows over time, experiences undergo inflation: every year, fewer things count as unique and high quality in the companies' eyes. People might get upset about this inflation, especially if it affects their social status.
I guess a nice short story could be written about such a world, as it's interesting to think about life in it. I also think it would be nice if we stopped at imagining it rather than living in it someday. This world definitely looks better than The Matrix, where people were just numb physical batteries for machines, but I would say it's still not very pleasant. We'll see what happens next.
All images in the post are generated by meta.ai