I think that there are definitely issues with mass data-mining of that data. For generative AI trained on image data, I don't really care -- I think the concerns there are hugely overblown. But it's also possible to do things like build a mass facial-recognition database from image data, and I'm pretty sure that text processing raises issues as well.
However.
The problem is that this applies to anyone. Like, I am confident that someone has scraped publicly-available data from Facebook and similar sites before. I know that someone has dumped Reddit comment and post history; you can download those dumps. And I am very confident that someone is dumping my comment data here right now -- or will be, if the Threadiverse gets big enough -- and will be doing all kinds of processing on it.
That is, I don't think that Meta is the issue here. Meta would be the issue if processing private data were the issue, because only Meta and a limited set of users have access to that. The problem here is people posting publicly-accessible data that can potentially be used in ways they might not want, potentially without understanding the implications of doing so. Meta is only responsible there insofar as it maybe encourages users to do so -- to have profile photos or whatnot.
And...I don't really have a great fix for that. Like, I think -- like most people here, obviously, as every single user I've seen on the Threadiverse uses a pseudonym -- that pseudonymity is at least a partial fix. There isn't a (direct) link to a real-world identity; someone would have to go to the work of deanonymizing account data. Few if any people on the Threadiverse seem to have a real profile photo. I use a swirl of water. So...that helps, because someone can't trivially link that data to data elsewhere.
I don't have any problem with someone training a model on information that I've posted publicly and just using it like a "better search engine", the way people are now. That's pretty low on my list of concerns.
But lemme give some concerns that might apply...and these aren't really primarily about generating chatbots. One thing that you can do with text classifiers -- which I think a lot of people out there don't realize -- is to search for and find correlations in text. Like, the Federalist Papers, important documents about the US Constitution, were written by a few of the Founding Fathers under a shared pseudonym. Nearly two centuries later, statistical analysis of their word frequencies was used to attribute the disputed essays to their authors. The same kind of analysis might be used to deanonymize people today.
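To make that concrete, here's a minimal sketch of stylometric attribution, assuming scikit-learn is available; all the author labels and writing samples below are made up for illustration:

```python
# Stylometric authorship attribution: train a classifier on known writing
# samples, then score an "anonymous" text against each candidate author.
# Assumes scikit-learn; the texts and author names here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

known_texts = [
    "I reckon the colour of the harbour changes with the season.",
    "The colour and flavour of the harbour water vary by season.",
    "Gonna grab a coffee real quick, anyone want one?",
    "Real quick -- gonna head out for coffee, want anything?",
]
known_authors = ["author_a", "author_a", "author_b", "author_b"]

# Character n-grams capture spelling habits, punctuation, and word endings,
# which tend to persist even when an author changes topic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(known_texts, known_authors)

anonymous_text = "The colour of the coffee here is frankly alarming."
for author, prob in zip(model.classes_, model.predict_proba([anonymous_text])[0]):
    print(f"{author}: {prob:.2f}")
```

The same `predict_proba` output is what provides the kind of "confidence estimate" mentioned below: swap the author labels for any labeled attribute and the pipeline itself doesn't change.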
You can also extract a lot of information about someone from their text. Some of it humans can do, like "if someone uses inches, they're probably from the US". But you can do that en masse, probably get a pretty good location fix on someone, and identify regional slang, local spellings, and the like.
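A minimal sketch of that kind of en-masse signal extraction, using a few hand-picked regexes; the cue lists here are illustrative stand-ins, not a real geolocation model, which would learn its cues from data:

```python
# Crude locale-signal extraction: tally region-linked cues across a pile of
# comments. The cue lists are illustrative; a real system would learn these.
import re
from collections import Counter

CUES = {
    "US": [r"\binches\b", r"\bmiles\b", r"\bcolor\b", r"\bsoccer\b"],
    "UK": [r"\bcolour\b", r"\bfortnight\b", r"\bqueue\b", r"\blorry\b"],
    "AU": [r"\barvo\b", r"\bservo\b", r"\bheaps good\b"],
}

def locale_signals(comments):
    """Tally region-linked cues over a user's whole comment history."""
    tally = Counter()
    for text in comments:
        for region, patterns in CUES.items():
            for pat in patterns:
                tally[region] += len(re.findall(pat, text, re.IGNORECASE))
    return tally

history = [
    "It's about six inches of snow here, maybe a few miles inland.",
    "The color scheme on the new UI is rough.",
]
print(locale_signals(history).most_common())  # e.g. [('US', 3), ('UK', 0), ('AU', 0)]
```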
There's software that can identify someone's likely gender from their comments and give a confidence estimate. You can probably -- and I'm sure people have -- train those classifiers to look for correlations with a lot of other things, like political views and such.
I had a buddy working in the video game industry whose game extracted a bunch of "employability" characteristics. Play for about ten minutes, and it logs a bunch of data about the gameplay. They trained a classifier to look for correlations between gameplay actions and IQ and a whole host of other things, so just by playing the game, you're handing over a lot of personal data about yourself. I would imagine that lots of video games could do that, and that it might give games another revenue stream if the information is sold to data brokers. You can probably do the same thing with comments.
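None of the feature names or numbers below come from that game; this is a purely hypothetical sketch of the "log gameplay features, fit them against a labeled trait" pattern, using scikit-learn's ridge regression:

```python
# Hypothetical sketch of gameplay telemetry -> trait prediction: log simple
# per-session features, then fit a regressor against labeled trait scores.
# All features and labels are fabricated for illustration.
import numpy as np
from sklearn.linear_model import Ridge

# Each row: [actions_per_minute, retries_after_failure, menu_time_fraction,
#            novel_strategies_tried] for one ten-minute session.
sessions = np.array([
    [42.0, 3, 0.10, 5],
    [18.0, 9, 0.30, 1],
    [55.0, 2, 0.05, 7],
    [25.0, 6, 0.22, 2],
])
trait_scores = np.array([118, 96, 124, 101])  # fabricated calibration labels

model = Ridge(alpha=1.0).fit(sessions, trait_scores)
new_session = np.array([[38.0, 4, 0.12, 4]])
print(f"predicted trait score: {model.predict(new_session)[0]:.0f}")
```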
And I'm not so sure that people posting material attached to their identity always realize just how much they're actually revealing. Not necessarily information that Meta in particular might analyze, but information that they're handing to the world, such that any organization that wants to do data-mining on it could analyze it.
Long, but wrong :-)
It is not about your concerns, and it is not about concerns at all.
When companies try to do forbidden things, someone is going to tell them to stop, and if they do it anyway (as that whole 'concerns' attitude seems to suggest they would), then someone is going to give them what they deserve.
What about it is wrong?
Facebook built one years ago but ended up shutting it down and deleting the data. https://www.theverge.com/2021/11/2/22759613/meta-facebook-face-recognition-automatic-tagging-feature-shutdown
Thanks. That also kind of drives home the "I'm sure that third parties are scraping data and analyzing it too" thing: