this post was submitted on 17 Jun 2024
135 points (100.0% liked)
Technology
you are viewing a single comment's thread
@along_the_road
“These were mostly family photos uploaded to personal and parenting blogs […] as well as stills from YouTube videos”
So… people posted photos of their kids on public websites, common crawl scraped them, LAION-5B cleaned it up for training, and now there are models. This doesn’t seem evil to me… digital commons working as intended.
If anyone is surprised, the fault lies with the UX around “private URL” sharing, not with devs using Common Crawl.
#commoncrawl #AI #laiondatabase
Doesn’t “digital commons” mean common ownership? A personal blog of family photos, inherently owned by the photographer, is surely not commonly owned. I see this as problematic.
@along_the_road what’s the alternative scenario here?
You could push to remove some public information from common crawl. How do you identify what public data is _unintentionally_ public?
Assume we solve that problem. Now the open datasets, and the models developed on them, are weaker. They’re specifically weaker at identifying children as things that exist in the world. Do we want that? What if it reduces the performance of cars’ emergency braking systems? CSAM filters? Family photo organization?
Parents could not upload pictures of their kids everywhere in a vain attempt to attract attention to themselves?
That would be good.
@kent_eh exactly.
The alternative is “if you want your content to be private, share it privately.”
If you transmit your content to anyone who sends you a GET request, you lose control of that content. The recipient has the bits.
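The mechanics are easy to demonstrate: any HTTP server that answers a GET hands the requester a complete, identical copy of the bytes, with no enforceable strings attached. A minimal stdlib sketch (the photo bytes and handler are hypothetical, just for illustration):

```python
import http.server
import threading
import urllib.request

# Stand-in for a "private" family photo served from a blog.
PHOTO = b"\x89PNG pretend-family-photo-bytes"

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answering the GET transmits the full content; the server
        # has no mechanism to retain control over the copy it sends.
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(PHOTO)))
        self.end_headers()
        self.wfile.write(PHOTO)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Serve on an OS-assigned port, fetch once, then shut down.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/photo.png"
copy = urllib.request.urlopen(url).read()
server.shutdown()

assert copy == PHOTO  # the recipient now holds an identical copy
```

Whether the requester is a family member, Common Crawl, or anyone else, the bytes they receive are indistinguishable.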
It would be nice to extend the core technology to better reflect your intent. Perhaps embedding license metadata in the images, the way LICENSE.txt travels with source code. That’s still quite weak, as we saw with Do Not Track.
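For images, some of that plumbing already exists: PNG, for example, allows arbitrary text annotations via tEXt chunks, which could carry a license notice alongside the pixels. A minimal stdlib sketch of the idea (the `chunk` and `add_license_text` helper names are hypothetical; the 1x1 PNG is built from scratch just for the demo):

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def add_license_text(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt chunk (e.g. a license notice) right after IHDR."""
    sig, rest = png[:8], png[8:]
    out, pos, inserted = [sig], 0, False
    while pos < len(rest):
        length = struct.unpack(">I", rest[pos:pos + 4])[0]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        out.append(rest[pos:end])
        if not inserted and rest[pos + 4:pos + 8] == b"IHDR":
            out.append(chunk(b"tEXt",
                             keyword.encode("latin-1") + b"\x00"
                             + text.encode("latin-1")))
            inserted = True
        pos = end
    return b"".join(out)

# A minimal 1x1 grayscale PNG: signature, IHDR, IDAT, IEND.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))
       + chunk(b"IEND", b""))

tagged = add_license_text(png, "Copyright", "CC BY-NC-SA 4.0")
```

Of course, nothing obliges a crawler to read the chunk, let alone honor it; like Do Not Track, it only expresses intent, it doesn’t enforce it.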