this post was submitted on 19 Sep 2023

Lemmy


Everything about Lemmy; bugs, gripes, praises, and advocacy.


There's another round of CSAM attacks, and it's really disturbing to see those images. They weren't taken down immediately, either. There was even a disgusting shithead in the comments who thought it was funny?? the fuck

It's gone now, but it was up for like an hour?? This really ruined my day, and now I'm trying to figure out how to download Tetris. It's really sickening.

[–] [email protected] 2 points 1 year ago (4 children)

AFAIK, it all falls down on moderators' shoulders. I don't envy their jobs one bit :(

[–] [email protected] 0 points 1 year ago (1 children)

How was it handled on Reddit? Did the moderators have to handle it there as well, or did Reddit filter it out beforehand?

[–] [email protected] 0 points 1 year ago (1 children)

Reddit uses a CSAM scanning tool to identify and block the content before it hits the site.

https://protectingchildren.google/#introduction is the one Reddit uses.

https://blog.cloudflare.com/the-csam-scanning-tool/ is another such tool.

[–] [email protected] 0 points 1 year ago (1 children)

Are any of the examples that you provided libre/free and open-source? I wasn't able to find any info for Google's, and Cloudflare seems to only offer theirs for free if you are already using Cloudflare's services. If not the examples that you provided, do any tools exist that are libre/free and open-source?

[–] [email protected] 1 points 1 year ago

No.

The nature of the checksums and perceptual hashes is kept in confidence between the National Center for Missing and Exploited Children (NCMEC) and the provider. If the "is this classified as CSAM?" service were available as an open-source project, those attempting to circumvent the tool could test modified images against it until the modifications were sufficient to produce a false negative.
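To make "perceptual hashing" concrete, here's a toy sketch of the general idea using a simple average hash. This is purely illustrative and assumes the image has already been downscaled to an 8x8 grayscale grid; the actual hashes used by NCMEC-linked tools like PhotoDNA are far more robust and, as noted above, deliberately confidential.

```python
# Toy "average hash" sketch: NOT any real CSAM-scanning algorithm,
# just an illustration of why perceptual hashes survive small edits.

def average_hash(pixels):
    """pixels: 8x8 list of 0-255 grayscale values -> 64-bit int hash.
    Each bit records whether a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means 'perceptually similar'."""
    return bin(h1 ^ h2).count("1")

# A gradient image and a slightly brightened copy hash identically here,
# which is what lets matching survive minor edits like re-compression.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brighter = [[min(255, p + 10) for p in row] for row in img]
print(hamming_distance(average_hash(img), average_hash(brighter)))  # prints 0
```

A cryptographic hash like SHA-256 would change completely after that +10 brightness tweak; the whole point of a perceptual hash is that it doesn't, and the whole point of keeping the real algorithms closed is so attackers can't iterate modifications until the distance gets large enough to slip past the matcher.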

There are attempts to do "scan and delete," but this may expose server admins to even more legal jeopardy than not scanning at all, because admins are required by law to report CSAM and preserve the images and associated log files.

I'd strongly suggest that anyone hosting a Lemmy instance read https://www.eff.org/deeplinks/2022/12/user-generated-content-and-fediverse-legal-primer

The reporting requirements for hosting providers are laid out in https://www.law.cornell.edu/uscode/text/18/2258A

(a) Duty To Report.—
(1) In general.—
(A) Duty.—In order to reduce the proliferation of online child sexual exploitation and to prevent the online sexual exploitation of children, a provider—
(i) shall, as soon as reasonably possible after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(A), take the actions described in subparagraph (B); and
(ii) may, after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(B), take the actions described in subparagraph (B).
(B) Actions described.—The actions described in this subparagraph are—
(i) providing to the CyberTipline of NCMEC, or any successor to the CyberTipline operated by NCMEC, the mailing address, telephone number, facsimile number, electronic mailing address of, and individual point of contact for, such provider; and
(ii) making a report of such facts or circumstances to the CyberTipline, or any successor to the CyberTipline operated by NCMEC.

...

(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—
(1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
(2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.
