This might actually get struck down on constitutional grounds. How does one confront their accuser in court if the accuser is a trained neural net?
And that’s without even touching on the fact that ML is stochastic in nature and should absolutely not be considered accurate enough to serve as an unsupervised, unmoderated, single-point-of-failure decision engine in legal, medical, or other critical decision-making processes. The fact that ML regularly and demonstrably hallucinates (or otherwise yields garbage output) is just not acceptable in a regulatory sense; see the sketch below for what bare-minimum human gating would look like.
Source: software engineer in biotech; we are specifically disallowed from using ML at any level in our work for the above reasons, as well as for potential HIPAA-related data-mining issues.
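To make the "unsupervised" objection concrete, here’s a minimal sketch of human-in-the-loop gating. Every name in it is invented for illustration, not taken from any real system; the point is only that model output is a suggestion routed to a person, never a trigger for action on its own:

```python
# Minimal sketch of human-in-the-loop gating; all names here are
# hypothetical, not from any real deployment.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "abandoned_vehicle" (assumed label set)
    confidence: float  # model score in [0, 1]


def triage(detections: list[Detection], threshold: float = 0.9):
    """Split model output into 'likely' and 'doubtful' piles for a human.

    Both piles still go to a reviewer; the model alone never triggers
    an action, so it can't be a single point of failure.
    """
    likely = [d for d in detections if d.confidence >= threshold]
    doubtful = [d for d in detections if d.confidence < threshold]
    return likely, doubtful
```

Even a gate like this only mitigates the problem; it doesn’t make a stochastic model suitable for regulatory or legal use.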
I don’t know much about jurisprudence, but wouldn’t the neural net be a tool of the person who brought the lawsuit?
Like if you get brought in due to DNA, you don’t have to face the centrifuge that helped extract your DNA from the sample?
You’re ignoring the fact that using such a failure-prone system to initiate legal proceedings against a citizen is ABSOLUTELY going to overload an already overloaded court system. And that’s not even going into the unjust burden it puts on the falsely accused, or the fact that it targets a segment of the population that’s a lot more likely to go “fuck it, I don’t care, how could things possibly get worse” (read: serious depression, PTSD, and other neurodivergences that often correlate with being unhoused). This is by design.
This is an all-around grade-A shit policy. It’s also a policy designed to treat the symptom instead of the cause. It will make the streets around San Jose look a bit nicer, and in doing so it will harm a lot of people.
I think it's a stupid policy, but I don't see how any of this is applicable. If the AI identifies an encampment, it's going to be police who come and scare people off. This isn't like a red-light camera where you get mailed a ticket: there's no address to send a ticket to, and the AI isn't going to be able to identify the individuals occupying a tent.
I don’t think the idea is to bring criminal proceedings against people. Not sure what they do in San Jose, but in the cities I’ve lived in, homeless people are essentially immune to fines or criminal charges because police know they can’t/won’t pay anything. So police force them to move and throw away their belongings if they can’t or don’t take them in time, but they don’t arrest or ticket these people.
I mean, I’m not ignoring those facts; I prefaced by saying I don’t know much about jurisprudence.
Thanks for providing some insight though.
For what it’s worth, I didn’t intend to come off stabby or dismissive.
No problem.
Appreciate you clarifying. Have a nice day, random person.
The service described in the article has nothing to do with courts, regulation, enforcement, or the legal system. It is used by city maintenance to identify elements of public property in need of attention, such as abandoned vehicles and potholes in the asphalt. It is being adapted to identify accumulations of trash and other indicators of illegal camping, which are important to the maintainers of public spaces.
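If that’s accurate, the plumbing is presumably closer to a work-order generator than an enforcement tool. Something like this sketch, where the labels, threshold, and record fields are all my assumptions, not anything from the article or San Jose’s actual system:

```python
# Hypothetical sketch of a maintenance-scanner pipeline; the labels,
# threshold, and record fields are guesses, not San Jose's system.
MAINTENANCE_LABELS = {"pothole", "abandoned_vehicle", "trash_accumulation"}


def to_work_orders(detections: list[dict]) -> list[dict]:
    """Turn raw detector output into work orders for maintenance crews.

    Each detection is assumed to look like:
    {"label": str, "confidence": float, "gps": (lat, lon)}
    """
    orders = []
    for det in detections:
        if det["label"] in MAINTENANCE_LABELS and det["confidence"] >= 0.8:
            orders.append({
                "type": det["label"],
                "location": det["gps"],
                "status": "pending_crew_visit",  # a crew still verifies in person
            })
    return orders
```

On that reading, a detection just queues a site for a crew to visit, which is consistent with the point above that no ticket or charge comes out of the model itself.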