Ashelyn

joined 1 year ago
[–] [email protected] 1 points 2 weeks ago

Cheaper to design one seat and use it for both spots

[–] [email protected] 3 points 2 weeks ago (2 children)

I spent like 40 hours on XC2 and uh, idk, I really liked the world design but wasn't a fan of the effectively gacha mechanics to unlock new fighters. The story seemed to have a really slow start (which I'm not necessarily against), but the combat wasn't my thing unfortunately. The Japanese voice acting is definitely a lot better than the English, and was worth waiting on the download for, even though I didn't end up playing that far in.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.

Don't get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn't know where to start. My concern is with the cultural issues and expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it's possible to shoehorn through.

Addendum: local models can help with this issue, as they're on one's own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

How about: Popularizing the idea of the wall in the first place, going mask-off calling illegal immigrants "murderers and rapists", the "Muslim Ban" on air travel, moving the US embassy to Jerusalem, employing white nationalists as staffers, packing the Supreme Court with extreme conservative justices, giving permanent tax cuts to the rich, expanding the presence of immigrant concentration camps, cozying up to foreign dictators, stating behind closed doors that he wanted generals like Adolf Hitler's when his own generals refused to nuke North Korea and blame it on someone else, egging on a far-right insurrection attempt, directly pursuing strikes and assassination attempts against Middle Eastern military generals and diplomats, ending the Iran nuclear deal, calling climate change a Chinese hoax, calling Covid the "China virus", spreading vaccine disinformation until one was developed before the end of his term, trying to start a trade war with China, discrediting his chief medical advisor on factual statements about Covid, saying Black Lives Matter protestors were "burning down cities", wanting to designate Antifa as a terrorist organization, declaring "far left radical lunatics" part of his "enemy from within", being an avowed friend of Epstein, sexually assaulting over a dozen women and underage girls, being a generally abusive sleazebag, also funding a genocide (Israel has always been ethnically displacing Palestinians), also building the wall, also not implementing healthcare reform (and being against what we have), also not protecting abortion rights (+ setting up the conditions that led to their erosion; see Supreme Court point above), and also denigrating anti-genocide protestors (but not as harshly, since he wasn't the one in charge when it happened).

I guess he's not a cop though, so there's that.

(minor edits made for grammar/spelling)

[–] [email protected] 30 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

On the whole, maybe LLMs do make these subjects more accessible in a way that's a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

The problem is that LLMs 'hallucinate' details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it's essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they're intelligent. They're very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.

[–] [email protected] 9 points 3 weeks ago (14 children)

Wow, it's almost as if someone being bad can be for multiple reasons!

[–] [email protected] 3 points 3 weeks ago

I always found the idea of stable Boltzmann brains fascinating: that in an infinite enough universe, there must exist self-sustaining minds that function on an entirely circumstantial set of rules and logic, based on whatever the quantum soup spit up.

[–] [email protected] 7 points 3 weeks ago

It's also hard to argue that while claiming your god is moral, which is why creationists usually pin the task of planting fossils on Satan.

[–] [email protected] 7 points 3 weeks ago (1 children)

I always found it funny how they'll sometimes try to justify their claims scientifically to give them an air of legitimacy. If god created the stars close to one another and expanded them to fill the sky over a single day, the skies would be dark for billions of years while the light traveled here. A YEC could easily say "oh well, god put the light there to make the stars look like they've been in the sky for a long time," but very often they just don't have an answer because they didn't think of one. Unfortunately, there's almost nothing that will stop them from doubling down on their beliefs and just becoming more prepared for the next person they talk to.

[–] [email protected] 5 points 3 weeks ago

Ideally, I agree wholeheartedly. American gun culture multiplies the damage of every other issue we have by a lot.

[–] [email protected] 14 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

One or more parents in denial that there's anything wrong with their kids and/or that they need to take gun storage seriously? That's the first thing that comes to mind, and it's not uncommon in the US. Especially since a lot of gun rhetoric revolves around self defense in an emergency/home invasion, not having at least one gun readily available defeats the main purpose in their minds.

edit: meant to respond to [email protected]

[–] [email protected] 4 points 3 weeks ago (3 children)

90 days to cycle private tokens/keys?
