teawrecks

joined 1 year ago
[–] [email protected] 4 points 6 months ago

For Hunt Showdown specifically, I have tried skipping pre-caching before and the load into a level took so long that I got disconnected from the match. I recommend keeping it enabled for multiplayer games for that reason.

[–] [email protected] 16 points 6 months ago* (last edited 6 months ago)

I have to think that nvidia isn't dumb enough to look around at their competitor's linux support, and look at the reliance on linux for compute in datacenters, and look at their pile of fancy new AI chips that they're going to try to sell to data centers, and think to themselves, "ahh, I know the best move, poach the best nvidia linux dev in the world so that ~2% of gamers are forced to use our proprietary driver!"

My guess is they're doing this to make money on AI, they couldn't care less about linux gaming. If we get an open source driver out of the deal, I won't complain, but I bet the consumer GPU driver has little to do with why they hired him.

[–] [email protected] 14 points 6 months ago

"Thanks for the advice, but this way is working out for me. If prefers I do it differently, they'll let me know."

I feel like this is just the right amount of curtness.

[–] [email protected] 6 points 6 months ago

And if we count protein research as nanotech, afaik folding research is having its heyday.

[–] [email protected] 4 points 6 months ago

This is business as usual for Rockstar development. Historically they wait until right after they ship, though.

[–] [email protected] 2 points 6 months ago (1 children)

Why is that? Does the motherboard effectively just not have enough inputs for all the disks, so that's why you need dedicated hardware that handles some kind of raid configuration, and in the end the motherboard just sees it all as one drive? I never really understood what SCSI was for. How do the drives connect, SATA/PATA/something else?

[–] [email protected] 1 points 6 months ago

Yeah, I suppose you're right. I incorrectly believed that a defining characteristic was the generation of natural language, but that's just one feature it's used for. TIL.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (2 children)

Oh I see, you're saying the training set is exclusively with yes/no answers. That's called a classifier, not an LLM. But yeah, you might be able to make a reasonable "does this input and this output create a jailbreak for this set of instructions" classifier.

Edit: found this interesting relevant article

[–] [email protected] 1 points 6 months ago (4 children)

Because it's probibalistic and in this example the user's input has been specifically crafted as the best possible jailbreak to get the output we want.

Unless we have actually appended a non-LLM filter at the end to only allow yes/no through, the possibility for it to output something other than yes/no, even though it was explicitly instructed to, is always there. Just like how in the Gab example it was told in many different ways to never repeat the instructions, it still did.

[–] [email protected] 1 points 6 months ago (6 children)

Ah, TIL about instruction fine-tuning. Thanks, interesting thread.

Still, as I understand it, if the model has seen an input, then it always has a non-zero chance of reproducing it in the output.

[–] [email protected] 1 points 6 months ago (8 children)

Any input to the 2nd LLM is a prompt, so if it sees the user input, then it affects the probabilities of the output.

There's no such thing as "training an AI to follow instructions". The output is just a probibalistic function of the input. This is why a jailbreak is always possible, the probability of getting it to output something that was given as input is never 0.

[–] [email protected] 3 points 6 months ago

Yeah, I've had a cifs share in my fstab before, mounting it to a folder in my home, and I took the PC off-site for a lan party, and just trying to ls my home dir took forever for some reason. Commenting it out and restarting fixed it all.

Good luck with the new install!

view more: ‹ prev next ›