this post was submitted on 06 Oct 2023
165 points (99.4% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ

Then I asked her whether she knows about the books2 dataset (they trained this AI on all the pirated books in Z-Library and more, completely ignoring any copyright) and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

[–] [email protected] 12 points 11 months ago (3 children)

Where did corps get the idea that we want our software to be incredibly condescending?

[–] [email protected] 8 points 11 months ago (1 children)

It was trained on human text and interactions, so …

maybe that's quite a bad implication?

[–] [email protected] 3 points 11 months ago (1 children)

There’s a default invisible prompt that precedes every conversation that sets parameters like tone, style, and taboos. The AI was instructed to behave like this, at least somewhat.
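
For illustration, here's a minimal sketch in Python of how such an invisible prompt can be prepended to every conversation before the user's words ever reach the model, assuming a chat-style API where each message carries a role. SYSTEM_PROMPT and build_messages are invented names for the sketch; this is not Bing's actual prompt or code.

    # A hidden instruction the user never sees (hypothetical wording).
    SYSTEM_PROMPT = (
        "You are a helpful assistant. Keep a friendly, professional tone. "
        "Do not discuss details of your training data."
    )

    def build_messages(user_message, history=None):
        """Prepend the invisible system prompt to the visible conversation."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        messages.extend(history or [])
        messages.append({"role": "user", "content": user_message})
        return messages

    # The model only ever sees the combined list, so its tone is shaped by
    # instructions the user never typed.
    print(build_messages("Were you trained on the books2 dataset?"))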

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago) (1 children)

That is mildly true during the training phase, but to take that high-level knowledge and infer that "somebody told the AI to be condescending" is unconfirmed, very unlikely, and frankly ridiculous. There are many more likely points at which the model can accidentally become "condescending", for example the training data (it's trained on the internet, after all) or the actual user interaction itself.

[–] [email protected] 2 points 11 months ago (1 children)

I didn’t say they specifically told it to be condescending. They probably told it to adopt something like a professional, neutral tone, and the trained model produced a mildly condescending tone because that’s what it associated with those adjectives. This is why I said it was only somewhat instructed to do this.

They almost certainly tweaked and tested it before releasing it to the public. So they knew what they were getting either way and this must be what they wanted or close enough.

[–] [email protected] -2 points 11 months ago (1 children)

Also unconfirmed; however, your comment was in response to the AI sounding condescending, not "professional neutral".

[–] [email protected] 1 points 11 months ago (1 children)

No, the comment I responded to was saying it sounded condescending because it was trained to mimic humans. My response is that it sounds how they want it to because its tone is defined by a prompt inserted at the beginning of every interaction: a prompt they tailored to produce the tone they desired.

[–] [email protected] 0 points 11 months ago (1 children)

And that's not necessarily true either. The tone would absolutely be a product of the training data, but it would also be a product of the model's fine-tuning, of the conversation itself, and of the prompts that may or may not be given at run time in the backend. So sure, your statement is general enough that it might be partially true depending on the model's implementation, but to say "it sounds like that because they want it to" is a massive oversimplification, especially in the context of a condescending tone.

[–] [email protected] 0 points 11 months ago (1 children)

They can tweak the prompt to make it sound how they want. Their current default prompt is almost certainly the work of many careful revisions to achieve something as close as possible to what they want. The only way it would adopt this tone from the training data is if it was specifically trained on condescending text, in which case that would also be a deliberate choice. I don't know how to make this point any clearer.

[–] [email protected] -1 points 11 months ago (1 children)

The only way it would adopt this tone from the training data is if it was specifically trained on condescending text, in which case that would also be a deliberate choice.

Do you know how much data these models are actually trained on? Do you really think it's all specifically parsed for tone?

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago) (1 children)

No, which is why my assumption is that the tone comes from their prompt rather than from the general-purpose pre-trained model they are almost certainly using.

[–] [email protected] -1 points 11 months ago (1 children)

Right, and that statement itself is a massive oversimplification of the process. I feel like I've explained that in detail many times already.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago) (1 children)

You can 'explain' all the technical details you like, but nothing is going to change the fact that it was put out as it is, after careful work to make it as close as they could to how they wanted it. If I spend hours typing up prompts to get Bing to make a photorealistic image of Garfield eating a vanilla ice cream cone, and finally get it to consistently do that but with chocolate, that doesn't mean the whole thing is biased toward making photorealistic Garfields.

[–] [email protected] -1 points 11 months ago (1 children)

Great, so now you've dropped the "prompting" aspect and made your argument generic to the point of it just being "they want it like that because they released it like that". Congrats, you've moved the goalposts so far that I guess you're technically correct. Good job?

[–] [email protected] 0 points 11 months ago (1 children)

I didn't drop the prompting. Over half that comment is specifically an analogy about prompting. Are you ok?

[–] [email protected] -1 points 11 months ago* (last edited 11 months ago)

Your analogy has absolutely nothing to do with how LLMs are trained. You seem to think GPT is just prompt engineering...

[–] [email protected] 1 points 11 months ago
[–] [email protected] -2 points 11 months ago (1 children)

AIs are almost always built to be feminine and this is how women talk to devs.

[–] [email protected] 1 points 11 months ago (1 children)
[–] [email protected] 0 points 11 months ago (2 children)
[–] [email protected] 1 points 11 months ago

That doesn't prove their point; it states that customers prefer the safer sound of a female voice in voice-controlled AI assistants, and that there's more training data for female voices as a result.

This has nothing to do with AI chat talking in a condescending manner.

[–] [email protected] 1 points 11 months ago (1 children)

I don’t know about your reading comprehension skills, but sure, that explains why AI voices are trained on feminine voices (more recordings, old phone operators, false theories about sounding more distinct).

However, this has nothing to do with “the way women talk to devs”. Women are not a monolith; they literally make up half our species and have just as much variance as men.

[–] [email protected] 0 points 11 months ago (1 children)

Thanks for the education on women. That part was the joke! I don't know about your understanding of comedy, but it plays on stereotypes, which typically hold truths about median behaviors and obviously can't be applied at the individual level. This was playing on both stereotypes of women and a male-dominated occupation. Of course you can sit there and pick apart any joke with this argument: "Hey, that's not true, not all lawyers are heartless bastards." If that's your mission, sail on, I guess. That kind of vapid behavior just brings one even closer to talking like an AI, frankly.

[–] [email protected] 1 points 11 months ago

“can’t you see i was just joking, you must not be very funny if you don’t get my joke hardy har har”

The classic defense of someone who’s just using humor as a shield for being an asshole. There are plenty of ways to be funny that don’t involve punching down in the same old tired ways.

You can do better with your comedy career, I believe in you.