this post was submitted on 05 Nov 2023
166 points (93.7% liked)

World News

[–] [email protected] 38 points 1 year ago (2 children)

so, like... the ChatGPT model isn't exactly able to do anything it wasn't trained to do. And it's not able to get information from sources it's not programmed to get. So.

whoever set it up... they're the ones responsible.

[–] [email protected] 29 points 1 year ago (2 children)

Almost like calling advanced algorithms "AI" is a cover

[–] [email protected] 14 points 1 year ago (1 children)

It's autocomplete with a fancy title.

[–] [email protected] 7 points 1 year ago

I prefer to call it Autoassume

[–] [email protected] 8 points 1 year ago (2 children)

naw. It's just a different definition than what most people know/use.

Pop culture sci-fi introduced the concept of general AI: Data (Star Trek), R2-D2 etc. (Star Wars), the T-800 (Terminator), Kryten (Red Dwarf). But in the scientific field there's a concept of narrow AI, which would be more like the idiot-savant version of the sentient robots: they can't do anything outside of their coding, but their code is complicated enough to be very good at what it does.

Like, ChatGPT doesn't know what the words mean, but it's very good at stringing words together to create natural-seeming language. What whoever set this up has done is use the ChatGPT language model to create an AI that talks and sounds like a stock broker, trained to recognize patterns in data to generate stock tips.

But like... if it's sourcing data from inside sources... that's on whoever included said sources.

[–] [email protected] 8 points 1 year ago (1 children)

That was my question. Insider trading necessarily requires insider knowledge. So where'd it come from or was it just a sensationalist title?

[–] [email protected] 2 points 1 year ago

It's probably a sensationalist title, but... if they were smart, they'd source it from the people crafting the prompts; people will tell an AI even more things than they'd tell their priest at confession.

[–] [email protected] 3 points 1 year ago

Kryten is on your list. Rad.

[–] [email protected] 1 points 1 year ago (1 children)

it was never trained to do insider trading, or any kind of trading. it was trained to predict the most likely next word given a bunch of previous words as input, then trained to not do that when the next word would be racist/destructive/etc. it turns out that's super versatile, and can be used to approximate a lot of other functions, like trading on the stock market.
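The "predict the most likely next word given a bunch of previous words" objective can be sketched with a toy bigram counter. To be clear, this is nothing like GPT's actual transformer architecture, and the tiny corpus here is made up for illustration; it only shows the core idea of learning next-word statistics from training text:

```python
from collections import Counter, defaultdict

# Made-up training text, purely for illustration.
corpus = "the market went up the market went down the market went up".split()

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("market"))  # "went": the only word that ever follows it
print(predict_next("went"))    # "up": seen twice vs. "down" once
```

Scaled up from pairs of words to long contexts, and from counting to learned neural weights, this same "guess the next token" setup is the versatile base the comment describes; it was never a trading program.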

as for sources of information, kinda the big problem with it is how unselective openai were when picking training data. they just loaded all of reddit and wikipedia into it, then dumped a ton of other random shit in there as well.

what i'm getting at is chatgpt is really powerful (duh) but it wasn't created with nearly the intentionality most people think it was, and it doesn't have a lot of the power that people think it does.

[–] [email protected] 1 points 1 year ago

So ChatGPT 3/4 is one set of training data.

This particular model uses the ChatGPT algorithm (probably 4), but with its own set of training data. Who knows where they sourced material. As for the knowledge sources being used to generate stock tips... who knows where that comes from, but it's almost certainly not Reddit.