DR_Hero

joined 1 year ago
[–] [email protected] 8 points 2 months ago

There's a much more accurate stat... and it's disgusting

[–] [email protected] 0 points 2 months ago (2 children)

At least the same company developed both in that case. As soon as a new open-source AI model was released, Elon just slapped it on wholesale and started charging for it

[–] [email protected] 4 points 9 months ago (1 children)

Excuse me, but the fuck is wrong with you?

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

The explanation that makes the most sense, from one of the articles I've read, is that they fired him after he tried to push out one of the board members.

Replacing that board member with an ally would have cemented his control over the board for a time. They might not have felt he was being honest in his motives for the ousting, so it was basically fire him now, or lose the option to fire him in the future.

Edit: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html

[–] [email protected] 1 points 1 year ago

I've definitely experienced this.

I've used ChatGPT to write cover letters based on my resume, among other tasks.

I used to give it data and tell ChatGPT to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.

Weeks later, excited about the tech, I repeat the process. I tell it to "do X with this data". It does fine.

In a completely separate chat, I tell it to "do Y with this data"... and instead it gives me X. I tell it to "do Z with this data", and it once again would really rather just do X with it.

For a while now, I've had to feed it more context and more tailored prompts than I used to.