this post was submitted on 22 Dec 2024
1360 points (97.4% liked)

Technology

It's all made from our data, anyway, so it should be ours to use as we want

(page 2) 50 comments
[–] [email protected] 50 points 17 hours ago (1 children)

It's not punishment. LLMs do not belong to them; they belong to all of humanity. Tear down the enclosing fences.

This is our common heritage, not OpenAI's private property

[–] [email protected] 1 points 7 hours ago

It doesn't matter anyway; we still need the big companies to bankroll AI, so it effectively belongs to them whatever we do.

Hopefully at some point people can get the processor requirements to something sane and AI development opens up to us all.

[–] [email protected] 10 points 13 hours ago

Another clown dick article by someone who knows fuck all about ai

[–] [email protected] 60 points 19 hours ago (5 children)

A similar argument can be made about nationalizing corporations which break various laws, betray public trust, etc etc.

I'm not commenting on the virtues of such an approach, but I think it is fair to say that it is unrealistic, especially for countries like the US which fetishize profit at any cost.

[–] [email protected] 8 points 17 hours ago

Yes, mining companies should all be nationalised for digging up the country's ground and putting carbon in the country's air.

load more comments (4 replies)
[–] [email protected] 80 points 20 hours ago (5 children)

So banks will be public domain when they're bailed out with taxpayer funds, too, right?

[–] [email protected] 56 points 20 hours ago (1 children)

They should be, but currently it depends on the type of bailout, I suppose.

For instance, if a bank completely fails and goes under, the FDIC usually is named Receiver of the bank's assets, and now effectively owns the bank.

[–] [email protected] 7 points 19 hours ago (1 children)

At the same time, if a bank goes under, that means they owe more than they own, so "ownership" of that entity is basically worthless. In those cases, a bailout of the customers does nothing for the owners, because the owners still get wiped out.

The GM bailout in 2009 also involved wiping out all the shareholders, the government taking ownership of the new company, and the government spinning off the newly issued stock.

The AIG bailout required the company to issue new stock, diluting existing owners down to 20% of the company while the government owned the other 80%, and the government made a big profit when it exited that position and sold the stock off to the public.

So it's not super unusual: the government can take ownership of companies as a condition of a bailout. What we generally don't want is the government owning a company long term, because there's a conflict of interest between its role as regulator and its interest as a shareholder.

load more comments (1 replies)
[–] [email protected] 10 points 19 hours ago* (last edited 19 hours ago) (1 children)

Public domain wouldn't be the right term for banks being publicly owned, at least in the normal copyright sense of "public domain." You can copy text and data; you can't copy a company with unique customers and physical property.

load more comments (1 replies)
load more comments (3 replies)
[–] [email protected] 22 points 16 hours ago* (last edited 4 hours ago) (1 children)

"Given they were trained on our data, it makes sense that it should be public commons – that way we all benefit from the processing of our data"

I wonder how many people besides the author of this article are upset solely about the profit-from-copyright-infringement aspect of automated plagiarism and bullshit generation, and thus would be satisfied by the models being made more widely available.

The inherent plagiarism aspect of LLMs seems far more offensive to me than the copyright infringement, but both of those problems pale in comparison to the effects on humanity of masses of people relying on bullshit generators with outputs that are convincingly-plausible-yet-totally-wrong (and/or subtly wrong) far more often than anyone notices.

I liked the author's earlier very-unlikely-to-be-met-demand activism last year better:

I just sent @OpenAI a cease and desist demanding they delete their GPT 3.5 and GPT 4 models in their entirety and remove all of my personal data from their training data sets before re-training in order to prevent #ChatGPT telling people I am dead.

...which at least yielded the amusingly misleading headline OpenAI ordered to delete ChatGPT over false death claims (it's technically true - a court didn't order it, but a guy who goes by the name "That One Privacy Guy" while blogging on LinkedIn did).

load more comments (1 replies)
[–] [email protected] 122 points 22 hours ago* (last edited 19 hours ago) (23 children)

It won't really do anything though. The model itself is whatever. The training tools, data and resulting generations of weights are where the meat is. Unless you can prove they are using unlicensed data from those three pieces, open sourcing it is kind of moot.

What we need is legislation to stop it from happening in perpetuity. Maybe just ONE civil case win to make them think twice about training on unlicensed data, but they'll drag that out for years until people go broke fighting, or stop giving a shit.

They pulled a very public and out in the open data heist and got away with it. Stopping it from continuously happening is the only way to win here.

[–] [email protected] 2 points 9 hours ago

Just a little note about the word "model", in the article it's used in a way that actually includes the weights, and I think this is the usual way of using it! If you change the weights, you get a different model, though the two models will have the same structure.
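To make that terminology concrete, here's a toy sketch (pure Python, hypothetical numbers, nothing to do with any real LLM): the "structure" is the shape of the computation, the "weights" are the numbers plugged into it, and swapping the weights yields a different model with the same structure.

```python
def make_model(weights):
    """Return a tiny one-layer 'model': y = sum(w * x)."""
    def model(inputs):
        return sum(w * x for w, x in zip(weights, inputs))
    return model

model_a = make_model([0.5, -1.0, 2.0])   # one set of weights
model_b = make_model([0.1, 0.1, 0.1])    # same structure, different weights

# Identical structure and input, different behavior: two different models.
print(model_a([1, 2, 3]))  # 4.5
print(model_b([1, 2, 3]))  # ~0.6
```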

Anyway, you make good points!

[–] [email protected] 32 points 20 hours ago (1 children)

They pulled a very pubic and out in the open data heist

Oh no, not the pubes! Get those curlies outta here!

[–] [email protected] 10 points 19 hours ago

Best correction ever. Fixed. ♥️

[–] [email protected] 25 points 20 hours ago (15 children)

Legislation that prohibits publicly-viewable information from being analyzed without permission from the copyright holder would have some pretty dramatic and dire unintended consequences.

load more comments (15 replies)
[–] [email protected] 5 points 19 hours ago

It's already illegal in some form, via piracy of the works and regurgitation of protected data.

The issue is mega Corp with many rich investors vs everyone else. If this were some university student their life would probably be ruined like with what happened to Aaron Swartz.

The US justice system is different for different people.

load more comments (19 replies)
[–] [email protected] 3 points 12 hours ago* (last edited 12 hours ago) (1 children)

To speak of AI models being "made public domain" is to presuppose that the AI models in question are covered by some branch of intellectual property. Has it been established whether AI models (even those trained on properly licensed content) even are covered by some branch of intellectual property in any particular jurisdiction(s)? Or maybe by "public domain" the author means that they should be required to publish the weights and also that they shouldn't get any trade secret protections related to those weights?

load more comments (1 replies)
[–] [email protected] 38 points 22 hours ago (4 children)

It could also contain non-public-domain data, and you can't declare someone else's intellectual property public domain just like that; otherwise a malicious actor could just train a model on a bunch of misappropriated data, get caught (intentionally or not), and then force all that data into the public domain.

Laws are never simple.

[–] [email protected] 17 points 22 hours ago (8 children)

Forcing a bunch of neural weights into the public domain doesn't make the data they were trained on also public domain, in fact it doesn't even reveal what they were trained on.

load more comments (8 replies)
[–] [email protected] 13 points 22 hours ago (13 children)

So what you're saying is that there's no way to make it legal and it simply needs to be deleted entirely.

I agree.

load more comments (13 replies)
[–] [email protected] 7 points 20 hours ago

It wouldn't contain any of that data, though. That's the thing with LLMs: once they're trained on data, the data is gone, absorbed into the series of weights in the model somewhere. If it ingested something private like your tax data, it couldn't re-create your tax data on command; that data is now gone. But if it's seen enough private tax data, it could produce something that looked a lot like a tax return to an untrained eye, while a tax accountant would easily see flaws in it.
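As a hedged toy illustration of that point (a single running "weight" nudged toward each value, not actual LLM training, and made-up numbers): many private values get folded into one number, and none of them can be read back out afterwards.

```python
weight = 0.0
private_values = [52000, 48150, 51375, 49900]  # hypothetical "tax data"

for v in private_values:
    weight += 0.1 * (v - weight)   # gradient-like nudge toward each value

# The final weight sits somewhere shaped by the data, but no individual
# input value can be reconstructed from it.
print(weight)
```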

load more comments (1 replies)
[–] [email protected] 24 points 20 hours ago (1 children)

Imaginary property has always been a tricky concept, but the law always ends up just protecting the large corporations at the expense of the people who actually create things. I assume the end result here will be large corporations getting royalties from AI model usage or measures put in place to prevent generating content infringing on their imaginary properties and everyone else can get fucked.

[–] [email protected] 13 points 20 hours ago (1 children)

It's like what happened with Spotify. The artists and the labels were unhappy with the copyright infringement of music happening with Napster, Limewire, Kazaa, etc. They wanted the music model to be the same "buy an album from a record store" model that they knew and had worked for decades. But, users liked digital music and not having to buy a whole album for just one song, etc.

Spotify's solution was easy: cut the record labels in. Let them invest and then any profits Spotify generated were shared with them. This made the record labels happy because they got money from their investment, even though their "buy an album" business model was now gone. It was ok for big artists because they had the power to negotiate with the labels and get something out of the deal. But, it absolutely screwed the small artists because now Spotify gives them essentially nothing.

I just hope that the law that nothing created by an LLM is copyrightable proves to be enough of a speed bump to slow things down.

[–] [email protected] 6 points 19 hours ago (1 children)

Bandcamp still runs on that model, though, and quite well.

[–] [email protected] 8 points 19 hours ago (1 children)

It's also one of the few places that have lossless audio files available for download. I'm a big fan of Bandcamp. I like having all my music local.

load more comments (1 replies)
[–] [email protected] 8 points 21 hours ago* (last edited 20 hours ago) (3 children)

The environmental cost of training is a bit of a meme. The details are scattered around, but basically, Alibaba trained a roughly GPT-4-level model on a relatively small number of GPUs... probably on par with a steel mill running for a while, a drop in the bucket compared to other industrial processes. OpenAI is extremely inefficient, probably because they don't have much pressure to optimize GPU usage.

Inference cost is more of a concern with crazy stuff like o3, but this could change dramatically if (hopefully when) bitnet models come to fruition.

Still, I 100% agree with this. Closed LLM weights should be public domain, as many good models already are.

load more comments (3 replies)
[–] [email protected] 5 points 19 hours ago (4 children)

Delete them. Wipe their databases. Make the companies start from scratch with new, ethically acquired training data.

load more comments (4 replies)
load more comments
view more: ‹ prev next ›