JonEFive

joined 1 year ago
[–] [email protected] 1 points 10 months ago (1 children)

Just getting back around to this.

My main reasoning is simply that authors and artists should be fairly credited and compensated for their work. If I create something and share it on the internet, I don't necessarily want a company to make money on that thing, especially if they're making money to my exclusion.

So while I believe that IP as we know it today is probably not the best way to handle things, I still think creators should have some say over how their works are used and should receive some reasonable share when their works are used for profit. Without creators, those works wouldn't exist in the first place.

Are there other jobs where it would be okay to take a person's services without paying them? What would motivate people to continue providing those services?

[–] [email protected] 1 points 10 months ago (1 children)

Prompting for a source wouldn't satisfy me until I could trust that the AI wasn't hallucinating. After all, if GPT can make up facts about things like legal precedent or well documented events, why would I trust that its citations are legitimate?

And if the suggestion is that the person asking for the information double check the cited sources, maybe that's reasonable to request, but it somewhat defeats the original purpose.

Bing might be doing things differently though, so you might be right in your assessment on that front. I haven't played with their AI yet.

[–] [email protected] 2 points 10 months ago

I tend to agree with your last point, especially because of the way the system has been bastardized over the years. What started out as well intentioned legislation to ensure that authors and artists maintain control over their work has become a contentious and litigious minefield that barely protects creators.

[–] [email protected] 9 points 10 months ago (4 children)

Curious about something, maybe you know since you work at a theater. I seem to remember hearing that a theater has to pay royalties each time it shows a movie and that newer technology can track and report this automatically. Is that how it actually works? And if so, would playing a movie as a test count as a showing?

[–] [email protected] 1 points 10 months ago (3 children)

Your argument poses an interesting thought. Do machines have a right to fair use?

Humans can consume for the sake of enjoyment. Humans can consume without a specific purpose of compiling and delivering that information. Humans can do all this without having a specific goal of monetary gain. Software created by a for-profit privately held company is inherently created to consume data with the explicit purpose of generating monetary value. If that is the specific intent and design then all contributors should be compensated.

Then again, we need look no further than Google (the search engine, not the company) for an example that's closely related to the current situation. Google can host excerpts of data from billions of websites and serve that data up upon request without compensating those site owners in any way. I would argue that Google is different though because it literally cites every single source. A search result isn't useful if we don't know what site the result came from.

And my final thought - are works that AI generates truly transformative? I can see arguments that go either way.

[–] [email protected] 14 points 10 months ago (5 children)

Let me ask you this: when have you ever seen ChatGPT cite its sources and give appropriate credit to the original author?

If I were to just read the NYT and make money by simply summarizing articles and posting those summaries on my own website, without adding anything of my own like commentary and without giving credit to the author, that would rightfully be considered plagiarism.

This is a really interesting conundrum though. I would argue that AI isn't capable of original thought the way that humans are and therefore AI creators must provide due compensation to the authors and artists whose data they used.

AI is only giving back some amalgamation of words and concepts that it has been trained on. You might say that humans do the same, but that isn't exactly true. The human brain is a funny thing. It can forget, it can misremember. It can manipulate. It can exaggerate. It can plan. It can have irrational or emotional responses. AI can't really do those things on its own. It's just mimicking human behavior at best.

Most importantly to me though, AI is not capable of spontaneous thought. It is only capable of providing information that it has been trained on and only when prompted.

[–] [email protected] 2 points 10 months ago (1 children)

It's honestly difficult for me to say because there are so many different ways to train AI. It really depends more on what the trainers configure to be a data point. The volume of files versus the size of a single file matters less than what the model treats as a data point and how those data points are weighted.

Just as a simple example, a data point may be considered a row on a spreadsheet without regard for how that data was split up across files. So ten files with 5 rows each might have the same weight as one file with 50 rows. But there's also a penalty concept in some models, so the trainer can set it so that data that all comes from one file may be penalized. Or the opposite could be true if data coming from the same file is deemed to be more important in some way.
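To put rough numbers on that example (totally made-up toy sketch, not any real training pipeline - the function name and penalty scheme are my own invention):

```python
# Toy sketch: treat each spreadsheet row as one data point regardless of
# how rows are split across files, with an optional per-file penalty so
# one big file can't dominate. Hypothetical numbers throughout.

def weighted_points(files, same_file_penalty=0.0):
    """files: dict of filename -> number of rows.
    Returns the effective weight each file contributes."""
    weights = {}
    for name, rows in files.items():
        # Every row starts with weight 1.0 ...
        weight = float(rows)
        # ... but rows beyond the first in the same file get discounted.
        weight -= same_file_penalty * max(rows - 1, 0)
        weights[name] = max(weight, 0.0)
    return weights

# Ten files with 5 rows each vs one file with 50 rows:
many = weighted_points({f"file{i}": 5 for i in range(10)})
one = weighted_points({"big_file": 50})
print(sum(many.values()), sum(one.values()))  # equal with no penalty: 50.0 50.0

# With a penalty, the single big file counts for less overall:
one_penalized = weighted_points({"big_file": 50}, same_file_penalty=0.5)
print(sum(one_penalized.values()))  # 50 - 0.5*49 = 25.5
```

Flip the sign of the penalty and you get the opposite case, where data clustered in one file is treated as more important.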

In terms of how AIs make their decisions, that can also vary. But generally speaking, if 1000 pieces of data are all similar in some way and one of them is somewhat different from the others, it is less likely that the one-off data will be used. It's much more likely to have an effect if 100 of the 1000 pieces of data share that same information. There's always the possibility of using that 1-in-1000 data point; it's just less likely to have a noticeable effect.

AIs build confidence in responses based on how much a concept is reinforced, so you'd have to know something about the training algorithm to be able to intentionally impact the results.
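The frequency effect is easy to see with a toy sampling model (again, just an analogy with made-up numbers, not how a real model actually picks outputs):

```python
import random

def outlier_rate(data, trials=10_000, seed=0):
    """Toy analogy: the chance a given piece of data influences an
    output scales with how often it appears in the training set."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.choice(data) == "outlier")
    return hits / trials

one_in_1000 = ["majority"] * 999 + ["outlier"]          # 1 of 1000
reinforced = ["majority"] * 900 + ["outlier"] * 100     # 100 of 1000

print(outlier_rate(one_in_1000))  # roughly 0.001
print(outlier_rate(reinforced))   # roughly 0.1
```

The one-off data point is never impossible to surface, it just barely moves the needle compared to information that's been reinforced a hundred times over.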

[–] [email protected] 4 points 10 months ago

It's more about where the host server is located. Server owners in the US can lose protection from lawsuits if they don't actively moderate.

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago)

I'm a technically savvy average consumer. I've just been accepting the enshittification. It feels like every month a different company is raising prices.

I'm about ready to put on an eye patch and fly the Jolly Roger at this point.

[–] [email protected] 1 points 10 months ago (1 children)

Yes, within reason. I'm actually not sure where that line is drawn though. Like whether sending a pre-paid shipping label and asking you to drop it off at a nearby UPS store is enough or if they actually have to have someone pick it up from your home or wherever it was shipped to.

You might already know this, but be mindful that if a company sent you the wrong thing and it wasn't a gift or solicitation (i.e. an error, even a preventable one), you do legally have to give it back if asked. Which is fair IMO. If I'm sending something expensive and fat finger the address, I'd want it back too.

[–] [email protected] 5 points 10 months ago (1 children)

Aye matey! In fact, there are entire communities right here in the Lemmyverse all about sailing the digital seas!

Kind of interesting what can be done without the oversight of corporate overlords and monied interests.

[–] [email protected] 3 points 11 months ago (3 children)

It really depends on what the AI training is looking for. You can potentially poison an AI training model, but you'll likely have to add enough data to be statistically relevant.
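A stripped-down way to see the "statistically relevant" part (a toy majority-vote "model", nothing like a real neural net - the names here are mine):

```python
from collections import Counter

def train_label(dataset):
    """Toy 'model': predicts whichever label is most frequent
    in its training data."""
    return Counter(dataset).most_common(1)[0][0]

# Poisoning a handful of examples out of 1000 changes nothing:
lightly_poisoned = ["cat"] * 995 + ["dog"] * 5
print(train_label(lightly_poisoned))   # still "cat"

# Past the tipping point, the poison rewrites the learned behavior:
heavily_poisoned = ["cat"] * 400 + ["dog"] * 600
print(train_label(heavily_poisoned))   # now "dog"
```

Real models are far less brittle than a majority vote, which is exactly why a poisoning attempt has to inject enough data to shift the statistics rather than just a few bad samples.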
