Trantarius

joined 8 months ago
[–] [email protected] 1 points 1 day ago (1 children)

You have that backwards. Roko's basilisk would punish anyone who didn't help create it.

[–] [email protected] 42 points 5 days ago (5 children)

I kind of assumed that it's some kind of brain-scanning tech that can extract meaning directly from the language processing part of the brain, and it just needs some calibration for each language. If two random ships can synchronize a communication frequency and video format, they can probably also have some standard brain-scan info dump, so the scan could be done by the speaker.

[–] [email protected] 5 points 1 week ago (1 children)

You are misrepresenting a lot of stuff here.

> it's behavior is unpredictable

This entirely depends on the quality of the AI and the task at hand. A well-made AI can be relatively predictable. However, most tasks that AI excels at are tasks which themselves do not have a predictable solution. For instance, handwriting recognition can be solved by a neural network with better-than-human accuracy. That task does not have a perfect solution, and there is no single ideal answer for each possible input (one person's 'a' could look exactly the same as another's 'o'). The same can be said for almost all games, especially those involving a human player.

> and therefore cannot be tested

Unpredictable things can be tested. That's pretty much what the entire field of statistics and probability is about. Also, testability is a fundamental requirement for any kind of machine learning. It isn't just a good practice kind of thing; if you can't test your model, you don't even have a model in the first place. The whole point is to create many candidate models and test them to find the best one.
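To make that concrete, here is a minimal sketch of the "create candidates and test them" idea. All names here are hypothetical: two toy "models" (any callable works) are scored on a held-out test set, and even the purely random one gets a measurable, stable accuracy.

```python
import random

random.seed(0)

# Hypothetical labeled test set: input, true label (parity of the input).
test_set = [(x, x % 2) for x in range(1000)]

def model_a(x):
    return x % 2                  # a perfect candidate

def model_b(x):
    return random.randint(0, 1)   # a guessing candidate

def accuracy(model, data):
    """Fraction of test inputs the model labels correctly."""
    correct = sum(1 for x, y in data if model(x) == y)
    return correct / len(data)

# Even though model_b is unpredictable, its accuracy is testable and
# converges near 0.5 on a large test set; selection picks model_a.
best = max([model_a, model_b], key=lambda m: accuracy(m, test_set))
```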

> It would cheat and find ways to know things about the game state that it's not supposed to know

A neural network only knows what you tell it. If you don't tell it where the player is, it's not going to magically deduce it from nothing. Also, its output has to be interpreted to even be used. The raw output is a vector of numbers. How this is transformed into usable actions is entirely up to the developer. If that transformation allows violating the rules, that's the developer's fault, not the network's. The same can be said of human input; it is the developer's responsibility to transform that into permissible actions in game.
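A minimal sketch of that interpretation step, with purely illustrative names: the network's raw output is just one score per action, and the developer's mapping can simply refuse to select anything the rules don't currently allow.

```python
# Hypothetical action set; the raw network output is one score per action.
ACTIONS = ["up", "down", "left", "right", "shoot"]

def choose_action(raw_output, legal_actions):
    """Pick the highest-scoring action among those the rules permit."""
    legal = [(score, act) for score, act in zip(raw_output, ACTIONS)
             if act in legal_actions]
    return max(legal)[1]

# Even if "shoot" scores highest, it is never chosen while it is illegal
# (say, the bot is reloading) -- the developer's mapping enforces that.
scores = [0.1, 0.2, 0.05, 0.3, 0.9]
move = choose_action(scores, legal_actions={"up", "down", "left", "right"})
# move == "right"
```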

> it would hide in a corner as far away from the player as possible because it's parameters is to avoid death

That is possible. Which is why you should make a performance metric that reflects what you actually want it to try to do. This is a very common issue and is just part of the process of making an AI. It is not an insurmountable problem.
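As a sketch of what "a metric that reflects what you actually want" means, here is a hypothetical fitness function (all names and weights are illustrative): it still rewards survival, but it also penalizes hiding far from the player and rewards engagement, so corner-camping stops being the winning strategy.

```python
# Hypothetical fitness metric: survival alone is not the goal, so hiding
# far away no longer maximizes the score. Weights are illustrative.
def fitness(survived_seconds, avg_distance_to_player, damage_dealt):
    return (1.0 * survived_seconds
            - 0.5 * avg_distance_to_player
            + 2.0 * damage_dealt)

# A bot that hides (survives long, stays far away, deals nothing)...
camper = fitness(survived_seconds=60, avg_distance_to_player=100, damage_dealt=0)
# ...scores worse than one that engages and dies sooner.
fighter = fitness(survived_seconds=40, avg_distance_to_player=10, damage_dealt=25)
```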

Neural networks have been used to play countless games before. It's probably one of the most studied use cases simply because it is so easy to do.

[–] [email protected] 2 points 1 week ago (1 children)

That's not how copyright works (at least not in the US). When a corporation creates a copyrighted work (by way of paying the person(s) who actually made it), the duration is 120 years after creation or 95 years after publication, whichever expires first. The lifetime of any employee is not taken into account. When a copyright is held by an individual, it lasts until 70 years after that person dies. You cannot swap out that person for someone else, even if the owner of the copyright changes.
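The two rules above reduce to simple arithmetic on years (this is a sketch of the durations as stated, ignoring details like terms running to the end of the calendar year; for a corporate work for hire, US law takes whichever of the two terms expires first):

```python
# Corporate (work-for-hire) term: 120 years from creation or 95 years
# from publication, whichever expires first.
def corporate_term_end(created, published):
    return min(created + 120, published + 95)

# Individual author's term: life of the author plus 70 years.
def individual_term_end(author_death_year):
    return author_death_year + 70

# A corporate work created in 1970 and published in 1980 expires in 2075
# (1980 + 95 beats 1970 + 120 = 2090).
corporate = corporate_term_end(created=1970, published=1980)
# A personal work whose author died in 1980 expires in 2050.
personal = individual_term_end(1980)
```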

You are probably thinking of a method used to make private agreements last basically forever. A private contract technically isn't allowed to last forever; there has to be some point of expiration. To make a contract last effectively forever anyway, the parties pick a condition that probably won't occur for a ridiculously long time, such as the death of the last living descendant of a named monarch, a so-called "royal lives clause" (I assume royalty is used because the royal family keeps good genealogy records). If a currently living person is required, they might pick some infant relative to make it last as long as possible.

[–] [email protected] 4 points 1 month ago

I'm pretty sure he said "the rules were that you were going to fact check, this isn't fact checking," or something to that effect. He was accusing the moderators of being argumentative.

[–] [email protected] -2 points 1 month ago (2 children)

AI is actually deterministic; a random input is usually included so that you can get multiple outputs for generative tasks. And anyway, you could just save the "random" output when you get a good one.
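A minimal sketch of that point, using Python's stdlib PRNG as a stand-in for a model's noise input (the function name is hypothetical): the "randomness" is an explicit seed, and re-using the seed reproduces the exact same output.

```python
import random

# The "random" part of a generative pipeline is an explicit input (a seed).
# Given the same seed, the whole computation is deterministic.
def generate(seed, n=8):
    rng = random.Random(seed)   # deterministic PRNG for this seed
    return [rng.random() for _ in range(n)]

first = generate(seed=1234)
replay = generate(seed=1234)
# first == replay: the same seed reproduces the same "random" output.
```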

[–] [email protected] 7 points 1 month ago (1 children)

I think "making history" has just become one of those phrases media uses all the time now. Kind of like how any dispute is now "slamming" someone, apparently. Or how anyone you think is wrong is "unhinged".

[–] [email protected] 12 points 1 month ago

It already was. The Ohio Supreme Court upheld almost all of the phrasing.

[–] [email protected] 2 points 1 month ago

Do you have a source for this? This sounds like fine-tuning a model, which doesn't prevent data from the original training set from influencing the output. The method you described would only work if the AI were trained from scratch on only images of Iron Man and cowboy hats, and I don't think that's how any of these models work.

[–] [email protected] 5 points 1 month ago (2 children)

Other than citing the entire training data set, how would this be possible?

[–] [email protected] 10 points 2 months ago

Embed the image using markdown: `![some text](image URL)`

[–] [email protected] 5 points 2 months ago (2 children)

When does that even happen? If you have nano installed, wouldn't it work too?
