OmnipotentEntity

joined 1 year ago
[–] [email protected] 1 points 1 day ago

Which is why the slang for diamonds is "ice." They feel quite cold when you touch them because they have such high thermal conductivity.

[–] [email protected] 23 points 3 weeks ago

LLMs are bad for the uses they've recently been pushed toward, yes. But this is legitimately a very good use of them. This is natural language processing, within a narrow scope, with a specific intention. This is exactly what they can be good at. Even if it does have a high false negative rate, that's still thousands and thousands of true positive cases that were addressed quickly and cheaply, and that a human auditor no longer needs to touch.

[–] [email protected] 1 points 3 weeks ago

"You should willing expose yourself to danger to protect the profits and business models of corporations who are attempting to monetize your attention and personal information."

I really don't think I'd lose any sleep if suddenly YouTube, Facebook, etc, became unsustainable. I remember what the Internet was like before every dumbass MBA decided to try to wring as much money as possible out of it, and I preferred it that way.

[–] [email protected] 12 points 1 month ago (2 children)

Ad block is the number one thing you can do on the Internet to reduce your exposure to exploits, phishing, etc. The US government recommends the use of ad block specifically for this reason. Using ad block is basic internet security hygiene.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

An environmental Posadist. Not a stance I've often seen. IMO, if nothing came out of Deepwater Horizon, there's no oil accident big enough to matter.

Transocean received an early partial insurance settlement for total loss of the Deepwater Horizon of US$401 million about 5 May 2010.[60] Financial analysts noted that the insurance recovery was likely to be more than the value of the rig (although not necessarily its replacement value) and any liabilities – the latter estimated at as much as US$200 million.

[–] [email protected] 8 points 1 month ago (1 children)

If only there were other things that a person could do outside of voting once every four years to participate in the political process.

[–] [email protected] 37 points 1 month ago (5 children)

Hey, look at that. It's the inevitable consequence of the game theory of first-past-the-post voting. Voting system reform is my #1 issue, and if you actually care about the fact that "99% of voters" are locked into voting for someone they dislike to avert disaster every four years, it should be yours as well.

There is no meaningful future for third parties until and unless this occurs. IRV is a good first step, but Score voting is better. Multimember districts are also important. Getting rid of the electoral college is a no-brainer.
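For anyone who hasn't looked at how score voting works mechanically, here's a minimal sketch in Python; the ballots and candidate names are made up purely for illustration. Each voter rates every candidate (0 to 5 here), the totals are summed, and the highest total wins, so rating a third party honestly never costs you anything.

```python
# Score voting: every voter scores every candidate; highest total score wins.
# Ballots and candidate names below are invented purely for illustration.
ballots = [
    {"Duopoly A": 3, "Duopoly B": 0, "Third Party": 5},
    {"Duopoly A": 5, "Duopoly B": 1, "Third Party": 4},
    {"Duopoly A": 0, "Duopoly B": 4, "Third Party": 5},
]

totals = {}
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + score

winner = max(totals, key=totals.get)
print(totals)   # {'Duopoly A': 8, 'Duopoly B': 5, 'Third Party': 14}
print(winner)   # Third Party
```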

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

Oh, I'll try to describe Euler's formula in a way that is intuitive, and maybe you could have come up with it too.

So one way to think about complex numbers, and perhaps an intuitive one, is as a generalization of "positiveness" and "negativeness" from a binary to a continuous thing. Notice that if we multiply -1 by -1 we get 1, so we might think that maybe we don't have a straight line of positiveness and negativeness, but perhaps it is periodic in some manner.

We can envision that perhaps the imaginary unit, i, is "halfway between" positive and negative, because if we think about what √(-1) could possibly be, the only thing that makes sense is that it's some form of 1 where you have to use it twice to make something negative instead of just once. Then it stands to reason that √i is "halfway between" i and 1 on this scale of positive and negative.

If we figure out what number √i actually is, we get √2/2 + (√2/2)i.

(We can find this by writing (a + bi)^(2) = i, which gives us a^(2) - b^(2) = 0 and 2ab = 1; from the first we get a = b, and then a^(2) = 1/2.)
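If you want to double-check that value numerically, here's a tiny Python sketch (the language choice is just for convenience):

```python
import cmath

# Our candidate for √i: √2/2 + (√2/2)i
z = complex(2**0.5 / 2, 2**0.5 / 2)

print(z * z)           # ≈ 1j, i.e. i (up to floating-point noise)
print(cmath.sqrt(1j))  # the library's principal square root of i, same value
```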

The keen-eyed observer might notice that each component of this value equals sin(45°) (which is also cos(45°)), and we start to get some ideas about how all of the complex numbers with radius 1 might be somewhat special, each carrying its own amount of "positiveness" or "negativeness" that is somehow unique to it.

So let's represent these values with R ∠ θ where the θ represents the amount of positiveness or negativeness in some way.

We've observed that √i sits 45° from the positive real axis, i sits on the imaginary axis, 90° from the positive real axis, and -1 sits 180° from the positive real axis. If we examine each of these, we find that cos θ gives the real part and sin θ gives the imaginary part every time. That's really neat. It means we can represent any complex number as R ∠ θ = R(cos θ + i sin θ).

What happens if we multiply two complex numbers in this form? Well, it turns out, if you remember your trigonometry, you get exactly the angle addition formulas for sin and cos. So R ∠ θ * S ∠ φ = RS ∠ (θ + φ). But wait a second. That's turning multiplication into addition? Where have we seen something like this before? Exponent rules.
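(Before chasing that thought, here's a quick numerical check of the multiplication rule, using Python's cmath; the radii and angles below are just arbitrary picks.)

```python
import cmath, math

# Two complex numbers in polar form: R ∠ θ and S ∠ φ
r, theta = 2.0, math.radians(30)
s, phi   = 3.0, math.radians(45)

a = cmath.rect(r, theta)
b = cmath.rect(s, phi)

product = a * b
print(abs(product))                        # ≈ 6.0, i.e. RS
print(math.degrees(cmath.phase(product)))  # ≈ 75°, i.e. θ + φ
```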

We have a^(n) * a^(m) = a^(n+m). What if, somehow, this angle is also an exponent in disguise?

Then you're learning calculus and you come across Taylor series, and you learn a funny thing: the Taylor series of e^(x) looks a lot like the Taylor series of sine and cosine.

And actually, if we look at the Taylor series for e^(ix), it exactly matches the Taylor series for cos x + i sin x. So our supposition was correct: it was an exponent in disguise. How wild. Finally we get:

R ∠ θ = Re^(iθ) = R(cos θ + i sin θ)
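And if you want to see the Taylor-series argument play out numerically, here's one last Python sketch (20 terms is an arbitrary cutoff, plenty for small angles):

```python
import cmath, math

def exp_taylor(z, terms=20):
    """Partial sum of the Taylor series Σ z^n / n!"""
    return sum(z**n / math.factorial(n) for n in range(terms))

theta = 1.2  # any angle, in radians

print(exp_taylor(1j * theta))                     # ≈ 0.362 + 0.932i
print(complex(math.cos(theta), math.sin(theta)))  # cos θ + i sin θ, same value
print(cmath.exp(1j * theta))                      # and the library agrees
```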

[–] [email protected] 3 points 2 months ago (3 children)

What god formula?

 

Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.

 

You might know the game under the name Star Control 2. It's a wonderful game that involves wandering around deep space, meeting aliens, and navigating a sprawling galaxy while trying to save the people of Earth, who are being kept under a planetary shield.
