scratchee

joined 1 year ago
[–] [email protected] 15 points 3 months ago

That guy was running his own study. “How many times can I shock myself before I breach the ethical limits of the study and they cut the session short?”

He underestimated their resolve though, clearly.

[–] [email protected] 4 points 3 months ago

“Divorced from the context that brought them about”? Ahh, so you’re complaining about all the Germanic words in English, or the Latin words? The whole point of their diatribe is that the “brain rot” words you hate are little different from most words. It’s just that for some words the “in group” is Latin speakers, and for some words it’s some group nerding out about their own topic that spread their word to the rest of us… actually, I’m still talking about Latin speakers.

[–] [email protected] 1 points 4 months ago

Reasoning is obviously useful, but I’m not convinced it’s required to be a good driver. Most driving decisions have to be made rapidly, and I doubt humans can be described as “reasoning” when we’re just reacting to events. Decisions that take long enough could be handed to a human (“should we rush for the ferry, or divert for the bridge?”). It’s only the middling bit in between where we’ll maintain a big advantage (“that truck ahead is bouncing around, I don’t like how the load is secured, so I’m going to back off”). That’s a real advantage, but how much of our time is spent with our minds fully focused and engaged anyway? Once we’re on autopilot, is there much reasoning going on?

Not that I think this will be quick: I expect at least another couple of decades before self-driving cars can even start to compete with us outside of specific curated situations. And once they do, they’ll continue to fuck up royally whenever the situation is weird and outside their training, causing big news stories. The key question will be whether they can compete with humans on average by outperforming us in quick responses and by consistently not getting distracted, tired, or drunk.

[–] [email protected] 2 points 4 months ago (2 children)

They don’t have to be any good; they just have to be significantly better than humans. Right now they’re… probably about average, since there are plenty of drunk or stupid humans bringing the average down.

It’s true that isn’t good enough. Unlike humans, self-driving cars will be judged as a group, so people will focus on their dumbest antics, but once their average is significantly better than the human average, that will start to outweigh the individual examples.

[–] [email protected] 5 points 4 months ago

Probably whoever gave them the sofas.

[–] [email protected] 158 points 5 months ago* (last edited 5 months ago) (3 children)

I always liked the extended version:

extended version with distant future where we see it again

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

It’s also been used for hundreds of years; it’s not a post-internet concept.

It might be a YouTube title, or it might be quoted Greek or Latin text, or various other uses in between.

[–] [email protected] 11 points 5 months ago

They shrank by weight and volume for sure.

Not by screen area though.

[–] [email protected] 5 points 6 months ago

I don’t disagree with your views on Boeing, but this incident is quite likely not related to Boeing’s problems (other than their hard-earned public perception problem). Plane engines shouldn’t catch fire, but they do. Whether that’s rare bad luck or somebody screwing up is yet to be decided, but it sounds like this is not a newly minted plane; Boeing probably hasn’t touched it in years.

Not that Boeing hasn’t earned their public perception problem, but accidents happened before Boeing lost their mojo, and will continue to happen even if Boeing regains it. This incident may well turn out to have lessons once the investigation is done, and some might be directed at Boeing, but that’s not where I’d put my money this time around; it seems unlikely that they caused this particular incident.

[–] [email protected] 48 points 7 months ago

Well, that sucks. My favourite moment in a hidden role game was when a player won by misreading their card and convincing both of us at the start that we were allies. They ended up the only evil player for most of the game, and then in the last round, after we’d worked together to systematically kill everyone else (all weirdly innocent; we were both feeling guilty by this point), they finally realised there seemed to be no evil player left, checked their card, and… killed me. Total madness and a glorious victory for them. How can you be mad at that?!

[–] [email protected] 16 points 7 months ago

Yeah, the Switch has an entire core locked off and everything is downclocked to improve battery life and control temperatures. No doubt this emulation gives everything more clock cycles (and perhaps an extra core?). Probably very short on battery and possibly very hot too.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

Whilst I agree that universe-consuming nanobots are a bit far-fetched, I’m not sure I’m sold on the replication problem.

Life has replication errors on purpose, because we’re dependent on them for mid- to long-term survival.

It’s easy to write program code with arbitrarily high error protection. You could make a program that will produce 1 unhandled error for every 100,000 consumed universes, and it wouldn’t be particularly hard; you just need enough spare space.

Mutation and cancer are potential problems for technology, but they’re decidedly solvable problems.

Life only makes it hard because life is chaotic and complex; there isn’t an error-correcting-code ratio we can just bump from 5 to 20 and call it a day.
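To put a rough number on “enough spare space”, here’s a minimal sketch, assuming each stored copy of the replication blueprint is independently corrupted with some small probability per cycle and the replicator takes a majority vote across its copies before building the next generation (the corruption rate and copy counts are made up for illustration):

```python
# Illustrative only: redundant copies plus majority voting.
# Assumes each copy of the blueprint is independently corrupted with
# probability p per replication cycle (both numbers are invented here).
from math import comb

def failure_probability(p: float, k: int) -> float:
    """Probability that a majority of k copies are corrupted."""
    threshold = k // 2 + 1
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(threshold, k + 1))

p = 0.01  # assumed per-copy corruption rate
for k in (1, 5, 11, 21):
    print(f"{k:2d} copies -> failure probability {failure_probability(p, k):.2e}")
```

The failure rate falls off roughly exponentially as you add copies, which is why “just add redundancy” works for an engineered replicator in a way that messy, mutation-dependent biology can’t simply imitate.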
