loobkoob

joined 1 year ago
[–] [email protected] 8 points 10 months ago (9 children)

Yeah, I don't get it. I guess I can see the appeal of some "Internet Of Things" connected appliances, like smart fridges suggesting recipes and keeping track of stock and auto-populating shopping lists for you. I don't need that personally, but I can see why it could appeal to some people.

But things like washing machines and dishwashers? You need to be there in person to fill them up just before they're ready to go on, and to empty them when they're done. And when they're not turned on, they're sat there doing nothing. What "smart" functions can they even offer?

[–] [email protected] 1 points 10 months ago

"Trump" is synonymous with "fart" in British English. Plenty of Americans already did celebrate and vote for a fart.

[–] [email protected] 6 points 10 months ago (3 children)

"Mystery box" storytelling is the name for it and, yeah, Lost, especially, is the poster child for not executing on it particularly well. It can be exciting, and it does a good job of making following a story feel like a communal experience that everyone can participate in - speculating on where things will go next, for instance - but it also often feels like shows using it end up over-promising and under-delivering (and often leaves viewers feeling a little soured at the end).

I feel like Dark was a good example of it being well-executed, and proves it certainly can be done well. But yeah, BSG definitely didn't end up paying off for me either.

[–] [email protected] 3 points 10 months ago

I agree completely. I think AI can be a valuable tool if you use it correctly, but that requires you to prompt it properly, to use its output in the right way, and to know what it's good at and what it's not. Like you said, for things like brainstorming or looking for inspiration, it's great. And while its artistic output is very derivative - both because it's literally derived from all the art it's been trained on and simply because there's enough other AI art out there that it doesn't really have a unique "voice" most of the time - you could easily use it as a foundation to create your own art.

To expand on my asking it questions: the kind of questions I find it useful for are ones like "what are some reasons why people may do x?" or "what are some of the differences between y and z?". Or an actual question I asked ChatGPT a couple of months ago based on a conversation I'd been having with a few people: "what is an example of a font I could use that looks somewhat professional but that would make readers feel slightly uncomfortable?" (After a little back and forth, it ended up suggesting a perfect font.)

Basically, it's good for divergent questions, evaluative questions, inferential questions, etc. - open-ended questions - where you can either use its response to simulate asking a variety of people (or to save yourself from looking through old AskReddit and Quora posts...) or just to give you different ideas to consider, and it's good for suggestions. And then, of course, you decide which answers are useful/appropriate. I definitely wouldn't take anything "factual" it says as correct, although it can be good for giving you additional things to look into.

As for writing code: I've only used it for simple-ish scripts so far. I can't write code, but I'm just about knowledgeable enough to read code to see what it's doing, and I can make my own basic edits. I'm perfectly okay at following the logic of most code; it's just that I don't know the syntax. So I'm able to explain to ChatGPT exactly what I want my code to do, how it should work, etc, and it can write it for me. I've had some issues, but I've (so far) always been able to troubleshoot and eventually find a solution to them. I'm aware that if I want to do anything more complex then I'll need to expand my coding knowledge, though! But so far, I've been able to use it to write scripts that are already beyond my own personal coding capabilities, which I think is impressive.
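Just to give a sense of the scale I mean, here's a made-up example of that kind of "simple-ish script" - sorting a folder of files into subfolders by extension. It's purely illustrative (not something ChatGPT actually wrote for me), but it's the sort of task I'd describe to it:

```python
# Hypothetical example: move every file in a folder into a subfolder
# named after its extension (e.g. photo.jpg -> jpg/photo.jpg).
from pathlib import Path
import shutil

def sort_by_extension(folder: str) -> None:
    folder_path = Path(folder)
    # Materialise the listing first so creating subfolders doesn't
    # interfere with the iteration.
    for item in list(folder_path.iterdir()):
        if not item.is_file():
            continue
        # Use the extension (minus the dot) as the subfolder name.
        subfolder = folder_path / (item.suffix.lstrip(".").lower() or "no_extension")
        subfolder.mkdir(exist_ok=True)
        shutil.move(str(item), str(subfolder / item.name))

if __name__ == "__main__":
    sort_by_extension("downloads")
```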

I generally see LLMs as similar to predictive text or Google searches, in that they're a tool where the user needs to:

  1. have an idea of the output they want
  2. know what to input in order to reach that output (or something close to that output)
  3. know how to use or adapt the LLM's output

And just like how people having access to predictive text or Google doesn't make everyone's spelling/grammar/punctuation/sentence structure perfect or make everyone really knowledgeable, AIs/LLMs aren't going to magically make everyone good at everything either. But if people use them correctly, they can absolutely enhance that person's own output (be it their productivity, their creativity, their presentation or something else).
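If you were driving an LLM from code rather than the chat interface, those three steps might look something like this rough sketch (assuming the OpenAI Python client; the model name and prompt are just placeholders):

```python
# Rough sketch only - assumes the OpenAI Python client and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Have an idea of the output you want: a shortlist of font suggestions.
# 2. Know what to input to get there: a specific, open-ended prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Suggest three fonts that look fairly professional but "
                   "would make readers feel slightly uncomfortable, and say why.",
    }],
)

# 3. Know how to use or adapt the output: treat it as suggestions to vet
#    yourself, not as facts to paste in unexamined.
print(response.choices[0].message.content)
```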

[–] [email protected] 8 points 10 months ago (1 children)

My thoughts exactly. I wish it'd been stolen instead, or something else just generally less environmentally damaging.

[–] [email protected] 34 points 10 months ago (9 children)

I don't think AI will be a fad in the same way blockchain/crypto-currency was. I certainly think there's somewhat of a hype bubble surrounding AI, though - it's the hot, new buzzword that a lot of companies are mentioning to bring investors on board. "We're planning to use some kind of AI in some way in the future (but we don't know how yet). Make cheques out to ________ please"

I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a "solution looking for a problem". Like, I'm a fairly normal person and I've already found good uses for AI - asking it various questions where it gives better answers than search engines, having it write code for me (I can't write code myself), etc. Whereas I've never touched anything to do with crypto.

AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don't see AI going away.

[–] [email protected] 2 points 10 months ago

I'm not sure if The Expanse (TV series) ruined Foundation (TV) for me, if Foundation is just not a good adaptation, or if the books simply aren't very adaptable (or all three), but I agree. I only made it through the first two episodes before I gave up. I've heard the second season is better, but I don't know if it's worth forcing myself through season 1 to get there.

The Expanse is just spectacular when it comes to realising its world, and with how much depth there is to its characters and politics, Foundation immediately felt very shallow in comparison. Obviously The Expanse books lay a lot of the foundations for the TV series to build on, but I think the TV series did a great job of adapting them to a new medium without much being lost in translation, and it even added to them in its own ways. Foundation's world-building, characterisation and politics all kind of just felt like they were going through the motions, showing surface-level stuff because the show felt it had to rather than because it actually had any substance to work with - which wasn't helped by the fact that the books don't provide much in that regard.

Ultimately, I don't think the Foundation books are particularly well-suited to being adapted for the screen. They're so focused on the "bigger picture" - on civilisations rather than characters, on philosophical and sociological concepts rather than particular plot points, on macro-narrative - while TV needs characters and micro-narrative.

I will say that the TV series' idea to use three different-aged clones of Emperor Cleon, and to keep the actors persistent through the ages, seemed like a great addition. It's good to try to keep some recognisable faces while jumping across such long time periods.

[–] [email protected] 15 points 10 months ago (3 children)

I think it's good that they asked here. The way the fediverse is structured means there can be plenty of people who use an instance - posting to it, browsing posts from it, etc - without being registered with that instance. If Beehaw says they're contemplating leaving, only to be met with a "NO, DON'T GO" response from the rest of the fediverse, then that might give them reason to rethink their position. And if everyone just says "eh, whatever" or "yeah, go away" then it may reinforce their position.

Obviously the opinions of the people who've registered there should hold more weight, but I think putting the question to everyone is a good move.

[–] [email protected] 12 points 10 months ago (1 children)

Option 1 also isn't necessarily as bad as they make it seem. If you suddenly gained super speed and your perception of time altered overnight then, yes, you'd suddenly be spending a lot more time in your own head relative to before, and it would take some adjustment at best, a lot of therapy at worst. But if you're born like that, surely it'd just be normal for you and you wouldn't necessarily know anything different?

The other option that wasn't mentioned is that you can "turn on" your powers and the world feels like it goes into slow motion around you, and then you turn them off again afterwards and it's all back to normal.

[–] [email protected] 13 points 11 months ago

That one appears to have turned out to not be the case, fortunately. He wrote up a very long post with a lot of receipts (which is worth a read). He filed a libel suit against his two accusers, which ended up being settled out of court with a large payout in his favour and with the accusers retracting their accusations, categorically saying he had never sexually abused either of them (or any other women to their knowledge).

It was satisfactory enough for me, personally.

[–] [email protected] 21 points 11 months ago* (last edited 11 months ago) (3 children)

You felt much more strongly about it than me then. I just found myself not caring about it in the slightest; the only thing I really felt was boredom. Which is arguably the worst possible outcome for any work of art.
