this post was submitted on 06 Jan 2024
47 points (100.0% liked)

Technology

With a fair amount of system integration missing (no wake word available), of course. Which rather sounds like a feature.

[–] [email protected] 13 points 10 months ago (1 children)

You can already somewhat do that on iOS with Shortcuts if you have the ChatGPT app. But as OP says, it’s only something to talk to; you can’t use it to set a timer or a reminder. It’s neat, but a lot of my voice assistant use is “call X person” or “reply to X”. If I want to talk to ChatGPT, I usually just open the app and turn on voice for a session.

If ChatGPT can weasel its way into being a true assistant with the ability to perform actions, it might be a game changer for the voice assistant space. It’s so much better at understanding context than the current on-device assistants.
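The “ability to perform actions” part already exists at the API level via tool/function calling: the model turns a natural-language request into a structured call that your own code then executes. A minimal sketch using the openai Python SDK (v1.x), where set_timer is a made-up device action, not a real endpoint:

```python
# Sketch: letting the model request an action instead of just replying.
# "set_timer" is hypothetical; your app would actually start the timer.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "set_timer",
        "description": "Set a countdown timer on the device.",
        "parameters": {
            "type": "object",
            "properties": {
                "minutes": {"type": "number", "description": "Length in minutes"},
                "label": {"type": "string", "description": "Optional timer name"},
            },
            "required": ["minutes"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Set a timer for 10 minutes for the pasta"}],
    tools=tools,
)

# The model returns a structured call instead of prose; the app executes it.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
# e.g. set_timer {"minutes": 10, "label": "pasta"}
```

This is the piece Siri-style assistants do with rigid intent grammars; the LLM does the same parsing but tolerates much looser phrasing.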

[–] [email protected] 4 points 10 months ago (1 children)

It’s one of the use cases where AI truly makes sense to me, because voice assistant technology feels like it has plateaued, and an LLM seems like a good way to process natural language.

[–] [email protected] 3 points 10 months ago (1 children)

I somewhat bought into the hype early and convinced work to pay for ChatGPT Plus. At first I struggled to use it. Then one day I went “I bet it can’t help with X”, and it did. Now I’m at the point where I default to it. There’s this odd assumption that it will only be right some of the time, but for me it’s rarely wrong. Usually it has just misunderstood the direction I was going in, and once I fix that with a follow-up prompt I get what I want.

I don’t think I do prompt engineering per se. It’s like Google-fu, though: you need to learn to be descriptive to the point where the LLM can infer some context. Even a year in, it still feels surreal. So far GPT-4 is the top for me; Llama does well, and a lot of the open models are nice. But if I want code, or to think through a work problem, GPT-4 gets me where I want to go amazingly fast. I make it do online research for me and then I have it validate my thoughts. I have to keep in mind that it’s mainly predicting the next word, but I rarely go “wow, it was truly off here”. Trust but verify is where I’m at.
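To illustrate the “be descriptive” point, a made-up example of the same question asked two ways (both prompts are hypothetical):

```python
# Hypothetical illustration: the same question, vague vs. descriptive.

vague = "Why is my script slow?"

descriptive = (
    "I have a Python script that reads a 2 GB CSV with csv.reader, "
    "builds a dict keyed on the first column, and takes ~40 minutes. "
    "Memory is fine and only one CPU core is busy. "
    "What are the likely bottlenecks, and would pandas or polars help?"
)

# The second prompt carries enough context for the model to infer intent,
# instead of spending a round-trip on clarifying questions.
```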

I’m at the point where I feel like I do my 40-hour work week in 25 or so, so I have a ton more free time. I have to be careful not to share any direct work-related info, but that’s easy: I give it generic info and fill in the blanks myself.

[–] [email protected] 5 points 10 months ago (1 children)

> I make it do online research for me and then I have it validate my thoughts.

That’s precisely the issue. The words sound convincing, but this way of thinking turns it into a yes-man: either it confirms what you think, or the prompt must have been wrong.

[–] [email protected] 2 points 10 months ago

Honestly, I do confirm it, because I use it for work. I had it research and compare a bunch of VDI solutions (the VMware/Broadcom thing has forced us to rethink things), and it did a really good job summarizing. I used to work in consulting, so I already knew what the comparison should look like; it saved me hours of writing that report. I usually verify in the sense of “does this make sense?”, the same way I’d check a Stack Overflow post before using its code.