this post was submitted on 22 Nov 2023
158 points (98.2% liked)

Technology

[–] [email protected] 32 points 1 year ago* (last edited 1 year ago) (4 children)

That would be the goal. The tricky part is matching intents that align with some API integration to whatever psychobabble the LLM spits out.

In other words, the LLM is just predicting the next word, so how do you know when to take an action like turning on the lights, ordering a pizza, or setting a timer? The way Alexa handled that needs to be adapted to the way LLMs work.
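One common way to bridge that gap is to have the model emit a small machine-readable command instead of free text, then route it through a dispatch table. A minimal sketch, where the JSON reply format and the action names are made up for illustration, not taken from any real assistant API:

```python
import json

# Hypothetical intent handlers -- the command names and behaviour here
# are illustrative only.
ACTIONS = {
    "turn_on_lights": lambda p: f"lights on in {p.get('room', 'every room')}",
    "set_timer": lambda p: f"timer set for {p['minutes']} minutes",
}

def dispatch(llm_reply: str) -> str:
    """Run the action the model asked for, or fall back to plain chat."""
    try:
        msg = json.loads(llm_reply)
    except json.JSONDecodeError:
        # Not a command at all: treat it as an ordinary chat reply.
        return "no action: plain chat reply"
    handler = ACTIONS.get(msg.get("command"))
    if handler is None:
        return f"no action: unknown command {msg.get('command')!r}"
    return handler(msg.get("parameters", {}))

print(dispatch('{"command": "set_timer", "parameters": {"minutes": 10}}'))
# timer set for 10 minutes
```

The point is that "knowing when to act" reduces to a cheap syntactic check: anything that parses as a known command triggers the integration, everything else stays conversational.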

[–] [email protected] 7 points 1 year ago (1 children)

Eh, just ask the LLM to format requests in a way that can be parsed into a function call.

It's pretty trivial to get an LLM to do that.

[–] [email protected] 7 points 1 year ago

in fact it’s literally the basis for the “tools” functionality in the new OpenAI/ChatGPT stuff!

that “browse the web”, “execute code”, etc. is all the LLM formatting its output in a specific way
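A minimal sketch of what that looks like: the caller declares functions in a JSON Schema shape, and the model answers with a structured tool call instead of prose. The schema below follows the general shape of OpenAI's chat-completions tool format from memory (check the official docs for the exact fields); the `get_weather` function and the model reply are simulated, and no API call is made.

```python
import json

# A function declaration in roughly the shape OpenAI's chat completions
# API expects -- the "get_weather" tool itself is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Instead of prose, the model replies with a function name plus
# JSON-encoded arguments. This reply is simulated for the sketch.
tool_call = {"name": "get_weather", "arguments": '{"city": "Berlin"}'}

args = json.loads(tool_call["arguments"])
print(f"would call {tool_call['name']}({args})")
```

The API enforces the "specific way" so the caller never has to scrape commands out of free-form text.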

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

Microsoft seems to be attempting this with the new Copilot in Windows. You can ask it to open applications, etc., and also chat with it. But it is still pretty clunky when it comes to the assistant part (e.g. I asked it to open my power settings and after a bit of to and fro it managed to open the Settings app, after which I had to find the power settings for myself). And they're planning to charge for it, starting at an outrageous $30 per month. I just don't see that it's worth that to the average user.

[–] [email protected] 3 points 1 year ago

It's actually fairly easy. "I'm a computer. From now on, only communicate with me in valid JSON in the format {"command": "name", "parameters": []}. Possible commands are "toggle_lights", "pizza", "set_timer"." And so on and so on.

Current models are remarkably good at responding with valid JSON; I didn't have any issues with that. They will still hallucinate about details (like what would it do if you tried to set a timer for pizza?), but I'm sure you can train the models to address those issues.

I was thinking about building an OpenAI/Google Assistant bridge myself for Spotify. Something like "Play me that Michael Jackson song with the video clip with monsters". The current Assistant can't handle that, but you can just ask ChatGPT for the name of the song and then pass it to the Assistant. This is what they have to do, but on a bigger scale.
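To catch the hallucinated details mentioned above (a "pizza timer", invented parameters), the bridge can validate each reply against an allowlist before executing anything. A sketch under two assumptions: parameters arrive as a key/value object rather than the bare list in the prompt above, and the command names and allowed keys below are illustrative.

```python
import json

# Allowed commands and the parameter keys each one accepts -- an
# illustrative allowlist mirroring the JSON protocol described above.
SCHEMA = {
    "toggle_lights": {"room"},
    "pizza": {"size", "toppings"},
    "set_timer": {"minutes"},
}

def validate(reply: str):
    """Return (command, parameters) if the model's reply fits the
    protocol, else raise ValueError -- this rejects hallucinated
    commands and unexpected parameters before they reach a real API."""
    msg = json.loads(reply)
    cmd = msg.get("command")
    if cmd not in SCHEMA:
        raise ValueError(f"unknown command: {cmd!r}")
    params = msg.get("parameters", {})
    extra = set(params) - SCHEMA[cmd]
    if extra:
        raise ValueError(f"unexpected parameters for {cmd}: {sorted(extra)}")
    return cmd, params
```

Anything that fails validation can be bounced back to the model with the error message, which is often enough to get a corrected reply on the second attempt.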