Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask via DM before posting product reviews or ads; such posts are otherwise subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
Absolutely, I even have a dedicated section "Trying to insure combinatoriality/compositionality" in my notes on the topic https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence
Still, while keeping this in mind, we must also remain mindful of what each system can actually do, and not conflate that with what we WANT it to do but it cannot do yet, and might never be able to.
Sure, we have to be realistic about the capabilities of different systems. The thing is, we don't know what the actual limitations are yet. In the past few years we've seen huge progress in making language models more efficient and more capable.
My expectation is that language models, and the whole GPT algorithm, will end up being a building block in more sophisticated systems. We're already seeing research shift from simply making models bigger to having models do reasoning about the output. I suspect that we'll start seeing people rediscovering a lot of symbolic logic research that was done back in the 80s.
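To make the "building block" idea concrete, here's a minimal toy sketch of the generate-and-verify pattern: a stochastic proposer (standing in for a language model) suggests candidate answers, and a cheap symbolic checker accepts or rejects them. All names here are illustrative, not any real model API.

```python
import random

def propose_factorization(n: int) -> tuple[int, int]:
    """Stand-in for a model: guesses a factor pair of n (hypothetical)."""
    a = random.randint(2, n - 1)
    return a, n // a

def verify(n: int, pair: tuple[int, int]) -> bool:
    """Symbolic check: exact, cheap, and trustworthy."""
    a, b = pair
    return a * b == n and a > 1 and b > 1

def solve(n: int, attempts: int = 10_000):
    """Loop: generate candidates until one passes verification."""
    for _ in range(attempts):
        pair = propose_factorization(n)
        if verify(n, pair):
            return pair
    return None

print(solve(91))  # a nontrivial factor pair of 91, e.g. (7, 13)
```

The design point is that the generator can be fallible as long as the verifier is sound, which is roughly how neuro-symbolic proposals combine learned models with the kind of formal machinery developed in the 80s.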
The overall point here is that we don't know what the limits of this tech are, and the only way to find out is to continue researching it, and trying new things. So, it's clearly not a waste of resources to pursue this. What makes this the most important race isn't what it's delivered so far, but what it has potential to deliver.
If we can make AI systems that are capable of doing reasoning tasks in a sufficiently useful fashion that would be a game changer because it would allow automating tasks that fundamentally could not be automated before. It's also worth noting that reasoning isn't a binary thing where it's either correct or wrong. Humans are notorious for making logical errors, and most can't do formal logic to save their lives. Yet, most humans can reason about tasks they need to complete in their daily lives sufficiently well to function. We should be applying the same standard to AI systems. The system just needs to be able to function well enough to accomplish tasks within the domain it's being used in.