I've trained mine to emulate an LLM. So far the hallucination feature works perfectly. Basic grammar still lacks a bit.
Asklemmy
A loosely moderated place to ask open-ended questions
The idea that LLMs are just like how the brain works, except limited by running in a CPU, comes from software engineers - not neuroscientists.
Although there are many analogies that could be made between how CPUs work and how the brain integrates information, they're actually fundamentally different and use completely different logic.
You could, theoretically, create a computing language that works using neurons, and then train machine learning algorithms on it. But that's like summing 2+2 by buying four calculators and wiring them together, rather than using what a single calculator already does to get the result, if you get what I mean.
Afaik, an actual neuron is computationally more powerful than a perceptron, so in theory yeah, for sure.
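For anyone who hasn't met one: a perceptron is just a weighted sum pushed through a threshold, which is part of why a real neuron outclasses it. A minimal Python sketch, with hand-picked weights making it act as an AND gate:

```python
# Minimal perceptron: weighted sum of inputs passed through a step function.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights so the unit behaves like an AND gate.
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```

That's the whole unit; everything a biological neuron does beyond "sum and threshold" (timing, dendritic computation, neuromodulation) is extra capacity on top.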
If you're a subscriber to the Chinese Room thought experiment, we are already just a bunch of really good "LLMs".
First time I've come across the Chinese Room, but it seems pretty obviously flawed to me. It's not hard to see that, collectively, the contents of the room may understand Chinese in both scenarios. The argument boils down to "it's not true understanding unless some component part understands it on its own", which is rubbish. You can't expect to still understand a language after removing part of your brain.
Hah, tbh, I didn't realize it was originally formulated to argue against consciousness in the room. When I originally heard it it was presented as a proper thought problem with no "right" answer. So I honestly remembered it as a sort of illustration of the illusion that is consciousness. But it's been a while since I've discussed it with others, mostly I've just thought about it in the context of recent AI advancements.
I've always thought we have something resembling an LLM as one component of our brains, and the brain has the ability to train new models by itself for solving new problems.
Actually we do, the cerebellum is what the neural networks in LLMs were partially based on. It's essentially a huge collection of input/output modules that the other parts of the brain are wired into, which performs various computations. It also handles motor control for the body and figures out how to do this through reinforcement learning. (The way the reinforcement learning works is different from LLMs though, because it's a biological process.) So when you throw a ball, for example, various modules in the cerebellum take in inputs from the visual centers, arm muscles, etc. and compute the outputs needed to produce the throwing motion to reach your target.
We also have the cerebrum though, which along with the rest of the brain is the magic voodoo that creates our consciousness and self-awareness, and which we can't recreate with a computer.
With the way current LLMs operate? The short answer is no. Most machine learning models learn a probability distribution by performing backpropagation, which involves "trickling down" errors from the output node all the way back to the input. More specifically, the computer calculates the derivatives of each layer and uses them to slowly nudge the model towards the correct answer by updating the values in each neural layer. Of course, things like the attention mechanism resemble the way humans pay attention, but the underlying processes are vastly different.
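To illustrate what "nudging via derivatives" means, here's a toy sketch with a single linear neuron and made-up numbers (nothing like a real LLM's training loop, just the core idea of backprop):

```python
# Toy backprop: one linear neuron y = w * x, squared-error loss.
# The derivative of the loss w.r.t. w is "trickled back" to nudge w.
def train(x, target, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        y = w * x                     # forward pass
        grad = 2 * (y - target) * x   # dLoss/dw via the chain rule
        w -= lr * grad                # nudge w toward the correct answer
    return w

w = train(x=2.0, target=6.0)
print(round(w, 3))  # 3.0, since 3.0 * 2.0 == 6.0
```

A real network just repeats this for millions of weights across many layers, with the chain rule carrying the error back layer by layer.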
In the brain, things don't really work like that. Neurons don't perform backpropagation, and, if I remember correctly, instead build proteins to improve the conductivity along the axons. This allows us to improve connectivity in a neuron the more current passes through it. Similarly, when multiple neurons in a close region fire together, they sort of wire together. New connections between neurons can appear from this process, which neuroscientists refer to as neuroplasticity.
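The "fire together, wire together" idea can be sketched as a purely local update rule. This is a toy illustration, not a model of real synapses, but it shows the key contrast with backprop: the weight change depends only on the two neurons the connection links, with no error signal trickling back from an output.

```python
# Hebbian-style update: "neurons that fire together wire together".
# The change to a weight is purely local: it depends only on the
# activity of the pre- and post-synaptic neurons it connects.
def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_update(w, pre, post)
print(w)  # 0.2: only the two co-active events strengthened the link
```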
When it comes to the Doom example you've given, that approach relies on the fact that you can encode the visual information as signals. It is a reinforcement learning problem where the action space is small and the reward function is pretty straightforward. When it comes to LLMs, the usual vocabulary size of the more popular models is between 30-60k tokens (these are small parts of a word, for example "#ing" in "writing"). That means you would need a way to encode each token to feed it to the biological neural net, and unless you encode it as a phonetic representation of the word, you're going to need a lot of neurons to mimic the behaviour of the computer version of LLMs, which is not really feasible. Oh, and let's not forget that you would need to formalize the output of the network and find a way to measure it! How would we know which neuron produces the output for a specific part of a sentence?
We humans are capable of learning language, mainly due to this skill being encoded in our DNA. It is a very complex problem that requires the interaction between multiple specialized areas: e.g. Broca's (for speech), Wernicke's (understanding and producing language), certain bits in the lower temporal cortex that handle categorization of words and other tasks, plus a way to encode memories using the hippocampus. The body generates these areas using the genetic code, which has been iteratively improved over many millennia. If you dive really deep into this subject, you'll start seeing some scientists that argue that consciousness is not really a thing and that we are a product of our genes and the surrounding environment, that we act in predefined ways.
Therefore, you wouldn't be able to call a small neuron array conscious. It only carries out a simple electrochemical process, which occurs when you supply enough current for a few neurons to reach the threshold potential of about -55 mV. To have things like emotion, body autonomy and the many other things one would associate with consciousness, you would need a lot more components.
That's an interesting explanation. Thanks! :)
Go home, Elon, you're drunk.
But if we can train neurons to emulate human emotions and then put them into the Neuralink, I can finally know what emotions are
The concept of ML comes from neurons/the brain. If we could use actual neurons we'd be way ahead, and that's basically the hard part. Whether it will ever be feasible, I don't know.
Brains have a lot more connections and meaningful ways of communicating compared to our silly signals and weights. This may be the barrier to AGI
We can use neurons. I'm not sure we're very good at it, but people have used them for small tasks.
You could put neurons in a box and wire it up, and implant a partial personality into it and call it a Magi
Servitor.
Cortical Labs certainly hope so: https://wired.me/science/this-startup-grows-brain-cells-on-ai-chips/
But outside of the context of computing on devices: yes, as others have noted, the neurons we're trying to simulate in machine learning models aren't much different from our own. So just look at any person to see how well neurons are suited to language and similar workloads (or not, depending on how clever the people around you are)
As to ethics, consciousness is an "emergent phenomenon". It seems to arise, near as we can tell, from the interaction of many simple systems. No single cell or cluster thereof in a brain is conscious, but get them all working nearby one another and suddenly...
Our current ML Neural Networks work (simplified) like this: A neuron emits a number and the next neuron calculates a new number to emit based on all the values given to it by other neurons as inputs. Our brain can't fire numbers in this way. So there's a fundamental difference. Bridging this difference to create NNs that are more similar to our brains is the basis of the study of Spiking Neural Networks. Their performance so far isn't great, but it's an interesting topic of research.
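For the curious, the basic unit of a spiking network is often a leaky integrate-and-fire neuron. Here's a toy sketch (made-up parameters) showing the difference: instead of emitting a number each step, the neuron accumulates input, leaks over time, and emits a discrete spike only when it crosses a threshold.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential v
# integrates input current, decays ("leaks") each step, and the
# neuron emits a spike (1) and resets when v crosses the threshold.
def simulate(currents, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for i in currents:
        v = v * leak + i      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)  # fire a spike...
            v = 0.0           # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.0, 0.9]))  # [0, 0, 1, 0, 0]
```

The timing of spikes carries information here, which is much closer to biology than a rate-style neuron that just hands the next layer a number, and it's also why these networks are harder to train with standard backprop.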
Ethically at this point is this neuron array considered conscious in any way?
It's really a matter of taste, as in: how do they taste?
Salty.
At least in my case.
Bastards.
Can we train the neuron LLM to participate in a CoD lobby, that's the real question here
Best not to tempt fate too much, unless we want robot overlords with the temperament of a 13-year-old white kid from Pennsylvania talkin' shit like a gangsta.
Calling Cordwainer Smith...
Neurons can't NaN so it would be a very bad use of the technology.
Honestly I've wondered this about shining a laser through some kind of laser-etched glass. Only problem is, I have no idea how to represent something like an activation function using only reflection and such.
Think you might've commented on the wrong post
Haha naw, it's the same basic idea, just using something inorganic (like glass) to represent a neural network instead of something like biological neurons.
Cool idea, though existing computers are also an inorganic way of representing a neural net
Well, yes, but something like an etched glass would be better in basically every way, if it could be done. (See my other comment in this thread if you want more details)
What on earth are you talking about?
A neural network is an array of layered nodes, where each node contains some kind of activation function, and each connection represents some weight multiplier. Importantly, once the model is trained, it's stateless, meaning we don't need to store any extra data to use it - just inputs and outputs.
If we could take some sort of material, like a glass, and modify it so that if you shone a light through one end, the light would bounce in such a way as to emulate these functions and weights, you could create an extremely cheap, compact, fast, and power efficient neural network. In theory, at least.
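To make the "stateless" point concrete, here's a toy forward pass in Python. The weights are made up, and ReLU stands in for whatever activation you'd etch into the glass; the point is that inference is a pure function of fixed weights, with nothing to store between calls:

```python
# A trained feed-forward net reduces to fixed weights plus an
# activation function: pure input -> output, no stored state.
def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each row of weights feeds one output node.
    return [relu(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 2-input, 2-hidden, 1-output net with made-up weights.
hidden = layer([1.0, 2.0], [[0.5, -0.5], [0.25, 0.25]], [0.0, 0.1])
output = layer(hidden, [[1.0, 1.0]], [0.0])
print(output)
```

Since that's all just multiplies, adds and a nonlinearity, anything that can bend light by fixed amounts could in principle implement it, which is the idea behind real optical neural network research.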
So just ML on an optical computer, or some sort of baseless sci-fi thing you made up?
A mix of both, but keep in mind that I'm commenting on a post about a related made up sci-fi idea.
It most certainly is not: https://www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/
Neural organoids have been a thing for a few years now.