Oh "great", more crap between Ctrl and Alt.
[Grumpy grandpa] In my day, the space row only had five keys! And we did more with those five than these youngsters do with eight, now nine, keys!
Oh "great", more crap between Ctrl and Alt.
[Grumpy grandpa] In my times, the space row only had five keys! And we did more than those youngsters do with eight, now nine keys!
Thank you! It's working now.
It's giving me an error, "Error Finding Entity // Make sure you spelled the entity correctly and that it exists!", when I use my username for lemmy.ml; curiously it works well when I do it for my beehaw.org account.
Create your account through old.reddit.com; when it asks you for an email, simply press "next". And, if you need an e-mail provider for some other reason, protonmail.com doesn't ask you for your phone number.
That said, do you really need a Reddit account?
[Note: this is my personal take, not Chomsky's]
We can recognise colours and things even without properly labelling them. (Colour example: I have no clue what to call the colour of my cat's fur, but I'm fairly certain I'd remember it and thus recognise it.) However, it's hard to handle them logically this way.
And, at least for me, this is the main role of the internal monologue. It isn't just about repeating the state of things; it's about connecting pieces of info together, as if I were explaining the link to another person.
Perhaps those without a verbal internal monologue/dialogue have a more persistent innate language, one that is not overwritten by the common external language?
Possible; I don't know, really. It's also possible that the "innate language" doesn't really exist, only the innate ability to learn a language; but that ability is already enough to structure simple reasoning.
Someone in my family has type I diabetes, and we've been hearing about the "magical" solution coming "soon" ever since she was diagnosed in her childhood, around 30 years ago.
As such, I'll keep what I see as a healthy amount of scepticism towards this piece of news.
I don't understand why you're calling the other poster racist. I'm so confused... everything that he said is true. Source: I'm a gratch.
Apparently my method is a mix of those listed in the text.
I'm in a similar situation to OP: some of my income is irregular. So my monthly budget isn't directly based on last month's income; I use the average of the last six months, relying on a checking account for that. (I keep it with enough money to last me one or two months.)
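If it helps to see that averaging spelled out, here's a rough sketch; the income figures below are placeholders, not my actual numbers.

```python
# Rough sketch of the "average the last six months" idea.
# The income figures are placeholders, not real numbers.
from statistics import mean

# Last six months of (irregular) income, oldest first.
last_six_months_income = [2400, 1800, 3100, 2200, 2600, 1900]

# The monthly budget is simply the average of those six months.
monthly_budget = mean(last_six_months_income)
print(round(monthly_budget, 2))  # 2333.33
```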
Then I split that budget into four categories:
Then here's how I address some complexities:
Notes:
Chomsky's concept of UG (universal grammar) is able to handle this: since there would be a chunk of language that is innate (universal), that feral child would share it. So, following that reasoning, even though the feral child isn't expressing it through vocalisation, since they lack an "application" of the UG (like Nahuatl, Mandarin, Quechua, English, Kikongo, etc.), they'd still have some rather simple internal monologue.
...that said, I think that Chomsky's UG is full of shit. I do agree with him that the faculty of language might have developed first to structure thought; but my reasoning resembles yours a bit more: the role of language would be to formalise thought. Thinking without language is possible in the same way as moving across a village without roads is possible - it's doable but clunky, and it'll likely take far more effort than with proper roads / a language.
Not to challenge Chomsky on his own turf
Don't worry. Everyone and their dog challenges him. Including himself: he often contradicts his own earlier statements.
Got it - mostly politics, then. That explains a lot about why you guys are seeing far more toxicity than I do; I don't generally join political discussions. (And when I do, since I'm a communist myself, perhaps I don't even notice it.)
Not even a body pillow, Anon is a master tulpamancer and made a tulpa of some MLP character.
It's a bit off-topic, but what I really want is a language model that assigns semantic values to the tokens, and handles those values instead of working directly with the tokens themselves. That would probably be far less complex than current state-of-the-art LLMs, but way more sophisticated, and it would require far less data for "training".
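To make that a bit more concrete, here's a toy sketch of the general shape I mean; the vocabulary, the hand-assigned vectors, and the averaging step are all made up for illustration, nothing like a real implementation.

```python
# Toy sketch: map tokens to semantic vectors ("values"), do the work
# in that vector space, and only go back to tokens at the very end.
# The vocabulary, the vectors, and the composition rule are invented
# purely for illustration.
import numpy as np

# Hypothetical tiny vocabulary with hand-assigned semantic vectors.
# A real system would learn these instead of hard-coding them.
SEMANTICS = {
    "cat":    np.array([1.0, 0.0, 0.9]),   # entity-ish, not an action, concrete
    "dog":    np.array([1.0, 0.1, 0.9]),
    "runs":   np.array([0.0, 1.0, 0.3]),   # action-ish, dynamic
    "sleeps": np.array([0.0, 1.0, -0.5]),
}

def encode(tokens):
    """Replace each token with its semantic value (a vector)."""
    return [SEMANTICS[t] for t in tokens]

def compose(vectors):
    """Combine token values into one utterance-level value.
    Averaging is just a stand-in for whatever composition a real model would use."""
    return np.mean(vectors, axis=0)

def nearest_token(vector):
    """Decode a semantic value back into the closest known token."""
    return min(SEMANTICS, key=lambda t: np.linalg.norm(SEMANTICS[t] - vector))

meaning = compose(encode(["cat", "runs"]))
print(meaning, nearest_token(meaning))
```

The point is only the shape of it: tokens are an interface at the edges, and all the actual handling happens on the semantic values in between.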