What stupid bullshit. There is nothing remotely close to an artificial general intelligence in a large language model. This person is a crackpot fool. There is no way for an LLM to have persistent memory. Everything outside the model that pre- and post-processes information is where the smoke and mirrors live, and that is just databases and standard code.
The actual model is just a system of categorization and tensor math. It is complex vector math, and that is it; there is nothing else going on inside the model. If you want to modify it, you have to recalculate a mass of math against the existing vectors and tensor tables. All of that math is static. It can't change, it can't adapt, it can't plan. It has some surprising features that one might not expect to be embedded in human language alone, but that is all this is. (If you want the "static math" point spelled out, there is a toy sketch below.)

Try offline, open-source AI. Use Oobabooga, get models from Hugging Face, and start with something like a Llama 2 7B. This is not hard. You do not need a graphics card; lots of models work great on just a CPU, though you will need a good amount of RAM to run a really good model.

Rough feel for the sizes: a 7B is like talking to a teenager prone to lying, a 13B is like a 20-year-old, a 30B at 8-bit quantization is like an inexperienced late-twenty-something, and a 70B at 4-bit quantization is like a 30-year-old with a master's degree. That 70B at 4 bits needs around 14+ logical CPU cores and 64GB of system memory to generate about 2 tokens a second, which is roughly 1-2 words per second and about as slow as is practical. The RAM figure is easy to sanity-check: 70 billion parameters at 4 bits (half a byte) each is about 35GB of weights before any overhead, so 64GB leaves headroom. (There is a minimal CPU-only snippet at the bottom if you want to try it.)
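To make the "static math" point concrete, here is a toy sketch I wrote in plain NumPy. It is my own illustration, not any real model's code: one self-attention layer, the core operation in these models, is nothing but fixed matrices multiplied against the input. Train once, freeze the numbers, and inference just reads them. No state is retained between calls.

```python
# Toy illustration: a single self-attention layer is pure, static math.
import numpy as np

rng = np.random.default_rng(0)
d = 64     # embedding dimension (arbitrary for this demo)
seq = 8    # sequence length

# These weights are learned once during training and then frozen.
# Inference never modifies them; it only reads them.
W_q = rng.standard_normal((d, d)) / np.sqrt(d)
W_k = rng.standard_normal((d, d)) / np.sqrt(d)
W_v = rng.standard_normal((d, d)) / np.sqrt(d)

def attention(x):
    """Scaled dot-product self-attention: matrix multiplies and a softmax."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d)                 # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v                            # weighted mix of values

x = rng.standard_normal((seq, d))   # stand-in for embedded input tokens
out = attention(x)
print(out.shape)   # (8, 64): identical math on every call, nothing adapts
```

That is the whole trick, stacked a few dozen layers deep with bigger matrices.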
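And to prove the CPU claim, here is a minimal sketch using llama-cpp-python (pip install llama-cpp-python). That is a different tool than Oobabooga, which is a full web UI that will do the same thing with more clicking; I am using it here just because it is the shortest path to a running model. The .gguf path is a placeholder: point it at whatever quantized model you downloaded from Hugging Face.

```python
# Minimal CPU-only inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b.Q4_K_M.gguf",  # placeholder: your downloaded file
    n_ctx=2048,       # context window size
    n_threads=14,     # set to your logical core count
)

out = llm(
    "Q: Explain quantization in one sentence. A:",
    max_tokens=64,
    stop=["Q:"],      # stop before the model starts a new question
)
print(out["choices"][0]["text"])
```

On a 4-bit 7B this should chug along at a few tokens a second on a modern desktop CPU, no GPU anywhere in sight.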
Don't believe anything you read in the bullshit media about AI right now, and ignore the proprietary stalkerware garbage. The open-source offline AI world is the future, and it is yours to do with as you please. Try it! It is fun.