Speaking to LLMs specifically: since they generate text token by token, there is some unavoidable statistical likelihood of emitting original training data verbatim. The usual counter-argument is that, in theory, the odds of a particular piece of training data coming back out intact for more than a handful of words should be extremely low.
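To put rough numbers on that intuition (the per-token probabilities below are made up for illustration): an exact reproduction means the model has to pick the right token at every step, so the odds are the product of the per-token probabilities, which collapses fast unless the model is near-certain at each step, i.e. unless it has effectively memorized the passage.

```python
# Rough illustration (made-up numbers): the probability of sampling an
# exact n-token continuation is the product of the per-token probabilities.

def exact_reproduction_prob(per_token_prob: float, n_tokens: int) -> float:
    return per_token_prob ** n_tokens

# A merely plausible next token might get ~0.5 probability each step:
print(exact_reproduction_prob(0.5, 50))   # ~8.9e-16, effectively never

# But on a memorized passage the model may put ~0.99 on every token:
print(exact_reproduction_prob(0.99, 50))  # ~0.61, quite likely
```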
Of course, in this case Google's researchers exploited the repeat-discouragement mechanism to make that unlikely event happen reliably, showing that such flaws really can be triggered.
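I read "repeat discouragement" as something like the frequency/repetition penalties common in sampling stacks; that's an assumption on my part, the actual serving code may differ. A minimal sketch of an OpenAI-style frequency penalty shows why forcing a word hundreds of times can push the model off its preferred distribution:

```python
import numpy as np
from collections import Counter

def apply_frequency_penalty(logits: np.ndarray,
                            generated_ids: list[int],
                            alpha: float = 0.1) -> np.ndarray:
    """Frequency-penalty sketch: each prior occurrence of a token
    subtracts alpha from its logit, so penalties compound with repeats."""
    out = logits.copy()
    for tok, count in Counter(generated_ids).items():
        out[tok] -= alpha * count
    return out

# Toy vocab: token 0 is the word we asked the model to repeat forever.
logits = np.array([5.0, 2.0, 1.5])  # the model still "wants" token 0...
history = [0] * 100                 # ...but it has emitted it 100 times
print(apply_frequency_penalty(logits, history))
# Token 0's logit falls from 5.0 to -5.0: "keep repeating" is now the
# least likely continuation, and sampling wanders off-distribution.
```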
From a quick skim on my phone, your microphone quality is fine. I would probably lower the game audio a bit in post to make your voice more distinct, but it's only noticeable when the game does loud stuff.