Go AMD unless you’re actually running AI/ML applications. AMD is easier than Nvidia on Linux in every way except ML and video encoding (although I’m on a Polaris card that lost ROCm support [which I’m bitter about], and I think AMD cards have added a few video codecs since). In GPU compute, Nvidia holds a near-monopoly, one I don’t necessarily want to buy into. I’ve never used an Intel card, but from what I’ve heard it seems okay. Anecdotally, graphics support is usable for gaming but still improving. Its AI ecosystem is immature, but I think Intel provides special TensorFlow/PyTorch extensions or something, so with a bit of hacking (likely less than AMD needs) you might be able to finagle some stuff into running. Where I’ve heard these cards shine, though, is video encoding/decoding: people buy even the cheapest one and have a blast in FFmpeg.
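If you do go the Intel or AMD route for encoding, the usual path is VAAPI through FFmpeg. A minimal sketch, with a software fallback when no render node is present (the device path, codec choice, and filenames here are assumptions; check what your card actually supports with `vainfo`):

```shell
# Pick a hardware H.264 encoder if a VAAPI render node exists,
# otherwise fall back to software libx264.
if [ -e /dev/dri/renderD128 ]; then
  # hwupload moves frames to the GPU; nv12 is the format VAAPI expects
  VOPTS="-vaapi_device /dev/dri/renderD128 -vf format=nv12,hwupload -c:v h264_vaapi"
else
  VOPTS="-c:v libx264"
fi
# input.mkv / output.mp4 are placeholder filenames
echo "ffmpeg -i input.mkv $VOPTS -c:a copy output.mp4"
```

The `echo` just shows the command that would run; drop it once you’ve confirmed the encoder list with `ffmpeg -encoders | grep vaapi`.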
Truth be told, I don’t mess with ML a lot, but Google Colab provides free GPU-accelerated Linux instances with Nvidia cards, so you could probably just go AMD or Intel for the best day-to-day usability and do your AI work in Colab.
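If you take that route, it’s worth sanity-checking that the Colab runtime actually got an Nvidia GPU before assuming CUDA works (in a notebook you’d prefix this with `!`). A minimal sketch; the `nvidia-smi` query fields are standard, the fallback message is mine:

```shell
# Report the attached NVIDIA GPU, or note that none is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_INFO=$(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)
else
  GPU_INFO="none (switch the Colab runtime type to GPU)"
fi
echo "GPU: $GPU_INFO"
```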
It could have changed, but last I checked, AMD cards actually tend to be cheaper than, or about the same as, Nvidia for the same specs. I’m not a cultish defender of AMD, though; honestly, ROCm support sucks (though I’m biased, since I’m bitter about Polaris being dropped so quickly).
Your ThinkPad problem sounds more like some sort of power-profile issue than an AMD GPU issue, though it could just be Vega. I have a Cezanne ThinkPad E16 with an AMD iGPU that works very nicely, probably one of the best-working Linux devices I’ve ever owned.