ChatGPT will pretty much always accept whatever premises you give it, because it has been trained to behave that way.
It is fairly easy to train a model to prefer particular political opinions; the one you are using has been trained to agree with whatever you say to it.
(You're probably going to ask: don't these things require supercomputers to train? The base models do, yes, but adding further training on top of them is well within the capabilities of an amateur.
Some people really dislike this additional training layered on top of the base models.)
One of my favorite examples of ChatGPT agreeing with whatever it's fed, which went viral on Twitter a few weeks ago: