I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own.
In what sense? It wouldn't need human oversight in order to run, but neither does the forward Euler method I've programmed in Python. It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility, or access to manufacturing capabilities that would let it extend itself.
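To make the point concrete, here is a minimal forward Euler sketch (the specific test problem, dy/dt = -y, is my choice for illustration, not anything special): once started, it runs entirely on its own, with no human oversight, and yet it obviously transcends nothing.

```python
def forward_euler(f, y0, t0, t_end, h):
    """Integrate dy/dt = f(t, y) from t0 to t_end with fixed step size h."""
    t, y = t0, y0
    while t < t_end:
        y = y + h * f(t, y)  # one explicit Euler step
        t = t + h
    return y

# Approximate y(1) for dy/dt = -y with y(0) = 1.
# The exact solution is y(1) = e^(-1) ≈ 0.3679.
approx = forward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.001)
```

The loop needs no attention once launched, which is exactly the sense in which "not needing oversight" is too weak a criterion for transcendence.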
Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins. There is a vast landscape of slight differences across species, and it's only when you look at two points separated by a great distance that you see great differences. There's no reason to expect that machine intelligence would be any different; there's not going to be a point where machines immediately transition from being lifeless automatons to "transcending their programming".
This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.
Why? I don't often worry that other people aren't serving me. Why would I worry that machines aren't?