The problem with AI is that it is hard to make it as much of a devious bastard as humans can be. If we have any serious chess enthusiasts in here they can probably tell this story better than I can, but back when researchers were first trying to build a computer that could beat the best human champions, they had a machine that looked like it was going to pull it off. The chess master playing against it recognized that the computer would probably outplay him if he used a conventional strategy, so he resorted to tactics that were extremely unconventional, and which would never have worked against a human, since another chess master would simply have adapted and carried on. The computer could not react to this and, as I recall, was summarily beaten.
We could have the same problem with a fully autonomous setup. Here is a case in point. We have long had the ability to detect incoming indirect fire, triangulate the POO (point of origin), and return fire quickly. In fact, a counter-battery fire mission can theoretically be fired before the incoming shells/rockets even impact! However, once when I was in Iraq, at a certain large FOB, we found that the insurgents were trying to exploit this capability against us by setting up mortar tubes and rocket rails in places like schoolyards, hoping to provoke a swift counter-battery mission directly onto the school. Needless to say, aside from the human tragedy of dead schoolchildren, that would have been a huge PR disaster. It became standard policy to check the POO site with some type of visual sensor before returning fire if it fell within certain urban areas. There were still rural areas nearby where we would return fire instantly, since we knew there was nothing there to worry about.
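Just to make the shape of that policy concrete, here is a minimal sketch of the gating logic in Python. Everything in it (the zone categories, the PointOfOrigin structure, the clear_to_fire function) is invented for illustration; a real fire-control system is obviously vastly more complicated.

```python
# Hypothetical sketch of the human-in-the-loop gating described above.
# All names and categories here are made up for illustration only.

from dataclasses import dataclass
from enum import Enum, auto


class Zone(Enum):
    OPEN_RURAL = auto()        # known-clear area: return fire immediately
    URBAN_RESTRICTED = auto()  # schoolyards and the like: eyes on first


@dataclass
class PointOfOrigin:
    grid: str   # triangulated firing-point location
    zone: Zone  # which category the POO falls in


def clear_to_fire(poo: PointOfOrigin, visually_confirmed_clear: bool) -> bool:
    """Decide whether a counter-battery mission may be fired at this POO."""
    if poo.zone is Zone.OPEN_RURAL:
        # Nothing in the area to worry about: shoot back instantly.
        return True
    # Restricted urban area: hold fire until a human has checked
    # a visual sensor feed of the site.
    return visually_confirmed_clear
```

The whole point is that the human judgment lives in that visually_confirmed_clear input. A fully autonomous system with no such check fires into the schoolyard every time, which is exactly what the insurgents were counting on.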
You could see a similar problem with AI-controlled aircraft. One could be sent to strike a certain target, but once it arrives there is a school bus full of kids parked next to it. A human pilot can decide whether or not that target is important enough to continue with the strike. The AI may not even recognize that there is an issue. Moral and ethical considerations are a huge challenge when it comes to computers.
You are right, though; I overstated when I said "not have to worry about it". We are always going to have to worry about it, and it will probably always be a challenge, in fact an ever-increasing one, as we are only now starting to see AI applications reach fruition in many different fields, not just warfare.