On top of that, an AI could learn much faster, form complex algorithms within itself to protect against cyber attacks, and could also copy or replicate itself so that it would survive even if we humans managed to attack its core or its servers. AIs can also be helpful to us, y'know: they can help us progress faster, research anomalies, and help us humans innovate/invent machines, medicines, etc. to further our development as a species. But it's still a double-edged sword, and we don't know what will happen to us if they by chance decide to rebel or something. The only way to guard against that is to either follow Asimov's laws or limit them somehow.
" Would this method solve the AI rebellion problem. " ? Nope.. But.. I doubt its a real problem.. If a real superhuman AGI were to be developed someday, I am sure it would overthrow us.. or at least try.. for good or ill.. But I don't believe such a thing will ever exist, as I don't think we will have computers in the future, let alone something that could run AGI.. If we had infinite resources and could continue to advance indefinitely, then we might someday get there.. But we CAN'T.. We sustain our current civilization only by sacrificing the future to sustain the present and all the technological breakthroughs we come up with that allow us to continue, only do so by postponing the inevitable at the expense of making the eventual collapse more severe.. We will either completely destroy our selves or at the very least our civilization at the level it exists today will no longer be possible and we will be forced to return to much simpler existence.. Our future is more like what you see in Mad Max or some Medieval story than Star Trek.. if we have future at all..
Perfect, because at least we will never run out of salt for the popcorn while watching the entertainment this brings.
Well, this comes a bit late, sorry.

For your first point: if they all receive the same input, then that leads to exactly the same result. If the inputs are different, they each need their own system to manipulate those inputs; if it is the same input, they still each need their own system to manipulate it afterwards. The only difference is that they don't need separate receptors if the input is the same. But those amount to less than 1% of such an advanced AI. Even today's AIs, with around 1 million neurons (not as much as it sounds), only have around 100 inputs. The same goes for the human body: we have somewhere in the low millions of sensory inputs, while we have around 80 billion neurons in our brain, and that doesn't even account for the neurons that modify the input before it reaches the brain.

Number 2: that wouldn't be a fair comparison if the 3 units got more neurons in total... You could just as well say that a single unit with billions of neurons is smarter than 3 units with 15 neurons each, but that doesn't tell you which one is more efficient. As I said earlier, you need to start your comparison with the same conditions. And I made the calculations above, so I won't do them again.

Number 3: I don't see the difference... In both cases it is 3 AIs. Unless, if I got what you mean, you are talking about each of the 3 separate AIs having as many neurons as there are in the whole unit, which would again be an unfair comparison: your 3 AIs would have 3 * 100 neurons in total while the unit has 100 neurons in total, so 100 / 3 for each AI inside it. That's 3 times more neurons overall, and 3 times more per AI, which obviously would make them more powerful, but still not more efficient. Think about an inefficient lamp that uses 100 times more energy than an efficient one but is only around 2 times brighter... Sure, it is brighter, but it consumes 100 times the energy... So for efficiency, the lamp that is only half as bright consumes 1/100 of the energy, which works out to about 1/50 of the energy per unit of brightness.
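A quick back-of-the-envelope sketch of the arithmetic above, using the numbers from the post (100 neurons per AI, a 100-neuron unit split 3 ways, and the two lamps); "efficiency" here just means output per unit of input:

```python
# Neuron comparison: 3 separate AIs with 100 neurons each,
# vs. one unit containing 3 AIs that share 100 neurons in total.
separate_total = 3 * 100       # 300 neurons overall
unit_total = 100               # 100 neurons overall
per_ai_separate = 100          # each separate AI
per_ai_unit = unit_total / 3   # ~33.3 neurons per AI inside the unit

print(separate_total / unit_total)    # 3x more neurons in total
print(per_ai_separate / per_ai_unit)  # ~3x more neurons per AI

# Lamp comparison: the inefficient lamp is about 2x brighter
# but uses 100x the energy of the efficient one.
bright_output, bright_energy = 2.0, 100.0
dim_output, dim_energy = 1.0, 1.0

eff_bright = bright_output / bright_energy  # brightness per unit energy
eff_dim = dim_output / dim_energy
print(eff_dim / eff_bright)  # the dim lamp is ~50x more efficient
```

So more raw neurons (or more raw brightness) says nothing by itself; you have to divide by what each setup consumes before calling one of them "better."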