If we invented AI androids and wanted to deploy them effectively, what safeguards might we consider? Say such androids were designed to do the jobs and tasks we didn't want to, because they were unpleasant or dangerous, but their effectiveness made them a potential danger to us, their creators. How would we mitigate the risk that the androids might rebel, or be turned or neutralised by an enemy?

Central command, with a single AI controlling multiple bots, might be efficient but not robust: co-opting one bot could compromise the entire system. Individuality would be a more robust framework. So, decentralised AI it is. How about a kill switch? It could be captured by the enemy, so it is not robust either. A better mechanism might be a finite lifespan: no kill switch, no centralised command, each bot acting independently but coordinating of its own accord, cooperating when sensible and working alone otherwise. A finite lifespan means that a rogue unit would eventually, well, die. Such a mechanism would be built in because, ultimately, we would fear our own creation, made in our own image.
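
To make the idea a little more concrete, here is a minimal sketch in Python of what such a design might look like. All the names and numbers are hypothetical, and the cooperation heuristic is a stand-in: the point is only that each unit carries an immutable expiry set at construction, there is no central controller and no external kill switch, and a unit simply stops acting once its lifespan elapses.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A decentralised unit with a hard, immutable lifespan.

    No central controller, no external kill switch: the only
    shutdown path is the expiry fixed at construction.
    """
    name: str
    lifespan_s: float                        # fixed at creation, never extended
    born_at: float = field(default_factory=time.monotonic)

    @property
    def alive(self) -> bool:
        return time.monotonic() - self.born_at < self.lifespan_s

    def step(self, peers: list["Agent"]) -> str:
        """One decision cycle: cooperate opportunistically, else act alone."""
        if not self.alive:
            return f"{self.name}: expired"   # a rogue unit eventually dies
        live_peers = [p for p in peers if p.alive and p is not self]
        # Each agent decides for itself whether cooperation is sensible;
        # here, a coin flip stands in for that judgement.
        if live_peers and random.random() < 0.5:
            partner = random.choice(live_peers)
            return f"{self.name}: cooperating with {partner.name}"
        return f"{self.name}: acting independently"

# Usage: a small swarm with staggered lifespans and no shared controller.
swarm = [Agent(f"unit-{i}", lifespan_s=0.05 * (i + 1)) for i in range(4)]
for _ in range(3):
    for agent in swarm:
        print(agent.step(swarm))
    time.sleep(0.06)                         # some units expire between rounds
```

Notice that nothing outside an agent can switch it off, and nothing outside can keep it running either: compromising one unit buys an enemy a single, time-limited asset rather than the whole swarm.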


