I think that the hyper-intelligent machine argument is good, but having something capable of learning is probably somewhat paradoxical. If you're creating something like a self-replicating resource harvester, there's no reason for it to be capable of any significant learning or modification of its behavior. These machines are created for the purpose of harvesting resources, not for any sort of thinking or creativity.
These sorts of things would just spread like a virus, harvesting planets and self-replicating, without really changing at all. They would technically be subservient and responsive to some AI out there, but if for whatever reason they replicated faster than the AI could manage, due to unforeseen circumstances or phenomena, then they would keep on going, harvesting planets, without anyone to come collect the resources.
They would probably keep shipping these resources back to a collection point, but would ultimately be unable to stop self-replicating and consuming planets, despite having no real overlord managing them due to collapse or predation by other machine species.
If these robots' main AI source was attacked and overpowered, it would likely wipe all memory and similar records as a defense mechanism, hoping to protect other replicated AI hubs that exist elsewhere. If this predatory machine species wiped out a significant number of these AI hubs, then the harvesting machines under their power would ultimately be without a master, working for nothing. And without the information about their locations or the resource drop points, the predatory species wouldn't be able to collect the harvest either.
It's just a thought though.
Submitted April 05, 2020 at 10:14PM by marzipanmaddox https://ift.tt/34hXwUn