Thoughts on the AI apocalypse

The great fear of Artificial Intelligence is that some entity will become self-aware and decide that humanity is a bug that needs to be exterminated. The more likely scenario is that some overconfident programmer with no sense of caution will build something that ends with the destruction of humanity, not because the program or AI/AGI 'decided' that we don't deserve to live, but because the parameters of the program were interpreted that way. The 'algorithm' will see humanity as a flaw that needs to be corrected.

That is, if we survive long enough for that to happen. It is just as likely that some ML/AI program acts in an unexpected way and triggers the end of the world in some spectacularly bad stroke of luck.
