Paired with free will is the concept of determinism. Determinism holds that every action is the necessary result of prior causes, so the free will we think we have is limited by the choices actually available to us. This raises the question of whether AI is governed by the same rules as humans, and if so, how an AI would come to know those rules. Humans, by contrast, learn the laws of their country and absorb morals throughout childhood. If machines were to exhibit something similar to free will, it would show that free will can arise from algorithmic systems, and perhaps without any intent to create it.

Science fiction films provide some of the strongest cultural expressions of the fears surrounding free will in machines. The film A.I. Artificial Intelligence suggests that self-aware AI may not always have free will. When the robots are taken to a carnival where they will be destroyed for entertainment, David is the only one capable of begging for his life. He was not programmed to beg, but he did so because external input led him to feel fear, and this allowed him to make what appears to an outside observer to be a choice. Could free will lead to a destructive being? This can only be speculated, but in my opinion, if a machine used its free will to save its own life over the life of a human, the AI would no longer be beneficial to humanity. Then again, what an AI deems 'safe' for a human may itself restrict the free will of that human.