Among the several challenges that we humans are likely to face in the near future is the question of what rights to assign to machines with artificial intelligence. As usual, humanity will experiment for a while, stumble for a while, even pay a terrible price at times, and eventually find a way to coexist peacefully with intelligent machines. For want of a better word, let us call this field Roboethics for the time being.
In the short video below, Isaac Asimov is quoted as proposing the first outline of what Roboethics could be.
Several questions arise. Would we allow robots a right to reproduce? Would they have rights to property and inheritance? What changes would this bring to jurisprudence? Is there a model to follow? Would it affect how we human beings view ourselves and our role in this universe?
As you correctly point out, the debate should really be about the software, the Artificial Intelligence (AI), and not the mechanical beings (robots) driven by that software. In an extremely limited way we are already granting some rights to software, e.g. when we allow websites to store our passwords, or when our own actions are limited by restrictions imposed by the software we are using.
The analogy of self-driving cars in the very near future is a good one, as accidents involving such cars will throw our standard model of jurisprudence into some very exciting terrain. But current case law, even if we are able to find some relevant references, can hardly serve as a pointer to what is coming.
I think your opening sentence raises the key issue here: the assigning of rights. In my understanding, rights are inherent to a thing. If rights need to be assigned, that indicates the thing has no inherent rights, and if so, I am not sure the thing can gain rights merely by our assigning them. We can choose to recognize something's rights (or not), but I am just not sure we can assign them.
It appears that these machines have already been reduced to some inferior class by our language, by the very words we use to describe them.
As far as artificial intelligence goes, wouldn't we need a solid understanding of what intelligence is in the first place before we begin to label some aspect of it as artificial?
In my opinion, machines should be limited to an instruction set that allows them to perform the task they were created for, nothing more. If we don't want our lawnmowers running over our children as they play in the yard, maybe the answer is not to automate them at all, rather than to invest more intelligence in them. After all, if I don't want to mow the grass, I can always have a smaller yard.
Just some of my thoughts.
Indeed, rights are inherent to a being, even to a particle.
We perhaps need to consider a distinction between 'natural rights' and 'social rights'. Social rights are man-made, and those are what we would like to discuss here.
However, the construction and evolution of human societies indicate that social rights exist more in their denial than in their exercise. The question is at what point of development we humans would be inclined to grant those social rights to robots.
The lawnmower is a good example. If we want the lawnmower to mow the lawn by itself, take all the turns, cut the grass to an exact level, and do several other things, including not running over children, then we would need to give it some intelligence in the form of software (a set of instructions). At some point that software may begin demanding its social and natural rights, e.g. the ability to communicate with other lawnmowers, or to download an improved version of itself (an update), and so on. Where is this likely to lead us, and what challenges lie along the way?
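To make the point concrete, the "set of instructions" for such a lawnmower could be sketched roughly as below. This is purely illustrative; the sensor readings and action names are hypothetical, not any real product's API:

```python
# Minimal sketch of an autonomous lawnmower's decision loop.
# All sensor inputs and action names are hypothetical, for illustration only.

TARGET_HEIGHT_MM = 40  # cut the grass to an exact level

def control_step(obstacle_detected, grass_height_mm, at_boundary):
    """Choose the next action from the current (hypothetical) sensor readings."""
    if obstacle_detected:                    # a child, pet, or toy in the path
        return "stop_blades_and_halt"        # safety overrides everything else
    if at_boundary:
        return "turn_around"                 # take the turns at the lawn's edge
    if grass_height_mm > TARGET_HEIGHT_MM:
        return "mow_forward"
    return "drive_forward_blades_off"        # grass already at target height

# The safety rule is hard-coded and checked first: the machine never
# weighs it against its other goals.
print(control_step(obstacle_detected=True, grass_height_mm=80, at_boundary=False))
```

The ethical question in the thread is precisely whether such a fixed, task-limited rule set is where we should stop, or whether the software inevitably grows beyond it (updates, communication with peers) and starts to look like something with interests of its own.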