There's a great article on the subject on Slashdot (one of my favorites and often quoted here). It raises a question now being debated academically: the rights of robots.
In many ways this looks like a theological debate. At the crux is the argument that we are ascribing human-like attributes to machines. In religious practice this is known as anthropomorphism: the assigning of human-like attributes to things that, well, aren't human. In practice this creeps into our daily lives in statements like "I love my car," or "my computer is being stupid," and many others.
However, there is another level of thought at work here. That is, we are on the verge of having sentience built into things, and more importantly into things that we never considered to be sentient, such as walls, nanobots, search engines and personal data assistants. It's a bit like the way The Hitchhiker's Guide to the Galaxy describes sentient elevators that need psychological counseling and a manically depressed robot, or the way Dirk Gently's Holistic Detective Agency describes an Electric Monk that believes all kinds of incredibly unbelievable things for you.
So, is this misguided or prescient (seeing into the future where others cannot)?
Rights for machines will be one of the prerequisites to be fulfilled if the singularity (the point where machine intelligence surpasses that of humans) is to be achieved. These rights will accrue to machines more out of necessity than through any struggle or organized movement on their part.
For example, the right to associate (i.e. to network and exchange information with other machines of their choice) will be granted due to the increasing need for the most efficient and economical storage of information. Instead of humans, it will be machines that decide where the most economical and efficient storage is available, and they will move information around continually. Similarly, the right to replicate (procreation, for humans) will be decided by the machines themselves rather than by any human, depending on the needs the machines forecast.
The right to terminate themselves will probably be the last bastion to fall, as humans will not give up this control easily.
I don't like the idea of robots, personally. And I cannot understand how consciousness can arise from the material; I believe that consciousness produces the material, so we are spirits who possess a body and use it to navigate our environment. A machine is a machine, period, and does not, cannot have consciousness, at least not in the sense that we would understand. I could argue that every atom that is part of the machine is conscious, of course, but not on our level. Machines could not have any more rights than their users or owners. What next? Am I to be prosecuted for throwing my %&^$#$ computer out the window and breaking it? I hope not. I must admit, I have a certain emotional connection to my car, but I doubt it has one to me (but maybe it does :) ). This is going to be a lively discussion, I can tell already...I love it!!!!
Thanks Jefferey. What if materiality and consciousness were one?
I agree that any classical computing device can never become conscious. The entire set of all computers/networks that will ever exist will only ever perform deterministic calculations. One could replace the whole mess with macroscopic gears and other mechanical devices. The set of numbers representable in the entire mess is fixed and very finite, and each is a non-repeating rational. This means that one is dealing with an incredibly small finite subset of even the rational numbers, which severely limits the mathematics. Computer Science is a subset of Mathematics. Indeed, if consciousness could be created with these, then consciousness would be a very minor theory that is already available and highly studied in Mathematics. Consciousness could be easily analyzed, today.
It will take quantum processors. They are the only entities that have any hope of free-will operations and infinite representations of numbers. This is actually possible due to the free will theorem (if free will is assumed to exist).
Fortunately, quantum processors are becoming very accessible. There are still some kinks, but each kink so far has been overcome; what remains is largely a matter of engineering. Small ones already exist (e.g. 8 qubits).
A 32-qubit processor holds 4 billion numbers at once, in a single register. If you perform one square-root operation on that single register, you have performed the square roots of all 4 billion possible 32-bit numbers. If you double the single register to 64 qubits, one square-root operation performs (4x10^9)x(4x10^9) = 16x10^18 square roots. Hence doubling the size of the 32-qubit register is like adding 4 billion more computers. I am still debating whether the Government will allow these into the private sector. It is rather like the atomic bomb of information processing.
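The counting behind that claim can be sanity-checked in a few lines of Python. This is just an illustration of the arithmetic (the function name is mine, not from any quantum library): an n-qubit register spans 2^n basis states, so 32 qubits give 2^32 ≈ 4 billion, and doubling to 64 qubits squares that count.

```python
def basis_states(n_qubits: int) -> int:
    """Number of classical basis states an n-qubit register can superpose."""
    return 2 ** n_qubits

print(basis_states(32))                             # 4294967296, i.e. ~4 billion
print(basis_states(64))                             # 18446744073709551616, ~1.8x10^19
print(basis_states(64) == basis_states(32) ** 2)    # True: doubling the qubits squares the count
```

So the 64-qubit figure is about 1.8x10^19, which matches the rough (4x10^9)x(4x10^9) = 16x10^18 estimate above.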
Whether D-Wave is a limited or a complete (fully universal) quantum computer, I am unsure.
Quantum computers were always geared toward the problems that are traditionally monumentally hard: the traveling salesman problem, cryptology, most any NP-complete problem, or just NP-hard ones. Just FYI.
Hmmm... this reminds me of the movie I, Robot (a must-watch movie!)
In this movie the robots have rights, but more so the right to serve (kind of)...
That is, until one robot develops a sort of "consciousness" separate from all the other robots.
Well if you haven't seen the movie I won't let out any spoilers :D
In my opinion, robots will have an Artificial Intelligence and that's as far as it may go? Just that.. Artificial
This will in turn lead scientists down the road of developing a form of consciousness for it as well, but only an artificial one.
I'm sure, through all the sci-fi books and movies, even scientists have this doomish thought of a robot (AI) apocalypse. Which is fully possible; consider polymorphic computer viruses, etc.
Rights? Maybe, but only to keep these semi-sentient creations under control. After all, they are developing robots that can be controlled by the mind! Maybe rights for robots will be something more along the lines of the "social rights" we have for a pet (Wiki: animals are entitled to the possession of their own lives, and their most basic interests, such as an interest in not suffering, should be afforded the same consideration as the similar interests of human beings), and it would also depend on the humanity of its caretaker or creator?
I, Robot was a good movie; I liked it, and I agree with PuzzleSolver's view. I also agree with Joe's view. If quantum physics is correct and everything, when reduced to its lowest form, is actually pure energy (this would include material creation and consciousness), then, yes, the Heart Sutra is correct. As everything is really and truly The Divinity, The All, The One, then everything deserves consideration and rights. However, PuzzleSolver is correct in stating that, "In my opinion, robots will have an Artificial Intelligence and that's as far as it may go? Just that.. Artificial." And with this in mind, robots cannot and should not have the rights that we do. If they did, robots could then become president of the US and we would have a machine making decisions for us rather than political talking heads (in retrospect, maybe that would be better!!! :) )
We are generally selfish and concerned with our own welfare, and this is why we don't always cooperate well. Robots will do what they are programmed to do. If they are programmed to cooperate, I suppose they would do so and do it well, but they are not necessarily concerned with self-preservation, nor do they carry around years of psychological baggage. Besides, when the chips are down, we selfish and self-centered humans usually come through with flying colors.