Considering the expansion of computer technology, in the very foreseeable future it will be possible not only to build a parallel processing system with the same level of complexity as the human brain, but to make it inexpensive enough that any university can have at least one. About 10 years ago, I estimated that this would happen within 20 years, and I am still optimistic about that number (the next 10 years, now).
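For what it's worth, the arithmetic behind a forecast like this can be sketched in a few lines. The three figures below (brain-equivalent throughput, today's affordable throughput, and the doubling period) are illustrative assumptions of my own, not numbers from the post:

```python
import math

# Back-of-envelope sketch of a "brain-equivalent hardware in ~20 years"
# forecast. All three figures are illustrative assumptions, not data.
brain_ops_per_sec = 1e16      # assumed brain-equivalent throughput
current_ops_per_sec = 1e13    # assumed affordable machine today
doubling_period_years = 2.0   # assumed Moore's-law-style doubling time

# Number of doublings needed, then the time those doublings take.
doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
years = doublings_needed * doubling_period_years
print(f"~{doublings_needed:.1f} doublings, roughly {years:.0f} years")
# Under these assumptions: ~10.0 doublings, roughly 20 years.
```

Change any one assumption by an order of magnitude and the answer shifts by only a few doubling periods, which is why estimates of this kind tend to cluster within a decade or two of each other.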

Is there agreement or disagreement on that point? And, secondarily, would it be possible for such a device to have a human-level "monad"?

Please note that Anna "The Wholeness Principle" Lemkow told me that it was "untheosophical to even ask such a question," which, of course, makes it all the more worth asking.


Replies to This Discussion

:D I agree that any question that seems untheosophical to ask is the most important question a theosophist can ask!

It wouldn't surprise me, after the creation of the rat-brain-controlled robot (the first rodent cyborg) http://www.youtube.com/watch?v=1QPiF4-iu6g and CB2. I think it's the implications of AI that will take another 50 years to work out.

Personally, in regards to the monad, I don't see a need for a monad in the Leibniz sense of the word. And in regards to spiritual evolution, I don't feel there is such a thing to support the idea of a human monad in HPB's sense. So for me, all that is required is the fashioning of matter in the correct way to bring about an AI that can match a human. This sounds heathenistic at one point, but if you consider matter in the quantum sense rather than the Newtonian, it takes on a more 'spirit/energy' subjectivist point of view in relation to consciousness. Just my view :)
What about the Buddhist "skandhas" ("bundles" that make up a human), as well as Hume's "Treatise", where he refers to a multiform personality for humans (Oxford Press pb, page 252)? Actually, there is no real need for "monads" at all, in the Leibniz sense. We are "electro-chemical machines" (in the Descartes sense). How does all this get into an AI being?
I suppose: how does it not? If the premise is true that we are only 'electro-chemical machines', then there is not really any difference between a man and a machine aside from its software. I think that's where our difficulty lies at present. There are so many functions the brain performs that we take for granted, things that have evolved since the beginning of the universe, and incorporating them into software will definitely take time. It's quite fascinating, some of the problems that can arise in the mind when certain functions are inhibited, such as the perception of self and other, something that actually only develops in children at the age of 4-5.
Very interesting. I think my view on karma differs, and I probably don't understand your view well enough to really comment, but from my view karma is the sum of the qualities/nature/conditioning of all the relations you presently have (whether between the very particles in your body, your car, the past and beliefs about the future, the people on the street, family and loved ones, and the planets circling you). The quality and nature of all these relations make you who you are right now, and also determine to some probability (a probability that time degrades) how you will act. To take it back to a Ptolemaic universe, your 'I' is the centre of the universe, and it is karma that determines the nature of all of it, the 'I' and the universe; karma for me is the very nature of relationship itself. So, to redefine, I would imagine (in a very flawed manner) that an enlightened being would be one that was one with karma, one that could allow these relationships to pass through while remaining 'unattached'.

So from a robot's perspective, they could be 'human' so long as there was an 'I' that could become conditioned to take part in these relationships.
Well - I'm disappointed in Anna Lemkow. I don't think there's a single scientific question that could be asked that's untheosophical, from my point of view.

But from my connections with a few people in the Artificial Intelligence world, they seem no closer to creating human intelligence than they were ten years ago.

Of course, you put your question a bit differently: you said 'with the same level of complexity'. I'd say the internet already has a higher level of complexity than the human brain, and more people have access to it than just universities. But it doesn't replace human intelligence; instead, it's an expression of it.
Tangential to this topic (AI), if someone dies on, for instance, Mars, they still leave a body, so to speak, and go into other "dimensions of life" (according to HPB and Wm. Q. Judge). What would an AI leave besides a body?

BTW, I like the comments before mine (above, Ms. Hesselink's).
This view depends on there being 'something' that leaves the body and reincarnates/rebirths. From a theosophical perspective, if this was the monad, we would have to look at exactly how the process works from monad to newborn. What is the monad? Does it attach itself to a body at the instant a sperm meets an egg, or later during the pregnancy? Why and how does the monad choose a body? Is the universe to the monad actually subjectivist or objectivist (because this will help to understand karma and the selection process of bodies)? And what exactly happens at death?
"from a theosophical perspective if this was the monad, we would have to look at exactly how the process works from monad to newborn? what is the monad? does it attach itself to a body at the instant a sperm meets an egg or later during the pregnancy? why and how does the monad choose a body?"

And what if a monad chooses an AI to "inhabit"?
Terse question! But I do feel that HPB and other Theosophists have covered the subject of death-and-after pretty well! I'm not sure a "monad" (in the Leibniz sense of things, or even in Theosophy) "chooses" anything. Karma (kamma) covers that subject fairly well, too. See "The Ocean of Theosophy" by Wm. Q. Judge, Chapters III, IV, and V.
Good question. What if AI does offer enough opportunity for learning by consciousness for a monad to inhabit it (whether it chooses to or not is beside the point)? I guess that would mean, from a theosophical perspective, that AI is alive, in the evolutionary sense. Which means it has become part of the learning process of the universe, which I'm sure is true anyhow, whether or not specific monads inhabit it.

Blavatsky said (in The Key to Theosophy, I think) that the monad (as in the divine trinity) doesn't attach itself to the child till its seventh year. That is, it doesn't become responsible till then; up to that point it only expresses family karma (or something like that).

This sort of higher monad can obviously only inhabit something that has responsibility, which implies choice. As long as our artificial intelligence creations follow strict rules of logic, I don't think they can be said to have choice. But when fuzzy logic and self-learning become involved, it becomes harder to say whether they're 'responsible'. Asimov explored the ethical aspects of computers (well, robots, but there's no fundamental difference) in his Robot series.
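To make that strict-versus-fuzzy contrast concrete, here is a minimal sketch. The scenario and thresholds are invented for illustration only; it shows that a crisp rule always returns the same verdict for a given input, while a fuzzy rule returns a degree of membership that a self-learning system could then tune:

```python
# Minimal sketch contrasting strict (crisp) logic with fuzzy logic.
# The temperature scenario and thresholds are invented for illustration.

def crisp_may_act(temperature: float) -> bool:
    """Strict rule: a fixed, fully determined yes/no decision."""
    return temperature > 30.0  # same input -> always the same verdict

def fuzzy_hot(temperature: float) -> float:
    """Fuzzy membership: 'hot' as a degree between 0.0 and 1.0."""
    if temperature <= 20.0:
        return 0.0
    if temperature >= 40.0:
        return 1.0
    return (temperature - 20.0) / 20.0  # linear ramp between 20 and 40

for t in (15.0, 25.0, 35.0):
    print(f"{t} C  crisp: {crisp_may_act(t)}  fuzzy 'hot': {fuzzy_hot(t):.2f}")
```

A self-learning system would go a step further and adjust the ramp's endpoints from experience, so its verdicts would drift away from anything its programmers explicitly chose, which is roughly where the question of 'responsibility' raised above begins to bite.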
