Very cool - Pre-Life



Devaclis
Fri Jun 12th, 2009, 11:01 AM
http://www.wired.com/wiredscience/2009/06/tpna/

I am an avid reader of futurist predictions and some of them are stating that artificial life will surpass human intelligence and reasoning as early as 2050.

I am thinking that date may be later than when it actually happens. Advancements in technology are accelerating advancements in science at a rate that is almost impossible for scientists to calculate anymore.

Scary cool man. Very scary cool.

dirkterrell
Fri Jun 12th, 2009, 11:04 AM
I am an avid reader of futurist predictions and some of them are stating that artificial life will surpass human intelligence and reasoning as early as 2050.


Actually, that doesn't sound all that difficult. :|

Dirk

MetaLord 9
Fri Jun 12th, 2009, 11:11 AM
Hopefully I'll be dead by then.

Devaclis
Fri Jun 12th, 2009, 11:17 AM
VERY cool reads here. Check it out and blow your mind, man. /Chong


http://www.aleph.se/Trans/Global/Singularity/

Devaclis
Fri Jun 12th, 2009, 11:17 AM
The Law of Accelerating Returns
by Ray Kurzweil (http://www.kurzweilai.net/bios/frame.html?main=/bios/bio0005.html)

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.


Published on KurzweilAI.net March 7, 2001.
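
As a rough back-of-the-envelope check on that exponential claim, here is a minimal sketch in Python. It assumes the rate of progress simply doubles every 10 years, which is an illustrative figure rather than Kurzweil's exact model:

# Illustrative only: assumes the rate of progress doubles every decade.
DOUBLING_PERIOD = 10   # years per doubling of the rate of progress (assumed)
HORIZON = 100          # calendar years in the 21st century

# Each calendar year contributes 2**(year/10) "year-2000-rate" years of progress.
total = sum(2 ** (year / DOUBLING_PERIOD) for year in range(HORIZON))
print(f"{HORIZON} calendar years ~= {total:,.0f} years of progress at today's rate")
# Prints roughly 14,000 -- the same order of magnitude as the ~20,000 quoted above.

Under that assumption the century works out to roughly 14,000 "years of progress at today's rate"; the exact figure depends on the doubling period you pick, but any steady doubling gives a number far beyond 100.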

Devaclis
Fri Jun 12th, 2009, 11:19 AM
N. Korea does not scare me.

Swine Flu does not scare me.

Steve Jobs does not scare me.

What the "Singularity" means to being a living, walking, thriving human being scares the hell out of me.

Seriously, I can see all of my dreams and nightmares being something that I can realize, or that anyone can IMPOSE on me, if that were their will.

Devaclis
Fri Jun 12th, 2009, 11:27 AM
Check out "Altered Carbon" by Richard K. Morgan.

In his Takeshi Kovacs novels he tells of traveling from galaxy to galaxy on a "needle cast": a beam of light that places your mental being into a waiting organic body for you to use while you are out of your own body. There is no need to die, for real, unless your religion or morals dictate that you must do so. You are essentially "backed up" and can be restored into any body or form you wish. OR, you can live fully virtual as part of an elaborate computer program where you can live 1 million years virtually in just minutes of real-world time.

Devaclis
Fri Jun 12th, 2009, 11:35 AM
http://www.wired.com/wired/archive/8.04/joy.html

Sleev
Fri Jun 12th, 2009, 11:41 AM
Check out "Altered Carbon" by Richard K. Morgan.

In his Takeshi Kovacs novels he tells of traveling from galaxy to galaxy on a "needle cast": a beam of light that places your mental being into a waiting organic body for you to use while you are out of your own body. There is no need to die, for real, unless your religion or morals dictate that you must do so. You are essentially "backed up" and can be restored into any body or form you wish. OR, you can live fully virtual as part of an elaborate computer program where you can live 1 million years virtually in just minutes of real-world time.
"I'd like to rent Jessica Albas body, please"

zetaetatheta
Fri Jun 12th, 2009, 11:43 AM
You've done it Dr. Frankenstein! "My name is pronounced Fronk-en-steen!"

Shea
Fri Jun 12th, 2009, 11:54 AM
One of my many musings, the future of humanity. If we don't obliterate ourselves in a nuclear firestorm, wipe ourselves out in some bathtub-created bioplague, or simply sterilize the planet through our inability to coexist with it, what will become of us? Shall we become deft bio-manipulators and gene-engineer our future, or embrace the machine and merge with it? How would both of those possibilities be used to control us? Set us free?

Cool link Dana, I will have to spend some quality work time reading through it...:)

Devaclis
Fri Jun 12th, 2009, 12:04 PM
I think Theodore Kaczynski said it pretty well, even if he was a murderer:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

Shea
Fri Jun 12th, 2009, 12:10 PM
Certainly both outcomes are a possibility. Hopefully not but...

Devaclis
Fri Jun 12th, 2009, 12:14 PM
I really don't see it any other way man. Look around. How lazy has the population of this planet become? We are more than content to allow machines to do any work for us that they can possibly do. We are constantly looking for ways to make new machines to do more for us, faster.

I am of the opinion that the only ones who survive are those who stay in control. Stop thinking and you lose. Physical size and shape will not be a determining factor for living in the near future. Size will not matter. Control is key. The Elitist theory is the one I subscribe to.

Snowman
Fri Jun 12th, 2009, 12:24 PM
I don't see a war between humans and machines. I believe the third option is more likely.

Humans and machines will merge into a new species. You can already see the start of this walking down the street. How many times do you see people with Bluetooth headsets on? This is only the first stage of technology; the second will be implants, which already exist, and with each new technology humans will more willingly allow themselves to become part of machines and the worlds that can be created in them. Just look how much time people like Dana spend with games like World of Warcraft.

Those that reject the technology will be the ones considered ignorant and misguided.

Devaclis
Fri Jun 12th, 2009, 12:27 PM
If you can find a person with morals, who can replicate those morals into machines, you may have a chance. Otherwise, when standalone machines with no sense of the good of one vs. the good of many become intelligent, they will survive however they can. Asimov's rules are not flawless.

Shea
Fri Jun 12th, 2009, 12:36 PM
I can see your point, but you know how I feel about systems. They inevitably fail. Put your faith in something to take care of you and you will be disappointed (or live a life of servitude).

Yes, the power elite will use the new technology to control (look at TV), but the counter I see to that is also technology. Look how the internet has created a flood of information you just don't get on the nightly news.

I just don't see us being the sheeple on the Big and Large barge. No doubt many will actively seek that out but not all of humanity. Call me optimistic.

Snowman
Fri Jun 12th, 2009, 12:39 PM
Good and evil are human-created concepts. No machine will ever understand these concepts or be programmed with them.

However, I think machines can reach a point where they can tell the difference between advantage and disadvantage to whatever their function is. This could lead to competing functions that may cause conflict between different machines.

This is another reason why humans will have to merge with machines, in order to give them a global purpose of survival. Humans have lived with the ability to wipe their species out of existence for more than 60 years and have not. Why? Because one basic instinct we all share is the desire to mutually survive.

We will have to tie the machines' survival to ours for both to survive. Otherwise we both will go extinct.

Shea
Fri Jun 12th, 2009, 12:47 PM
"If" the machines develop a need for survival. That is a tough sell. Requires emotion and a belief in something greater then yourself.

Devaclis
Fri Jun 12th, 2009, 12:50 PM
"Racist" will have a new meaning.

McVaaahhh
Fri Jun 12th, 2009, 12:54 PM
Welcome to the matrix...

Snowman
Fri Jun 12th, 2009, 12:59 PM
"If" the machines develop a need for survival. That is a tough sell. Requires emotion and a belief in something greater then yourself.That’s what I’m saying, machines on their own will never figure out concepts like survival. These will have to be programmed into them.

However, I believe they will have the ability to determine what would be an advantage to their stated function and act on it. What that entails depends on the function.

If its function is to wipe out all white humans, it may see the advantage to wiping everyone out to save time and resources. And once its function is completed it will no longer have a function to carry out, thus becoming useless.

So in order to survive, humans need to express these concepts to machines, and I believe the easiest and safest way is to download human brains into them. Once this happens the above scenario would be impossible, because the machine will not see itself as any different from the humans it's trying to kill.

Horsman
Fri Jun 12th, 2009, 01:11 PM
After reading that, I can only think of The Fifth Element
http://milla.wedjat.ru/Fifth_element/0710.jpg

Hopefully they never replicate us...
http://i142.photobucket.com/albums/r107/4Horsman/fifth_element_04.jpg

Horsman
Fri Jun 12th, 2009, 01:13 PM
However, I think machines can reach a point where they can tell the difference between advantage and disadvantage to whatever their function is. This could lead to competing functions that may cause conflict between different machines.
My Office Computer already does that!!!:cry:

Snowman
Fri Jun 12th, 2009, 01:19 PM
My Office Computer already does that!!!:cry:

That's just Windows Vista...
We are talking about Intelligent Machines. :)

Canuck
Fri Jun 12th, 2009, 01:39 PM
That’s just Windows Vista...
We are talking about Intelligent Machines. :)

:lol:

Thanks to the Edge-a-macation system in this country, Artificial Intelligence and the Chinese will soon own all.

rforsythe
Fri Jun 12th, 2009, 02:24 PM
"If" the machines develop a need for survival. That is a tough sell. Requires emotion and a belief in something greater then yourself.

Not really, and you imply that the machines will care about it at an individual level, as we do. More likely, were a self-aware machine to arise in numbers, it would understand that for the good of all, it is sometimes necessary for some to "die". The machines' lack of emotional response in and of itself will allow this to work, because they will feel no remorse or need to please a higher power. They will simply ensure survival of their kind, even if it means a few must be killed off. Those will be recycled into more.

Being self- and like-kind-aware and understanding propagation/survival does not in any way imply emotional or spiritual development. More likely, we'd all be seen as impediments to their well being and eradication would be a likely attempted outcome.

Shea
Fri Jun 12th, 2009, 02:31 PM
No Ralph, what I'm saying is that in order for them to realize that we need to band together, in order to survive, would require an emotional leap that I don't believe they will achieve. That need for survival requires an emotional response to being alive. Lacking that, as you put it, they would just see themselves as irrelevant cogs in a greater machine (if even that) that are trivial and therefore couldn't care less if their existence was snuffed out in a second.

Snowman
Fri Jun 12th, 2009, 02:36 PM
You assume machines will evolve through the course of natural selection. This is where humans get their need for survival, in order to procreate the species. Machines however do not require evolution to survive. They can just exist as long as they can maintain themselves.

They do however need evolution to become better machines. But this requires a goal and a purpose to become better at. These things I believe can only be attained from those that originally created the machines. Us.

InlineSIX24
Fri Jun 12th, 2009, 03:06 PM
What was Skynet's goal/plan after humans were gone? Solitaire?

Snowman
Fri Jun 12th, 2009, 03:10 PM
What was Skynet's goal/plan after humans were gone? Solitaire?

Exactly. At some point Skynet had to be programmed with a purpose that allowed it to decide that humans were an impediment. It would not have done it on its own no matter how smart it became.

Skynet is serving a function. It does not seem to have any need to survive beyond its need to kill humans. Once it determines the threat has been neutralized (i.e., all humans dead) it will stop functioning.

Now if you gave Skynet human desires, it would have to consider the possibility of it not existing anymore. This would allow it to find other ways to exist, even with humans.

rforsythe
Fri Jun 12th, 2009, 03:27 PM
No Ralph, what I'm saying is that in order for them to realize that we need to band together, in order to survive, would require an emotional leap that I don't believe they will achieve. That need for survival requires an emotional response to being alive. Lacking that, as you put it, they would just see themselves as irrelevant cogs in a greater machine (if even that) that are trivial and therefore couldn't care less if their existence was snuffed out in a second.

Emotional connection to life and a programmed requirement to keep one's species in existence, even if that means sacrifice of self or others, are two vastly different things. Realization of the "safety in numbers" thing does not, nor will it, require any emotional leap whatsoever. In fact I believe it is quite the opposite: humankind's own emotional connection to life and each other actually stunts true biological evolution of our own species at the rate it might otherwise take place, because we have a very deep desire as a species to place life of self over life of all.

They could understand their need to propagate and evolve quite easily without any biological emotional constraint placed on it. Whether they are emotionally attached to the concept of life or not, that does not mean they won't defend themselves by whatever means necessary.


You assume machines will evolve through the course of natural selection. This is where humans get their need for survival, in order to procreate the species. Machines however do not require evolution to survive. They can just exist as long as they can maintain themselves.

To a point, but a self-aware machine must also then realize that it too will wear out and eventually fail beyond repair, which creates a need for expansion of its kind, likely with a goal of lasting longer than the one before it.


They do however need evolution to become better machines. But this requires a goal and a purpose to become better at. These things I believe can only be attained from those that originally created the machines. Us.

Wrong. Machines are already designing better iterations of themselves, at paces that far exceed what humans can do. Their only goal needs to be expansion of their kind, and evolution to become better/stronger/faster. We are most likely an impingement on that whole process, and therefore unnecessary, which makes us a threat to attaining those goals, and therefore expendable.

Shea
Fri Jun 12th, 2009, 03:40 PM
Well, are we talking desire for survival or programming? Desire requires emotion; programming requires a few 1's and 0's. Now if some fool programs a robot to survive "no matter what" and leaves it that ambiguous, we're doomed.

Snowman
Fri Jun 12th, 2009, 03:42 PM
To a point, but a self-aware machine must also then realize that it too will wear out and eventually fail beyond repair, which creates a need for expansion of its kind, likely with a goal of lasting longer than the one before it.

You will have to define how to create a self-aware machine. I do not believe this is possible through just raw processing power. You can have a planet-wide computer hooked through billions of processors and it still would not be any more self-aware than a handheld calculator.

I believe a self-aware machine would require human traits that can only be given by a human-to-machine connection. And once you have made this connection the machine will assume the traits of a human, including its desire to survive and have a purpose.

Wiping out the human species as a whole would not be any more plausible for it than any one of us being capable of nuking the planet.



Wrong. Machines are already designing better iterations of themselves, at paces that far exceed what humans can do. Their only goal needs to be expansion of their kind, and evolution to become better/stronger/faster. We are most likely an impingement on that whole process, and therefore unnecessary, which makes us a threat to attaining those goals, and therefore expendable.
Yes, but who is telling these machines what is better? Machines will not know what is considered "better" and an advantage unless humans are in the loop to tell them which way to go.

The goal of expansion is a programmed goal, which wasn't created by the machines themselves. Now they can interpret the goal given to them to create new ways of solving the problem. However, once the problem is solved to the humans' stated goals, the machine will be without a purpose again, and will stop.

Shea
Fri Jun 12th, 2009, 03:44 PM
Wiping out the human species as a whole would not be any more plausible for it than any one of us being capable of nuking the planet.


I have some friends in the Middle East I'd like you to meet :)

Snowman
Fri Jun 12th, 2009, 03:47 PM
I have some friends in the Middle East I'd like you to meet :)

Yes, but they have a level of survival they are willing to accept, or they would have killed themselves off long ago.

InlineSIX24
Fri Jun 12th, 2009, 03:47 PM
We'll have to worry about the zombies long before the machines I think.