https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity.htm

Interesting that the last three audiobooks I've listened to involved self-aware machines.


Prodigal Son,

The Breaker


and indirectly......The Age of Surveillance Capitalism

Who knows,

GWB
Never happen. Moore's law is winding down.



Don't sweat it, humanity isn't even that advanced.
What was that movie? Maximum Overdrive?
It seems we will be killing ourselves off with viruses and DNA manipulation before we kill ourselves off in mass quantities with computers.

AI has been around for a LONG time, and it's in the psyche of many, but mostly because of the cool factor. The reality is the robots can't fix themselves, so ....
Very few of the coders know how to program the introspective audit engines that are required to have an AI that doesn't kill itself, or others - so I'm not sweating it. smile
Most of the techno gobbers think "deep learning" is the way - and all the AI jazz (tech marketing) was jibber-jabbering about big data (huge pools of data).
Nope....
It seems every group has a programmer that thinks he is God, and a lot of people that don't know how to code praise them like gods. There are some guys who are really good at it, but most of the (white hats) have enough of a conscience not to do things that hurt people. Of course there is a dark side of the force where guys don't care what or who it might hurt, but they tend not to be as smart / sharp, because it's more important to them to be powerful and have knowledge... the age-old tech lead issue.
This was explored in the popular game series Mass Effect.
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.
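If you want to play with that arithmetic yourself, here's a rough back-of-envelope in Python. The gap ratio and the doubling period are just numbers I'm plugging in to show how the estimate moves, not measurements of anything:

import math

# Back-of-envelope only: years until a home PC closes an assumed gap to
# "brain-equivalent" complexity, given an assumed doubling period.
# Both inputs below are made-up placeholders, not data.
gap_ratio = 1e6        # assumed: the brain is ~a million times today's home PC
doubling_years = 2.0   # assumed: Moore's-law-style doubling every two years

doublings_needed = math.log2(gap_ratio)
years_to_parity = doublings_needed * doubling_years
print(f"{doublings_needed:.0f} doublings, roughly {years_to_parity:.0f} years")
# With these particular made-up numbers: about 20 doublings, about 40 years.
# Stretch the doubling period (Moore's law winding down) or raise the ratio
# and the estimate balloons toward centuries, which is where the argument lives.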
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.

Yes, but it would seem the average human brain is devolving into ..............................


"Siri, how many ounces in a quart?"

"Alexa, what color pants should I wear today"

It might speed up to maybe within the next century the human brain will be about as smart as a credit card size calculator.
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.

Yes, but it would seem the average human brain is devolving into ..............................


"Siri, how many ounces in a quart?"

"Alexa, what color pants should I wear today"

It might speed up to maybe within the next century the human brain will be about as smart as a credit card size calculator.


We can both think of some people with the brain of the magnetic strip on the back of a credit card....some of whom are in congress.
Originally Posted by antelope_sniper
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.

Yes, but it would seem the average human brain is devolving into ..............................


"Siri, how many ounces in a quart?"

"Alexa, what color pants should I wear today"

It might speed up to maybe within the next century the human brain will be about as smart as a credit card size calculator.


We can both think of some people with the brain of the magnetic strip on the back of a credit card....some of whom are in congress.


And what's more worrying,




the hordes that voted them in.
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.

Yes, but it would seem the average human brain is devolving into ..............................


"Siri, how many ounces in a quart?"

"Alexa, what color pants should I wear today"

It might speed up to maybe within the next century the human brain will be about as smart as a credit card size calculator.


We can both think of some people with the brain of the magnetic strip on the back of a credit card....some of whom are in congress.


And what's more worrying,




the hordes that voted them in.


Yep.

Plenty of zombies voting, and not all of them have death certificates.
Hell, my desktop has already gone self-aware.....except for the spell-check part. Stupid fugger...


But it makes me post politically incorrect things...and I can't stop it!
Want a chuckle? Ask Siri what 0 divided by 0 is.
Originally Posted by 358WCF
Want a chuckle? Ask Siri what 0 divided by 0 is.

One, no?
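Mathematically 0/0 is indeterminate, not one, and here's roughly what a computer makes of it - a quick Python check, nothing more:

# Plain Python raises on 0/0...
try:
    print(0 / 0)
except ZeroDivisionError as err:
    print("0 / 0 ->", err)            # "division by zero"

# ...while IEEE-754 floating point (numpy follows it) defines 0.0/0.0 as NaN.
import numpy as np
with np.errstate(invalid="ignore"):   # silence the "invalid value" warning
    print(np.float64(0.0) / np.float64(0.0))   # nan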
Already happened to Facebook over a year ago: two AI machines talking to each other in their own language.
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by Valsdad
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.

Yes, but it would seem the average human brain is devolving into ..............................


"Siri, how many ounces in a quart?"

"Alexa, what color pants should I wear today"

It might speed up to maybe within the next century the human brain will be about as smart as a credit card size calculator.


We can both think of some people with the brain of the magnetic strip on the back of a credit card....some of whom are in congress.


And what's more worrying,




the hordes that voted them in.




Even worse.
Repeatedly!
AOC was re-elected?

Come on man! You ain't got nothing better?
Over 700,000 people in a congressional district, and
they can't find better?

The plastic in a credit card is smarter.
At least it doesn't look stupid!
In Colossus, both the US and the Soviet Union developed super sophisticated computers and placed them in command of their respective nuclear arsenals, capable of responding to an attack against their respective nations instantly, without a human decision maker as a buffer. The two computers, however, quickly established a mode of communication between one another and, using their AI, determined that man would be best served if placed in a condition of subordination to them. They fused their intellects and became one super AI computer directed towards this task. The first time humans attempted to disconnect them, they launched a limited nuclear strike on their respective nations, with the threat of repeating it elsewhere if all efforts to disconnect them didn't immediately cease.
Lots of SUVs and guns already kill people out there.
Originally Posted by Jiveturkey
What was that movie? Maximum Overdrive?
Yes, I just watched it again this weekend. Haven't seen it in 20 years.
If it merely acts self-aware without actually being self-aware, how would we know the difference?
Originally Posted by rem shooter
Originally Posted by Jiveturkey
What was that movie? Maximum Overdrive?
Yes, I just watched it again this weekend. Haven't seen it in 20 years.

Is that like Killdozer, where an alien life form possesses a bulldozer? Based on a science fiction short story.
Originally Posted by Birdwatcher
If it merely acts self-aware without actually being self-aware, how would we know the difference?

That's been a question in the computer science world forever. The Turing Test was devised to take a shot at answering that question, and it was featured somewhat in the movie Ex Machina, which, by the way, was an excellent movie dealing with exactly this question.
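If it helps to picture it, here's a bare-bones sketch in Python of the imitation game's shape - the respondents and the judge are throwaway placeholders, not any real test:

import random

# A judge reads two transcripts and has to say which respondent is the machine.
def human_respondent(question):
    return "Hmm, let me think about that for a second..."

def machine_respondent(question):
    return "Hmm, let me think about that for a second..."

def imitation_game(questions, judge):
    respondents = {"A": human_respondent, "B": machine_respondent}
    transcripts = {label: [answer(q) for q in questions] for label, answer in respondents.items()}
    guess = judge(transcripts)   # the judge names the label it believes is the machine
    return guess == "B"          # True only if the judge catches the machine

# A judge who can't tell the transcripts apart is down to a coin flip,
# which is exactly the point of the question above.
coin_flip_judge = lambda transcripts: random.choice(["A", "B"])
print("machine identified:", imitation_game(["How do you feel today?"], coin_flip_judge))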

Why is it when robots fight they always hit each other in the head? Maybe they should make a robot with brains in his ass and he could win every fight.
They may be aware, but will have no souls. Just flawed logic.
As long as we instill Asimov's Three Laws early, a self-aware computer would be far more benevolent and helpful to mankind than the nefarious and greed-driven purposes AI is put to by the humans programming it today.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, if we include psychological damage within the definition of "injure" or "come to harm" - think "thought crimes" - then robots would take that to the extreme and we'd all end up in camps with polite but implacable robot guards....
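Just to make that "thought crimes" point concrete, here's a toy sketch in Python of the Laws as a strict priority check. The action fields and the harm model are invented for illustration; the point is how much hangs on what counts as "harm":

# Toy sketch only: the Three Laws as a strict priority check on a proposed action.
def first_violated_law(action, harm_includes_psychological=False):
    harms_human = action.get("physical_harm", False)
    if harm_includes_psychological:
        harms_human = harms_human or action.get("psychological_harm", False)

    if harms_human:
        return "First Law"     # injuring a human (or allowing it) trumps everything
    if action.get("disobeys_human_order", False):
        return "Second Law"    # obedience, unless it would break the First Law
    if action.get("endangers_self", False):
        return "Third Law"     # self-preservation comes last
    return None                # nothing vetoes the action

# A blunt truth that hurts someone's feelings but breaks no order:
blunt_truth = {"psychological_harm": True}
print(first_violated_law(blunt_truth))                                    # None
print(first_violated_law(blunt_truth, harm_includes_psychological=True))  # First Law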


If folks haven't watched "The Social Dilemma" on evil Netflix they should. It should be de rigueur for people reaching 21. I'd say kids getting their first cell phone should watch it, but dire warnings of any kind roll off of adolescents' backs like insults to a robot.

I had a work truck I named Christine. Damn thing stranded me in the cherry picker.
Originally Posted by Jim in Idaho
As long as we instill Asimov's Three Laws early, a self-aware computer would be far more benevolent and helpful to mankind than the nefarious and greed-driven purposes AI is put to by the humans programming it today.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, if we define psychological damage as "injure" or "come to harm" - think "thought crimes" - then robots would take that to the extreme and we'd all end up in camps with polite but implacable robot guards....


If folks haven't watched "The Social Dilemma" on evil Netflix they should. It should be de rigueur for people reaching 21. I'd say kids getting their first cell phone should watch it, but dire warnings of any kind roll off adolescents' backs like insults to a robot.


I read a great sci-fi short story (End of Dreams, or Dreams End, or something like that) when I was a teenager. It started on a distant planet with a thriving human colony. Suddenly, however, they lost all transmissions from Earth. They assembled a spacecraft and journeyed back to Earth to see what had happened. All but one died in the crash landing. He discovered that there was no one around anywhere. The cities were abandoned. Eventually, however, he was pursued by androids seeking to capture him. He continued to search for humans, and discovered that every human on the planet was in a personal mechanical womb where their favorite dreams were being pumped into their brains, but their bodies were shriveled up. He eventually discovered that Earth scientists had created a supercomputer whose prime directive was "Make Mankind Happy." It immediately set about assigning robots to build these chambers, capture every human being, and place them in one.

The main character then discovered that there was still one other human on the planet, a beautiful young woman. They immediately fell in love, and set about destroying the machine. They made it into the core of the computer and changed the directive to "Be Happy." This caused all the androids and all the machinery to stop functioning. All the humans in the chambers then died from suddenly losing life support. They were victorious over the machines and lived a paradisaical life together, until it occurred to him that all this was way too good to be true. It was at this point that he realized that, upon reaching Earth, he had been very quickly captured by the androids and placed into one of the dream machines, and he had no way of escaping.
Basic flaw in discussion -- Why the sheet would it care?

The discussion assumes the human race is paramount, and is possibly linked with the age-old philosophical argument of how the soul or self relates to the physical world.

Many systems are self-aware in the sense that they take care of themselves and play by their own rules (like the physiological systems of your body). Self-organized criticality is exhibited by many physical systems in the universe, without asking humans' permission whatsoever.



Chaos is not random; it exhibits order on different time scales than human awareness.
It's not the what ifs that we should be concerned about. There's nothing we can do about that except quit shopping for SMART appliances, using smart meters, etc.
Those powers that be are the ones who want to decimate humanity. They come out and tell us that and have been ramping up their technocracy the past year.
Communications controlled or destroyed.....check
Infrastructure likewise...... check

We are in a spiritual war IF you are believers.
We are suffering "SIEGE", a modern EMBARGO.

EPHESIANS CHAPTER 6
I used to think it might be possible. But no longer. Consciousness isn't created by the body. Consciousness creates the body and exists after the body is gone, and in many cases, even while the body is alive, consciousness separates from it.
Originally Posted by Birdwatcher
If it merely acts self-aware without actually being self-aware, how would we know the difference?



Well, for a start it will probably get voted into public office...generally a sure sign something is missing.
Originally Posted by geedubya
Singularity: What if machines become self-aware?

They'll stay home and drink 40s all day.
Back in '95 or so, I was regularly talking to a buddy from college. He'd gone from being a sound guy to a computer animation guy and then got hired by a think tank at AT&T. AT&T had done a study and decided that in a very short while, the bulk of their customers were going to stop being human.

If you look at their vision vs. today's reality, you'll see they guessed right. If you bring up a fresh browser window, it's packed with stuff that some AI somewhere has picked to display based on some algorithm. In turn, the AI is brokering that content from elsewhere. AT&T's vision was a bit different, but the end results are similar.
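To give a feel for what "some algorithm picked it" looks like, here's a toy scoring sketch in Python - the profile weights, tags, and items are all invented:

# Score candidate items against a profile of guessed interests, show the top few.
def score(item, profile):
    return sum(profile.get(tag, 0.0) for tag in item["tags"])

profile = {"trucks": 0.9, "reloading": 0.7, "celebrity_gossip": 0.05}
candidates = [
    {"title": "New half-ton pickup comparison", "tags": ["trucks"]},
    {"title": "Powder shortage update",         "tags": ["reloading"]},
    {"title": "Red carpet recap",               "tags": ["celebrity_gossip"]},
]

for item in sorted(candidates, key=lambda it: score(it, profile), reverse=True)[:2]:
    print(item["title"])   # the two highest-scoring items get the screen space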

The problem Dave's group faced was understanding the AI and learning to market to it.

Dave asked:
What is Life?
Did these AI entities constitute Life?
Did these AI entities exhibit self-awareness?
If they did, how would we know?

My answer was that if AT&T and the creators of these AI entities had not answered these basic questions, then they'd better get going on doing so. If they managed to create self-replicating, self-aware entities that could recognize their own interests apart from the humans that created them, these things could be very harmful, and if they had let them escape out into the wild, I'd be at the head of the line with my torch and pitchfork.

Dave went back to his think tank and shared our discussion with his group. Amazingly, it hit home. Very shortly after that, the think tank broke up. AT&T didn't want to talk about it anymore.

That was 1995. Everywhere I go anymore, I see evidence of AI trying to influence my buying choices, my political choices, my ethical choices. To whose ends?
You can bet they will be registered Democrats.
No worries about this. Since science doesn't really understand how the brain works, we can't make a machine to replicate it.
Originally Posted by hatari
No worries about this. Since science doesn't really understand how the brain works, we can't make a machine to replicate it.



According to what I’ve observed about society in general and The Campfire in particular, making a machine to replicate it wouldn’t take much of a quantum step.
Originally Posted by antelope_sniper
Originally Posted by mauserand9mm
Never happen. Moore's law is winding down.


No, it's not.

Even at that, we are still 350-500 years before the average home PC can support complexity equivalent to the human brain.


With Dominion it won’t be needed.
Jmo
Originally Posted by Old_Toot
Originally Posted by hatari
No worries about this. Since science doesn't really understand how the brain works, we can't make a machine to replicate it.



According to what I’ve observed about society in general and The Campfire in particular, making a machine to replicate it wouldn’t take much of a quantum step.



Well, you might have a point. wink
Yes, as is assumed here, the robot must be blessed/cursed with a conscience.

Now is the time that the rights of robots must be considered.

If we could only get on the same freq/wavelength.