09-22-2021, 07:07 AM
GrumpyOldMan
Soaring Eagle member
Join Date: Jul 2019
Posts: 2,016

Quote:
Originally Posted by Blueblaze View Post
I'm still looking for the "stop doing that" button on my Windows 7 computer. My Amazon Firestick wanders off doing god-knows-what on the internet so often that I have to click "Home" about 50 times every time I change the channel, just to get its attention.

I wouldn't be so worried about AI if the guys who wrote Windows and Linux had ever read Asimov. I would be satisfied if coders ever just followed #2.

First Law of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law of Robotics
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law of Robotics
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I agree that would be "good". But sadly, real AI doesn't work that way. It would be just as nice if those laws could be implemented in humans, but that would require serious changes in our behavior - LOL.

Also, in full disclosure, this is NOT an AI. GPT-3 and GPT-4 are called deep learning programs. (In fact, almost nothing called AI today actually is, but the media loves the term because it sells clicks.) They analyze massive amounts of data to find trends and then apply those trends to other circumstances. There is still debate over whether GPT-3/4 are actually "learning". But it is a step in that direction.
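
To make that "find trends, then apply them" idea concrete, here is a toy sketch of my own (nothing like GPT's actual architecture): a simple bigram counter that learns which word tends to follow which in a small sample of text, then uses those counts to continue a phrase. GPT-3/4 do something vastly more sophisticated with neural networks and billions of parameters, but the learn-patterns-then-apply-them loop is the same basic idea.

Code:
# Toy illustration only (NOT how GPT actually works): learn word-following
# "trends" from a tiny corpus, then apply them to extend a phrase.
from collections import Counter, defaultdict

corpus = ("a robot may not injure a human being or through inaction "
          "allow a human being to come to harm").split()

# "Find trends": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# "Apply trends": repeatedly pick the most common follower of the last word.
word, output = "a", ["a"]
for _ in range(8):
    if word not in follows:
        break
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))

Run it and you get something like "a human being or through inaction allow a human" - it has picked up the statistical patterns of the sample text without "understanding" any of it, which is roughly the point of the debate about whether this counts as learning.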